DNS for Tor Exit Relaying

One of the major pieces of infrastructure run by Tor exit nodes is DNS, the system that translates human-readable names, like emeraldonion.org, into IP addresses. In Tor, the exit node is where this translation takes place. As such, DNS has been recognized as a point where centralization or attacks could affect the integrity of the Tor network. To serve our users well, we want to mitigate the risks of compromise and surveillance as we resolve names on their behalf. These principles direct how we structure our DNS resolution.


Emerald Onion currently uses pfSense, which uses Unbound for DNS. Per our architectural design, we run our own recursive DNS server, meaning we query from the root name servers down for DNS resolution, and avoid the caching resolvers offered by upstream ISPs. This also means we query the authoritative name servers directly, minimizing the number of additional parties able to observe domain resolutions coming from our users.

General Settings:

  • We use DNS Resolver and disable the DNS Forwarder
  • Only bind the DNS listener to the NIC the Tor server is connected to and localhost.
  • Only bind the DNS outgoing interface to the NIC that carries our public IP. If you use BGP, do NOT bind DNS to the interface used to connect to BGP peers.
  • Enable DNSSEC support
  • Disable DNS Query Forwarding
  • We don’t use DHCP, so leave DHCP Registration disabled
  • The same goes for Static DHCP

Custom options:

prefer-ip6: yes
hide-trustanchor: yes
harden-large-queries: yes
harden-algo-downgrade: yes
qname-minimisation-strict: yes
ignore-cd-flag: yes

The most important of these options is QNAME minimisation, which means that when we resolve a name like www.emeraldonion.org, we ask the root name servers only for the .org name servers, ask the .org name servers only for the emeraldonion.org name servers, and ask emeraldonion.org’s name servers alone for the IP of www.emeraldonion.org. This helps protect our users’ resolutions from being swept into the various “passive DNS” feeds that have been commoditized around the network.

The bulk of the other custom options are related to DNSSEC security.
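In unbound.conf terms, these custom options sit in the server: clause alongside the interface bindings from the general settings. A sketch follows; the interface addresses are placeholders, and note that strict QNAME minimisation requires the base option to be enabled as well:

```
server:
    # Listen only on the Tor-facing NIC and localhost (placeholder address)
    interface: 192.0.2.10
    interface: 127.0.0.1
    # Send queries only from the NIC carrying our public IP (placeholder)
    outgoing-interface: 203.0.113.10
    prefer-ip6: yes
    hide-trustanchor: yes
    harden-large-queries: yes
    harden-algo-downgrade: yes
    qname-minimisation: yes           # required for strict mode below
    qname-minimisation-strict: yes
    ignore-cd-flag: yes
```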


Advanced Settings:

  • Hide Identity
  • Hide Version
  • Use Prefetch Support
  • Use Prefetch DNS Keys
  • Harden DNSSEC data
  • Leave the tuning values alone for now (Things like cache size, buffers, queues, jostle, etc)
  • Log Level is 1, which is pretty low.
  • Leave the rest alone.

Hiding the identity and version helps prevent the leakage of information that could be used in attacks against us. Prefetch Support changes how the DNS server fetches DNS records. Without it, a record is fetched only at the time of a request. With Prefetch Support, frequently used entries are refreshed shortly before their TTLs expire, which further obfuscates request timing and makes correlation attacks against specific Tor requests harder.

Access Lists:

We don’t use Access Lists, but if you want to work with them, that should be fine; just keep them in mind when troubleshooting.

OTF Concept Note

What is your idea?

Emerald Onion is a recently founded ISP/community non-profit organization based in Seattle, seeded by a Tor Servers grant and personal donations. Our goal is to expand the privacy community by lowering the cost and learning curve for community organizations in the US to operate infrastructure.

Existing organizations in this space, Calyx and Riseup, have succeeded in rapidly becoming focal points for the community, but are inherently difficult to scale and are most effective in their local geographic communities. Thus, while Riseup has excelled at providing technical software, there is no easy path to establishing similar organizations. We want to change that!

In beginning the process of establishing ourselves, we have been documenting every step of the way. We are already running multiple unfiltered Tor exit nodes, and have published both legally vetted abuse letters and educational material for law enforcement. Now, we want to take this knowledge and export it to other communities. In particular, we want to focus on areas near Internet Exchange Points, which exist in at least 49 metro locations around the US and can provide economical pricing and good connectivity for Internet traffic. We hope to spur wider deployment of Tor, onion services, and surrounding privacy technologies by helping other local groups follow our path.

Part of this mission is becoming a focal point for privacy operations work ourselves. Working with researchers at Berkeley, we are contributing to easier handling of abuse complaints. With academics in the Tor community, we are piloting new exit strategies to limit the impact of censorship. With the open source community, we are increasing platform diversity through the use and documentation of HardenedBSD as an alternative software stack. Our trajectory as an organization will include partnering with these organizations to improve usage and deployability, education of other network operators, and increasing our network presence and capacity.

What are your goals / long-term effects?

Our goal is that most major communities in the US have local organizations focused on operating privacy technology. Hackerspaces today are a mixed bag, and are often centered around physical rather than digital creation. Despite the presence of major IXPs, there are many urban centers today without community-driven organizations working to bolster privacy. We envision providing the groundwork and enthusiasm to support a network of ISPs around the country that can rectify this situation. Getting these entities in place will be important underlying infrastructure for future decentralized and federated networks to follow the model of Tor today.

Emerald Onion will directly evangelize operational best practices as it matures as a community organization and fits into the Seattle privacy community. To take advantage of Seattle’s position as one of the largest exchange points in the world, Emerald Onion will actively seek out peering agreements and aim to transit a significant amount of Tor traffic. Through our ISP operations, we hope for two primary longer term effects: First, that we are able to disseminate knowledge of peering agreements to make it significantly easier for other entities to understand how to enter into these negotiations. Second, that we can help Tor and other privacy enhancing networks gain capacity, reliability, and strategies for resilience to censorship. These technologies have focused on their software properties, but there are significant operational and networking challenges that need to be solved in tandem. We believe entities like Emerald Onion are the right complement to help privacy technologies succeed.

How will you do it?

Concretely, Emerald Onion will work to bring at least 10 new, independent, privacy-focused ISPs into existence within the next 12 months. We will also speak at conferences, and use our existing communication pathways to advertise and publish our work. We will work to find other supporters and help them establish their own organizations. In addition, we will work to make shared support networks, like legal funds, peering databases, and abuse systems, more accessible to these community groups.

Specifically, we will focus on the following areas of capacity:

  1. Funding and organizational stability strategies
  2. Nonprofit ISP incorporation setup and management
  3. Data center co-location setup and management
  4. ARIN registration and AS and IP scope management
  5. IP transit setup and management
  6. BGP setup and management
  7. Peering agreement setup and management
  8. Legal response setup and management

To stabilize ourselves as a focal point of the Seattle privacy community, Emerald Onion will continue to develop both its own sustainability model, and its infrastructure.

We expect to receive 501(c)(3) status by the end of the year, and have already begun soliciting donations for our general operations. We have had initial success in approaching local members of the community to contribute funds covering the cost of one relay in exchange for “naming rights” to that node. We believe direct community contributions can provide sustainability, and will complement this income stream with grant funding for growth.

We will also continue to increase our network presence to improve our fault tolerance and gain access to more network peers. Higher capacity will allow us to provide incubation for a larger range of privacy enhancing technologies.

Who is it for?

Emerald Onion is not just for the ~60 million monthly Tor users or the Seattle privacy community, and we are not just a testing ground for encountering and solving operational issues in the deployment of privacy technologies. Emerald Onion is strategically developing as a model steward of privacy networks, focused on quality and integrity. Our actions and relationships further legitimize Tor within communities that operate the backbones of the Internet and will help normalize the use of Tor for business-driven service providers. We will continue to be an inspiration for community groups and other ethically conscious ISPs alike.

Emerald Onion’s day-to-day work at present focuses on existing and new Tor router operators, who, with the organizations they create, will immediately impact public perception. In Emerald Onion’s short existence, we have made direct, personal connections with at least 50 professional network administrators, datacenter operators, and Internet service providers. Imagine that happening in every major IXP community around the United States.

Emerald Onion has paid for professional legal services, and has already published our verified Legal FAQ and abuse complaint responses that are valid within the United States. Similarly, the organization is working with academics to better understand the operational reality of abuse complaints, and to explore opportunities for making use of the IP space. These services benefit the larger privacy community both operationally and as an incubator for projects.

What is the existing community?

The Tor relay community is already strong, but lacks strong US-based advocacy for growth. In Europe, TorServers.net has evolved into a grant-giving organization, which is able to provide advice and financial support to help new relays get started, but is not well positioned to support US-based relays. In Canada, Coldhak runs a valuable relay, but has not attempted to export its knowledge to external entities.

In the US, the largest relay presences come from the Riseup.net and Calyx networks. Riseup is focused on services like email and VPNs in tandem with important education. This is valuable work, but does not extend to directly advocating for new groups to enter the ISP space. Calyx is supported through a cellular ISP model focused on end users, and does not concentrate on supporting new relay operators.

Emerald Onion aims to fill this gap through direct advocacy to guide and support new relay operators and encourage the existence and creation of privacy supporting entities in a diverse set of IXPs around the country.

Complementary Efforts?

The Tor Project itself provides a basic level of support for new entities, particularly technical support. In addition to a wide-reaching and engaged community, the tor-relays mailing list provides a valuable community-wide support network between operators. The EFF has been a long-time supporter of the legal aspects of relay operation, and has produced several legal papers helping to establish the legal protections of Tor exit operation, along with providing counsel when new legal issues arise. Even so, new entities establishing Tor exit nodes in the US face thousands of dollars of legal fees to properly prepare themselves with the needed form letters for abuse, and a tricky navigation of legal guidelines to establish themselves as legal entities with the authority to respond to complaints without fear of retribution.

Emerald Onion hopes to fill these gaps by making it easier for others by defining clear direction, freely published in the public domain, so that new operators don’t need to duplicate work that we’ve already performed. Building a shared legal defense fund and sharing how to navigate data center costs and contracts will allow groups to form with much less risk or uncertainty.

Why is it needed?

In building Emerald Onion so far, we have already found that many of the steps we are taking are undocumented, or rely on verbally communicated lore. That situation is not sustainable, and cannot scale or significantly improve the current state of the world.

More organizations are needed that focus on Internet privacy the same way hackerspaces have focused on hardware and technical development. Addressing Internet issues inherently requires being part of the Internet, and that barrier has so far been a high hurdle for community groups. We believe that this hurdle needs to be lowered.

Without active development of these entities, we will continue to see even more centralization of the Internet and continued erosion of neutrality. Retaining a community presence in Internet operations is a key underlying infrastructure that we strongly believe has the potential to change the future development of the Internet.


Tor on HardenedBSD

In this post, we’ll detail how we set up Tor on HardenedBSD. We’ll use HardenedBSD 11-STABLE, which ships with LibreSSL as the default crypto library in base and in ports. The vast majority of Tor infrastructure nodes run Linux and OpenSSL. Emerald Onion believes running HardenedBSD will help improve the diversity and resiliency of the Tor network. Additionally, running HardenedBSD gives us peace of mind due to its expertly crafted, robust, and scalable exploit mitigations. Together, Emerald Onion and HardenedBSD are working towards a safer and more secure Tor network.

This article should be considered a living document. We’ll keep it up-to-date as HardenedBSD and Emerald Onion evolve.

Initial Steps

Downloading and installing HardenedBSD 11-STABLE is simple. Navigate to the latest build and download the installation media that suits your needs. The memstick image is suited for USB flash drives. Boot the installation media.

Installing HardenedBSD is simple. Follow the prompts; each numbered step below corresponds to one screen of the installer:

  1. Select Install.
  2. Select your keymap. If you use a standard US English keyboard, the default is fine.
  3. Choose a hostname.
  4. Select the distribution sets to install.
  5. Choose your filesystem. For the purposes of this article, we’ll use ZFS for full-disk encryption.
  6. Selecting the Pool Type allows you to configure your ZFS pool the way you want. We will just use a single disk in this article.
  7. Since we’re using a single disk, select the Stripe option.
  8. Select the disks to use in the pool. Only a single disk for us.
  9. After selecting the disks, you’ll return to the original ZFS setup menu. We’ve made a few changes (Encrypt Disks, Swap Size, Encrypt Swap).
  10. Review the changes.
  11. Set the password on your encrypted ZFS pool.
  12. Validate the password.
  13. Encrypted ZFS will initialize itself.
  14. HardenedBSD will now install the distribution sets.
  15. Set the root password.
  16. If you want to set up networking, select the network device to configure. In this article, we’ll set up a dynamic (DHCP) network configuration.
  17. We want to use IPv4.
  18. We want to use DHCP.
  19. The installer will try to acquire a DHCP lease.
  20. At Emerald Onion, we put IPv6 first. In this example, however, IPv6 isn’t currently available, so we’ll choose No when prompted to set it up.
  21. Ensure the DNS information is correct and make any changes if needed.
  22. It’s now time to choose the system timezone. Select the region.
  23. We chose America. Next, we’ll choose United States for the country.
  24. Finally, we’ll choose the actual timezone.
  25. Confirm the timezone.
  26. Because we use NTP, we’ll skip setting the date.
  27. We’ll also skip setting the time.
  28. Select the services to start at boot.
  29. Select the system hardening options. HardenedBSD sets options one through five by default, so there’s no need to set them here.
  30. We will go ahead and add an unprivileged user. Make sure to add the user to the “wheel” group so it can use the su program.
  31. Set the user’s details.
  32. HardenedBSD is now installed! Exit the installer. The installer does some work in the background, so there may be a delay between exiting and the next prompt.
  33. We don’t want to make further modifications to the installation prior to rebooting.
  34. Go ahead and reboot.

The installation is now complete!

Installing Tor

Installing Tor is simple, too. Once HardenedBSD is installed and you’ve logged in, run the following command:

# pkg install tor

The Tor package on HardenedBSD, like its upstream FreeBSD counterpart, currently ships with an unmodified Tor configuration file, which can be found at /usr/local/etc/tor/torrc. Out of the box, Tor doesn’t log anything beyond its initial startup messages. You will need to edit the Tor configuration file to suit your needs. Take a look at the tor(1) manpage for all the available configuration options.
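As a starting point, a minimal torrc that at least enables notice-level logging might look like the following sketch. The log path and DataDirectory here are assumptions; adjust them to your layout:

```
# /usr/local/etc/tor/torrc -- minimal sketch
Log notice file /var/log/tor/notices.log
DataDirectory /var/db/tor
```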

In our setup, Tor listens on TCP ports 80 and 443 as an unprivileged user. We need to tell HardenedBSD to allow non-root users to bind to ports that traditionally require root privileges:

# echo 'net.inet.ip.portrange.reservedhigh=0' >> /etc/sysctl.conf
# service sysctl start

Multi-Instance Tor

At Emerald Onion, we run multiple instances of Tor on the same server. This allows us to scale Tor to our needs. The following instructions detail how to set up multi-instance Tor. The same instructions can be used for single-instance Tor.

We gave our instances simple names: instance-01, instance-02, instance-03, and so on. Each instance has its own configuration file, located at /usr/local/etc/tor/torrc@${instance_name}. We first set up a template config file:

Nickname EmeraldOnion%%INSTANCE%%
Address tor01.emeraldonion.org
ContactInfo abuse_at_emeraldonion_dot_org
Log notice file /var/log/tor/instance-%%INSTANCE%%/notices.log
OutboundBindAddressExit %%IP4ADDR%%
OutboundBindAddressOR %%IP4ADDR%%
DirPort %%IP4ADDR%%:80
ORPort %%IP4ADDR%%:443
ORPort %%IP6ADDR%%:443
RelayBandwidthRate 24 MBytes
RelayBandwidthBurst 125 MBytes
MyFamily %%FAMILY%%
IPv6Exit 1
ExitPolicy accept *:*
ExitPolicy accept6 *:*
SocksPort 0
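For concreteness, rendered for instance 01 with the sanitized placeholder addresses, the template produces a file like this (a real deployment will differ in the Nickname prefix, Address, and bound IPs):

```
# /usr/local/etc/tor/torrc@instance-01 (rendered example)
Nickname EmeraldOnion01
Address tor01.emeraldonion.org
ContactInfo abuse_at_emeraldonion_dot_org
Log notice file /var/log/tor/instance-01/notices.log
OutboundBindAddressExit 192.168.1.11
OutboundBindAddressOR 192.168.1.11
DirPort 192.168.1.11:80
ORPort 192.168.1.11:443
ORPort [fe80::11]:443
RelayBandwidthRate 24 MBytes
RelayBandwidthBurst 125 MBytes
MyFamily EmeraldOnion02,EmeraldOnion03,EmeraldOnion04,EmeraldOnion05
IPv6Exit 1
ExitPolicy accept *:*
ExitPolicy accept6 *:*
SocksPort 0
```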

The next script installs the appropriate config file based on the above template. Some things are sanitized. Shawn, who wrote the script, is a fan of zsh.

#!/usr/local/bin/zsh

ninstances=5

family=""

for ((i=1; i <= ${ninstances}; i++)); do
	instance=$(printf '%02d' ${i})

	family=""
	for ((k=1; k <= ${ninstances}; k++)); do
		[ ${k} -eq ${i} ] && continue
		[ ${#family} -gt 0 ] && family="${family},"
		family="${family}EmeraldOnion$(printf '%02d' ${k})"
	done

	sed -e "s/%%INSTANCE%%/${instance}/g" \
		-e "s/%%IP4ADDR%%/192.168.1.$((${i} + 10))/g" \
		-e "s/%%IP6ADDR%%/\[fe80::$((${i} + 10))\]/g" \
		-e "s/%%FAMILY%%/${family}/g" \
		tmpl.config > /usr/local/etc/tor/torrc@instance-${instance}
	mkdir -p /var/log/tor/instance-${instance}
	chown _tor:_tor /var/log/tor/instance-${instance}
	chmod 700 /var/log/tor/instance-${instance}
done
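The MyFamily construction is the subtlest part of the loop, since each instance must list every sibling except itself. This POSIX sh sketch reproduces that inner loop for instance 01 so you can check the output it generates:

```shell
#!/bin/sh
# Rebuild the MyFamily list for instance 01, excluding itself,
# mirroring the inner loop of the zsh script above.
ninstances=5
i=1
family=""
k=1
while [ "$k" -le "$ninstances" ]; do
    if [ "$k" -ne "$i" ]; then
        # Prepend a comma separator once the list is non-empty.
        [ -n "$family" ] && family="${family},"
        family="${family}EmeraldOnion$(printf '%02d' "$k")"
    fi
    k=$((k + 1))
done
echo "$family"
# prints EmeraldOnion02,EmeraldOnion03,EmeraldOnion04,EmeraldOnion05
```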

We then instructed the Tor rc script not to run the default instance of Tor:

# sysrc tor_disable_default_instance=YES

Then we tell the rc system which Tor instances we want to run and set Tor to start at boot:

# sysrc tor_instances="instance-01 instance-02 instance-03 instance-04 instance-05"
# sysrc tor_enable=YES

Then we start Tor. The first time the Tor rc script starts Tor, it will create the data and logging directories for you with the proper permissions.

# service tor start

Keeping HardenedBSD and Tor Up-To-Date

Updating HardenedBSD is simple with hbsd-update. We publish updates for base periodically. Feel free to use hbsd-update as often as you’d like to check for updates to the base operating system.

For example:

# hbsd-update
# shutdown -r now

To update your packages, including Tor, use:

# pkg upgrade

Tor Service Management Basics

The tor rc script uses SIGINT when shutting Tor down. This causes Tor to shut down ungracefully, immediately halting connections from clients. Instead of using the traditional service tor stop command, directly issue SIGTERM to the instance you wish to stop.

# service tor status instance-01
tor is running as pid 70918.
# kill -SIGTERM 70918

If you’d like to stop all instances in a graceful way at the same time:

# killall -SIGTERM tor

In a multi-instance setup, you can tell the service command which instance you want to control by appending the instance name (the portion after the @ symbol of the torrc file) at the end of the command. For example, to reload the config file for instance-01, issue the following command:

# service tor reload instance-01

If you want to reload the config file for all instances, simply remove the instance name from the above command. The rc script will issue the reload command across all instances.

If you’d like to look at an instance’s log file, you can use the tail command:

# tail -f /var/log/tor/instance-01/notices.log

Future Work

In the future, we would like to further harden our Tor setup by having each instance deployed in its own HardenedBSD jail. Once that is complete, we will document and publish the steps we took.

Emerald Onion’s BGP Setup

This is a walkthrough of our current peers and our BGP setup.

Special thanks to DFRI, Paul English, Seattle Internet Exchange, and Theodore Baschak for your time and patience!

Current Peers

180 peers via the SIX route servers, 12 direct peers via the SIX, and 1 transit peer:

6456   - Altopia Corporation
13335  - CloudFlare, Inc.
395823 - doof.net
36459  - Github
6939   - Hurricane Electric
57695  - Misaka Network LLC
3856   - Packet Clearing House
42     - WoodyNet (Also Packet Clearing House)
23265  - Pocketinet Communications, Inc.
16652  - Riseup Networks
33108  - Seattle Internet Exchange*
64241  - Wobscale Technologies, LLC
23033  - WowRack**
10310  - Yahoo! Inc.

Updated 9/7/2017

* The Seattle Internet Exchange (SIX) peer is for Route Servers
** WowRack is our current transit provider.

To see a list of all peers through the route servers:

BGP Setup

Since we currently use pfSense, we use openbgpd to peer with other Autonomous Systems.

In order to accomplish this, there are a few prerequisites:

  1. An AS Number (ASN). Check out the list of Regional Internet Registries (RIR) for your respective geographical location on getting your ASN and Direct Allocation of IP Addresses (IPv6 & IPv4). They are listed at the bottom in the External Resources section of this page.
  2. If peering at an Internet Exchange Point (IXP), a dedicated IP address from the IXP in order to peer (both IPv6 & IPv4).
  3. Install the openbgpd package in pfSense (System > Package Manager > Available Packages) and then enter OpenBGPD.
  4. Submit a Letter of Agency (LOA) to your transit provider so they can announce your ASN, and thus your IP space, upstream.
  5. When switching from a typical router config to that of a BGP router, there are some fundamental changes in architecture that are required. Take a look at our Conversion Article here: https://emeraldonion.org/eo-pfsense-conversion-plan/

A fundamental aspect of this setup is touched on in the conversion plan linked in step 5. In a typical router setup, the WAN links have default gateways; when setting up or switching to BGP, default gateways are not used and must be removed from the NIC configuration. If you want your transit provider to be your default route, you ask them to advertise that route to you, and you will then receive the 0.0.0.0/0 route through BGP. In our case, our transit provider WowRack (AS23033) advertises the default route to us. The other ASNs we peer with do not, and it is BGP’s job to select the correct route based on AS-path length.

We found that after installing the openbgpd package in pfSense, it is best to just use the raw config tab (Services > OpenBGPD > Raw config). The issue we ran into is that after filling out the wizard, we needed to make some changes; making them through the wizard didn’t update the raw config, which is what the service actually reads (bgpd.conf). So, now we just manage it through the raw config.


Our BGP Config

At a high level, there are 3 major parts to the config:

Router Config

This covers the ASN, router ID, network info, and options like fib-update and holdtime.

Groups and Neighbors

This section contains a number of groups, each with one or more neighbors. A group corresponds to a single AS, and its neighbors are that AS’s routers, of which there are usually a couple for redundancy.

We highly recommend peering with your local Internet Exchange’s (IX) route servers. This is an easy way to peer with a bunch of ASNs without having to set up direct peering. Route servers are, however, not a substitute for direct peering. When doing this, make sure to add “enforce neighbor-as no” to the route servers’ neighbor sections in bgpd.conf, so that bgpd will accept routes from ASNs that differ from the route servers’ peering ASN.

Filtering Rules

This is how we allow or deny routes coming from our peers. First we block everything, then we allow our peers, then we block specific networks such as martians (RFC 1918 space, etc.).

We recently made some changes to this section to help protect against some poor practices seen in BGP configs. One change is to append “inet prefixlen 8 - 24” for IPv4 and “inet6 prefixlen 16 - 48” for IPv6 to the end of the allow from and allow to statements. This states that we will only accept networks with a size of /8 to /24 (IPv4) and /16 to /48 (IPv6).
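Put together, the filtering rules this section describes might look like the following sketch (OpenBGPD syntax, where the last matching rule wins; in practice we allow specific peers rather than any, and the martian list here is abbreviated):

```
# First, block everything.
deny from any

# Then allow routes from peers, limited to sane prefix lengths.
allow from any inet prefixlen 8 - 24
allow from any inet6 prefixlen 16 - 48

# Finally, block specific networks such as martians (abbreviated).
deny from any prefix 10.0.0.0/8 prefixlen >= 8
deny from any prefix 172.16.0.0/12 prefixlen >= 12
deny from any prefix 192.168.0.0/16 prefixlen >= 16
```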

And we also made some updates to the bogon network list per the OpenBGPD standard config. These networks aren’t meant for Internet traffic so we filter them out.

bgpd.conf

AS 396507

fib-update yes
holdtime 90

router-id 206.81.81.158

# IPv4 network
network 23.129.64.0/24
# IPv6 network
network 2620:18C::/36

#### IPv4 neighbors ####
group "AS-WOWRACK-Transit-v4" {
	remote-as 23033
	neighbor 216.176.186.129 {
		descr "WOW_trans_rs1v4"
		announce self
		local-address 216.176.186.130
		max-prefix 1000000
}
}
group "AS-SIXRSv4" {
	remote-as 33108
	neighbor 206.81.80.2 {
		descr "SIXRS_rs2v4"
		announce self
		local-address 206.81.81.158
		enforce neighbor-as no
		max-prefix 200000
}
	neighbor 206.81.80.3 {
		descr "SIXRS_rs3v4"
		announce self
		local-address 206.81.81.158
		enforce neighbor-as no
		max-prefix 200000
}
}
group "AS-HURRICANEv4" {
	remote-as 6939
	neighbor 206.81.80.40 {
		descr "HE_rs1v4"
		announce self
		local-address 206.81.81.158
		max-prefix 152000
}
}
group "AS-ALTOPIAv4" {
	remote-as 6456
	neighbor 206.81.80.10 {
		descr "ALT_rs1v4"
		announce self
		local-address 206.81.81.158
		max-prefix 20 restart 30
}
	neighbor 206.81.81.41 {
		descr "ALT_rs2v4"
		announce self
		local-address 206.81.81.158
		max-prefix 20 restart 30
}
}
group "AS-POCKETINETv4" {
	remote-as 23265
	neighbor 206.81.80.88 {
		descr "POK_rs1v4"
		announce self
		local-address 206.81.81.158
		max-prefix 600
}
}
group "AS-DOOFv4" {
	remote-as 395823
	neighbor 206.81.81.125 {
		descr "DOOF_rs1v4"
		announce self
		local-address 206.81.81.158
		max-prefix 5
}
}
group "AS-PCHv4" {
	remote-as 3856
	neighbor 206.81.80.81 {
		descr "PCH_rs1v4"
		announce self
		local-address 206.81.81.158
		max-prefix 600
}
}
group "AS-PCHWNv4" {
	remote-as 42
	neighbor 206.81.80.80 {
		descr "PCHWN_rs1v4"
		announce self
		local-address 206.81.81.158
		max-prefix 600
}
}
group "AS-WOBv4" {
	remote-as 64241
	neighbor 206.81.81.87 {
		descr "WOB_rs1v4"
		announce self
		local-address 206.81.81.158
		max-prefix 5
}
}
group "AS-GOOGv4" {
	remote-as 15169
	neighbor 206.81.80.17 {
		descr "GOOG_rs1v4"
		announce self
		local-address 206.81.81.158
		max-prefix 15000
}
}
group "AS-MISAKAv4" {
	remote-as 57695
	neighbor 206.81.81.161 {
		descr "MISAKA_rs1v4"
		announce self
		local-address 206.81.81.158
		max-prefix 200
}
}
group "AS-RISUPv4" {
	remote-as 16652
	neighbor 206.81.81.74 {
		descr "RISUP_rs1v4"
		announce self
		local-address 206.81.81.158
		max-prefix 20
}
}
group "AS-AKAMAIv4" {
	remote-as 20940
	neighbor 206.81.80.113 {
		descr "AKAMAI_rs1v4"
		announce self
		local-address 206.81.81.158
		max-prefix 200
}
}
group "AS-CoSITv4" {
	remote-as 3401
	neighbor 206.81.80.202 {
		descr "CoSIT_rs1v4"
		announce self
		local-address 206.81.81.158
		max-prefix 10
}
}
group "AS-CLDFLRv4" {
	remote-as 13335
	neighbor 206.81.81.10 {
		descr "CLDFLR_rs1v4"
		announce self
		local-address 206.81.81.158
		max-prefix 1000
}
}
group "AS-DYNv4" {
	remote-as 33517
	neighbor 206.81.81.121 {
		descr "DYN_rs1v4"
		announce self
		local-address 206.81.81.158
		max-prefix 400
}
}
group "AS-FCBKv4" {
	remote-as 32934
	neighbor 206.81.80.181 {
		descr "FCBK_rs1v4"
		announce self
		local-address 206.81.81.158
		max-prefix 200
}
	neighbor 206.81.80.211 {
		descr "FCBK_rs2v4"
		announce self
		local-address 206.81.81.158
		max-prefix 200
}
}
group "AS-GITHUBv4" {
	remote-as 36459
	neighbor 206.81.81.89 {
		descr "GITHUB_rs1v4"
		announce self
		local-address 206.81.81.158
		max-prefix 100
}
	neighbor 206.81.81.90 {
		descr "GITHUB_rs2v4"
		announce self
		local-address 206.81.81.158
		max-prefix 100
}
}
group "AS-MSFTv4" {
	remote-as 8075
	neighbor 206.81.80.30 {
		descr "MSFT_rs1v4"
		announce self
		local-address 206.81.81.158
		max-prefix 2000
}
	neighbor 206.81.80.68 {
		descr "MSFT_rs2v4"
		announce self
		local-address 206.81.81.158
		max-prefix 2000
}
}
group "AS-OpenDNSv4" {
	remote-as 36692
	neighbor 206.81.80.53 {
		descr "OpenDNS_rs1v4"
		announce self
		local-address 206.81.81.158
		max-prefix 200
}
}
group "AS-SPLv4" {
	remote-as 21525
	neighbor 206.81.80.196 {
		descr "SPL_rs1v4"
		announce self
		local-address 206.81.81.158
		max-prefix 10
}
}
group "AS-TWITTERv4" {
	remote-as 13414
	neighbor 206.81.81.31 {
		descr "TWITTER_rs1v4"
		announce self
		local-address 206.81.81.158
		max-prefix 200
}
}
group "AS-VRISIGNv4" {
	remote-as 7342
	neighbor 206.81.80.133 {
		descr "VRISIGN_rs1v4"
		announce self
		local-address 206.81.81.158
		max-prefix 600
}
}
group "AS-YAHOOv4" {
	remote-as 10310
	neighbor 206.81.80.98 {
		descr "YAHOO_rs1v4"
		announce self
		local-address 206.81.81.158
		max-prefix 2000
}
	neighbor 206.81.81.50 {
		descr "YAHOO_rs2v4"
		announce self
		local-address 206.81.81.158
		max-prefix 2000
}
}
group "AS-INTEGRAv4" {
	remote-as 7385
	neighbor 206.81.80.102 {
		descr "INTEGRA_rs1v4"
		announce self
		local-address 206.81.81.158
		max-prefix 2000
}
}
group "AS-PNWGPv4" {
	remote-as 101
	neighbor 206.81.80.84 {
		descr "PNWGP_rs1v4"
		announce self
		local-address 206.81.81.158
		max-prefix 500
}
}
group "AS-WAVEv4" {
	remote-as 11404
	neighbor 206.81.80.56 {
		descr "WAVE_rs1v4"
		announce self
		local-address 206.81.81.158
		max-prefix 6000
}
}
group "AS-AMAZONv4" {
	remote-as 16509
	neighbor 206.81.80.147 {
		descr "AMAZON_rs1v4"
		announce self
		local-address 206.81.81.158
		max-prefix 4000
}
	neighbor 206.81.80.248 {
		descr "AMAZON_rs2v4"
		announce self
		local-address 206.81.81.158
		max-prefix 4000
}
}
group "AS-SYMTECv4" {
	remote-as 27471
	neighbor 206.81.81.169 {
		descr "SYMTEC_rs1v4"
		announce self
		local-address 206.81.81.158
		max-prefix 40
}
	neighbor 206.81.81.170 {
		descr "SYMTEC_rs2v4"
		announce self
		local-address 206.81.81.158
		max-prefix 40
}
}

#### IPv6 neighbors ####
group "AS-WOWRACK-Transit-v6" {
	remote-as 23033
	neighbor 2607:F8F8:2F0:811:2::1 {
		descr "WOW_trans_rs1v6"
		announce self
		local-address 2607:F8F8:2F0:811:2::2
		max-prefix 100000
}
}
group "AS-SIXRSv6" {
	remote-as 33108
	neighbor 2001:504:16::2 {
		descr "SIXRS_rs2v6"
		announce self
		local-address 2001:504:16::6:cdb
		enforce neighbor-as no
		max-prefix 60000
}
	neighbor 2001:504:16::3 {
		descr "SIXRS_rs3v6"
		announce self
		local-address 2001:504:16::6:cdb
		enforce neighbor-as no
		max-prefix 60000
}
}
group "AS-HURRICANEv6" {
	remote-as 6939
	neighbor 2001:504:16::1b1b {
		descr "HE_rs1v6"
		announce self
		local-address 2001:504:16::6:cdb
		max-prefix 41000
}
}
group "AS-ALTOPIAv6" {
	remote-as 6456
	neighbor 2001:504:16::1938 {
		descr "ALT_rs1v6"
		announce self
		local-address 2001:504:16::6:cdb
		max-prefix 20 restart 30
}
	neighbor 2001:504:16::297:0:1938 {
		descr "ALT_rs2v6"
		announce self
		local-address 2001:504:16::6:cdb
		max-prefix 20 restart 30
}
}
group "AS-POCKETINETv6" {
	remote-as 23265
	neighbor 2001:504:16::5ae1 {
		descr "POK_rs1v6"
		announce self
		local-address 2001:504:16::6:cdb
		max-prefix 600
}
}
group "AS-DOOFv6" {
	remote-as 395823
	neighbor 2001:504:16::6:a2f {
		descr "DOOF_rs1v6"
		announce self
		local-address 2001:504:16::6:cdb
		max-prefix 5
}
}
group "AS-PCHv6" {
	remote-as 3856
	neighbor 2001:504:16::f10 {
		descr "PCH_rs1v6"
		announce self
		local-address 2001:504:16::6:cdb
		max-prefix 600
}
}
group "AS-PCHWNv6" {
	remote-as 42
	neighbor 2001:504:16::2a {
		descr "PCHWN_rs1v6"
		announce self
		local-address 2001:504:16::6:cdb
		max-prefix 600
}
}
group "AS-WOBv6" {
	remote-as 64241
	neighbor 2001:504:16::faf1 {
		descr "WOB_rs1v6"
		announce self
		local-address 2001:504:16::6:cdb
		max-prefix 5
}
}
group "AS-GOOGv6" {
	remote-as 15169
	neighbor 2001:504:16::3b41 {
		descr "GOOG_rs1v6"
		announce self
		local-address 2001:504:16::6:cdb
		max-prefix 750
}
}
group "AS-MISAKAv6" {
	remote-as 57695
	neighbor 2001:504:16::e15f {
		descr "MISAKA_rs1v6"
		announce self
		local-address 2001:504:16::6:cdb
		max-prefix 150
}
}
group "AS-RISUPv6" {
	remote-as 16652
	neighbor 2001:504:16::410c {
		descr "RISUP_rs1v6"
		announce self
		local-address 2001:504:16::6:cdb
		max-prefix 10
}
}
group "AS-AKAMAIv6" {
	remote-as 20940
	neighbor 2001:504:16::51cc {
		descr "AKAMAI_rs1v6"
		announce self
		local-address 2001:504:16::6:cdb
		max-prefix 40
}
}
group "AS-CLDFLRv6" {
	remote-as 13335
	neighbor 2001:504:16::3417 {
		descr "CLDFLR_rs1v6"
		announce self
		local-address 2001:504:16::6:cdb
		max-prefix 200
}
}
group "AS-DYNv6" {
	remote-as 33517
	neighbor 2001:504:16::82ed {
		descr "DYN_rs1v6"
		announce self
		local-address 2001:504:16::6:cdb
		max-prefix 200
}
}
group "AS-FCBKv6" {
	remote-as 32934
	neighbor 2001:504:16::80a6 {
		descr "FCBK_rs1v6"
		announce self
		local-address 2001:504:16::6:cdb
		max-prefix 200
}
	neighbor 2001:504:16::211:0:80a6 {
		descr "FCBK_rs2v6"
		announce self
		local-address 2001:504:16::6:cdb
		max-prefix 200
}
}
group "AS-GITHUBv6" {
	remote-as 36459
	neighbor 2001:504:16::8e6b {
		descr "GITHUB_rs1v6"
		announce self
		local-address 2001:504:16::6:cdb
		max-prefix 20
}
	neighbor 2001:504:16::346:0:8e6b {
		descr "GITHUB_rs2v6"
		announce self
		local-address 2001:504:16::6:cdb
		max-prefix 20
}
}
group "AS-MSFTv6" {
	remote-as 8075
	neighbor 2001:504:16::1f8b {
		descr "MSFT_rs1v6"
		announce self
		local-address 2001:504:16::6:cdb
		max-prefix 500
}
	neighbor 2001:504:16::68:0:1f8b {
		descr "MSFT_rs2v6"
		announce self
		local-address 2001:504:16::6:cdb
		max-prefix 500
}
}
group "AS-OpenDNSv6" {
	remote-as 36692
	neighbor 2001:504:16::8f54 {
		descr "OpenDNS_rs1v6"
		announce self
		local-address 2001:504:16::6:cdb
		max-prefix 40
}
}
group "AS-SPLv6" {
	remote-as 21525
	neighbor 2001:504:16::5415 {
		descr "SPL_rs1v6"
		announce self
		local-address 2001:504:16::6:cdb
		max-prefix 10
}
}
group "AS-TWITTERv6" {
	remote-as 13414
	neighbor 2001:504:16::3466 {
		descr "TWITTER_rs1v6"
		announce self
		local-address 2001:504:16::6:cdb
		max-prefix 10
}
}
group "AS-VRISIGNv6" {
	remote-as 7342
	neighbor 2001:504:16::1cae {
		descr "VRISIGN_rs1v6"
		announce self
		local-address 2001:504:16::6:cdb
		max-prefix 100
}
}
group "AS-YAHOOv6" {
	remote-as 10310
	neighbor 2001:504:16::2846 {
		descr "YAHOO_rs1v6"
		announce self
		local-address 2001:504:16::6:cdb
		max-prefix 200
}
	neighbor 2001:504:16::306:0:2846 {
		descr "YAHOO_rs2v6"
		announce self
		local-address 2001:504:16::6:cdb
		max-prefix 200
}
}
group "AS-INTEGRAv6" {
	remote-as 7385
	neighbor 2001:504:16::1cd9 {
		descr "INTEGRA_rs1v6"
		announce self
		local-address 2001:504:16::6:cdb
		max-prefix 100
}
}
group "AS-PNWGPv6" {
	remote-as 101
	neighbor 2001:504:16::65 {
		descr "PNWGP_rs1v6"
		announce self
		local-address 2001:504:16::6:cdb
		max-prefix 20
}
}
group "AS-WAVEv6" {
	remote-as 11404
	neighbor 2001:504:16::2c8c {
		descr "WAVE_rs1v6"
		announce self
		local-address 2001:504:16::6:cdb
		max-prefix 500
}
}
group "AS-AMAZONv6" {
	remote-as 16509
	neighbor 2001:504:16::407d {
		descr "AMAZON_rs1v6"
		announce self
		local-address 2001:504:16::6:cdb
		max-prefix 1000
}
	neighbor 2001:504:16::248:0:407d {
		descr "AMAZON_rs2v6"
		announce self
		local-address 2001:504:16::6:cdb
		max-prefix 1000
}
}

#### Filtering Rules ####

deny from any
deny to any

# https://www.arin.net/announcements/2014/20140130.html
# This block will be subject to a minimum size allocation of /28 and a
# maximum size allocation of /24. ARIN should use sparse allocation when
# possible within that /10 block.
allow from any prefix 23.128.0.0/10 prefixlen 24 - 28   # ARIN IPv6 transition

## IPv4 ##
# WOW_trans_rs1v4
allow from 216.176.186.129
allow to 216.176.186.129
# SIXRS_rs2v4
allow from 206.81.80.2 inet prefixlen 8 - 24
allow to 206.81.80.2 inet prefixlen 8 - 24
# SIXRS_rs3v4
allow from 206.81.80.3 inet prefixlen 8 - 24
allow to 206.81.80.3 inet prefixlen 8 - 24
# HE_rs1v4
allow from 206.81.80.40
allow to 206.81.80.40
# ALT_rs1v4
allow from 206.81.80.10 inet prefixlen 8 - 24
allow to 206.81.80.10 inet prefixlen 8 - 24
# ALT_rs2v4
allow from 206.81.81.41 inet prefixlen 8 - 24
allow to 206.81.81.41 inet prefixlen 8 - 24
# POK_rs1v4
allow from 206.81.80.88 inet prefixlen 8 - 24
allow to 206.81.80.88 inet prefixlen 8 - 24
# DOOF_rs1v4
allow from 206.81.81.125 inet prefixlen 8 - 24
allow to 206.81.81.125 inet prefixlen 8 - 24
# PCH_rs1v4
allow from 206.81.80.81 inet prefixlen 8 - 24
allow to 206.81.80.81 inet prefixlen 8 - 24
# PCHWN_rs1v4
allow from 206.81.80.80 inet prefixlen 8 - 24
allow to 206.81.80.80 inet prefixlen 8 - 24
# WOB_rs1v4
allow from 206.81.81.87 inet prefixlen 8 - 24
allow to 206.81.81.87 inet prefixlen 8 - 24
# GOOG_rs1v4
allow from 206.81.80.17
allow to 206.81.80.17
# MISAKA_rs1v4
allow from 206.81.81.161 inet prefixlen 8 - 24
allow to 206.81.81.161 inet prefixlen 8 - 24
# RISUP_rs1v4
allow from 206.81.81.74 inet prefixlen 8 - 24
allow to 206.81.81.74 inet prefixlen 8 - 24
# AKAMAI_rs1v4
allow from 206.81.80.113 inet prefixlen 8 - 24
allow to 206.81.80.113 inet prefixlen 8 - 24
# CoSIT_rs1v4
allow from 206.81.80.202 inet prefixlen 8 - 24
allow to 206.81.80.202 inet prefixlen 8 - 24
# CLDFLR_rs1v4
allow from 206.81.81.10 inet prefixlen 8 - 24
allow to 206.81.81.10 inet prefixlen 8 - 24
# DYN_rs1v4
allow from 206.81.81.121 inet prefixlen 8 - 24
allow to 206.81.81.121 inet prefixlen 8 - 24
# FCBK_rs1v4
allow from 206.81.80.181 inet prefixlen 8 - 24
allow to 206.81.80.181 inet prefixlen 8 - 24
# FCBK_rs2v4
allow from 206.81.80.211 inet prefixlen 8 - 24
allow to 206.81.80.211 inet prefixlen 8 - 24
# GITHUB_rs1v4
allow from 206.81.81.89 inet prefixlen 8 - 24
allow to 206.81.81.89 inet prefixlen 8 - 24
# GITHUB_rs2v4
allow from 206.81.81.90 inet prefixlen 8 - 24
allow to 206.81.81.90 inet prefixlen 8 - 24
# MSFT_rs1v4
allow from 206.81.80.30 inet prefixlen 8 - 24
allow to 206.81.80.30 inet prefixlen 8 - 24
# MSFT_rs2v4
allow from 206.81.80.68 inet prefixlen 8 - 24
allow to 206.81.80.68 inet prefixlen 8 - 24
# OpenDNS_rs1v4
allow from 206.81.80.53 inet prefixlen 8 - 24
allow to 206.81.80.53 inet prefixlen 8 - 24
# SPL_rs1v4
allow from 206.81.80.196 inet prefixlen 8 - 24
allow to 206.81.80.196 inet prefixlen 8 - 24
# TWITTER_rs1v4
allow from 206.81.81.31 inet prefixlen 8 - 24
allow to 206.81.81.31 inet prefixlen 8 - 24
# VRISIGN_rs1v4
allow from 206.81.80.133 inet prefixlen 8 - 24
allow to 206.81.80.133 inet prefixlen 8 - 24
# YAHOO_rs1v4
allow from 206.81.80.98 inet prefixlen 8 - 24
allow to 206.81.80.98 inet prefixlen 8 - 24
# YAHOO_rs2v4
allow from 206.81.81.50 inet prefixlen 8 - 24
allow to 206.81.81.50 inet prefixlen 8 - 24
# INTEGRA_rs1v4
allow from 206.81.80.102 inet prefixlen 8 - 24
allow to 206.81.80.102 inet prefixlen 8 - 24
# PNWGP_rs1v4
allow from 206.81.80.84 inet prefixlen 8 - 24
allow to 206.81.80.84 inet prefixlen 8 - 24
# WAVE_rs1v4
allow from 206.81.80.56 inet prefixlen 8 - 24
allow to 206.81.80.56 inet prefixlen 8 - 24
# AMAZON_rs1v4
allow from 206.81.80.147 inet prefixlen 8 - 24
allow to 206.81.80.147 inet prefixlen 8 - 24
# AMAZON_rs2v4
allow from 206.81.80.248 inet prefixlen 8 - 24
allow to 206.81.80.248 inet prefixlen 8 - 24
# SYMTEC_rs1v4
allow from 206.81.81.169 inet prefixlen 8 - 24
allow to 206.81.81.169 inet prefixlen 8 - 24
# SYMTEC_rs2v4
allow from 206.81.81.170 inet prefixlen 8 - 24
allow to 206.81.81.170 inet prefixlen 8 - 24

## IPv6 ##
# WOW_trans_rs1v6
allow from 2607:F8F8:2F0:811:2::1
allow to 2607:F8F8:2F0:811:2::1
# SIXRS_rs2v6
allow from 2001:504:16::2 inet6 prefixlen 16 - 48
allow to 2001:504:16::2 inet6 prefixlen 16 - 48
# SIXRS_rs3v6
allow from 2001:504:16::3 inet6 prefixlen 16 - 48
allow to 2001:504:16::3 inet6 prefixlen 16 - 48
# HE_rs1v6
allow from 2001:504:16::1b1b
allow to 2001:504:16::1b1b
# ALT_rs1v6
allow from 2001:504:16::1938 inet6 prefixlen 16 - 48
allow to 2001:504:16::1938 inet6 prefixlen 16 - 48
# ALT_rs2v6
allow from 2001:504:16::297:0:1938 inet6 prefixlen 16 - 48
allow to 2001:504:16::297:0:1938 inet6 prefixlen 16 - 48
# POK_rs1v6
allow from 2001:504:16::5ae1 inet6 prefixlen 16 - 48
allow to 2001:504:16::5ae1 inet6 prefixlen 16 - 48
# DOOF_rs1v6
allow from 2001:504:16::6:a2f inet6 prefixlen 16 - 48
allow to 2001:504:16::6:a2f inet6 prefixlen 16 - 48
# PCH_rs1v6
allow from 2001:504:16::f10 inet6 prefixlen 16 - 48
allow to 2001:504:16::f10 inet6 prefixlen 16 - 48
# PCHWN_rs1v6
allow from 2001:504:16::2a inet6 prefixlen 16 - 48
allow to 2001:504:16::2a inet6 prefixlen 16 - 48
# WOB_rs1v6
allow from 2001:504:16::faf1 inet6 prefixlen 16 - 48
allow to 2001:504:16::faf1 inet6 prefixlen 16 - 48
# GOOG_rs1v6
allow from 2001:504:16::3b41
allow to 2001:504:16::3b41
# MISAKA_rs1v6
allow from 2001:504:16::e15f inet6 prefixlen 16 - 48
allow to 2001:504:16::e15f inet6 prefixlen 16 - 48
# RISUP_rs1v6
allow from 2001:504:16::410c inet6 prefixlen 16 - 48
allow to 2001:504:16::410c inet6 prefixlen 16 - 48
# AKAMAI_rs1v6
allow from 2001:504:16::51cc inet6 prefixlen 16 - 48
allow to 2001:504:16::51cc inet6 prefixlen 16 - 48
# CLDFLR_rs1v6
allow from 2001:504:16::3417 inet6 prefixlen 16 - 48
allow to 2001:504:16::3417 inet6 prefixlen 16 - 48
# DYN_rs1v6
allow from 2001:504:16::82ed inet6 prefixlen 16 - 48
allow to 2001:504:16::82ed inet6 prefixlen 16 - 48
# FCBK_rs1v6
allow from 2001:504:16::80a6 inet6 prefixlen 16 - 48
allow to 2001:504:16::80a6 inet6 prefixlen 16 - 48
# FCBK_rs2v6
allow from 2001:504:16::211:0:80a6 inet6 prefixlen 16 - 48
allow to 2001:504:16::211:0:80a6 inet6 prefixlen 16 - 48
# GITHUB_rs1v6
allow from 2001:504:16::8e6b inet6 prefixlen 16 - 48
allow to 2001:504:16::8e6b inet6 prefixlen 16 - 48
# GITHUB_rs2v6
allow from 2001:504:16::346:0:8e6b inet6 prefixlen 16 - 48
allow to 2001:504:16::346:0:8e6b inet6 prefixlen 16 - 48
# MSFT_rs1v6
allow from 2001:504:16::1f8b inet6 prefixlen 16 - 48
allow to 2001:504:16::1f8b inet6 prefixlen 16 - 48
# MSFT_rs2v6
allow from 2001:504:16::68:0:1f8b inet6 prefixlen 16 - 48
allow to 2001:504:16::68:0:1f8b inet6 prefixlen 16 - 48
# OpenDNS_rs1v6
allow from 2001:504:16::8f54 inet6 prefixlen 16 - 48
allow to 2001:504:16::8f54 inet6 prefixlen 16 - 48
# SPL_rs1v6
allow from 2001:504:16::5415 inet6 prefixlen 16 - 48
allow to 2001:504:16::5415 inet6 prefixlen 16 - 48
# TWITTER_rs1v6
allow from 2001:504:16::3466 inet6 prefixlen 16 - 48
allow to 2001:504:16::3466 inet6 prefixlen 16 - 48
# VRISIGN_rs1v6
allow from 2001:504:16::1cae inet6 prefixlen 16 - 48
allow to 2001:504:16::1cae inet6 prefixlen 16 - 48
# YAHOO_rs1v6
allow from 2001:504:16::2846 inet6 prefixlen 16 - 48
allow to 2001:504:16::2846 inet6 prefixlen 16 - 48
# YAHOO_rs2v6
allow from 2001:504:16::306:0:2846 inet6 prefixlen 16 - 48
allow to 2001:504:16::306:0:2846 inet6 prefixlen 16 - 48
# INTEGRA_rs1v6
allow from 2001:504:16::1cd9 inet6 prefixlen 16 - 48
allow to 2001:504:16::1cd9 inet6 prefixlen 16 - 48
# PNWGP_rs1v6
allow from 2001:504:16::65 inet6 prefixlen 16 - 48
allow to 2001:504:16::65 inet6 prefixlen 16 - 48
# WAVE_rs1v6
allow from 2001:504:16::2c8c inet6 prefixlen 16 - 48
allow to 2001:504:16::2c8c inet6 prefixlen 16 - 48
# AMAZON_rs1v6
allow from 2001:504:16::407d inet6 prefixlen 16 - 48
allow to 2001:504:16::407d inet6 prefixlen 16 - 48
# AMAZON_rs2v6
allow from 2001:504:16::248:0:407d inet6 prefixlen 16 - 48
allow to 2001:504:16::248:0:407d inet6 prefixlen 16 - 48

# filter bogus networks according to RFC5735
deny from any prefix 0.0.0.0/8 prefixlen >= 8           # 'this' network [RFC1122]
deny from any prefix 10.0.0.0/8 prefixlen >= 8          # private space [RFC1918]
deny from any prefix 100.64.0.0/10 prefixlen >= 10      # CGN Shared [RFC6598]
deny from any prefix 127.0.0.0/8 prefixlen >= 8         # localhost [RFC1122]
deny from any prefix 169.254.0.0/16 prefixlen >= 16     # link local [RFC3927]
deny from any prefix 172.16.0.0/12 prefixlen >= 12      # private space [RFC1918]
deny from any prefix 192.0.2.0/24 prefixlen >= 24       # TEST-NET-1 [RFC5737]
deny from any prefix 192.168.0.0/16 prefixlen >= 16     # private space [RFC1918]
deny from any prefix 198.18.0.0/15 prefixlen >= 15      # benchmarking [RFC2544]
deny from any prefix 198.51.100.0/24 prefixlen >= 24    # TEST-NET-2 [RFC5737]
deny from any prefix 203.0.113.0/24 prefixlen >= 24     # TEST-NET-3 [RFC5737]
deny from any prefix 224.0.0.0/4 prefixlen >= 4         # multicast
deny from any prefix 240.0.0.0/4 prefixlen >= 4         # reserved

# filter bogus IPv6 networks according to IANA
deny from any prefix ::/8 prefixlen >= 8
deny from any prefix 0100::/64 prefixlen >= 64          # Discard-Only [RFC6666]
deny from any prefix 2001:2::/48 prefixlen >= 48        # BMWG [RFC5180]
deny from any prefix 2001:10::/28 prefixlen >= 28       # ORCHID [RFC4843]
deny from any prefix 2001:db8::/32 prefixlen >= 32      # docu range [RFC3849]
deny from any prefix 3ffe::/16 prefixlen >= 16          # old 6bone
deny from any prefix fc00::/7 prefixlen >= 7            # unique local unicast
deny from any prefix fe80::/10 prefixlen >= 10          # link local unicast
deny from any prefix fec0::/10 prefixlen >= 10          # old site local unicast
deny from any prefix ff00::/8 prefixlen >= 8            # multicast
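
The IPv4 bogon list above can be sanity-checked outside of bgpd. A minimal sketch in Python using the standard ipaddress module (the prefix list is transcribed from the deny rules above; the function name is ours):

```python
import ipaddress

# RFC 5735 bogon prefixes, transcribed from the IPv4 deny rules above
BOGONS_V4 = [ipaddress.ip_network(p) for p in (
    "0.0.0.0/8", "10.0.0.0/8", "100.64.0.0/10", "127.0.0.0/8",
    "169.254.0.0/16", "172.16.0.0/12", "192.0.2.0/24", "192.168.0.0/16",
    "198.18.0.0/15", "198.51.100.0/24", "203.0.113.0/24",
    "224.0.0.0/4", "240.0.0.0/4",
)]

def is_bogon(prefix: str) -> bool:
    """True if the prefix falls inside any RFC 5735 bogon range."""
    net = ipaddress.ip_network(prefix)
    return any(net.subnet_of(bogon) for bogon in BOGONS_V4)

print(is_bogon("192.168.5.0/24"))  # private space, would be denied
print(is_bogon("23.128.0.0/10"))   # the ARIN block we announce from
```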

Updated 9/5/2017

We’ll update this as we make changes.

External Resources

Here are a few references we leveraged when building our config:

RIRs:

African Network Information Center (AFRINIC) for Africa
https://www.afrinic.net/

American Registry for Internet Numbers (ARIN) for the United States, Canada, several parts of the Caribbean region, and Antarctica.
https://www.arin.net/

Asia-Pacific Network Information Centre (APNIC) for Asia, Australia, New Zealand, and neighboring countries
https://www.apnic.net/

Latin America and Caribbean Network Information Centre (LACNIC) for Latin America and parts of the Caribbean region
https://www.lacnic.net/

Réseaux IP Européens Network Coordination Centre (RIPE) for Europe, Russia, the Middle East, and Central Asia
https://www.ripe.net/

DNSSEC is now fully implemented for our forward and reverse lookup zones

Last month (July 2017) we moved our DNS zone management to the Google Cloud Platform since our domains were already registered with Google. After applying for the DNSSEC alpha, we were granted access and turned on DNSSEC for all three of our forward (domain) and reverse (IPv6 and IPv4 scopes) lookup zones. Google’s alpha products come with no SLA, so we took a risk implementing DNSSEC through Google.

Turning on DNSSEC was as easy as flipping a switch in the control panel. The last step was adding the DS records at the registrar.

In the upper-right corner of Zone Details is Registrar Setup. This is where we got our DS entry information.

This DS information breaks down into a specific Key Tag, Algorithm, Digest Type, and Digest, which need to be entered at Google Domains (the actual registrar).

This completed the domain setup. Next, we needed to configure DNSSEC for our reverse lookup zones. Because they are direct allocations from ARIN, we needed to copy the DS details over to ARIN.

View and Manage Your Networks > View & Manage Network (for both our IPv6 and IPv4 scopes) > Actions > Manage Reverse DNS > (select the delegation) > Modify DS Records

The parsed DS record string for our IPv6 scope:

3600 DS 46756 8 2 5396635C919BAF34F24011FAB2DE251630AE2B8C17F1B69D05BCFDD603510014

The parsed DS record string for our IPv4 scope:

3600 DS 40286 8 2 54686118794BD67CC76295F3D7F1C269D70EB5646F5DA130CC590AE14B33935F

This completed the ARIN DNSSEC configuration. While Google provided a quick DNS update for validation, ARIN took over 12 hours.
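
The DS strings above follow the RFC 4034 presentation format: TTL, record type, key tag, algorithm (8 = RSA/SHA-256), digest type (2 = SHA-256), and digest. A small Python sketch that splits one into named fields (the field and function names are ours, not part of any DNS library API):

```python
from typing import NamedTuple

class DSRecord(NamedTuple):
    ttl: int
    key_tag: int
    algorithm: int    # 8 = RSA/SHA-256
    digest_type: int  # 2 = SHA-256
    digest: str

def parse_ds(line: str) -> DSRecord:
    """Parse a 'TTL DS key_tag algorithm digest_type digest' string."""
    ttl, rrtype, key_tag, alg, dtype, digest = line.split()
    assert rrtype == "DS"
    return DSRecord(int(ttl), int(key_tag), int(alg), int(dtype), digest)

ipv6_ds = parse_ds("3600 DS 46756 8 2 "
                   "5396635C919BAF34F24011FAB2DE251630AE2B8C17F1B69D05BCFDD603510014")
print(ipv6_ds.key_tag)  # 46756
```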

Internet Exchange Points in the United States

Emerald Onion is researching IXPs in the U.S. in order to identify priority locations for increasing global Tor network capacity by putting Tor routers directly on these highly interconnected networks. Placing Tor exit routers in IXPs, for example, may reduce network latency to endpoints. It may also reduce network hops, potentially minimizing the opportunity for third-party surveillance. Emerald Onion envisions a future in which the Tor network is composed of much larger and more stable network operators, globally.

Questions

  1. Are there any Tor routers connected to any United States-based IXPs? If so, which ones and who operates them?
  2. Is this IXP friendly to Tor?
  3. What is the organizational structure of this IXP (for example, corporate-run or community-driven)?
  4. What qualities of an IXP should impact how meaningful it would be for the Tor network?
    • Number of participants?
    • Access to specific participants?
    • Nonprofit?
    • Community driven?
    • Affordability?
    • Geolocation?
    • Prohibits network surveillance?

A top 20 list of cities to focus on for Tor development?

  1. Chicago, IL has at least 12 IXPs
  2. New York City, NY has at least 9 IXPs (and has Calyx Institute)
  3. Dallas, TX has at least 6 IXPs
  4. Los Angeles, CA has at least 6 IXPs
  5. Miami, FL has at least 6 IXPs
  6. Seattle, WA has at least 5 IXPs (and has Riseup and Emerald Onion)
  7. San Jose, CA has at least 5 IXPs
  8. Phoenix, AZ has at least 5 IXPs
  9. Ashburn, VA has at least 3 IXPs
  10. Reston, VA has at least 3 IXPs
  11. Boston, MA has at least 3 IXPs
  12. Atlanta, GA has at least 3 IXPs
  13. Portland, OR has at least 3 IXPs
  14. Honolulu, HI has at least 2 IXPs
  15. Denver, CO has at least 2 IXPs
  16. Vienna, VA has at least 2 IXPs
  17. Palo Alto, CA has at least 1 IXP
  18. Salt Lake City, UT has at least 1 IXP (and has XMission)
  19. Minneapolis, MN has at least 1 IXP
  20. Detroit, MI has at least 1 IXP

IXPs in the United States

Ashburn, VA

    1. Equinix Ashburn Exchange (Equinix Ashburn)
    2. LINX Northern Virginia (LINX)
    3. MAE East

Ashland, VA

    1. Richmond Virginia Internet Exchange (RVA-IX)

Atlanta, GA

    1. Digital Realty / Telx Internet Exchange (TIE)
    2. Equinix Internet Exchange Atlanta (Equinix Atlanta)
    3. Southeast Network Access Point (SNAP)

Austin, TX

    1. CyrusOne Internet Exchange (CyrusOne IX)

Billings, MT

    1. Yellowstone Regional Internet eXchange (YRIX)

Boston, MA

    1. Boston Internet Exchange
    2. Massachusetts eXchange Point (MXP)
    3. CoreSite – Any2 Boston

Buffalo, NY

    1. Buffalo Niagara International Internet Exchange (BNIIX)

Chicago, IL

    1. AMS-IX Chicago
    2. CyrusOne Internet Exchange (CyrusOne IX)
    3. Equinix Chicago Exchange (Equinix Chicago)
    4. Greater Chicago International Internet Exchange (GCIIX)
    5. United IX – Chicago (ChIX)
    6. CoreSite – Any2 Chicago
    7. MAE Central

Columbus, OH

    1. OhioIX

Dallas, TX

    1. CyrusOne Internet Exchange (CyrusOne IX)
    2. DE-CIX, the Dallas Internet Exchange (DE-CIX Dallas)
    3. Digital Realty / Telx Internet Exchange (TIE)
    4. Equinix Dallas Exchange (Equinix Dallas)
    5. MAE Central
    6. Megaport MegaIX Dallas (MegaIX Dallas)

Denver, CO

    1. CoreSite – Any2 Denver
    2. Interconnection eXchange Denver (IX-Denver)

Detroit, MI

    1. Detroit Internet Exchange (DET-IX)

Duluth, MN

    1. Twin Ports Internet Exchange (TP-IX)

Gillette, WY

    1. BigHorn Fiber Internet Exchange (BFIX)

Hagåtña, Guam

    1. Guam Internet Exchange (GU-IX)

Honolulu, HI

    1. DRFortress Exchange (DRF IX)
    2. Hawaii Internet eXchange (HIX)

Houston, TX

    1. CyrusOne Internet Exchange (CyrusOne IX)

Indianapolis, IN

    1. Midwest Internet Exchange (MidWest-IX – Indy)

Jacksonville, FL

    1. Jacksonville Internet Exchange (JXIX)

Kansas City, MO

    1. Kansas City Internet eXchange (KCIX)

Los Angeles, CA

    1. CENIC International Internet eXchange (CIIX)
    2. Equinix Los Angeles Exchange (Equinix Los Angeles)
    3. Los Angeles International Internet eXchange (LAIIX)
    4. MAE West
    5. Pacific Wave Exchange in Los Angeles and Seattle (PacificWave)
    6. CoreSite – Any2 California

Madison, WI

    1. Madison Internet Exchange (MadIX)

Manassas, VA

    1. LINX Northern Virginia (LINX)

Medford, OR

    1. Southern Oregon Access Exchange (SOAX)

Miami, FL

    1. Equinix Internet Exchange Miami (Equinix Miami)
    2. MAE East
    3. Miami Internet Exchange (MiamiIX)
    4. NAP of the Americas (NOTA)
    5. The South Florida Internet Exchange (FL-IX)
    6. CoreSite – Any2 Miami

Milwaukee, WI

    1. The Milwaukee IX (MKE-IX)

Minneapolis, MN

    1. Midwest Internet Cooperative Exchange (MICE)

Moffett Field, CA

    1. NGIX West

Nashville, TN

    1. Nashville Internet Exchange (NashIX)

New York, NY

    1. AMS-IX New York (AMS-IX NY)
    2. Big Apple Peering Exchange (BigApe)
    3. Digital Realty / Telx Internet Exchange (TIE)
    4. Equinix Internet Exchange New York (Equinix New York)
    5. Free NYIIX Alternative (NYCX)
    6. CoreSite – Any2 New York
    7. DE-CIX, the New York / New Jersey Internet Exchange (DE-CIX New York)
    8. New York International Internet eXchange (NYIIX)
    9. MAE East

Omaha, NE

    1. Omaha Internet Exchange (OmahaIX)

Palo Alto, CA

    1. Equinix Internet Exchange Palo Alto (Equinix Palo Alto)

Philadelphia, PA

    1. Philadelphia Internet Exchange (PHILAIX)

Phoenix, AZ

    1. Arizona Internet Exchange (AZIX)
    2. Digital Realty / Telx Internet Exchange (TIE)
    3. Phoenix Internet Exchange, LLC (PHX-IX)
    4. Phoenix IX
    5. CyrusOne Internet Exchange (CyrusOne IX)

Portland, OR

    1. Central Oregon Internet eXchange (COIX)
    2. Northwest Access Exchange, Inc. (NWAX)
    3. Oregon Internet Exchange (OIX)

Reno, NV

    1. Tahoe Internet Exchange (TahoeIX)

Reston, VA

    1. LINX Northern Virginia (LINX)
    2. MAE East
    3. CoreSite – Any2 NorthEast

Saint George, UT

    1. Southern Utah Peering Regional Network (SUPRnet)

Salt Lake City, UT

    1. Salt Lake Internet Exchange (SLIX)

San Antonio, TX

    1. CyrusOne Internet Exchange (CyrusOne IX)

San Diego, CA

    1. San Diego NAP (SD-NAP)

San Francisco, CA

    1. San Francisco Internet Exchange (SFIX)
    2. San Francisco Metropolitan Internet Exchange (SFMIX)

San Jose, CA

    1. AMS-IX Bay Area (AMS-IX BA)
    2. CoreSite – Any2 Northern California
    3. Equinix San Jose / Bay Area Exchange (Equinix San Jose)
    4. NASA Ames Internet eXchange (AIX)
    5. MAE West

San Juan, Puerto Rico

    1. Internet Exchange of Puerto Rico (IX.PR)
    2. Puerto Rico Bridge Initiative (PRBI-IX)

Seattle, WA

    1. Megaport MegaIX Seattle (MegaIX Seattle)
    2. Pacific Wave Exchange in Los Angeles and Seattle (PacificWave)
    3. Seattle Internet Exchange (SIX)
    4. Seattle Internet Exchange (9000 MTU) (SIX Seattle (Jumbo))
    5. Equinix Internet Exchange Seattle (Equinix Seattle)

Sterling, VA

    1. CyrusOne Internet Exchange (CyrusOne IX)

Tampa, FL

    1. Tampa Internet Exchange (TampaIX)
    2. Tampa Internet Exchange (TPAIX)

Tulsa, OK

    1. LiveAir Tulsa IX

Vienna, VA

  1. Equinix Internet Exchange Vienna, VA (Equinix Vienna (VA))
  2. MAE East

We’re back after a 6.5 day outage

Today (8/1/2017) @ 23:22 Pacific Time, we came back online after being down for 6 days and 12 hours. Our previous configuration, in which we ran two physically separate systems (1 x pfSense router and 1 x Tor router), is gone. The server running the Tor router started to experience hardware errors, as reported by kern.log. These errors were traced back to the system board, which eventually caused issues with the disk.

While all of this was happening, we were also down an admin, who was away at DEF CON. Juggling that, the desire to restore service, and limited funds (replacing the Tor router would have meant waiting for our RMA refund check before buying a new system board), we decided to virtualize our infrastructure.

We are now operating on a single server (12 Intel cores + 32GB RAM) with 1 x pfSense 2.3.4 VM and 1 x Tor Ubuntu 16.04 VM. The system is up and passing Tor traffic.

Tor router configuration v3

01 August 2017

We recently experienced a catastrophic hardware failure, which will be detailed in an upcoming blog post. We are back online today with new router IDs, and we have added two more routers for a total of six Tor routers.

We moved to Google Cloud DNS recently to be able to manage our PTR records for reverse DNS, since we now have our own IP scopes. We also moved our forward-lookup zone to Google Cloud DNS. Next on the agenda is setting up DNSSEC.

IPv6 PTR

1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.1.1.0.0.0.0.c.8.1.0.0.2.6.2.ip6.arpa.
1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.2.1.0.0.0.0.c.8.1.0.0.2.6.2.ip6.arpa.
1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.3.1.0.0.0.0.c.8.1.0.0.2.6.2.ip6.arpa.
1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.4.1.0.0.0.0.c.8.1.0.0.2.6.2.ip6.arpa.
1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.5.1.0.0.0.0.c.8.1.0.0.2.6.2.ip6.arpa.
1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.6.1.0.0.0.0.c.8.1.0.0.2.6.2.ip6.arpa.
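
Each name above is the nibble-reversed ip6.arpa form of one router address (2620:18c:0:1100::1 through 2620:18c:0:1600::1). Python's standard ipaddress module can generate these, which is a handy way to double-check the zone:

```python
import ipaddress

# Print the ip6.arpa PTR name for each of the six router addresses
for n in range(1, 7):
    addr = ipaddress.ip_address(f"2620:18c:0:1{n}00::1")
    print(addr.reverse_pointer + ".")
```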

DNS

2620:18c:0:1100::1 tor01.emeraldonion.org
2620:18c:0:1200::1 tor02.emeraldonion.org
2620:18c:0:1300::1 tor03.emeraldonion.org
2620:18c:0:1400::1 tor04.emeraldonion.org
2620:18c:0:1500::1 tor05.emeraldonion.org
2620:18c:0:1600::1 tor06.emeraldonion.org
23.129.64.11 tor01.emeraldonion.org
23.129.64.12 tor02.emeraldonion.org
23.129.64.13 tor03.emeraldonion.org
23.129.64.14 tor04.emeraldonion.org
23.129.64.15 tor05.emeraldonion.org
23.129.64.16 tor06.emeraldonion.org

Tor router #1

Nickname EmeraldOnion01
Address tor01.emeraldonion.org
ContactInfo abuse_at_emeraldonion_dot_org
OutboundBindAddressExit 23.129.64.11
OutboundBindAddressOR 23.129.64.11
DirPort 23.129.64.11:80
ORPort 23.129.64.11:443
ORPort [2620:18c:0:1100::1]:443
RelayBandwidthRate 18 MBytes
RelayBandwidthBurst 18 MBytes
IPv6Exit 1
ExitPolicy accept *:*
ExitPolicy accept6 *:*

Tor router #2

Nickname EmeraldOnion02
Address tor02.emeraldonion.org
ContactInfo abuse_at_emeraldonion_dot_org
OutboundBindAddressExit 23.129.64.12
OutboundBindAddressOR 23.129.64.12
DirPort 23.129.64.12:80
ORPort 23.129.64.12:443
ORPort [2620:18c:0:1200::1]:443
RelayBandwidthRate 18 MBytes
RelayBandwidthBurst 18 MBytes
IPv6Exit 1
ExitPolicy accept *:*
ExitPolicy accept6 *:*

Tor router #3

Nickname EmeraldOnion03
Address tor03.emeraldonion.org
ContactInfo abuse_at_emeraldonion_dot_org
OutboundBindAddressExit 23.129.64.13
OutboundBindAddressOR 23.129.64.13
DirPort 23.129.64.13:80
ORPort 23.129.64.13:443
ORPort [2620:18c:0:1300::1]:443
RelayBandwidthRate 18 MBytes
RelayBandwidthBurst 18 MBytes
IPv6Exit 1
ExitPolicy accept *:*
ExitPolicy accept6 *:*

Tor router #4

Nickname EmeraldOnion04
Address tor04.emeraldonion.org
ContactInfo abuse_at_emeraldonion_dot_org
OutboundBindAddressExit 23.129.64.14
OutboundBindAddressOR 23.129.64.14
DirPort 23.129.64.14:80
ORPort 23.129.64.14:443
ORPort [2620:18c:0:1400::1]:443
RelayBandwidthRate 18 MBytes
RelayBandwidthBurst 18 MBytes
IPv6Exit 1
ExitPolicy accept *:*
ExitPolicy accept6 *:*

Tor router #5

Nickname EmeraldOnion05
Address tor05.emeraldonion.org
ContactInfo abuse_at_emeraldonion_dot_org
OutboundBindAddressExit 23.129.64.15
OutboundBindAddressOR 23.129.64.15
DirPort 23.129.64.15:80
ORPort 23.129.64.15:443
ORPort [2620:18c:0:1500::1]:443
RelayBandwidthRate 18 MBytes
RelayBandwidthBurst 18 MBytes
IPv6Exit 1
ExitPolicy accept *:*
ExitPolicy accept6 *:*

Tor router #6

Nickname EmeraldOnion06
Address tor06.emeraldonion.org
ContactInfo abuse_at_emeraldonion_dot_org
OutboundBindAddressExit 23.129.64.16
OutboundBindAddressOR 23.129.64.16
DirPort 23.129.64.16:80
ORPort 23.129.64.16:443
ORPort [2620:18c:0:1600::1]:443
RelayBandwidthRate 18 MBytes
RelayBandwidthBurst 18 MBytes
IPv6Exit 1
ExitPolicy accept *:*
ExitPolicy accept6 *:*
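
The six blocks above differ only in the instance number and addresses, so when adding relays we could generate them from a template instead of editing each by hand. A sketch (the template mirrors our config; the helper name is ours, and nothing here is a Tor API):

```python
# Template mirrors the torrc blocks above; {n} is the instance number.
TEMPLATE = """\
Nickname EmeraldOnion{n:02d}
Address tor{n:02d}.emeraldonion.org
ContactInfo abuse_at_emeraldonion_dot_org
OutboundBindAddressExit 23.129.64.{host}
OutboundBindAddressOR 23.129.64.{host}
DirPort 23.129.64.{host}:80
ORPort 23.129.64.{host}:443
ORPort [2620:18c:0:{v6}::1]:443
RelayBandwidthRate 18 MBytes
RelayBandwidthBurst 18 MBytes
IPv6Exit 1
ExitPolicy accept *:*
ExitPolicy accept6 *:*
"""

def torrc_for(n: int) -> str:
    """Render the torrc for instance n (tor01 -> 23.129.64.11 / 1100::1, etc.)."""
    return TEMPLATE.format(n=n, host=10 + n, v6=f"1{n}00")

print(torrc_for(1))
```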

Starting the processes

sudo service tor@tor01 start
sudo service tor@tor02 start
sudo service tor@tor03 start
sudo service tor@tor04 start
sudo service tor@tor05 start
sudo service tor@tor06 start

Tor router configuration v2

22 July 2017

We are rearchitecting our network by eliminating the use of our ISP-provisioned /27 IP scope in order to utilize our ARIN-assigned /24. Doing so allows us to route across multiple networks with the same ASN, a requirement in order to use our ARIN-assigned IPv6 scope. For network simplification, we are also eliminating the use of NAT.

Tor router #1

Nickname EmeraldOnion01
Address tor01.emeraldonion.org
ContactInfo abuse_at_emeraldonion_dot_org
OutboundBindAddressExit 23.129.64.11
OutboundBindAddressOR 23.129.64.11
DirPort 23.129.64.11:80
ORPort 23.129.64.11:443
ORPort [2620:18c:0:1100::1]:443
RelayBandwidthRate 27.5 MBytes
RelayBandwidthBurst 27.5 MBytes
IPv6Exit 1
ExitPolicy accept *:*
ExitPolicy accept6 *:*

Tor router #2

Nickname EmeraldOnion02
Address tor02.emeraldonion.org
ContactInfo abuse_at_emeraldonion_dot_org
OutboundBindAddressExit 23.129.64.12
OutboundBindAddressOR 23.129.64.12
DirPort 23.129.64.12:80
ORPort 23.129.64.12:443
ORPort [2620:18c:0:1200::1]:443
RelayBandwidthRate 27.5 MBytes
RelayBandwidthBurst 27.5 MBytes
IPv6Exit 1
ExitPolicy accept *:*
ExitPolicy accept6 *:*

Tor router #3

Nickname EmeraldOnion03
Address tor03.emeraldonion.org
ContactInfo abuse_at_emeraldonion_dot_org
OutboundBindAddressExit 23.129.64.13
OutboundBindAddressOR 23.129.64.13
DirPort 23.129.64.13:80
ORPort 23.129.64.13:443
ORPort [2620:18c:0:1300::1]:443
RelayBandwidthRate 27.5 MBytes
RelayBandwidthBurst 27.5 MBytes
IPv6Exit 1
ExitPolicy accept *:*
ExitPolicy accept6 *:*

Tor router #4

Nickname EmeraldOnion04
Address tor04.emeraldonion.org
ContactInfo abuse_at_emeraldonion_dot_org
OutboundBindAddressExit 23.129.64.14
OutboundBindAddressOR 23.129.64.14
DirPort 23.129.64.14:80
ORPort 23.129.64.14:443
ORPort [2620:18c:0:1400::1]:443
RelayBandwidthRate 27.5 MBytes
RelayBandwidthBurst 27.5 MBytes
IPv6Exit 1
ExitPolicy accept *:*
ExitPolicy accept6 *:*

Tor router configuration v1

2 July 2017

We have started four Tor exit routers to saturate our unmetered-1Gbps, 10Gbps-burstable link.
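
A quick sanity check on those numbers (our arithmetic, not from any tool): 1 Gbps is 125 MBytes/s, and the four RelayBandwidthRate values below (50 + 20 + 20 + 20) sum to 110 MBytes/s, leaving a little headroom under the sustained rate:

```python
# Link capacity: 1 Gbps expressed in MBytes/s
link_mbytes = 1_000_000_000 / 8 / 1_000_000  # 125.0

# RelayBandwidthRate per router, in MBytes, from the configs below
rates = [50, 20, 20, 20]

print(sum(rates), "of", link_mbytes, "MBytes/s committed")
```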

DNS

216.176.186.131 tor01.emeraldonion.org
216.176.186.132 tor02.emeraldonion.org
216.176.186.133 tor03.emeraldonion.org
216.176.186.134 tor04.emeraldonion.org
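Forward resolution for the four names can be spot-checked once the records above are published. A sketch using the standard `getent` tool (results depend on your resolver seeing the live zone):

```shell
# Each lookup should return the matching 216.176.186.13x address
for i in 1 2 3 4; do
    getent hosts "tor0${i}.emeraldonion.org"
done
```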

Create instances

sudo tor-instance-create tor01
sudo tor-instance-create tor02
sudo tor-instance-create tor03
sudo tor-instance-create tor04

Tor router #1

sudo vim /etc/tor/instances/tor01/torrc
Nickname EmeraldOnion01
Address tor01.emeraldonion.org
ContactInfo abuse@emeraldonion.org
OutboundBindAddressExit 10.10.10.101
OutboundBindAddressOR 10.10.10.101
DirPort 216.176.186.131:80 NoListen
DirPort 10.10.10.101:80 NoAdvertise
ORPort 216.176.186.131:443 NoListen
ORPort 10.10.10.101:443 NoAdvertise
#ORPort [2620:18c:0:100::1]:443
RelayBandwidthRate 50 MBytes
RelayBandwidthBurst 50 MBytes
#IPv6Exit 1
ExitPolicy accept *:*
#ExitPolicy accept6 *:*

Tor router #2

sudo vim /etc/tor/instances/tor02/torrc
Nickname EmeraldOnion02
Address tor02.emeraldonion.org
ContactInfo abuse@emeraldonion.org
OutboundBindAddressExit 10.10.10.102
OutboundBindAddressOR 10.10.10.102
DirPort 216.176.186.132:80 NoListen
DirPort 10.10.10.102:80 NoAdvertise
ORPort 216.176.186.132:443 NoListen
ORPort 10.10.10.102:443 NoAdvertise
#ORPort [2620:18c:0:200::1]:443
RelayBandwidthRate 20 MBytes
RelayBandwidthBurst 20 MBytes
#IPv6Exit 1
ExitPolicy accept *:*
#ExitPolicy accept6 *:*

Tor router #3

sudo vim /etc/tor/instances/tor03/torrc
Nickname EmeraldOnion03
Address tor03.emeraldonion.org
ContactInfo abuse@emeraldonion.org
OutboundBindAddressExit 10.10.10.103
OutboundBindAddressOR 10.10.10.103
DirPort 216.176.186.133:80 NoListen
DirPort 10.10.10.103:80 NoAdvertise
ORPort 216.176.186.133:443 NoListen
ORPort 10.10.10.103:443 NoAdvertise
#ORPort [2620:18c:0:300::1]:443
RelayBandwidthRate 20 MBytes
RelayBandwidthBurst 20 MBytes
#IPv6Exit 1
ExitPolicy accept *:*
#ExitPolicy accept6 *:*

Tor router #4

sudo vim /etc/tor/instances/tor04/torrc
Nickname EmeraldOnion04
Address tor04.emeraldonion.org
ContactInfo abuse@emeraldonion.org
OutboundBindAddressExit 10.10.10.104
OutboundBindAddressOR 10.10.10.104
DirPort 216.176.186.134:80 NoListen
DirPort 10.10.10.104:80 NoAdvertise
ORPort 216.176.186.134:443 NoListen
ORPort 10.10.10.104:443 NoAdvertise
#ORPort [2620:18c:0:400::1]:443
RelayBandwidthRate 20 MBytes
RelayBandwidthBurst 20 MBytes
#IPv6Exit 1
ExitPolicy accept *:*
#ExitPolicy accept6 *:*
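The four torrc files above differ only in their index and addresses, so they can be generated from one loop instead of edited by hand. A minimal sketch; the output directory is hypothetical, and the per-instance RelayBandwidthRate/Burst lines are left out because they differ between tor01 (50 MBytes) and the others (20 MBytes), as are the commented-out IPv6 lines:

```shell
# Sketch: generate the four near-identical torrc files from one template
outdir=/tmp/torrc-gen
mkdir -p "$outdir"
for i in 1 2 3 4; do
    n=$(printf '%02d' "$i")
    cat > "$outdir/tor$n.torrc" <<EOF
Nickname EmeraldOnion$n
Address tor$n.emeraldonion.org
ContactInfo abuse@emeraldonion.org
OutboundBindAddressExit 10.10.10.10$i
OutboundBindAddressOR 10.10.10.10$i
DirPort 216.176.186.13$i:80 NoListen
DirPort 10.10.10.10$i:80 NoAdvertise
ORPort 216.176.186.13$i:443 NoListen
ORPort 10.10.10.10$i:443 NoAdvertise
ExitPolicy accept *:*
EOF
done
```

The generated files can then be reviewed and copied into `/etc/tor/instances/` with the bandwidth lines added per instance.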

Start instances

sudo systemctl start tor@tor01
sudo systemctl start tor@tor02
sudo systemctl start tor@tor03
sudo systemctl start tor@tor04
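The instances can also be enabled so they come back up after a reboot. A sketch, assuming the same `tor@` systemd template units:

```shell
# Enable each instance to start automatically at boot
for i in 01 02 03 04; do
    sudo systemctl enable "tor@tor${i}"
done
```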

Check logs

sudo journalctl --boot -u tor@tor01.service
sudo journalctl --boot -u tor@tor02.service
sudo journalctl --boot -u tor@tor03.service
sudo journalctl --boot -u tor@tor04.service

Tor changes + reloading

sudo service tor@tor01 reload
sudo service tor@tor02 reload
sudo service tor@tor03 reload
sudo service tor@tor04 reload
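Before reloading, a changed torrc can be checked for syntax errors; `tor --verify-config` exits non-zero on a bad file. A sketch, with the path assuming the tor-instance-create layout used above:

```shell
# Verify an instance's torrc before reloading the running relay
sudo tor --verify-config -f /etc/tor/instances/tor01/torrc
```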