Tor as infrastructure: challenges and opportunities

Infrastructure as a Service (IaaS) is commonly used within enterprise environments whereby organizations pay for someone else’s physical or virtual resources, including networking resources. Individuals also pay for IaaS when, for example, they need to deploy a VPS or VPN for personal use. From Gartner:

Infrastructure as a service (IaaS) is a standardized, highly automated offering, where compute resources, complemented by storage and networking capabilities are owned and hosted by a service provider and offered to customers on-demand.

 

Tor, the network of relays run by a distributed community of operators, is IaaS. What is unusual about this terminology is that the Tor network is distributed and that use of this networking resource is free. Companies like DuckDuckGo, Facebook, and the New York Times, along with every organization running SecureDrop, including the Associated Press, use this IaaS in order to best support their users’ privacy rights.

A problem that Tor network operators have always faced is that setting up and maintaining the network is not free. Tor is free-to-use IaaS on purpose: people and services need to be able to use the network without attribution in order for Tor to provide its specific privacy guarantees.

If privacy infrastructure operators had better funding, we would be in a better position to support larger infrastructure needs. For example, if Mozilla or Brave ever wanted to collect browser telemetry over Tor onion services to best support the privacy of their users, the Tor network would likely need to pivot towards larger and more stable network operators. In this article, we will look at some of the challenges and opportunities for operators of privacy infrastructure.

Challenges

Emerald Onion was created, in part, to be able to demonstrate a successful Transit Internet Service Provider that only supports privacy IaaS. It was designed to best leverage existing laws in the United States in tandem with operational designs that require privacy-focused security properties. There have been and continue to be serious challenges facing Emerald Onion along with any other organization that is dedicated to privacy IaaS.

IP transit service is exclusively a for-profit business

IP transit is a required part of running an ISP. Emerald Onion, Riseup, Calyx, Quintex, and others need to pay upstream providers for the physical transport of the encrypted packets that we transit as part of the Tor network. This transit service is expensive. For example, 1Gbps of service in a residential setting can cost around $80 per month in Seattle, while 1Gbps of service in a datacenter can easily cost $800 per month. This dramatic cost difference exists because of capitalism: it is presumed that a service provider in a datacenter environment is going to profit from the service. Upstream providers don’t care one bit that Emerald Onion is a 501(c)(3) not-for-profit supporting human rights.

Few options for trustworthy, open source hardware, particularly networking equipment

Emerald Onion is using general-purpose computing devices (currently low-power Intel Xeon D) with BSD operating systems. It is a priority for us to be using trustworthy compute infrastructure, so we are at least ensuring that the kernels and applications that we use are free/libre open source software. We hope to transition to free/libre open source hardware and firmware as soon as we can, but we also have to be concerned with compatibility and stability with HardenedBSD/FreeBSD, and the cost of this hardware. We know that options exist for free/libre open source hardware, but this is still a very new and maturing market. To further complicate this prioritized need for trustworthy compute infrastructure, Emerald Onion has particular interest in 10Gbps networking for both the LAN and WAN.

One day, we’d like to be able to support 40Gbps and 100Gbps; however, we are not aware of any free/libre open source hardware and firmware that supports 40Gbps or 100Gbps networking.

High cost of network redundancy

Our proof-of-concept work has focused on low-cost options, which means we do not currently have redundancy at our LAN or WAN layers. Network redundancy for Emerald Onion would, at minimum, entail having not one expensive IP transit link but two, ideally from different upstream providers, which means two edge routers and two links to each of our Internet Exchange Points. It would also mean adding redundant LAN switching, and all of this increases our rack space and power requirements. In short, we would have to more than double our recurring costs to achieve this level of infrastructure stability. While Tor itself is highly resistant to network changes, the more capacity that Emerald Onion and other large Tor operators provide, the greater the negative impact on the Tor network whenever we have to perform hardware, firmware, kernel, and application updates.

IPv4 scopes for exit operators

As an exit relay operator, Emerald Onion must own and operate its own IPv4 address space to efficiently handle abuse communications from other service providers and law enforcement. Additionally, relay operators who have their own Autonomous System Numbers (ASNs) and peer directly with other service providers at Internet Exchange Points (IXPs) also require their own IP space. The entire world ran out of IPv4 address scopes to hand out to new and existing service providers a few years ago, and this is a blocker for any new Tor operator working to achieve the same level of stability that Emerald Onion is working toward.

Tor exit relaying currently depends on IPv4 connectivity between Tor routers (middle-relay-to-exit-relay traffic, for example). To be given an exit flag, a relay should ideally have a static IPv4 address; dynamic IP addresses delay client discovery through the consensus by a few hours.
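As an illustrative sketch only (the addresses use RFC documentation prefixes and the nickname is a placeholder, not Emerald Onion’s configuration), the torrc directives involved might look like:

```
# Hypothetical torrc sketch; addresses and nickname are placeholders.
Nickname ExampleExit
Address 203.0.113.10               # static, RIR-assigned IPv4
ORPort 203.0.113.10:9001           # IPv4 ORPort, required for the exit flag today
ORPort [2001:db8::10]:9001         # an IPv6 ORPort can be advertised alongside it
ExitRelay 1
IPv6Exit 1                         # permit exiting to IPv6 destinations
```

Even with the IPv6 ORPort and IPv6 exiting enabled, the IPv4 address remains mandatory.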

Tor exit operators would not need their own Regional Internet Registry (RIR)-provisioned IPv4 address space if the exit flag could be given to IPv6-only operators, but this is not currently possible. ORPort connections (inter-Tor circuit connections from middle relays, for example) do not usually generate abuse, so leasing IPv4 space would be an easier option if IPv6-only exiting were possible.

One idea that Emerald Onion has had is that it may be possible to make proposals to large organizations, including universities, that are sitting on very large IPv4 scopes. We think that these organizations might be willing to donate small (/24) scopes to not-for-profit Tor network operators.

Opportunities

Surveillance and latency minimization

Seattle is home to a very large telecommunications hub called the Westin Building Exchange (WBE). We know that this building has National Security Agency (NSA) taps on I/O connections that likely facilitate traffic to regions like China and Russia. Additionally, the WBE hosts several of the Internet’s DNS root servers, some of which are part of the Seattle Internet Exchange (SIX).

Emerald Onion went through the process of securing our own ASN, IPv6, and IPv4 scopes from the American Registry for Internet Numbers (ARIN). We needed these to connect to the SIX. Connecting to the SIX means that we are physically and directly connected to as many as 280 other service providers. We made this a priority because direct peering, using the Border Gateway Protocol (BGP), minimizes the number of clear-net switches and routers that a Tor user’s exit traffic has to travel through to reach its final destination. Every switch or router that Tor traffic traverses is an opportunity for surveillance and adds latency.
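To illustrate what direct peering at an IXP involves, the sketch below shows an OpenBGPD-style configuration; every value (ASN, router-id, prefix, neighbor) is an RFC documentation value assumed for the example, not one of our actual peering sessions.

```
# Hypothetical bgpd.conf sketch; all numbers are documentation values.
AS 64496                           # our Autonomous System Number
router-id 192.0.2.1

prefix_v4="203.0.113.0/24"         # our RIR-assigned IPv4 scope
network $prefix_v4                 # announce it to peers

neighbor 192.0.2.254 {             # IXP route server on the peering LAN
        remote-as 64511
        descr "IXP route server"
        announce IPv4 unicast
}
```

Peering through a route server like this establishes direct paths to many participants with a single BGP session.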

This strategy for Tor exit router placement is also ideal where DNS is concerned. Because multiple DNS root servers are directly peered with Emerald Onion, this further minimizes a global persistent adversary’s ability to spy on what Tor users are doing.

Statistically, due to requirements in the Tor protocol, individual Tor circuits bounce around multiple countries before they exit the network. This means that a non-trivial amount of the traffic that Emerald Onion, or any other United States exit operator, facilitates comes from a middle relay outside the United States. At the same time, generally speaking, a non-trivial amount of Tor exit traffic is destined for American services like Akamai, Cloudflare, Facebook, Google, and DNS root servers. These two likelihoods, together, mean the following:

Tor exit traffic that is destined to service providers in the United States is best served, in terms of surveillance and latency minimization, by Tor exit operators that have exit relays connected to IXPs in datacenters along the coasts of the United States where undersea cables physically terminate, presuming that popular service providers like Akamai, Cloudflare, Facebook, Google, and DNS Root Servers are direct peers. This, in theory, minimizes the opportunity for American-sponsored traffic analysis, data retention, and surveillance, in addition to any other global persistent adversary who may have compromised network equipment within IXPs.

Emerald Onion has already compiled an initial list of IXPs around the United States. We continue to work on a list of qualities that make an IXP ideal for a Tor network operator:

  • Number of participants — This is important because every direct peering link (using BGP) with another service provider minimizes the opportunities for surveillance. For obvious reasons, the number of direct peers is as important as the popularity of said services.
  • Access to specific participants — This is important because, for example, peering agreements with large providers such as Akamai, Cloudflare, Facebook, Google, and DNS Root Servers minimize the opportunity for surveillance while minimizing latency.
  • Nonprofit and affordable — A large number of IXPs are for-profit and thus have high up-front and high recurring costs for connectivity, in addition to setup fees and recurring fees for copper or fiber maintenance.
  • Geo-location — This is important for, at least, location diversity and peer diversity; sites with direct connections to international undersea cables are focal points for the facilitators of global passive surveillance.
  • Prohibits network surveillance — The Seattle Internet Exchange, for example, has a stated policy that prohibits surveillance on peering links. One day, we hope that the SIX, and other public-benefit IXPs, will also publish a regular transparency report.

Funding

Emerald Onion has been in operation for 10 months. We wouldn’t exist, as we are today, without the generous startup grant of $5,000 from Tor Servers. We also would not still be around without the continuous donations from our Directors who personally donate as much as $350 each month. We currently require roughly $700 per month to operate, largely due to our service contract with our co-location provider who is also our upstream transit ISP.

Going back to the beginning of this article, the Tor network is a privacy-focused IaaS. Sustainability is a constant issue for Tor network operators, especially for operators who preemptively tackle legal and long-term operational challenges. We need help. There is no easy answer for funding. Grant writing and grant management are not trivial tasks, nor is sustaining a 501(c)(3) not-for-profit purely on part-time volunteer work. Emerald Onion is incredibly lucky to have a few people who regularly donate large amounts of money and time to keep the organization online, but this is not sustainable.

The operations model that Emerald Onion has created, however, is scalable if properly funded. If we were provided between $7,000 and $10,000 per month, we could multiply our capacity by a factor of 10. If we had a pool of funding that supported 10 independent Tor network operators in the United States (there are over 100 IXPs in the United States), we could dramatically bolster the capacity and stability of the Tor network while also minimizing network surveillance opportunities and network latency.

Conclusion

I hope that this article begins to shed light on the challenges facing privacy IaaS providers like the thousands of operators that make up the Tor network. Emerald Onion is going to continue to educate others on these topics, attempt to find and create solutions for these challenges, and continue to encourage hacker communities around the United States to build their own privacy-focused not-for-profit ISP.

Introducing gibson

Artwork by Mike Finch (CC BY 4.0)

 

As you may know, Emerald Onion systems run HardenedBSD. BSD systems in general, and HBSD in particular, provide numerous advantages to our team in operating secure and highly performant Tor relays. But BSD systems make up only a very small percentage of the Tor network. There are many similarities between BSD and Linux, with which many users may be more familiar, but the differences can be intimidating. We’re addressing this by launching gibson, a project to develop a suite of tools to address the needs of Tor service operators. The Tor network is more robust when it is diverse, and this is one way that we can encourage a more diverse Tor network and enhance our community.

It is important to say that while our initial focus is on BSD systems, our plan is to extend gibson to serve the Tor community regardless of platform. We’re starting with HBSD because it’s an obvious and natural choice for us; we believe in “dogfooding”, and we want you to be assured that the code we share is used by our team in real deployments. In our mind, it isn’t enough to make running Tor services easy; our tools must also help make services secure and reliable. At Emerald Onion, we do this by example.

What is gibson?

A generous description is that gibson is a no-dependency suite of cross-functional tools for creating and maintaining secure and robust Tor services. We currently support HardenedBSD systems, but plan to extend our support to FreeBSD, OpenBSD, and Linux in future releases.

We say that this is a generous description because we want our tools to take as light a touch as possible. To an experienced user, gibson may not appear to do much of anything at all. This is an intentional design decision. An experienced user might say, “Using gibson to do an update achieves the same outcome as just running these three commands I already know”. Truly, nothing would make us happier than this outcome. We want everyone in the Tor community to be an expert in the tools that make the network operate, from Tor itself to the operating system, hardware, and everything in between. That said, we hope that experienced users will continue to use gibson, and that they will propose new solutions and new functionality.

The goal of gibson isn’t to make difficult Tor management tasks trivial. Rather, the goal of gibson is to make Tor management tasks consistent. We believe that secure systems are built around reproducible, auditable processes. Most of our maintenance is simple and mundane, but a small configuration error or a missed security patch could be catastrophic to the anonymity and security of Tor users. We want to eliminate possible sources of human error and to share our best practices with the community.

We also believe that users should be able to adopt gibson on existing systems without onerous effort, and should also be able to walk away from it whenever they wish without any lock-in. We want our tools to promote the adoption of correct processes and best practices. We also hope that they will be educational. Users new to BSD systems, new to Linux, and new to Tor should be able to look at our code and with minimal effort be able to understand what it does and why.

Finally, we say that gibson is cross-functional because our solution space is defined by the needs of a secure and robust Tor deployment. We do not seek to replace virtualization and jail tools (for instance, bhyve, virtualbox, ezjail, iocell, etc.). We do not seek to replace disk encryption tools (geli, LUKS, etc.). We’re not replacing any web servers, either (nginx, Apache, Caddy, etc.). What we are doing is providing tools to streamline the implementation of these other projects into a complete solution which addresses the needs of Tor administrators.

What does gibson do now?

Currently, gibson updates and controls Tor services running in HBSD jails. A simple and mundane task, but one that we want to make sure is done consistently during each of our maintenance windows. Our initial release of gibson is version 0.1, which is derived from a handful of scripts currently used in maintenance of Emerald Onion systems.

  • 0.0.1 — Initial scripts used for Emerald Onion system maintenance
  • 0.1.0 — First release: the 0.0.1 scripts refactored into gibson; applies system and package updates in jails; starts, stops, and restarts Tor services in jails
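As a concrete illustration of how mundane these tasks are, the sketch below shows the kind of steps a tool like gibson wraps. This is a hypothetical example, not gibson’s actual code; the jail name and the use of pkg -j and jexec are our assumptions, and a dry-run mode lets the commands be reviewed before they touch a live relay.

```shell
#!/bin/sh
# Hypothetical sketch (not gibson's actual code) of jail maintenance steps.
# run() prints the command instead of executing it when DRY_RUN=1, so the
# sequence can be reviewed before it touches a live relay.
run() {
    if [ "${DRY_RUN:-0}" = "1" ]; then
        echo "$@"
    else
        "$@"
    fi
}

jail_update() {
    _jail="$1"
    run pkg -j "$_jail" update               # refresh the jail's package catalog
    run pkg -j "$_jail" upgrade -y           # apply package updates inside the jail
    run jexec "$_jail" service tor restart   # restart Tor inside the jail
}

DRY_RUN=1
jail_update tor-relay1
```

Running the script in dry-run mode prints the three commands in order, which is exactly the consistency-of-process argument: the value is not in the commands themselves but in never skipping or reordering them.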

Where can I get gibson?

We’re working on getting gibson into the HardenedBSD ports repository, and it should land there shortly.  It will take a little more time for binary packages to become available for users who prefer to use pkg.

In the meantime, gibson is available at our GitHub, as are the files you need to sideload it as a port. Detailed installation instructions are available there.

Roadmap (or: what will gibson do in the future?)

0.2 -> 0.5

  • Create and maintain template jail(s); clone templates into new jails and deploy tor services:
    • middle relays
    • exit relays
    • bridges
    • onion services (with nginx, initially)
  • Create and manage geli-encrypted ZFS pools
  • Initial creation of encrypted providers and pool members from specified devices
  • Replacement of devices due to failure or capacity expansion
  • Good ideas suggested (or implemented and submitted) by our community!

0.6 -> 0.9

  • Support for FreeBSD systems
  • Good ideas suggested (or implemented and submitted) by our community!

1.5+

  • Support for Linux systems
  • Support for non-geli encrypted filesystems
  • Support for non-ZFS storage pools
  • Good ideas suggested (or implemented and submitted) by our community!

House Style

gibson is always written in all-lowercase.

Logo

The gibson logo was generously donated by Mike Finch and is licensed by Emerald Onion as Creative Commons Attribution 4.0 International.

DNS for Tor Exit Relaying

One of the major pieces of infrastructure run by Tor exit nodes is DNS. DNS is the system that translates human-readable names, like emeraldonion.org, into IP addresses. In Tor, the exit node is where this translation takes place. As such, DNS has been recognized as one of the places where centralization or attacks could affect the integrity of the Tor network. To serve our users well, we want to mitigate the risks of compromise and surveillance as we resolve names on their behalf. These principles direct how we structure our DNS resolution.

 

Emerald Onion currently uses pfSense, which uses Unbound for DNS. Per our architectural design, we run our own recursive DNS server, meaning we query all the way up to the root name servers for DNS resolution and avoid the caches of any upstream ISPs offering us DNS resolvers. This also means we query authoritative name servers directly, minimizing the number of additional parties able to observe domain resolutions coming from our users.

General Settings:

  • We use DNS Resolver and disable the DNS Forwarder
  • Only bind the DNS listener to the NIC the Tor server is connected to and localhost.
  • Only bind the DNS outgoing interface to the NIC that carries our public IP. If you use BGP, do NOT bind DNS to the interface used to connect to BGP peers.
  • Enable DNSSEC support
  • Disable DNS Query Forwarding
  • We don’t use DHCP; leave DHCP Registration disabled
  • The same goes for Static DHCP
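Outside the pfSense GUI, the equivalent unbound.conf directives would look roughly like the sketch below; the interface addresses are placeholders, not our real ones.

```
# Hypothetical unbound.conf sketch; addresses are placeholders.
server:
    interface: 127.0.0.1               # localhost listener
    interface: 10.0.0.1                # NIC the Tor server is connected to
    outgoing-interface: 203.0.113.10   # interface carrying our public IP
    auto-trust-anchor-file: "/var/unbound/root.key"   # DNSSEC validation
    # No forward-zone is configured, so unbound performs full recursion
    # from the root name servers instead of forwarding to an upstream.
```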

Custom options:

prefer-ip6: yes
hide-trustanchor: yes
harden-large-queries: yes
harden-algo-downgrade: yes
qname-minimisation-strict: yes
ignore-cd-flag: yes

The most important of these options is qname minimisation, which means that when we perform a resolution like www.emeraldonion.org, we ask the root name servers only who controls .org, we ask the .org name servers only who controls emeraldonion.org, and we ask emeraldonion.org’s name servers only for the IP of www.emeraldonion.org. This helps protect our resolutions from being swept into the various “passive DNS” feeds that have been commoditized around the network.

Of the other custom options, the bulk are related to DNSSEC security.

 

Advanced Settings:

  • Hide Identity
  • Hide Version
  • Use Prefetch Support
  • Use Prefetch DNS Keys
  • Harden DNSSEC data
  • Leave the tuning values alone for now (Things like cache size, buffers, queues, jostle, etc)
  • Log Level is 1, which is pretty low.
  • Leave the rest alone.

Hiding the identity and version helps prevent the leakage of information that could be used in attacks against us. Prefetch Support changes how the DNS server fetches records: without it, the server fetches a record only at the time of a request; with it, the server refreshes entries as each record’s TTL expires, which further obfuscates requests and makes specific Tor request-correlation attacks harder.
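In raw unbound.conf terms, these Advanced Settings correspond roughly to the following directives:

```
# Sketch of the unbound.conf equivalents of the settings above.
server:
    hide-identity: yes            # refuse id.server / hostname.bind queries
    hide-version: yes             # refuse version.server / version.bind queries
    prefetch: yes                 # refresh popular records before their TTL expires
    prefetch-key: yes             # fetch DNSKEY records early during validation
    harden-dnssec-stripped: yes   # require DNSSEC data for trust-anchored zones
    verbosity: 1                  # low log level
```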

Access Lists:

We don’t use Access Lists, but if you want to work with them, that should be fine; just keep them in mind when troubleshooting.

OTF Concept Note

What is your idea?

Emerald Onion is a recently founded ISP/community non-profit organization based in Seattle seeded by a Tor Servers grant and personal donations. Our goal is to expand the privacy community by lowering the cost and learning curve for community organizations in the US to operate infrastructure.

Existing organizations in this space, Calyx and Riseup, have succeeded in rapidly becoming focal points for the community, but are inherently difficult to scale and are most effective in their local geographic communities. Thus, while Riseup has excelled at providing technical software, there is no easy path to establishing similar organizations. We want to change that!

In beginning the process of establishing ourselves, we have been documenting every step of the way. We are already running multiple unfiltered Tor exit nodes, and have both legally vetted abuse letters and educational material for law enforcement published. Now, we want to take this knowledge and export it to other communities. In particular, we want to focus on areas near Internet Exchange Points that are in at least 49 metro locations around the US, which can provide economical pricing and good connectivity for Internet traffic. We hope to spur wider deployment of Tor, onion services, and surrounding privacy technologies by helping other local groups follow our path.

Part of this mission is becoming a focal point for privacy operations work ourselves. Working with researchers at Berkeley, we are contributing to easier handling of abuse complaints. With academics in the Tor community, we are piloting new exit strategies to limit the impact of censorship. With the open source community, we are increasing platform diversity through the use and documentation of HardenedBSD as an alternative software stack. Our trajectory as an organization will include partnering with these organizations to improve usage and deployability, education of other network operators, and increasing our network presence and capacity.

What are your goals / long-term effects?

Our goal is that most major communities in the US have local organizations focused on operating privacy technology. Hackerspaces today are a mixed bag, and are often centered around physical rather than digital creation. Despite the presence of major IXPs, there are many urban centers today without community-driven organizations working to bolster privacy. We envision providing the groundwork and enthusiasm to support a network of ISPs around the country that can rectify this situation. Getting these entities in place will be important underlying infrastructure for future decentralized and federated networks to follow the model of Tor today.

Emerald Onion will directly evangelize operational best practices as it matures as a community organization and fits into the Seattle privacy community. To take advantage of Seattle’s position as one of the largest exchange points in the world, Emerald Onion will actively seek out peering agreements and aim to transit a significant amount of Tor traffic. Through our ISP operations, we hope for two primary longer term effects: First, that we are able to disseminate knowledge of peering agreements to make it significantly easier for other entities to understand how to enter into these negotiations. Second, that we can help Tor and other privacy enhancing networks gain capacity, reliability, and strategies for resilience to censorship. These technologies have focused on their software properties, but there are significant operational and networking challenges that need to be solved in tandem. We believe entities like Emerald Onion are the right complement to help privacy technologies succeed.

How will you do it?

Concretely, Emerald Onion will work to bring at least 10 new, independent, privacy-focused ISPs into existence within the next 12 months. We will also speak at conferences and use our existing communication pathways to advertise and publish our work. We will work to find other supporters and help them establish their own organizations. In addition, we will work to make shared support networks, like legal funds, peering databases, and abuse systems, more accessible to these community groups.

Specifically, we will focus on the following areas of capacity:

  1. Funding and organizational stability strategies
  2. Nonprofit ISP incorporation setup and management
  3. Data center co-location setup and management
  4. ARIN registration and AS and IP scope management
  5. IP transit setup and management
  6. BGP setup and management
  7. Peering agreement setup and management
  8. Legal response setup and management

To stabilize ourselves as a focal point of the Seattle privacy community, Emerald Onion will continue to develop both its own sustainability model, and its infrastructure.

We expect to receive 501(c)(3) status by the end of the year, and have already begun soliciting donations for our general operations. We have had initial success in approaching local members of the community to contribute the cost of one relay in exchange for “naming rights” to that node. We believe direct community contributions can provide sustainability, and we will complement this income stream with grant funding for growth.

We will also continue to increase our network presence to improve our fault tolerance and gain access to more network peers. Higher capacity will allow us to provide incubation for a larger range of privacy enhancing technologies.

Who is it for?

Emerald Onion is not just for the ~60 million monthly Tor users or the Seattle privacy community. We are not just a testing ground for encountering and solving operational issues in the deployment of privacy technologies. Emerald Onion is strategically developing as a model steward of privacy networks that is focused on quality and integrity. Our actions and relationships further legitimize Tor within communities that operate the backbones of the Internet and will help normalize the use of Tor for business-driven service providers.  We will continue to be an inspiration for community groups and other ethically-conscious ISPs alike.

Emerald Onion’s day-to-day work at present focuses on existing and new Tor router operators, who, with the organizations they create, will immediately impact public perception. In Emerald Onion’s short existence, we have made direct, personal connections with at least 50 professional network administrators, datacenter operators, and Internet service providers. Imagine that happening in every major IXP community around the United States.

Emerald Onion has paid for professional legal services and has already published our verified Legal FAQ and abuse complaint responses that are valid within the United States. Similarly, the organization is working with academics to better understand the operational reality of abuse complaints and to understand opportunities for making use of the IP space. These services benefit the larger privacy community both operationally and as an incubator for projects.

What is the existing community?

The Tor relay community is already strong, but lacks strong US-based advocacy for growth. In Europe, TorServers.net has evolved into a grant-giving organization, which is able to provide advice and financial support to help new relays get started, but is not well positioned to support US-based relays. In Canada, Coldhak runs a valuable relay, but has not attempted to export its knowledge to external entities.

In the US, the largest relay presences come from Riseup.net and Calyx networks. Riseup is focused on services like email and VPNs in tandem with important education. This is valuable work, but does not extend to directly advocating for new groups to enter the ISP space. Calyx is supported through a cellular ISP model focusing on end-users but does not focus on supporting new relay operators.

Emerald Onion aims to fill this gap through direct advocacy to guide and support new relay operators and encourage the existence and creation of privacy supporting entities in a diverse set of IXPs around the country.

Complementary Efforts?

The Tor Project itself provides a basic level of support for new entities, particularly technical support. In addition to a wide-reaching and engaged community, the tor-relays mailing list provides a valuable community-wide support network between operators. The EFF has been a long-time supporter of the legal aspects of relay operation and has contributed several legal papers helping to establish the legal protections of Tor exit operation, along with providing counsel when new legal issues arise. Still, new entities establishing Tor exit nodes in the US face thousands of dollars in legal fees to properly prepare the needed form letters for abuse handling, and a tricky navigation of legal guidelines to establish themselves as legal entities able to respond to complaints without fear of retribution.

Emerald Onion hopes to fill these gaps by defining clear direction, freely published in the public domain, so that new operators don’t need to duplicate work that we’ve already performed. Building a shared legal defense fund and sharing how to navigate data center costs and contracts will allow groups to form with much less risk or uncertainty.

Why is it needed?

In building Emerald Onion so far, we have already found that many of the steps we are taking are undocumented, or rely on verbally communicated lore. That situation is not sustainable, and cannot scale or significantly improve the current state of the world.

More organizations are needed that focus on Internet privacy the same way hackerspaces have focused on hardware and technical development. Internet issues are inherently rooted in being part of the Internet, and becoming part of it has so far been a high hurdle for community groups. We believe that this hurdle needs to be lowered.

Without active development of these entities, we will continue to see even more centralization of the Internet and continued erosion of neutrality. Retaining a community presence in Internet operations is a key underlying infrastructure that we strongly believe has the potential to change the future development of the Internet.

 

Tor on HardenedBSD

In this post, we’ll detail how we set up Tor on HardenedBSD. We’ll use HardenedBSD 11-STABLE, which ships with LibreSSL as the default crypto library in base and in ports. The vast majority of Tor infrastructure nodes run Linux and OpenSSL. Emerald Onion believes running HardenedBSD will help improve the diversity and resiliency of the Tor network. Additionally, running HardenedBSD gives us peace of mind due to its expertly crafted, robust, and scalable exploit mitigations. Together, Emerald Onion and HardenedBSD are working towards a safer and more secure Tor network.

This article should be considered a living document. We’ll keep it up-to-date as HardenedBSD and Emerald Onion evolve.

Initial Steps

Downloading and installing HardenedBSD 11-STABLE is simple. Navigate to the latest build and download the installation media that suits your needs; the memstick image is suited to USB flash drives. Boot the installation media.

Then follow the installer prompts. Sample screenshots are provided below:

  1. Select Install:
  2. Select your keymap. If you use a standard US English keyboard, the default is fine:
  3. Choose a hostname:
  4. Select the distribution sets to install:
  5. Choose your filesystem. For purposes of this article, we’ll use ZFS for full-disk encryption:
  6. Selecting the Pool Type will allow you to configure your ZFS pool the way you want. We will just use a single disk in this article:
  7. Since we’re using a single disk, we’ll select the Stripe option:
  8. Select the disks to use in the pool. Only a single disk for us:
  9. After selecting the disks, you’ll go back to the original ZFS setup menu. We’ve made a few changes (Encrypt Disks, Swap Size, Encrypt Swap):
  10. Review the changes:
  11. Set the password on your encrypted ZFS pool:
  12. Validate the password:
  13. Encrypted ZFS will initialize itself:
  14. HardenedBSD will now install distribution sets:
  15. Set the root password:
  16. If you want to set up networking, select the network device to configure. In this article, we’ll set up a dynamic (DHCP) network configuration:
  17. We want to use IPv4:
  18. We want to use DHCP:
  19. It will try to acquire a DHCP lease:
  20. At Emerald Onion, we put IPv6 first. However, in this example article, we won’t use IPv6 as it’s not currently available. So we’ll choose no when prompted to set up IPv6:
  21. Ensure the DNS information is correct and make any changes if needed:
  22. It’s now time to choose the system timezone. Select the region:
  23. We chose America. We’ll choose United States for the country next:
  24. Finally, we’ll choose the actual timezone:
  25. Confirm the timezone:
  26. Because we use NTP, we’ll skip setting the date:
  27. We’ll also skip setting the time:
  28. Select the services to start at boot:
  29. Select the system hardening options. HardenedBSD sets options one through five by default, so there’s no need to set them here.
  30. We will go ahead and add an unprivileged user. Make sure to add the user to the “wheel” group for access to use the su program:
  31. Set the user’s details:
  32. HardenedBSD is now installed! Exit the installer. The installer will do things in the background so there may be some delay between exiting and the next prompt:
  33. We don’t want to make further modifications to the installation prior to rebooting:
  34. Go ahead and reboot:

The installation is now complete!

Installing Tor

Installing Tor is simple, too. Once HardenedBSD is installed and you’ve logged in, run the following command:

# pkg install tor

The Tor package on HardenedBSD, as on upstream FreeBSD, currently ships with the stock, unmodified configuration file, located at /usr/local/etc/tor/torrc. Out of the box, Tor logs nothing beyond its initial startup messages. Edit the configuration file to suit your needs; see the tor(1) manpage for all available configuration options.
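As a starting point, a relay operator might sketch a torrc along these lines. The nickname, contact, and port values below are placeholders for illustration, not our production configuration:

```
# hypothetical starting torrc (all values are placeholders)
Nickname MyRelayNickname
ContactInfo operator_at_example_dot_org
Log notice file /var/log/tor/notices.log
ORPort 443
ExitRelay 0
SocksPort 0
```

Setting ExitRelay 0 keeps the relay non-exit while you are testing; SocksPort 0 disables the client-side SOCKS listener, which a dedicated relay does not need.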

In our setup, Tor listens on TCP ports 80 and 443 as an unprivileged user. We need to tell HardenedBSD to allow non-root users to bind to ports that traditionally require root privileges:

# echo 'net.inet.ip.portrange.reservedhigh=0' >> /etc/sysctl.conf
# service sysctl start

Multi-Instance Tor

At Emerald Onion, we run multiple instances of Tor on the same server. This allows us to scale Tor to our needs. The following instructions detail how to set up multi-instance Tor. The same instructions can be used for single-instance Tor.

We gave our instances simple names: instance-01, instance-02, instance-03, and so on. Each instance has its own configuration file, located at /usr/local/etc/tor/torrc@${instance_name}. We first set up a template config file:

Nickname EmeraldOnion%%INSTANCE%%
Address tor01.emeraldonion.org
ContactInfo abuse_at_emeraldonion_dot_org
Log notice file /var/log/tor/instance-%%INSTANCE%%/notices.log
OutboundBindAddressExit %%IP4ADDR%%
OutboundBindAddressOR %%IP4ADDR%%
DirPort %%IP4ADDR%%:80
ORPort %%IP4ADDR%%:443
ORPort %%IP6ADDR%%:443
RelayBandwidthRate 24 MBytes
RelayBandwidthBurst 125 MBytes
MyFamily %%FAMILY%%
IPv6Exit 1
ExitPolicy accept *:*
ExitPolicy accept6 *:*
SocksPort 0

The next script installs the appropriate config file based on the above template. Some things are sanitized. Shawn, who wrote the script, is a fan of zsh.

#!/usr/local/bin/zsh

ninstances=5

family=""

for ((i=1; i <= ${ninstances}; i++)); do
	instance=$(printf '%02d' ${i})

	family=""
	for ((k=1; k <= ${ninstances}; k++)); do
		[ ${k} -eq ${i} ] && continue
		[ ${#family} -gt 0 ] && family="${family},"
		family="${family}EmeraldOnion$(printf '%02d' ${k})"
	done

	sed -e "s/%%INSTANCE%%/${instance}/g" \
		-e "s/%%IP4ADDR%%/192.168.1.$((${i} + 10))/g" \
		-e "s/%%IP6ADDR%%/\[fe80::$((${i} + 10))\]/g" \
		-e "s/%%FAMILY%%/${family}/g" \
		tmpl.config > /usr/local/etc/tor/torrc@instance-${instance}
	mkdir -p /var/log/tor/instance-${instance}
	chown _tor:_tor /var/log/tor/instance-${instance}
	chmod 700 /var/log/tor/instance-${instance}
done
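The inner loop that builds the MyFamily string can be exercised on its own in portable sh. This sketch (the instance count and nicknames here are illustrative) prints the family list that instance 1 of 3 would receive:

```shell
#!/bin/sh
# Build the MyFamily value for instance i out of ninstances,
# skipping the instance's own nickname (same logic as the zsh loop above).
ninstances=3
i=1
family=""
k=1
while [ "${k}" -le "${ninstances}" ]; do
	if [ "${k}" -ne "${i}" ]; then
		# comma-separate entries after the first one
		[ -n "${family}" ] && family="${family},"
		family="${family}EmeraldOnion$(printf '%02d' "${k}")"
	fi
	k=$((k + 1))
done
echo "${family}"
```

Running it prints EmeraldOnion02,EmeraldOnion03: every nickname except the instance’s own, which is exactly what MyFamily needs for each generated torrc.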

We then instructed the Tor rc script not to run the default instance of Tor:

# sysrc tor_disable_default_instance=YES

Then we tell the rc system which Tor instances we want to run and set Tor to start at boot:

# sysrc tor_instances="instance-01 instance-02 instance-03 instance-04 instance-05"
# sysrc tor_enable=YES

Then we start Tor. The first time the Tor rc script starts Tor, it will create the data and logging directories for you with the proper permissions.

# service tor start

Keeping HardenedBSD and Tor Up-To-Date

Updating HardenedBSD is simple with hbsd-update. We publish updates for base periodically. Feel free to use hbsd-update as often as you’d like to check for updates to the base operating system.

For example:

# hbsd-update
# shutdown -r now

To update your packages, including Tor, use:

# pkg upgrade

Tor Service Management Basics

The tor rc script uses SIGINT when shutting Tor down, which causes Tor to shut down ungracefully, immediately halting connections from clients. Instead of using the traditional service tor stop command, send SIGTERM directly to the instance you wish to stop.

# service tor status instance-01
tor is running as pid 70918.
# kill -SIGTERM 70918

If you’d like to stop all instances in a graceful way at the same time:

# killall -SIGTERM tor

In a multi-instance setup, you can tell the service command which instance you want to control by appending the instance name (the portion after the @ symbol of the torrc file) at the end of the command. For example, to reload the config file for instance-01, issue the following command:

# service tor reload instance-01

If you want to reload the config file for all instances, simply remove the instance name from the above command. The rc script will issue the reload command across all instances.

If you’d like to look at an instance’s log file, you can use the tail command:

# tail -f /var/log/tor/instance-01/notices.log

Future Work

In the future, we would like to further harden our Tor setup by having each instance deployed in its own HardenedBSD jail. Once that is complete, we will document and publish the steps we took.

Emerald Onion’s BGP Setup

This is a walk-through of our current peers and our BGP setup.

Special thanks to DFRI, Paul English, Seattle Internet Exchange, and Theodore Baschak for your time and patience!

Current Peers

180 peers via the SIX route servers, 12 direct peers via the SIX, and 1 transit peer:

6456   - Altopia Corporation
13335  - CloudFlare, Inc.
395823 - doof.net
36459  - Github
6939   - Hurricane Electric
57695  - Misaka Network LLC
3856   - Packet Clearing House
42     - WoodyNet (Also Packet Clearing House)
23265  - Pocketinet Communications, Inc.
16652  - Riseup Networks
33108  - Seattle Internet Exchange*
64241  - Wobscale Technologies, LLC
23033  - WowRack**
10310  - Yahoo! Inc.

Updated 9/7/2017

* The Seattle Internet Exchange (SIX) peer is for Route Servers
** WowRack is our current transit provider.

To see a list of all peers through the route servers:

BGP Setup

Since we currently use pfSense, we use openbgpd to peer with other Autonomous Systems.

In order to accomplish this, there are a few prerequisites:

  1. An AS Number (ASN). Check out the list of Regional Internet Registries (RIR) for your respective geographical location on getting your ASN and Direct Allocation of IP Addresses (IPv6 & IPv4). They are listed at the bottom in the External Resources section of this page.
  2. If peering with an Internet Exchange Point (IXP), you’ll need a dedicated IP address from them in order to peer (both IPv6 & IPv4).
  3. Install the openbgpd package in pfSense (System > Package Manager > Available Packages) and then enter OpenBGPD.
  4. Submit a Letter of Agency (LOA) to your transit provider so they can announce your ASN, and thus your IP space, upstream.
  5. When switching from a typical router config to that of a BGP router, there are some fundamental changes in architecture that are required. Take a look at our Conversion Article here: https://emeraldonion.org/eo-pfsense-conversion-plan/

A fundamental aspect of this setup is touched on in the conversion plan linked in step 5. In a typical router setup, the WAN links have default gateways; when setting up or switching to BGP, default gateways are not used and must be removed from the NIC configuration. If you want your transit provider to be your default route, ask them to advertise that route to you, and you will receive the 0.0.0.0/0 route through BGP. In our case, our transit provider is WowRack (AS23033) and they advertise the default route to us. The other ASNs we peer with do not, and it is BGP’s job to select the correct route based on AS path length.
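As a sketch, limiting a transit neighbor to just the default route could look something like the following bgpd.conf fragment. The neighbor address 192.0.2.1 is a documentation placeholder, not one of our peers:

```
# hypothetical filter: accept only the default route from a transit neighbor
deny from 192.0.2.1
allow from 192.0.2.1 prefix 0.0.0.0/0
```

Because OpenBGPD’s last matching rule wins, the specific allow overrides the broader deny for 0.0.0.0/0 only.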

We found that after installing the openbgpd package in pfSense, it is best to just use the raw config tab (Services > OpenBGPD > Raw config). The issue we ran into is that after filling out the wizard, we needed to make some changes; making them through the wizard didn’t update the raw config, which is what the service actually reads (bgpd.conf). So now we manage it through the raw config only.

 

Our BGP Config

At a high level, there are 3 major parts to the config:

Router Config

This covers the ASN, router ID, network info, and options (like fib-update and holdtime).

Groups and Neighbors

This section contains a number of groups, each with one or more neighbors. A group represents a single AS; its neighbors are that AS’s routers, and a group often contains two neighbors for redundancy.

We highly recommend peering with your local Internet Exchange’s (IX) route servers. This is an easy way to peer with many ASNs without having to set up direct peering; route servers are not, however, a substitute for direct peering. When peering with route servers, make sure the neighbor sections of that group in bgpd.conf include “enforce neighbor-as no”, so that bgpd will accept routes from ASNs that differ from the route servers’ own peering ASN.

Filtering Rules

This is how we allow or deny routes from our peers. We first deny everything, then allow our peers, then deny specific networks such as martians (RFC 1918 space and other bogons); because OpenBGPD evaluates filter rules last-match-wins, those final denies override the earlier allows.

We recently made some changes to this section to guard against some poor practices seen in BGP configs. One change is to append “inet prefixlen 8 - 24” for IPv4 and “inet6 prefixlen 16 - 48” for IPv6 to the end of the allow from and allow to statements. This states that we will only accept networks with a size of /8 through /24 (IPv4) and /16 through /48 (IPv6).

We also updated the bogon network list per the standard OpenBGPD example config. These networks aren’t meant for Internet traffic, so we filter them out.

bgpd.conf

AS 396507

fib-update yes
holdtime 90

router-id 206.81.81.158

# IPv4 network
network 23.129.64.0/24
# IPv6 network
network 2620:18C::/36

#### IPv4 neighbors ####
group "AS-WOWRACK-Transit-v4" {
	remote-as 23033
	neighbor 216.176.186.129 {
		descr "WOW_trans_rs1v4"
		announce self
		local-address 216.176.186.130
		max-prefix 1000000
}
}
group "AS-SIXRSv4" {
	remote-as 33108
	neighbor 206.81.80.2 {
		descr "SIXRS_rs2v4"
		announce self
		local-address 206.81.81.158
		enforce neighbor-as no
		max-prefix 200000
}
	neighbor 206.81.80.3 {
		descr "SIXRS_rs3v4"
		announce self
		local-address 206.81.81.158
		enforce neighbor-as no
		max-prefix 200000
}
}
group "AS-HURRICANEv4" {
	remote-as 6939
	neighbor 206.81.80.40 {
		descr "HE_rs1v4"
		announce self
		local-address 206.81.81.158
		max-prefix 152000
}
}
group "AS-ALTOPIAv4" {
	remote-as 6456
	neighbor 206.81.80.10 {
		descr "ALT_rs1v4"
		announce self
		local-address 206.81.81.158
		max-prefix 20 restart 30
}
	neighbor 206.81.81.41 {
		descr "ALT_rs2v4"
		announce self
		local-address 206.81.81.158
		max-prefix 20 restart 30
}
}
group "AS-POCKETINETv4" {
	remote-as 23265
	neighbor 206.81.80.88 {
		descr "POK_rs1v4"
		announce self
		local-address 206.81.81.158
		max-prefix 600
}
}
group "AS-DOOFv4" {
	remote-as 395823
	neighbor 206.81.81.125 {
		descr "DOOF_rs1v4"
		announce self
		local-address 206.81.81.158
		max-prefix 5
}
}
group "AS-PCHv4" {
	remote-as 3856
	neighbor 206.81.80.81 {
		descr "PCH_rs1v4"
		announce self
		local-address 206.81.81.158
		max-prefix 600
}
}
group "AS-PCHWNv4" {
	remote-as 42
	neighbor 206.81.80.80 {
		descr "PCHWN_rs1v4"
		announce self
		local-address 206.81.81.158
		max-prefix 600
}
}
group "AS-WOBv4" {
	remote-as 64241
	neighbor 206.81.81.87 {
		descr "WOB_rs1v4"
		announce self
		local-address 206.81.81.158
		max-prefix 5
}
}
group "AS-GOOGv4" {
	remote-as 15169
	neighbor 206.81.80.17 {
		descr "GOOG_rs1v4"
		announce self
		local-address 206.81.81.158
		max-prefix 15000
}
}
group "AS-MISAKAv4" {
	remote-as 57695
	neighbor 206.81.81.161 {
		descr "MISAKA_rs1v4"
		announce self
		local-address 206.81.81.158
		max-prefix 200
}
}
group "AS-RISUPv4" {
	remote-as 16652
	neighbor 206.81.81.74 {
		descr "RISUP_rs1v4"
		announce self
		local-address 206.81.81.158
		max-prefix 20
}
}
group "AS-AKAMAIv4" {
	remote-as 20940
	neighbor 206.81.80.113 {
		descr "AKAMAI_rs1v4"
		announce self
		local-address 206.81.81.158
		max-prefix 200
}
}
group "AS-CoSITv4" {
	remote-as 3401
	neighbor 206.81.80.202 {
		descr "CoSIT_rs1v4"
		announce self
		local-address 206.81.81.158
		max-prefix 10
}
}
group "AS-CLDFLRv4" {
	remote-as 13335
	neighbor 206.81.81.10 {
		descr "CLDFLR_rs1v4"
		announce self
		local-address 206.81.81.158
		max-prefix 1000
}
}
group "AS-DYNv4" {
	remote-as 33517
	neighbor 206.81.81.121 {
		descr "DYN_rs1v4"
		announce self
		local-address 206.81.81.158
		max-prefix 400
}
}
group "AS-FCBKv4" {
	remote-as 32934
	neighbor 206.81.80.181 {
		descr "FCBK_rs1v4"
		announce self
		local-address 206.81.81.158
		max-prefix 200
}
	neighbor 206.81.80.211 {
		descr "FCBK_rs2v4"
		announce self
		local-address 206.81.81.158
		max-prefix 200
}
}
group "AS-GITHUBv4" {
	remote-as 36459
	neighbor 206.81.81.89 {
		descr "GITHUB_rs1v4"
		announce self
		local-address 206.81.81.158
		max-prefix 100
}
	neighbor 206.81.81.90 {
		descr "GITHUB_rs2v4"
		announce self
		local-address 206.81.81.158
		max-prefix 100
}
}
group "AS-MSFTv4" {
	remote-as 8075
	neighbor 206.81.80.30 {
		descr "MSFT_rs1v4"
		announce self
		local-address 206.81.81.158
		max-prefix 2000
}
	neighbor 206.81.80.68 {
		descr "MSFT_rs2v4"
		announce self
		local-address 206.81.81.158
		max-prefix 2000
}
}
group "AS-OpenDNSv4" {
	remote-as 36692
	neighbor 206.81.80.53 {
		descr "OpenDNS_rs1v4"
		announce self
		local-address 206.81.81.158
		max-prefix 200
}
}
group "AS-SPLv4" {
	remote-as 21525
	neighbor 206.81.80.196 {
		descr "SPL_rs1v4"
		announce self
		local-address 206.81.81.158
		max-prefix 10
}
}
group "AS-TWITTERv4" {
	remote-as 13414
	neighbor 206.81.81.31 {
		descr "TWITTER_rs1v4"
		announce self
		local-address 206.81.81.158
		max-prefix 200
}
}
group "AS-VRISIGNv4" {
	remote-as 7342
	neighbor 206.81.80.133 {
		descr "VRISIGN_rs1v4"
		announce self
		local-address 206.81.81.158
		max-prefix 600
}
}
group "AS-YAHOOv4" {
	remote-as 10310
	neighbor 206.81.80.98 {
		descr "YAHOO_rs1v4"
		announce self
		local-address 206.81.81.158
		max-prefix 2000
}
	neighbor 206.81.81.50 {
		descr "YAHOO_rs2v4"
		announce self
		local-address 206.81.81.158
		max-prefix 2000
}
}
group "AS-INTEGRAv4" {
	remote-as 7385
	neighbor 206.81.80.102 {
		descr "INTEGRA_rs1v4"
		announce self
		local-address 206.81.81.158
		max-prefix 2000
}
}
group "AS-PNWGPv4" {
	remote-as 101
	neighbor 206.81.80.84 {
		descr "PNWGP_rs1v4"
		announce self
		local-address 206.81.81.158
		max-prefix 500
}
}
group "AS-WAVEv4" {
	remote-as 11404
	neighbor 206.81.80.56 {
		descr "WAVE_rs1v4"
		announce self
		local-address 206.81.81.158
		max-prefix 6000
}
}
group "AS-AMAZONv4" {
	remote-as 16509
	neighbor 206.81.80.147 {
		descr "AMAZON_rs1v4"
		announce self
		local-address 206.81.81.158
		max-prefix 4000
}
	neighbor 206.81.80.248 {
		descr "AMAZON_rs2v4"
		announce self
		local-address 206.81.81.158
		max-prefix 4000
}
}
group "AS-SYMTECv4" {
	remote-as 27471
	neighbor 206.81.81.169 {
		descr "SYMTEC_rs1v4"
		announce self
		local-address 206.81.81.158
		max-prefix 40
}
	neighbor 206.81.81.170 {
		descr "SYMTEC_rs2v4"
		announce self
		local-address 206.81.81.158
		max-prefix 40
}
}

#### IPv6 neighbors ####
group "AS-WOWRACK-Transit-v6" {
	remote-as 23033
	neighbor 2607:F8F8:2F0:811:2::1 {
		descr "WOW_trans_rs1v6"
		announce self
		local-address 2607:F8F8:2F0:811:2::2
		max-prefix 100000
}
}
group "AS-SIXRSv6" {
	remote-as 33108
	neighbor 2001:504:16::2 {
		descr "SIXRS_rs2v6"
		announce self
		local-address 2001:504:16::6:cdb
		enforce neighbor-as no
		max-prefix 60000
}
	neighbor 2001:504:16::3 {
		descr "SIXRS_rs3v6"
		announce self
		local-address 2001:504:16::6:cdb
		enforce neighbor-as no
		max-prefix 60000
}
}
group "AS-HURRICANEv6" {
	remote-as 6939
	neighbor 2001:504:16::1b1b {
		descr "HE_rs1v6"
		announce self
		local-address 2001:504:16::6:cdb
		max-prefix 41000
}
}
group "AS-ALTOPIAv6" {
	remote-as 6456
	neighbor 2001:504:16::1938 {
		descr "ALT_rs1v6"
		announce self
		local-address 2001:504:16::6:cdb
		max-prefix 20 restart 30
}
	neighbor 2001:504:16::297:0:1938 {
		descr "ALT_rs2v6"
		announce self
		local-address 2001:504:16::6:cdb
		max-prefix 20 restart 30
}
}
group "AS-POCKETINETv6" {
	remote-as 23265
	neighbor 2001:504:16::5ae1 {
		descr "POK_rs1v6"
		announce self
		local-address 2001:504:16::6:cdb
		max-prefix 600
}
}
group "AS-DOOFv6" {
	remote-as 395823
	neighbor 2001:504:16::6:a2f {
		descr "DOOF_rs1v6"
		announce self
		local-address 2001:504:16::6:cdb
		max-prefix 5
}
}
group "AS-PCHv6" {
	remote-as 3856
	neighbor 2001:504:16::f10 {
		descr "PCH_rs1v6"
		announce self
		local-address 2001:504:16::6:cdb
		max-prefix 600
}
}
group "AS-PCHWNv6" {
	remote-as 42
	neighbor 2001:504:16::2a {
		descr "PCHWN_rs1v6"
		announce self
		local-address 2001:504:16::6:cdb
		max-prefix 600
}
}
group "AS-WOBv6" {
	remote-as 64241
	neighbor 2001:504:16::faf1 {
		descr "WOB_rs1v6"
		announce self
		local-address 2001:504:16::6:cdb
		max-prefix 5
}
}
group "AS-GOOGv6" {
	remote-as 15169
	neighbor 2001:504:16::3b41 {
		descr "GOOG_rs1v6"
		announce self
		local-address 2001:504:16::6:cdb
		max-prefix 750
}
}
group "AS-MISAKAv6" {
	remote-as 57695
	neighbor 2001:504:16::e15f {
		descr "MISAKA_rs1v6"
		announce self
		local-address 2001:504:16::6:cdb
		max-prefix 150
}
}
group "AS-RISUPv6" {
	remote-as 16652
	neighbor 2001:504:16::410c {
		descr "RISUP_rs1v6"
		announce self
		local-address 2001:504:16::6:cdb
		max-prefix 10
}
}
group "AS-AKAMAIv6" {
	remote-as 20940
	neighbor 2001:504:16::51cc {
		descr "AKAMAI_rs1v6"
		announce self
		local-address 2001:504:16::6:cdb
		max-prefix 40
}
}
group "AS-CLDFLRv6" {
	remote-as 13335
	neighbor 2001:504:16::3417 {
		descr "CLDFLR_rs1v6"
		announce self
		local-address 2001:504:16::6:cdb
		max-prefix 200
}
}
group "AS-DYNv6" {
	remote-as 33517
	neighbor 2001:504:16::82ed {
		descr "DYN_rs1v6"
		announce self
		local-address 2001:504:16::6:cdb
		max-prefix 200
}
}
group "AS-FCBKv6" {
	remote-as 32934
	neighbor 2001:504:16::80a6 {
		descr "FCBK_rs1v6"
		announce self
		local-address 2001:504:16::6:cdb
		max-prefix 200
}
	neighbor 2001:504:16::211:0:80a6 {
		descr "FCBK_rs2v6"
		announce self
		local-address 2001:504:16::6:cdb
		max-prefix 200
}
}
group "AS-GITHUBv6" {
	remote-as 36459
	neighbor 2001:504:16::8e6b {
		descr "GITHUB_rs1v6"
		announce self
		local-address 2001:504:16::6:cdb
		max-prefix 20
}
	neighbor 2001:504:16::346:0:8e6b {
		descr "GITHUB_rs2v6"
		announce self
		local-address 2001:504:16::6:cdb
		max-prefix 20
}
}
group "AS-MSFTv6" {
	remote-as 8075
	neighbor 2001:504:16::1f8b {
		descr "MSFT_rs1v6"
		announce self
		local-address 2001:504:16::6:cdb
		max-prefix 500
}
	neighbor 2001:504:16::68:0:1f8b {
		descr "MSFT_rs2v6"
		announce self
		local-address 2001:504:16::6:cdb
		max-prefix 500
}
}
group "AS-OpenDNSv6" {
	remote-as 36692
	neighbor 2001:504:16::8f54 {
		descr "OpenDNS_rs1v6"
		announce self
		local-address 2001:504:16::6:cdb
		max-prefix 40
}
}
group "AS-SPLv6" {
	remote-as 21525
	neighbor 2001:504:16::5415 {
		descr "SPL_rs1v6"
		announce self
		local-address 2001:504:16::6:cdb
		max-prefix 10
}
}
group "AS-TWITTERv6" {
	remote-as 13414
	neighbor 2001:504:16::3466 {
		descr "TWITTER_rs1v6"
		announce self
		local-address 2001:504:16::6:cdb
		max-prefix 10
}
}
group "AS-VRISIGNv6" {
	remote-as 7342
	neighbor 2001:504:16::1cae {
		descr "VRISIGN_rs1v6"
		announce self
		local-address 2001:504:16::6:cdb
		max-prefix 100
}
}
group "AS-YAHOOv6" {
	remote-as 10310
	neighbor 2001:504:16::2846 {
		descr "YAHOO_rs1v6"
		announce self
		local-address 2001:504:16::6:cdb
		max-prefix 200
}
	neighbor 2001:504:16::306:0:2846 {
		descr "YAHOO_rs2v6"
		announce self
		local-address 2001:504:16::6:cdb
		max-prefix 200
}
}
group "AS-INTEGRAv6" {
	remote-as 7385
	neighbor 2001:504:16::1cd9 {
		descr "INTEGRA_rs1v6"
		announce self
		local-address 2001:504:16::6:cdb
		max-prefix 100
}
}
group "AS-PNWGPv6" {
	remote-as 101
	neighbor 2001:504:16::65 {
		descr "PNWGP_rs1v6"
		announce self
		local-address 2001:504:16::6:cdb
		max-prefix 20
}
}
group "AS-WAVEv6" {
	remote-as 11404
	neighbor 2001:504:16::2c8c {
		descr "WAVE_rs1v6"
		announce self
		local-address 2001:504:16::6:cdb
		max-prefix 500
}
}
group "AS-AMAZONv6" {
	remote-as 16509
	neighbor 2001:504:16::407d {
		descr "AMAZON_rs1v6"
		announce self
		local-address 2001:504:16::6:cdb
		max-prefix 1000
}
	neighbor 2001:504:16::248:0:407d {
		descr "AMAZON_rs2v6"
		announce self
		local-address 2001:504:16::6:cdb
		max-prefix 1000
}
}

#### Filtering Rules ####

deny from any
deny to any

# https://www.arin.net/announcements/2014/20140130.html
# This block will be subject to a minimum size allocation of /28 and a
# maximum size allocation of /24. ARIN should use sparse allocation when
# possible within that /10 block.
allow from any prefix 23.128.0.0/10 prefixlen 24 - 28   # ARIN IPv6 transition

## IPv4 ##
# WOW_trans_rs1v4
allow from 216.176.186.129
allow to 216.176.186.129
# SIXRS_rs2v4
allow from 206.81.80.2 inet prefixlen 8 - 24
allow to 206.81.80.2 inet prefixlen 8 - 24
# SIXRS_rs3v4
allow from 206.81.80.3 inet prefixlen 8 - 24
allow to 206.81.80.3 inet prefixlen 8 - 24
# HE_rs1v4
allow from 206.81.80.40
allow to 206.81.80.40
# ALT_rs1v4
allow from 206.81.80.10 inet prefixlen 8 - 24
allow to 206.81.80.10 inet prefixlen 8 - 24
# ALT_rs2v4
allow from 206.81.81.41 inet prefixlen 8 - 24
allow to 206.81.81.41 inet prefixlen 8 - 24
# POK_rs1v4
allow from 206.81.80.88 inet prefixlen 8 - 24
allow to 206.81.80.88 inet prefixlen 8 - 24
# DOOF_rs1v4
allow from 206.81.81.125 inet prefixlen 8 - 24
allow to 206.81.81.125 inet prefixlen 8 - 24
# PCH_rs1v4
allow from 206.81.80.81 inet prefixlen 8 - 24
allow to 206.81.80.81 inet prefixlen 8 - 24
# PCHWN_rs1v4
allow from 206.81.80.80 inet prefixlen 8 - 24
allow to 206.81.80.80 inet prefixlen 8 - 24
# WOB_rs1v4
allow from 206.81.81.87 inet prefixlen 8 - 24
allow to 206.81.81.87 inet prefixlen 8 - 24
# GOOG_rs1v4
allow from 206.81.80.17
allow to 206.81.80.17
# MISAKA_rs1v4
allow from 206.81.81.161 inet prefixlen 8 - 24
allow to 206.81.81.161 inet prefixlen 8 - 24
# RISUP_rs1v4
allow from 206.81.81.74 inet prefixlen 8 - 24
allow to 206.81.81.74 inet prefixlen 8 - 24
# AKAMAI_rs1v4
allow from 206.81.80.113 inet prefixlen 8 - 24
allow to 206.81.80.113 inet prefixlen 8 - 24
# CoSIT_rs1v4
allow from 206.81.80.202 inet prefixlen 8 - 24
allow to 206.81.80.202 inet prefixlen 8 - 24
# CLDFLR_rs1v4
allow from 206.81.81.10 inet prefixlen 8 - 24
allow to 206.81.81.10 inet prefixlen 8 - 24
# DYN_rs1v4
allow from 206.81.81.121 inet prefixlen 8 - 24
allow to 206.81.81.121 inet prefixlen 8 - 24
# FCBK_rs1v4
allow from 206.81.80.181 inet prefixlen 8 - 24
allow to 206.81.80.181 inet prefixlen 8 - 24
# FCBK_rs2v4
allow from 206.81.80.211 inet prefixlen 8 - 24
allow to 206.81.80.211 inet prefixlen 8 - 24
# GITHUB_rs1v4
allow from 206.81.81.89 inet prefixlen 8 - 24
allow to 206.81.81.89 inet prefixlen 8 - 24
# GITHUB_rs2v4
allow from 206.81.81.90 inet prefixlen 8 - 24
allow to 206.81.81.90 inet prefixlen 8 - 24
# MSFT_rs1v4
allow from 206.81.80.30 inet prefixlen 8 - 24
allow to 206.81.80.30 inet prefixlen 8 - 24
# MSFT_rs2v4
allow from 206.81.80.68 inet prefixlen 8 - 24
allow to 206.81.80.68 inet prefixlen 8 - 24
# OpenDNS_rs1v4
allow from 206.81.80.53 inet prefixlen 8 - 24
allow to 206.81.80.53 inet prefixlen 8 - 24
# SPL_rs1v4
allow from 206.81.80.196 inet prefixlen 8 - 24
allow to 206.81.80.196 inet prefixlen 8 - 24
# TWITTER_rs1v4
allow from 206.81.81.31 inet prefixlen 8 - 24
allow to 206.81.81.31 inet prefixlen 8 - 24
# VRISIGN_rs1v4
allow from 206.81.80.133 inet prefixlen 8 - 24
allow to 206.81.80.133 inet prefixlen 8 - 24
# YAHOO_rs1v4
allow from 206.81.80.98 inet prefixlen 8 - 24
allow to 206.81.80.98 inet prefixlen 8 - 24
# YAHOO_rs2v4
allow from 206.81.81.50 inet prefixlen 8 - 24
allow to 206.81.81.50 inet prefixlen 8 - 24
# INTEGRA_rs1v4
allow from 206.81.80.102 inet prefixlen 8 - 24
allow to 206.81.80.102 inet prefixlen 8 - 24
# PNWGP_rs1v4
allow from 206.81.80.84 inet prefixlen 8 - 24
allow to 206.81.80.84 inet prefixlen 8 - 24
# WAVE_rs1v4
allow from 206.81.80.56 inet prefixlen 8 - 24
allow to 206.81.80.56 inet prefixlen 8 - 24
# AMAZON_rs1v4
allow from 206.81.80.147 inet prefixlen 8 - 24
allow to 206.81.80.147 inet prefixlen 8 - 24
# AMAZON_rs2v4
allow from 206.81.80.248 inet prefixlen 8 - 24
allow to 206.81.80.248 inet prefixlen 8 - 24
# SYMTEC_rs1v4
allow from 206.81.81.169 inet prefixlen 8 - 24
allow to 206.81.81.169 inet prefixlen 8 - 24
# SYMTEC_rs2v4
allow from 206.81.81.170 inet prefixlen 8 - 24
allow to 206.81.81.170 inet prefixlen 8 - 24

## IPv6 ##
# WOW_trans_rs1v6
allow from 2607:F8F8:2F0:811:2::1
allow to 2607:F8F8:2F0:811:2::1
# SIXRS_rs2v6
allow from 2001:504:16::2 inet6 prefixlen 16 - 48
allow to 2001:504:16::2 inet6 prefixlen 16 - 48
# SIXRS_rs3v6
allow from 2001:504:16::3 inet6 prefixlen 16 - 48
allow to 2001:504:16::3 inet6 prefixlen 16 - 48
# HE_rs1v6
allow from 2001:504:16::1b1b
allow to 2001:504:16::1b1b
# ALT_rs1v6
allow from 2001:504:16::1938 inet6 prefixlen 16 - 48
allow to 2001:504:16::1938 inet6 prefixlen 16 - 48
# ALT_rs2v6
allow from 2001:504:16::297:0:1938 inet6 prefixlen 16 - 48
allow to 2001:504:16::297:0:1938 inet6 prefixlen 16 - 48
# POK_rs1v6
allow from 2001:504:16::5ae1 inet6 prefixlen 16 - 48
allow to 2001:504:16::5ae1 inet6 prefixlen 16 - 48
# DOOF_rs1v6
allow from 2001:504:16::6:a2f inet6 prefixlen 16 - 48
allow to 2001:504:16::6:a2f inet6 prefixlen 16 - 48
# PCH_rs1v6
allow from 2001:504:16::f10 inet6 prefixlen 16 - 48
allow to 2001:504:16::f10 inet6 prefixlen 16 - 48
# PCHWN_rs1v6
allow from 2001:504:16::2a inet6 prefixlen 16 - 48
allow to 2001:504:16::2a inet6 prefixlen 16 - 48
# WOB_rs1v6
allow from 2001:504:16::faf1 inet6 prefixlen 16 - 48
allow to 2001:504:16::faf1 inet6 prefixlen 16 - 48
# GOOG_rs1v6
allow from 2001:504:16::3b41
allow to 2001:504:16::3b41
# MISAKA_rs1v6
allow from 2001:504:16::e15f inet6 prefixlen 16 - 48
allow to 2001:504:16::e15f inet6 prefixlen 16 - 48
# RISUP_rs1v6
allow from 2001:504:16::410c inet6 prefixlen 16 - 48
allow to 2001:504:16::410c inet6 prefixlen 16 - 48
# AKAMAI_rs1v6
allow from 2001:504:16::51cc inet6 prefixlen 16 - 48
allow to 2001:504:16::51cc inet6 prefixlen 16 - 48
# CLDFLR_rs1v6
allow from 2001:504:16::3417 inet6 prefixlen 16 - 48
allow to 2001:504:16::3417 inet6 prefixlen 16 - 48
# DYN_rs1v6
allow from 2001:504:16::82ed inet6 prefixlen 16 - 48
allow to 2001:504:16::82ed inet6 prefixlen 16 - 48
# FCBK_rs1v6
allow from 2001:504:16::80a6 inet6 prefixlen 16 - 48
allow to 2001:504:16::80a6 inet6 prefixlen 16 - 48
# FCBK_rs2v6
allow from 2001:504:16::211:0:80a6 inet6 prefixlen 16 - 48
allow to 2001:504:16::211:0:80a6 inet6 prefixlen 16 - 48
# GITHUB_rs1v6
allow from 2001:504:16::8e6b inet6 prefixlen 16 - 48
allow to 2001:504:16::8e6b inet6 prefixlen 16 - 48
# GITHUB_rs2v6
allow from 2001:504:16::346:0:8e6b inet6 prefixlen 16 - 48
allow to 2001:504:16::346:0:8e6b inet6 prefixlen 16 - 48
# MSFT_rs1v6
allow from 2001:504:16::1f8b inet6 prefixlen 16 - 48
allow to 2001:504:16::1f8b inet6 prefixlen 16 - 48
# MSFT_rs2v6
allow from 2001:504:16::68:0:1f8b inet6 prefixlen 16 - 48
allow to 2001:504:16::68:0:1f8b inet6 prefixlen 16 - 48
# OpenDNS_rs1v6
allow from 2001:504:16::8f54 inet6 prefixlen 16 - 48
allow to 2001:504:16::8f54 inet6 prefixlen 16 - 48
# SPL_rs1v6
allow from 2001:504:16::5415 inet6 prefixlen 16 - 48
allow to 2001:504:16::5415 inet6 prefixlen 16 - 48
# TWITTER_rs1v6
allow from 2001:504:16::3466 inet6 prefixlen 16 - 48
allow to 2001:504:16::3466 inet6 prefixlen 16 - 48
# VRISIGN_rs1v6
allow from 2001:504:16::1cae inet6 prefixlen 16 - 48
allow to 2001:504:16::1cae inet6 prefixlen 16 - 48
# YAHOO_rs1v6
allow from 2001:504:16::2846 inet6 prefixlen 16 - 48
allow to 2001:504:16::2846 inet6 prefixlen 16 - 48
# YAHOO_rs2v6
allow from 2001:504:16::306:0:2846 inet6 prefixlen 16 - 48
allow to 2001:504:16::306:0:2846 inet6 prefixlen 16 - 48
# INTEGRA_rs1v6
allow from 2001:504:16::1cd9 inet6 prefixlen 16 - 48
allow to 2001:504:16::1cd9 inet6 prefixlen 16 - 48
# PNWGP_rs1v6
allow from 2001:504:16::65 inet6 prefixlen 16 - 48
allow to 2001:504:16::65 inet6 prefixlen 16 - 48
# WAVE_rs1v6
allow from 2001:504:16::2c8c inet6 prefixlen 16 - 48
allow to 2001:504:16::2c8c inet6 prefixlen 16 - 48
# AMAZON_rs1v6
allow from 2001:504:16::407d inet6 prefixlen 16 - 48
allow to 2001:504:16::407d inet6 prefixlen 16 - 48
# AMAZON_rs2v6
allow from 2001:504:16::248:0:407d inet6 prefixlen 16 - 48
allow to 2001:504:16::248:0:407d inet6 prefixlen 16 - 48

# filter bogus networks according to RFC5735
deny from any prefix 0.0.0.0/8 prefixlen >= 8           # 'this' network [RFC1122]
deny from any prefix 10.0.0.0/8 prefixlen >= 8          # private space [RFC1918]
deny from any prefix 100.64.0.0/10 prefixlen >= 10      # CGN Shared [RFC6598]
deny from any prefix 127.0.0.0/8 prefixlen >= 8         # localhost [RFC1122]
deny from any prefix 169.254.0.0/16 prefixlen >= 16     # link local [RFC3927]
deny from any prefix 172.16.0.0/12 prefixlen >= 12      # private space [RFC1918]
deny from any prefix 192.0.2.0/24 prefixlen >= 24       # TEST-NET-1 [RFC5737]
deny from any prefix 192.168.0.0/16 prefixlen >= 16     # private space [RFC1918]
deny from any prefix 198.18.0.0/15 prefixlen >= 15      # benchmarking [RFC2544]
deny from any prefix 198.51.100.0/24 prefixlen >= 24    # TEST-NET-2 [RFC5737]
deny from any prefix 203.0.113.0/24 prefixlen >= 24     # TEST-NET-3 [RFC5737]
deny from any prefix 224.0.0.0/4 prefixlen >= 4         # multicast
deny from any prefix 240.0.0.0/4 prefixlen >= 4         # reserved

# filter bogus IPv6 networks according to IANA
deny from any prefix ::/8 prefixlen >= 8
deny from any prefix 0100::/64 prefixlen >= 64          # Discard-Only [RFC6666]
deny from any prefix 2001:2::/48 prefixlen >= 48        # BMWG [RFC5180]
deny from any prefix 2001:10::/28 prefixlen >= 28       # ORCHID [RFC4843]
deny from any prefix 2001:db8::/32 prefixlen >= 32      # docu range [RFC3849]
deny from any prefix 3ffe::/16 prefixlen >= 16          # old 6bone
deny from any prefix fc00::/7 prefixlen >= 7            # unique local unicast
deny from any prefix fe80::/10 prefixlen >= 10          # link local unicast
deny from any prefix fec0::/10 prefixlen >= 10          # old site local unicast
deny from any prefix ff00::/8 prefixlen >= 8            # multicast
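As a sanity check on filters like the IPv4 bogon list above, membership in those ranges can be tested programmatically. This is a small Python sketch (not part of our router config) using only the stdlib `ipaddress` module:

```python
import ipaddress

# RFC 5735 special-use IPv4 ranges, matching the deny rules above
BOGONS_V4 = [
    "0.0.0.0/8", "10.0.0.0/8", "100.64.0.0/10", "127.0.0.0/8",
    "169.254.0.0/16", "172.16.0.0/12", "192.0.2.0/24", "192.168.0.0/16",
    "198.18.0.0/15", "198.51.100.0/24", "203.0.113.0/24",
    "224.0.0.0/4", "240.0.0.0/4",
]

def is_bogon(prefix: str) -> bool:
    """Return True if an announced IPv4 prefix falls inside any bogon range."""
    net = ipaddress.ip_network(prefix)
    return any(net.subnet_of(ipaddress.ip_network(b)) for b in BOGONS_V4)
```

For example, `is_bogon("10.1.0.0/16")` is True while `is_bogon("8.8.8.0/24")` is False.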

Updated 9/5/2017

We’ll update this as we make changes.

External Resources

Here are a few references we leveraged when building our config:

RIRs:

African Network Information Center (AFRINIC) for Africa
https://www.afrinic.net/

American Registry for Internet Numbers (ARIN) for the United States, Canada, several parts of the Caribbean region, and Antarctica.
https://www.arin.net/

Asia-Pacific Network Information Centre (APNIC) for Asia, Australia, New Zealand, and neighboring countries
https://www.apnic.net/

Latin America and Caribbean Network Information Centre (LACNIC) for Latin America and parts of the Caribbean region
https://www.lacnic.net/

Réseaux IP Européens Network Coordination Centre (RIPE NCC) for Europe, Russia, the Middle East, and Central Asia
https://www.ripe.net/

DNSSEC is now fully implemented for our forward and reverse lookup zones

Last month (July 2017) we moved our DNS zone management to the Google Cloud Platform since our domains were already registered with Google. After applying for the DNSSEC alpha, we were granted access and turned on DNSSEC for all three of our forward (domain) and reverse (IPv6 and IPv4 scopes) lookup zones. Google’s alpha products come with no SLA, so we took a risk implementing DNSSEC through Google.

Turning on DNSSEC was as easy as flipping a switch in the control panel. The last step was adding the DS entries at the registrar.

In the upper right-hand corner of Zone Details is Registrar Setup. This is where we got our DS entry information.

This DS information translates to a specific Key Tag, Algorithm, Digest Type, and Digest that needs to go into Google Domains (the actual Registrar).

This completed the domain setup. Next we needed to configure DNSSEC for our reverse lookup zones. Because our scopes are direct allocations from ARIN, we needed to copy the DS details over to ARIN.

View and Manage Your Networks > View & Manage Network (for both our IPv6 and IPv4 scopes) > Actions > Manage Reverse DNS > (select the delegation) > Modify DS Records

Parsed DS string for our IPv6 scope:

3600 DS 46756 8 2 5396635C919BAF34F24011FAB2DE251630AE2B8C17F1B69D05BCFDD603510014

Parsed DS string for our IPv4 scope:

3600 DS 40286 8 2 54686118794BD67CC76295F3D7F1C269D70EB5646F5DA130CC590AE14B33935F

This completed the ARIN DNSSEC configuration. While Google provided a quick DNS update for validation, ARIN took over 12 hours.
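For reference, each DS string above breaks down into four record fields after the TTL: key tag, algorithm (8 is RSA/SHA-256), digest type (2 is SHA-256), and the digest itself. A small Python sketch that parses the format shown above:

```python
from typing import NamedTuple

class DSRecord(NamedTuple):
    ttl: int
    key_tag: int
    algorithm: int    # 8 = RSA/SHA-256
    digest_type: int  # 2 = SHA-256
    digest: str

def parse_ds(line: str) -> DSRecord:
    """Parse a '<ttl> DS <key tag> <alg> <digest type> <digest>' string."""
    ttl, rtype, key_tag, alg, dtype, digest = line.split()
    assert rtype == "DS"
    return DSRecord(int(ttl), int(key_tag), int(alg), int(dtype), digest)
```

These are the same four values (Key Tag, Algorithm, Digest Type, Digest) that Google Domains and ARIN ask for when entering DS records.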

Internet Exchange Points in the United States

Emerald Onion is researching IXPs in the U.S.A. to identify priority locations for increasing global Tor network capacity by putting Tor routers directly on these highly interconnected networks. Putting Tor exit routers in IXPs, for example, may reduce network latency to end points. It may also reduce network hops, potentially minimizing the possibility of third-party surveillance. Emerald Onion envisions a future where the Tor network is composed of much larger and more stable network operators, globally.

Questions

  1. Are there any Tor routers connected to any United States-based IXPs? If so, which ones and who operates them?
  2. Is this IXP friendly to Tor?
  3. What is the organizational structure of this IXP (e.g., corporate-run or community-driven)?
  4. What qualities of an IXP should impact how meaningful it would be for the Tor network?
    • Number of participants?
    • Access to specific participants?
    • Nonprofit?
    • Community driven?
    • Affordability?
    • Geolocation?
    • Prohibits network surveillance?

A top 20 list of cities to focus on for Tor development?

  1. Chicago, IL has at least 12 IXPs
  2. New York City, NY has at least 9 IXPs (and has Calyx Institute)
  3. Dallas, TX has at least 6 IXPs
  4. Los Angeles, CA has at least 6 IXPs
  5. Miami, FL has at least 6 IXPs
  6. Seattle, WA has at least 5 IXPs (and has Riseup and Emerald Onion)
  7. San Jose, CA has at least 5 IXPs
  8. Phoenix, AZ has at least 5 IXPs
  9. Ashburn, VA has at least 3 IXPs
  10. Reston, VA has at least 3 IXPs
  11. Boston, MA has at least 3 IXPs
  12. Atlanta, GA has at least 3 IXPs
  13. Portland, OR has at least 3 IXPs
  14. Honolulu, HI has at least 2 IXPs
  15. Denver, CO has at least 2 IXPs
  16. Vienna, VA has at least 2 IXPs
  17. Palo Alto, CA has at least 1 IXP
  18. Salt Lake City, UT has at least 1 IXP (and has XMission)
  19. Minneapolis, MN has at least 1 IXP
  20. Detroit, MI has at least 1 IXP

IXPs in the United States

Ashburn, VA

    1. Equinix Ashburn Exchange (Equinix Ashburn)
    2. LINX Northern Virginia (LINX)
    3. MAE East

Ashland, VA

    1. Richmond Virginia Internet Exchange (RVA-IX)

Atlanta, GA

    1. Digital Realty / Telx Internet Exchange (TIE)
    2. Equinix Internet Exchange Atlanta (Equinix Atlanta)
    3. Southeast Network Access Point (SNAP)

Austin, TX

    1. CyrusOne Internet Exchange (CyrusOne IX)

Billings, MT

    1. Yellowstone Regional Internet eXchange (YRIX)

Boston, MA

    1. Boston Internet Exchange
    2. Massachusetts eXchange Point (MXP)
    3. CoreSite – Any2 Boston

Buffalo, NY

    1. Buffalo Niagara International Internet Exchange (BNIIX)

Chicago, IL

    1. AMS-IX Chicago
    2. CyrusOne Internet Exchange (CyrusOne IX)
    3. Equinix Chicago Exchange (Equinix Chicago)
    4. Greater Chicago International Internet Exchange (GCIIX)
    5. United IX – Chicago (ChIX)
    6. CoreSite – Any2 Chicago
    7. MAE Central

Columbus, OH

    1. OhioIX

Dallas, TX

    1. CyrusOne Internet Exchange (CyrusOne IX)
    2. DE-CIX, the Dallas Internet Exchange (DE-CIX Dallas)
    3. Digital Realty / Telx Internet Exchange (TIE)
    4. Equinix Dallas Exchange (Equinix Dallas)
    5. MAE Central
    6. Megaport MegaIX Dallas (MegaIX Dallas)

Denver, CO

    1. CoreSite – Any2 Denver
    2. Interconnection eXchange Denver (IX-Denver)

Detroit, MI

    1. Detroit Internet Exchange (DET-IX)

Duluth, MN

    1. Twin Ports Internet Exchange (TP-IX)

Gillette, WY

    1. BigHorn Fiber Internet Exchange (BFIX)

Hagåtña, Guam

    1. Guam Internet Exchange (GU-IX)

Honolulu, HI

    1. DRFortress Exchange (DRF IX)
    2. Hawaii Internet eXchange (HIX)

Houston, TX

    1. CyrusOne Internet Exchange (CyrusOne IX)

Indianapolis, IN

    1. Midwest Internet Exchange (MidWest-IX – Indy)

Jacksonville, FL

    1. Jacksonville Internet Exchange (JXIX)

Kansas City, MO

    1. Kansas City Internet eXchange (KCIX)

Los Angeles, CA

    1. CENIC International Internet eXchange (CIIX)
    2. Equinix Los Angeles Exchange (Equinix Los Angeles)
    3. Los Angeles International Internet eXchange (LAIIX)
    4. MAE West
    5. Pacific Wave Exchange in Los Angeles and Seattle (PacificWave)
    6. CoreSite – Any2 California

Madison, WI

    1. Madison Internet Exchange (MadIX)

Manassas, VA

    1. LINX Northern Virginia (LINX)

Medford, OR

    1. Southern Oregon Access Exchange (SOAX)

Miami, FL

    1. Equinix Internet Exchange Miami (Equinix Miami)
    2. MAE East
    3. Miami Internet Exchange (MiamiIX)
    4. NAP of the Americas (NOTA)
    5. The South Florida Internet Exchange (FL-IX)
    6. CoreSite – Any2 Miami

Milwaukee, WI

    1. The Milwaukee IX (MKE-IX)

Minneapolis, MN

    1. Midwest Internet Cooperative Exchange (MICE)

Moffett Field, CA

    1. NGIX West

Nashville, TN

    1. Nashville Internet Exchange (NashIX)

New York, NY

    1. AMS-IX New York (AMS-IX NY)
    2. Big Apple Peering Exchange (BigApe)
    3. Digital Realty / Telx Internet Exchange (TIE)
    4. Equinix Internet Exchange New York (Equinix New York)
    5. Free NYIIX Alternative (NYCX)
    6. CoreSite – Any2 New York
    7. DE-CIX, the New York / New Jersey Internet Exchange (DE-CIX New York)
    8. New York International Internet eXchange (NYIIX)
    9. MAE East

Omaha, NE

    1. Omaha Internet Exchange (OmahaIX)

Palo Alto, CA

    1. Equinix Internet Exchange Palo Alto (Equinix Palo Alto)

Philadelphia, PA

    1. Philadelphia Internet Exchange (PHILAIX)

Phoenix, AZ

    1. Arizona Internet Exchange (AZIX)
    2. Digital Realty / Telx Internet Exchange (TIE)
    3. Phoenix Internet Exchange, LLC (PHX-IX)
    4. Phoenix IX
    5. CyrusOne Internet Exchange (CyrusOne IX)

Portland, OR

    1. Central Oregon Internet eXchange (COIX)
    2. Northwest Access Exchange, Inc. (NWAX)
    3. Oregon Internet Exchange (OIX)

Reno, NV

    1. Tahoe Internet Exchange (TahoeIX)

Reston, VA

    1. LINX Northern Virginia (LINX)
    2. MAE East
    3. CoreSite – Any2 NorthEast

Saint George, UT

    1. Southern Utah Peering Regional Network (SUPRnet)

Salt Lake City, UT

    1. Salt Lake Internet Exchange (SLIX)

San Antonio, TX

    1. CyrusOne Internet Exchange (CyrusOne IX)

San Diego, CA

    1. San Diego NAP (SD-NAP)

San Francisco, CA

    1. San Francisco Internet Exchange (SFIX)
    2. San Francisco Metropolitan Internet Exchange (SFMIX)

San Jose, CA

    1. AMS-IX Bay Area (AMS-IX BA)
    2. CoreSite – Any2 Northern California
    3. Equinix San Jose / Bay Area Exchange (Equinix San Jose)
    4. NASA Ames Internet eXchange (AIX)
    5. MAE West

San Juan, Puerto Rico

    1. Internet Exchange of Puerto Rico (IX.PR)
    2. Puerto Rico Bridge Initiative (PRBI-IX)

Seattle, WA

    1. Megaport MegaIX Seattle (MegaIX Seattle)
    2. Pacific Wave Exchange in Los Angeles and Seattle (PacificWave)
    3. Seattle Internet Exchange (SIX)
    4. Seattle Internet Exchange (9000 MTU) (SIX Seattle (Jumbo))
    5. Equinix Internet Exchange Seattle (Equinix Seattle)

Sterling, VA

    1. CyrusOne Internet Exchange (CyrusOne IX)

Tampa, FL

    1. Tampa Internet Exchange (TampaIX)
    2. Tampa Internet Exchange (TPAIX)

Tulsa, OK

    1. LiveAir Tulsa IX

Vienna, VA

  1. Equinix Internet Exchange Vienna, VA (Equinix Vienna (VA))
  2. MAE East

We’re back after a 6.5 day outage

Today (8/1/2017) @ 23:22 Pacific Time, we came back online after being down for 6 days and 12 hours. Our previous configuration where we had two physically separate systems (1 x pfSense router and 1 x Tor router) is gone. The server that was running the Tor router started to experience hardware errors, as reported by kern.log. These errors were traced back to the system board, which eventually caused issues with the disk.

While all of this was happening, we were also down an admin, who was away at DEF CON. We wanted to restore service quickly, but funds were limited: replacing the Tor router's system board would have meant waiting for our refund check from the RMA. So we decided to virtualize our infrastructure.

We are now operating on a single server (12-core Intel CPU, 32 GB RAM) running one pfSense 2.3.4 VM and one Tor (Ubuntu 16.04) VM. The system is up and passing Tor traffic.

Tor router configuration v3

01 August 2017

We experienced a catastrophic hardware failure recently, which will be detailed in an upcoming blog post. We are back online today with new router IDs, and we added two more routers for a total of six Tor routers.

We moved to Google Cloud DNS recently to be able to manage our PTR records for reverse DNS since we have our own IP scopes now. We also moved our forward-lookup zone to Google Cloud DNS. Next on the agenda is setting up DNSSEC.

IPv6 PTR

1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.1.1.0.0.0.0.c.8.1.0.0.2.6.2.ip6.arpa.
1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.2.1.0.0.0.0.c.8.1.0.0.2.6.2.ip6.arpa.
1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.3.1.0.0.0.0.c.8.1.0.0.2.6.2.ip6.arpa.
1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.4.1.0.0.0.0.c.8.1.0.0.2.6.2.ip6.arpa.
1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.5.1.0.0.0.0.c.8.1.0.0.2.6.2.ip6.arpa.
1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.6.1.0.0.0.0.c.8.1.0.0.2.6.2.ip6.arpa.
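Each PTR name above is just the nibble-reversed router address under ip6.arpa. Python's stdlib `ipaddress` module can generate the same names, which is a quick way to double-check the delegation entries:

```python
import ipaddress

def ptr_name(addr: str) -> str:
    """Return the fully qualified ip6.arpa reverse-DNS name for an IPv6 address."""
    return ipaddress.ip_address(addr).reverse_pointer + "."
```

For example, `ptr_name("2620:18c:0:1100::1")` (tor01's address) yields the first entry in the list above.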

DNS

2620:18c:0:1100::1 tor01.emeraldonion.org
2620:18c:0:1200::1 tor02.emeraldonion.org
2620:18c:0:1300::1 tor03.emeraldonion.org
2620:18c:0:1400::1 tor04.emeraldonion.org
2620:18c:0:1500::1 tor05.emeraldonion.org
2620:18c:0:1600::1 tor06.emeraldonion.org
23.129.64.11 tor01.emeraldonion.org
23.129.64.12 tor02.emeraldonion.org
23.129.64.13 tor03.emeraldonion.org
23.129.64.14 tor04.emeraldonion.org
23.129.64.15 tor05.emeraldonion.org
23.129.64.16 tor06.emeraldonion.org

Tor router #1

Nickname EmeraldOnion01
Address tor01.emeraldonion.org
ContactInfo abuse_at_emeraldonion_dot_org
OutboundBindAddressExit 23.129.64.11
OutboundBindAddressOR 23.129.64.11
DirPort 23.129.64.11:80
ORPort 23.129.64.11:443
ORPort [2620:18c:0:1100::1]:443
RelayBandwidthRate 18 MBytes
RelayBandwidthBurst 18 MBytes
IPv6Exit 1
ExitPolicy accept *:*
ExitPolicy accept6 *:*

Tor router #2

Nickname EmeraldOnion02
Address tor02.emeraldonion.org
ContactInfo abuse_at_emeraldonion_dot_org
OutboundBindAddressExit 23.129.64.12
OutboundBindAddressOR 23.129.64.12
DirPort 23.129.64.12:80
ORPort 23.129.64.12:443
ORPort [2620:18c:0:1200::1]:443
RelayBandwidthRate 18 MBytes
RelayBandwidthBurst 18 MBytes
IPv6Exit 1
ExitPolicy accept *:*
ExitPolicy accept6 *:*

Tor router #3

Nickname EmeraldOnion03
Address tor03.emeraldonion.org
ContactInfo abuse_at_emeraldonion_dot_org
OutboundBindAddressExit 23.129.64.13
OutboundBindAddressOR 23.129.64.13
DirPort 23.129.64.13:80
ORPort 23.129.64.13:443
ORPort [2620:18c:0:1300::1]:443
RelayBandwidthRate 18 MBytes
RelayBandwidthBurst 18 MBytes
IPv6Exit 1
ExitPolicy accept *:*
ExitPolicy accept6 *:*

Tor router #4

Nickname EmeraldOnion04
Address tor04.emeraldonion.org
ContactInfo abuse_at_emeraldonion_dot_org
OutboundBindAddressExit 23.129.64.14
OutboundBindAddressOR 23.129.64.14
DirPort 23.129.64.14:80
ORPort 23.129.64.14:443
ORPort [2620:18c:0:1400::1]:443
RelayBandwidthRate 18 MBytes
RelayBandwidthBurst 18 MBytes
IPv6Exit 1
ExitPolicy accept *:*
ExitPolicy accept6 *:*

Tor router #5

Nickname EmeraldOnion05
Address tor05.emeraldonion.org
ContactInfo abuse_at_emeraldonion_dot_org
OutboundBindAddressExit 23.129.64.15
OutboundBindAddressOR 23.129.64.15
DirPort 23.129.64.15:80
ORPort 23.129.64.15:443
ORPort [2620:18c:0:1500::1]:443
RelayBandwidthRate 18 MBytes
RelayBandwidthBurst 18 MBytes
IPv6Exit 1
ExitPolicy accept *:*
ExitPolicy accept6 *:*

Tor router #6

Nickname EmeraldOnion06
Address tor06.emeraldonion.org
ContactInfo abuse_at_emeraldonion_dot_org
OutboundBindAddressExit 23.129.64.16
OutboundBindAddressOR 23.129.64.16
DirPort 23.129.64.16:80
ORPort 23.129.64.16:443
ORPort [2620:18c:0:1600::1]:443
RelayBandwidthRate 18 MBytes
RelayBandwidthBurst 18 MBytes
IPv6Exit 1
ExitPolicy accept *:*
ExitPolicy accept6 *:*
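The six torrc blocks above differ only in the router index: the last IPv4 octet is 10 plus the router number, and the IPv6 /64 follows the same pattern. As a sketch (a hypothetical helper, not something we actually run), the blocks could be generated from a single Python template:

```python
# Template mirroring the torrc blocks above; {n} is the router number (1-6)
# and {host} is the last octet of its 23.129.64.0/24 address.
TORRC_TEMPLATE = """\
Nickname EmeraldOnion{n:02d}
Address tor{n:02d}.emeraldonion.org
ContactInfo abuse_at_emeraldonion_dot_org
OutboundBindAddressExit 23.129.64.{host}
OutboundBindAddressOR 23.129.64.{host}
DirPort 23.129.64.{host}:80
ORPort 23.129.64.{host}:443
ORPort [2620:18c:0:1{n}00::1]:443
RelayBandwidthRate 18 MBytes
RelayBandwidthBurst 18 MBytes
IPv6Exit 1
ExitPolicy accept *:*
ExitPolicy accept6 *:*
"""

def torrc(n: int) -> str:
    """Render the torrc for router n (1-6), matching the blocks above."""
    return TORRC_TEMPLATE.format(n=n, host=10 + n)
```

Templating like this keeps the six configurations from drifting apart when we change a shared setting such as the bandwidth rate.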

Starting the processes

sudo service tor@tor01 start
sudo service tor@tor02 start
sudo service tor@tor03 start
sudo service tor@tor04 start
sudo service tor@tor05 start
sudo service tor@tor06 start