TomAmmon.net – Network Infrastructure

This is Part 2 in a 7-part series discussing the www.tomammon.net online resume application. Check out the architectural overview for some context, or see the links at the end of this article to navigate to the other posts in the series.

The Network Base Layer

The network infrastructure surrounding tomammon.net is pretty simple. All of the components to the left of the Internet cloud in the diagram below run on my home network infrastructure, but only the parts of my home network relevant to our topic are shown. Let’s start with a connectivity-focused perspective.

Network Connectivity Design

Most of the connectivity is delivered with physical appliances, while the nodes that run the application code are all virtualized. The load balancer is the only device that straddles these two worlds. This device is a VNF running the following software:

  • CentOS Linux for the operating system
  • HAProxy as the load balancing engine
  • The Free Range Routing (FRR) suite to provide connectivity into the OSPF routing domain

The load balancer advertises a loopback, taken from provider-aggregatable (PA) address space, into the core network. This public IP address is one of the addresses configured in the DNS A record for the application. We’ll discuss the DNS components in Part 4: Public Cloud Integration.
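As a rough illustration, the FRR side of the load balancer might look something like the fragment below. This is a minimal sketch, not the actual configuration: the interface names are guesses, and the addresses are RFC 5737 documentation placeholders standing in for the real PA loopback and core subnet.

```
! /etc/frr/frr.conf fragment on the load balancer (illustrative only)
interface lo
 ip address 203.0.113.10/32
!
router ospf
 ospf router-id 203.0.113.10
 ! Don't form adjacencies on the loopback, just advertise it
 passive-interface lo
 ! Core-facing interface participates in the single backbone area
 network 192.0.2.0/24 area 0.0.0.0
 ! The public loopback is advertised as a stub network
 network 203.0.113.10/32 area 0.0.0.0
```

Marking the loopback passive keeps OSPF from trying to form adjacencies on it while still flooding the /32 into the area.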

OSPF Design

A simple single-area OSPF design keeps things clean. Since the routing domain is quite small, there is no need to run multiple areas or turn nerd knobs. Here are a few things to keep in mind:

  • The Internet Router unconditionally redistributes a default route into the OSPF domain to provide Internet access to the core network. Since this single internet circuit is the only exit point, there’s no need to make the advertisement of the Type 5 LSA conditional on receiving a default route from upstream.
  • The VPN headend (a Cisco ASA 5505) runs as a Cisco AnyConnect remote-access VPN. It injects /32 static routes for connected remote clients, which are then redistributed into OSPF to provide connectivity to the remote Content Nodes. The remote Content Nodes run OpenConnect, an open-source SSL VPN client, to connect to the headend.
  • The load balancer is not technically an ABR since all it is really doing, from an OSPF perspective, is advertising a stub network that represents its public loopback IP.
  • The load balancer is only required because other public websites run on the same hypervisor and share the same public IP address. This first load balancer simply directs traffic destined for www.tomammon.net to the local Content Node using an “ACL” – HAProxy’s equivalent of an F5 iRule.
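The HAProxy ACL mentioned above can be sketched as a host-header match in the frontend. This is an illustrative fragment, not the real configuration: the frontend/backend names, certificate path, and server addresses are all placeholders.

```
# haproxy.cfg fragment (illustrative only)
frontend public_https
    bind :443 ssl crt /etc/haproxy/certs/site.pem
    # Match requests whose Host header is www.tomammon.net
    acl host_resume hdr(host) -i www.tomammon.net
    # Send matching traffic to the resume app's Content Node...
    use_backend resume_content if host_resume
    # ...and everything else to the other sites on this IP
    default_backend other_sites

backend resume_content
    server content1 192.0.2.20:80 check

backend other_sites
    server web1 192.0.2.30:80 check
```

The `acl` line defines a named condition, and `use_backend ... if` applies it per request, which is what lets several public websites share one public IP and one listener.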

Compute Virtualization and Containers

The hypervisors on the local network host the Data Warehouse, which contains all of the dynamic content for the application, as well as one of the Content Nodes and the load balancer. Because the Data Warehouse resides on a shared database host, it is containerized. From the resume application’s perspective, there is no requirement for the Data Warehouse to run in a container – it was simply a convenient way to provide an isolated environment on a compute resource that was already in service. We’ll go into much more detail on the use of containers in Part 3.

The links below dive into the details of www.tomammon.net.
