Networking Our Venues

By Giles Moss on June 17th

We started theSpaceUK before most people had mobile phones and when the need to be constantly connected to the Internet was as unthinkable a concept as having custard with your fish fingers. I fondly remember crouching in the draughty entranceway to Venue 45, plugged into the only phone line, dialled up to the Internet on my laptop and hoping customers weren't trying to call us at that moment to reserve tickets.

Things have moved on since. Nowadays our theatres are supported by a pop-up office operation for over 100 staff that has to handle ticket sales across our venues, give access to show production information, let us print stuff, support our office phones, let our staff check Instagram and TikTok on their iPhones, and more. Fundamentally, this means an Internet connection.

Our venues are situated across four main sites: Venue 45, the Hilton Edinburgh Carlton Hotel, the Radisson Hotel and the Royal College of Surgeons. Each of the four sites has a fast Internet connection, ranging from an FTTC VDSL service to leased business fibre circuits, depending on what our host can supply.

The Internet connection lands on a firewall router managed by theSpaceUK; we love the kit from Mikrotik because it offers an extensive feature set at a pleasantly low price (OK, they're great for us techie geeks who like to fiddle). We use various models from the 2011 and 3011 series, which have proven perfectly capable of handling our requirements.

Our networks within each venue are split into separate subnets - effectively separate networks - depending on the use: the production kit (box office ticketing system, phones, printers) sits on one, another carries staff WiFi access and a third holds the credit card machines. In some venues we offer services to third parties, so there's a fourth subnet for them. It's good practice to segregate networks in this manner and we firewall between the subnets; there's no need for an iPhone to be able to access our box office software, for example.
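
To make that segregation concrete, here's a minimal sketch in Python of the kind of per-venue subnet plan and default-deny policy described above. The addressing, role names and the single permitted inter-subnet flow are all hypothetical, not our actual configuration.

```python
# A minimal sketch of a per-venue subnet plan. The addressing and the allowed
# flows are entirely hypothetical -- illustrative only, not real config.
import ipaddress

# One /24 per role, carved from a hypothetical /16 allocated to the venue.
venue = ipaddress.ip_network("10.45.0.0/16")
subnets = {
    "production":    ipaddress.ip_network("10.45.1.0/24"),  # box office, phones, printers
    "staff-wifi":    ipaddress.ip_network("10.45.2.0/24"),  # personal devices
    "card-machines": ipaddress.ip_network("10.45.3.0/24"),  # credit card terminals
    "third-party":   ipaddress.ip_network("10.45.4.0/24"),  # guest services at some venues
}

# Sanity checks: every subnet sits inside the venue range and none overlap.
names = list(subnets)
for name, net in subnets.items():
    assert net.subnet_of(venue), f"{name} is outside the venue allocation"
for i, a in enumerate(names):
    for b in names[i + 1:]:
        assert not subnets[a].overlaps(subnets[b]), f"{a} overlaps {b}"

# Default-deny between roles, with the few permitted inter-subnet flows spelled
# out explicitly (an iPhone on staff WiFi never needs to reach the box office).
allowed_flows = {("production", "card-machines")}  # purely illustrative

def is_allowed(src_role: str, dst_role: str) -> bool:
    """Return True only for explicitly permitted inter-subnet traffic."""
    return src_role == dst_role or (src_role, dst_role) in allowed_flows

print(is_allowed("staff-wifi", "production"))     # False
print(is_allowed("production", "card-machines"))  # True
```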

All the subnets are distributed as VLANs on the same physical fabric to simplify cabling through the buildings in which we operate. We generally use HP managed switches with 8 to 24 ports; at some sites the host venue provides the trunk switching fabric and we add our own access switches at the edge.
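
For the curious, the sketch below (using the scapy library) shows what "VLANs on the same physical fabric" means at the frame level: one trunk link carries traffic for every subnet, distinguished only by its 802.1Q tag. The VLAN IDs and addresses are made up to match the hypothetical plan above.

```python
# Illustrative only: how 802.1Q tags let several subnets share one physical
# trunk link. VLAN IDs and addresses are hypothetical; scapy just shows framing.
from scapy.all import Ether, Dot1Q, IP

# Hypothetical VLAN IDs for the roles described earlier.
VLANS = {"production": 10, "staff-wifi": 20, "card-machines": 30}

# The same trunk port carries all three, separated only by the VLAN tag.
frames = [
    Ether(src="02:00:00:00:00:01", dst="02:00:00:00:00:02")
    / Dot1Q(vlan=vid)
    / IP(src=f"10.45.{vid // 10}.10", dst=f"10.45.{vid // 10}.1")
    for vid in VLANS.values()
]

for frame in frames:
    # Each frame shows the 802.1Q header sitting between Ethernet and IP.
    print(frame.summary())
```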

We’re a seasonal operation on site, so the majority of our network spends 11 months in storage, is plugged back together during the get-in and hastily dismantled at the end of the Fringe. The core of the network, however, is installed year-round. The venue routers themselves stay in Edinburgh - indeed we offer guest WiFi services to some sites - and we also have a significant presence in a pair of data centres around the UK where we host a number of servers providing the back end to our website, ticketing and phone systems.

The Mikrotik routers in each venue connect via VPN to core routers in our two data centres. Our network is based on a dual star topology, with every site connecting back via an encrypted network link across the Internet to each core. This means any venue has a resilient route to any other and we avoid reliance on a single data centre. Our inter-site links are primarily IPSec VPNs, with OSPF running over GRE tunnels. This is a new topology for 2019; previously we ran a mesh of IPSec links in which each venue had a link to every other, but this didn't scale well and required each venue to have a static public IP address. With the new topology we can adapt the VPN technology to suit the link: we use IPSec to our major sites and OpenVPN to minor sites where we need to traverse other people's firewalls or cope with dynamic IPs.
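
To give a sense of why the mesh stopped scaling, here's a rough back-of-envelope comparison in Python (the site counts are made up): a full mesh of n sites needs n(n-1)/2 tunnels, while the dual star needs just two per site, terminating on the core routers.

```python
# Back-of-envelope tunnel counts: old full mesh vs. the 2019 dual-star design.
# Purely illustrative -- the site counts below are arbitrary.

def mesh_links(sites: int) -> int:
    """Full mesh: every site tunnels to every other, n*(n-1)/2 links."""
    return sites * (sites - 1) // 2

def dual_star_links(sites: int) -> int:
    """Dual star: every site tunnels to each of the two core routers."""
    return 2 * sites

for n in (4, 8, 12):
    print(f"{n:2d} sites: mesh = {mesh_links(n):2d} tunnels, dual star = {dual_star_links(n):2d} tunnels")
```

Just as importantly, adding a site to the mesh meant reconfiguring every other venue's router; in the dual star only the two core routers need to know about the newcomer.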

Our venues need their Internet connections for a few primary reasons:

  1. For ticket sales updates. We can sell tickets without a working Internet connection, but information on sales from the Fringe and any changes to the on-sale details (e.g. ticket price updates) come via the systems in our data centres (the sketch after this list shows the general pattern).
  2. For our phone system.
  3. To allow our IT team to remotely support all our kit.
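
As a rough illustration of point 1, the sketch below shows the general pattern of selling from local data and pulling updates from the data centres whenever the link is up. The endpoint, fields and polling interval are all hypothetical; this is not our actual ticketing code.

```python
# Not real ticketing code -- just a sketch of the pattern implied above:
# sell from a local cache, and pull on-sale updates from the data centre
# whenever the venue's Internet link happens to be available.
import json
import time
import urllib.request

CORE_API = "https://example.invalid/api/onsale-updates"        # hypothetical endpoint
local_onsale = {"show-101": {"price": 10.0, "remaining": 120}}  # local cache

def pull_updates() -> None:
    """Fetch on-sale changes from the core; tolerate the link being down."""
    try:
        with urllib.request.urlopen(CORE_API, timeout=5) as resp:
            for show, details in json.load(resp).items():
                local_onsale[show] = details  # apply price/allocation changes
    except (OSError, ValueError):
        # No connection (or a bad response): keep selling from the last
        # known local data until the link comes back.
        pass

if __name__ == "__main__":
    while True:
        pull_updates()
        time.sleep(60)  # poll roughly once a minute; the interval is arbitrary
```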

To keep our staff in touch with the wider world (and, increasingly, each other - WhatsApp is heavily used by the on-site teams), we provide staff WiFi in our venues. This is either delivered from our own platform (we use UniFi access points) or broadcast as our own SSIDs via the host's existing access points. We run a pair of SSIDs in every venue, one for the staff and another for the IT support team (who, typically, like to be different!).

Our IT estate is managed by the Senior Production Team from across the country. During the Fringe we have IT specialists on site, but third-line (very in-depth) support for our software and systems comes from members of the team who, due to other commitments, can't be in Edinburgh for the whole festival. They're never far away though - they can access all our PCs remotely. In fact they often give the box office a shock, as they can grab control of the PC being used by whoever has the problem and move the mouse from afar!

Our IT team on site is rather nomadic and roves between different venues, chasing problems or, if nothing is going wrong at the time - it has been known to happen! - just catching up with the staff. The next attention-demanding issue is typically at a different venue from the one they're currently in, so to provide a quicker response they use the same remote access. Once, our on-site IT guy was seen walking along the Royal Mile, juggling a large vat of coffee from Starbucks while remotely connected to a box office PC from his iPad, helping a box office assistant with a ticket sales issue while the customer waited. We're nothing if not dedicated!

As is the case in every modern computer network, we can't keep an eye on everything at once. To help us out, our IT infrastructure is continually monitored by Zabbix, an IT monitoring tool, to check it is all working correctly. Zabbix keeps an eye on loads of important things for us and lets us know when something needs our attention. Perhaps the Internet connection to a venue has become unreliable, or the toner is running low in a particular laser printer; we've got alerts set up to warn us so we can take action before - hopefully - too many users notice.
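
For those who like to poke at APIs, the sketch below shows roughly how you could ask Zabbix for its current list of problems over its JSON-RPC interface. The server URL and token are placeholders, and authentication details differ between Zabbix versions, so treat it as illustrative only.

```python
# A hedged sketch of asking Zabbix which problems are currently active via its
# JSON-RPC API. URL and token are placeholders; auth varies by Zabbix version.
import requests

ZABBIX_URL = "https://zabbix.example.invalid/api_jsonrpc.php"  # hypothetical server
API_TOKEN = "REPLACE_ME"                                       # pre-created API token

def current_problems() -> list[dict]:
    """Return the unresolved problems Zabbix is currently reporting."""
    payload = {
        "jsonrpc": "2.0",
        "method": "problem.get",
        "params": {"output": ["eventid", "name", "severity", "clock"], "recent": False},
        "id": 1,
    }
    resp = requests.post(
        ZABBIX_URL,
        json=payload,
        headers={"Authorization": f"Bearer {API_TOKEN}"},  # newer Zabbix releases
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["result"]

if __name__ == "__main__":
    for problem in current_problems():
        # e.g. a flapping venue uplink or a low-toner warning on a printer
        print(problem["name"])
```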

It all amounts to what we reckon is a fairly well managed corporate network that wouldn't be out of place in a full-time company, let alone one that is assembled out of storage crates in late July and only runs for a month. It's certainly an interesting challenge, as there is frequently little time to waste when something isn't working: faults need to be resolved quickly or the operation will suffer. The nightmare scenario for us is being unable to sell tickets, and our focus on resilience stems from trying to minimise the likelihood of this.