This article was originally published on TechCrunch.

I recently had a scheduled video conference call with a Fortune 100 company.

Everything on my end was ready to go; my presentation was prepared and well-practiced. I was set to talk to 30 business leaders who were ready to learn more about how they could become more resilient to major outages.

Unfortunately, their side hadn’t set up the proper permissions in Zoom to add new people to a trusted domain, so I wasn’t able to share my slides. We scrambled to find a workaround at the last minute while the assembled VPs and CTOs sat around waiting. I ended up emailing my presentation to their coordinator, calling in from my mobile and verbally indicating to the coordinator when the next slide needed to be brought up. Needless to say, it wasted a lot of time and wasn’t the most effective way to present.

At the end of the meeting, I said pointedly that if there was one thing they should walk away with, it’s that they had a vital need to run an online fire drill with their engineering team as soon as possible. Because if a team is used to working together in an office — with access to tools and proper permissions in place — it can be quite a shock to find out in the middle of a major outage that they can’t respond quickly and adequately. Issues like these can turn a brief outage into one that lasts for hours.

Quick context about me: I carried a pager for a decade at Amazon and Netflix, and what I can tell you is that when either of these services went down, a lot of people were unhappy. There were many nights where I had to spring out of bed at 2 a.m., rub the sleep from my eyes and work with my team to quickly identify the problem. I can also tell you that working remotely makes the entire process more complicated if teams are not accustomed to it.

There are many articles about best practices aimed at a general audience, but engineering teams face specific challenges because they are the ones responsible for keeping online services up and running. And while leading tech companies already have sophisticated IT teams and operations in place, what about financial institutions, hospitals and other industries where IT is a tool but not a primary focus? It’s often the small things that make all the difference when working remotely; things that seem obvious in the moment, but may have been overlooked.

So here are some tips for managing incidents remotely:

Designate a call leader

There should be one person, the “call leader,” responsible for gathering critical updates and sharing them with key stakeholders during an outage. Having a single point of contact makes communication and collaboration less confusing, especially in a remote and distributed environment. The call leader is responsible for:

  • Providing status updates to the call on a regular basis
  • Ensuring people are not acting on their own
  • Making sure only one thing is tested at a time
  • Making judgment calls when team members aren’t sure which course of action to choose by collecting all available information and then issuing a decision
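Those regular status updates are easier to keep on cadence when they follow a fixed shape. Here is a minimal sketch in Python; the field names and format are my own illustration, not taken from any particular incident-management tool:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class StatusUpdate:
    """One periodic update from the call leader to stakeholders."""
    impact: str           # what users are currently experiencing
    current_action: str   # the single change being tested right now
    owner: str            # who is executing that change
    next_update_min: int  # minutes until the next scheduled update

    def format(self) -> str:
        ts = datetime.now(timezone.utc).strftime("%H:%M UTC")
        return (f"[{ts}] Impact: {self.impact} | "
                f"Testing: {self.current_action} (owner: {self.owner}) | "
                f"Next update in {self.next_update_min} min")

update = StatusUpdate(
    impact="checkout latency up 10x",
    current_action="rolling back latest deploy",
    owner="on-call engineer",
    next_update_min=15,
)
print(update.format())
```

A structure like this also reinforces the “one thing tested at a time” rule: there is only one `current_action` field to fill in.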

Get everyone the right hardware

If you’re an engineering manager, make sure each of your team members feels adequately set up, and let them expense improvements to their home office. Having office-quality internet at home is crucial when that becomes your primary workplace. Most engineering teams at sophisticated IT organizations will already provide work laptops — but for many companies this is a novel idea worth exploring. Providing a budget for the small things like webcams, microphones and extra monitors can improve communication, response time and the ability of team members to actively contribute to solving emergent problems. Make sure that company-provided hardware is also properly equipped with all needed software for team members to do their jobs effectively.

Also, I won’t name names… but there have been a handful of times in my career when the person responsible for a service left their two-factor authentication app/hardware somewhere inconvenient. So when that service went down, they were not able to get in and fix the problem quickly. This can add a lot of unnecessary time and frustration — so remember to keep your phones with authentication apps or your hardware keys (Gemalto, YubiKey, etc.) nearby at all times when you are on call!
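The reason the phone has to be nearby is that app-based 2FA codes are derived from a secret that lives only on that device. A rough sketch of the standard TOTP scheme (RFC 6238), assuming the common 6-digit code and 30-second window, shows why no other machine can produce the code:

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, unix_time: int, step: int = 30, digits: int = 6) -> str:
    """Derive a time-based one-time password (RFC 6238) from a shared secret."""
    counter = unix_time // step  # current 30-second time window
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: secret "12345678901234567890" at time 59 -> "287082"
print(totp(b"12345678901234567890", 59))
```

The server holds the same secret and computes the same value, so without the enrolled device (or its backup codes), the on-call engineer is locked out at the worst possible moment.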

Create an instant messaging channel

Having an easy and quick way to share graphs, logs, details, changes and so on is crucial to mitigating the length and scope of a major outage. Creating a unique channel in your instant messaging (IM) app (such as Slack, Discord or IRC) dedicated to the specific outage at hand accomplishes a few things. For one, it won’t add noise to other channels, where it would distract people not involved. It also provides a home for all key stakeholders, and a place to direct people who want to get involved. Importantly, it also serves as a timeline of what happened, which may be useful later during a retrospective.
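Dedicated incident channels work best with a predictable naming convention, so people can find the right room under pressure. Here is a small sketch; the `inc-` prefix and slug format are my own convention, not a standard:

```python
import re
from datetime import datetime, timezone
from typing import Optional

def incident_channel_name(summary: str, when: Optional[datetime] = None) -> str:
    """Build a unique, lowercase channel name like 'inc-20240501-db-writes-failing'."""
    when = when or datetime.now(timezone.utc)
    # Most IM tools restrict channel names to lowercase letters, digits and dashes.
    slug = re.sub(r"[^a-z0-9]+", "-", summary.lower()).strip("-")[:40]
    return f"inc-{when:%Y%m%d}-{slug}"

name = incident_channel_name(
    "DB writes failing!", datetime(2024, 5, 1, tzinfo=timezone.utc)
)
print(name)  # inc-20240501-db-writes-failing

# With Slack's official SDK, for example, the name would then be passed to:
#   WebClient(token=...).conversations_create(name=name)
```

Date-stamped names also make the retrospective easier: the channel itself is the timeline, and its name tells you which outage it belongs to.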

Follow conference call etiquette

This sounds simple, but can have a drastic impact on your ability to resolve an incident quickly: Be a good citizen. This applies to absolutely everyone. When there’s no clear agenda, when people are talking over one another, when there’s a ton of background noise, all of this distracts from the problem at hand. The call leader should run the conference call while the team is responding to an incident, and each person will have a chance to share their update. When not speaking, team members should stay on mute so that everyone doesn’t hear keyboards clacking away while notes are being taken.

Run an online fire drill

If you’ve never run an online fire drill — this is the time to do it. The idea is to dedicate time when everything is fine to simulate a failure. This is often done using Chaos Engineering. One person causes a simple failure (they are the safety net, watching the whole time and ready to roll things back if needed). The rest of the team gets alerted and paged, logs in, and has to find the failure. This method forces teams to do more than just pay lip service — if there are weaknesses in your team’s processes, you will find them. You can then adjust accordingly, build up muscle memory and be better prepared for when disaster actually strikes.
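The “safety net” role can literally be a piece of code: the failure is injected through a wrapper that is guaranteed to restore the real dependency no matter what happens during the drill. A minimal sketch, with a hypothetical stand-in service:

```python
import contextlib

@contextlib.contextmanager
def inject_failure(obj, attr, exc_type=ConnectionError):
    """Temporarily replace obj.attr so that every call raises exc_type.

    This is the drill's safety net: the finally block guarantees the real
    implementation is restored even if the drill itself goes wrong.
    """
    original = getattr(obj, attr)
    def broken(*args, **kwargs):
        raise exc_type("injected outage (fire drill)")
    setattr(obj, attr, broken)
    try:
        yield
    finally:
        setattr(obj, attr, original)

class PaymentsClient:  # hypothetical stand-in for a real dependency
    def charge(self, cents):
        return f"charged {cents}"

client = PaymentsClient()
with inject_failure(client, "charge"):
    try:
        client.charge(100)          # this is the failure the on-call team hunts
    except ConnectionError as err:
        print("drill alert:", err)
print(client.charge(100))           # restored once the drill ends
```

In a real drill the injection would target something with actual blast radius (a network route, a container, a dependency), but the shape is the same: one person holds the rollback, everyone else practices finding the fault.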

In short, if you’ve established the call leader, created the IM room, gone over conference bridge etiquette, put your runbooks online, have your 2FA needs handy and have all the right hardware and software… then run an online fire drill to test that when something unexpectedly fails, the team is ready to respond quickly.

Kolton Andrus