The unfortunate truth about cyber security incident response is that sometimes the attackers come out ahead. That was the case with a recent incident we responded to, during which it felt like everything that could go wrong did.
The past can be a powerful teacher, and we invite you to use this case study to see what went wrong and how you can learn from these mistakes to improve your own security. We will walk you through the timeline of events, the missteps along the way, and how the impact could have been reduced.
Discovery
At 2:30 in the morning on a Sunday, the client's managed service provider (MSP) received alerts and quickly realized that something was wrong. The SQL servers were alerting on resource utilization, network connectivity had slowed to a crawl, and disk activity on the SAN was spiking at alarming rates.
After a brief investigation, they discovered an active ransomware attack. Acting on instinct, the MSP began shutting everything down, taking systems and networking hardware offline in an attempt to disrupt the attack.
But it was too late. The damage had already been done.
Impact
The MSP frantically accessed the client's backup server to develop a recovery plan, only to realize the backups had been destroyed. The same was true of the volume shadow copies and every other form of backup. The attackers had found the client's backups, methodically gained access, and destroyed them before the ransom event began.
If a network path to your backups exists, attackers will find it and gain access. The only way to prevent this is to air gap your backups. Backing up to tape, offline disk, or an air-gapped network would have been a saving grace here, but it was too late. As they say, hindsight is 20/20.
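To make that advice concrete, here is a minimal sketch of one way to keep an offline copy: duplicate the newest backup archive to removable media that is only attached for the copy, verify the copy with a checksum, then unmount and physically disconnect the media. The paths, file naming, and the use of `.tar.gz` archives are assumptions for illustration only; adapt them to your own backup tooling and platform.

```python
"""
Minimal sketch of an offline backup copy (hypothetical paths/naming):
copy the newest backup archive to removable media and verify it with SHA-256.
"""

import hashlib
import shutil
from pathlib import Path

BACKUP_DIR = Path("/var/backups/nightly")     # where the regular backup job writes (assumed)
OFFLINE_TARGET = Path("/mnt/offline-backup")  # removable disk mounted only for this copy (assumed)


def sha256(path: Path, chunk_size: int = 1024 * 1024) -> str:
    """Hash a file in chunks so large backup archives don't exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


def copy_latest_backup() -> Path:
    """Copy the newest backup archive to the offline target and verify the copy."""
    if not OFFLINE_TARGET.is_dir():
        raise FileNotFoundError(f"Offline target {OFFLINE_TARGET} is not mounted")

    archives = sorted(BACKUP_DIR.glob("*.tar.gz"), key=lambda p: p.stat().st_mtime)
    if not archives:
        raise FileNotFoundError(f"No backup archives found in {BACKUP_DIR}")

    latest = archives[-1]
    destination = OFFLINE_TARGET / latest.name
    shutil.copy2(latest, destination)

    # Verify before trusting the copy; a corrupt offline backup is no better than none.
    if sha256(latest) != sha256(destination):
        raise IOError(f"Checksum mismatch copying {latest.name}; offline copy is unreliable")
    return destination


if __name__ == "__main__":
    copied = copy_latest_backup()
    print(f"Offline copy written and verified: {copied}")
    # After this script finishes, unmount and physically disconnect the media
    # so no network path to the offline copy remains.
```

The design point is that the offline target is reachable only for the duration of the copy. Once the media is detached, ransomware that compromises the network has nothing left to reach.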
All critical servers were encrypted, and all backups were destroyed. What could they do?
Seeing no other option, the client engaged a firm to assist in ransom negotiation to try to get their data back. The client quickly learned that the ransom demand was $750,000 and their cyber insurance deductible was $500,000.
Ouch.
At the time, the business was hard down, meaning all operations had stopped. Each day without production equated to over $100,000 in losses.
It felt like things couldn’t get any worse. And then they did.
Things Got Complicated
While the client was working diligently to negotiate with the attackers, a few things came to light.
First, the ransomware deployment was botched. This variant was meant to encrypt an entire system so that a single decryption key would unlock every file. However, the attacker had mistakenly generated a unique encryption key for each file on the system. Even if the client negotiated for the keys, restoring the millions of files would take weeks rather than days.
Next, as negotiators attempted to reach a lower price, the attackers responded by raising the ransom demand, then disappearing for days. The attackers were well aware that the business was not operational. These negotiation tactics were intentionally forceful, meant to ensure that the next time they engaged, the victim would come ready and prepared to pay.
After nearly one week of downtime and continued negotiations, our client had finally reached an agreement with the attackers and was prepared to pay the ransom.
Then we hit another snag. During our investigation, we learned that the ransomware adversary was on the OFAC sanctions list of known terrorist organizations. The government stepped in and prohibited the client from paying the ransom.
Back to square one: no data, no backups, and no possibility of getting the decryption keys.
Now What?
At this point, the client determined that a full rebuild of the network, restoring any records from hard copies, was their only path to recovery. It hurt, but it was the reality, and the client saw it as an opportunity to harden their environment as they rebuilt.
Yet another snag. The insurer informed the client that their coverage would only support restoring the infrastructure to its previous state. Any improvements would not be covered.
Lessons Learned
The repercussions of this attack, both the financial impact and the reputational damage, were likely preventable. During our investigation, we determined the root cause of the incident was an insecure VPN portal protected only by single-factor authentication.
In this very unfortunate series of events, one thing is apparent: all of this could have been prevented with a solid cyber security incident response program and some proactive testing of the network.
A good cyber security incident response partner would have reviewed the backup posture and helped the client design a better program. A good security partner would have discovered the single-factor VPN portal and worked with the client to secure remote access. But again, hindsight is 20/20.
Moral of the story? Air gap your backups. Once ransomed, don't trust that you can get your data back, even if you choose to negotiate. Develop your cyber security incident response plan before an incident, and test that plan. Be proactive by partnering with a vendor that will help you identify and close the gaps before an attack happens. And finally, make sure your insurance plan makes sense for your business model, including the deductible.
Need help creating your own incident response plan? Check out our free IR plan template or get in touch with our IR team today.
Thanks for the insight, Oscar!