What the first 72 hours actually look like — and what we learned.
It was just before 9:30 AM on a Sunday when the call came in. Ransomware had executed overnight. Systems were down. Locations couldn't open Monday. And the clock was already running.
This wasn't a drill. This was a real incident at a real Midwest organization, a multi-site operation serving thousands of people, and Springthrough was engaged to lead the response and recovery.
Here's what the response actually looked like, and what every organization should take away from it.
The First Hour Is Everything
The organization had an incident response plan, and having it made an immediate difference. Before any external incident response firm was on the phone, there were decisions to make with imperfect information — and the plan gave the team a clear starting point.
Within the first hour:
- An emergency response meeting was convened with six people
- Firewall traffic was restricted to contain lateral movement
- Compromised accounts were disabled
- Affected systems were isolated from the production network
- The decision was made to close locations for Monday
This is what we call immediate containment — the actions you take in the first window using whatever infrastructure you already have, before a specialized forensics or IR firm takes the lead. It doesn't require fancy tools. It requires knowing your environment and moving decisively.
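To make one of those steps concrete with a hedged example (an assumption for illustration, not a detail from this incident): if the compromised accounts live in Microsoft Entra ID, disabling them and revoking their active sessions can be scripted against the Microsoft Graph API in a few lines. The account names below are placeholders, and the sketch assumes an access token with sufficient privileges has already been obtained.

```python
# containment_sketch.py - disable suspected-compromised accounts and revoke sessions.
# Sketch only: assumes Microsoft Entra ID / Microsoft Graph; token and accounts are placeholders.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "<access token obtained out of band>"  # placeholder
HEADERS = {"Authorization": f"Bearer {TOKEN}", "Content-Type": "application/json"}

COMPROMISED = ["user1@example.org", "svc-backup@example.org"]  # placeholder accounts

for upn in COMPROMISED:
    # Block further sign-ins for the account.
    r = requests.patch(f"{GRAPH}/users/{upn}", headers=HEADERS,
                       json={"accountEnabled": False}, timeout=30)
    r.raise_for_status()
    # Invalidate refresh tokens and active sessions the attacker may hold.
    r = requests.post(f"{GRAPH}/users/{upn}/revokeSignInSessions",
                      headers=HEADERS, timeout=30)
    r.raise_for_status()
    print(f"Disabled and revoked sessions for {upn}")
```

The specific platform and API matter less than the principle: decide in advance which accounts and systems you're willing to disable or isolate on short notice, and know exactly how you'd do it.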
One critical early priority: determining the minimum requirements to get locations open in a limited fashion. The team didn't wait for full recovery to start that conversation; from day one it actively worked to identify what each location needed to serve clients safely. That focus is what allowed most locations to be open again by Tuesday, with only one business day of full closure. Full recovery, however, took weeks.
Cyber Insurance Changes the Equation — If You Use It Right
One of the most important calls that Sunday wasn't to a vendor. It was to the cyber insurance carrier.
Engaging the insurer within hours of discovery is not just good practice — it's often required if you want coverage for containment and recovery costs. Many policies have specific notification windows, and missing them can jeopardize reimbursement for the exact work you're doing in the first days of response. Get the insurer on the phone early. Keep them informed. Document everything.
Early engagement also opened the door to the insurer's designated forensics firm, which took the lead on the formal investigation and containment. And this introduced a constraint that every organization needs to understand going in: until the forensics firm signs off, you cannot take recovery actions that would compromise their investigation. That means systems that need to be rebuilt may have to sit while forensics completes its work. It's frustrating, but it's necessary, and it's another reason having an experienced partner who understands this process is so valuable.
A few things we'd tell any organization today:
- Know your policy before you need it. Understand your notification windows, what containment and recovery costs are covered, and what documentation the carrier expects from you in the first 24 hours.
- Call early — your coverage may depend on it. Insurance carriers want to be engaged immediately. Early notification protects your claim.
- Document everything from minute one. The claims process will want timestamped evidence of every action. Teams messages, emails, calendar blocks: it all matters. Even a simple running action log, like the sketch below, earns its keep when the claim is filed.
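As a minimal illustration (not something prescribed by any carrier, and the file name and columns are assumptions): a few lines of Python can maintain an append-only, UTC-timestamped log of response actions that's easy to hand over with a claim.

```python
# incident_log.py - minimal append-only action log for incident response.
# Sketch only: the file name and column layout are illustrative assumptions.
import csv
from datetime import datetime, timezone
from pathlib import Path

LOG_FILE = Path("incident_actions.csv")  # hypothetical log location

def log_action(actor: str, action: str, detail: str = "") -> None:
    """Append one UTC-timestamped record of a response action."""
    is_new = not LOG_FILE.exists()
    with LOG_FILE.open("a", newline="") as f:
        writer = csv.writer(f)
        if is_new:
            writer.writerow(["timestamp_utc", "actor", "action", "detail"])
        writer.writerow([datetime.now(timezone.utc).isoformat(), actor, action, detail])

if __name__ == "__main__":
    log_action("jdoe", "disabled account", "suspected compromised service account")
    log_action("jdoe", "isolated host", "file server removed from production network")
```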
What Weeks Two and Three Actually Look Like
The movies show the dramatic breach. What they don't show is the weeks of unglamorous recovery work that follows.
After the immediate crisis and forensics clearance, the real grind begins:
- Endpoint-by-endpoint scanning while systems are offline and disconnected from the network
- DNS and WAF reconfiguration to harden domain infrastructure before bringing services back online
- Server rebuilds and application restores — database by database, folder by folder
- Firewall rule reviews across every site and remote location
- Security tooling deployment — getting endpoint detection and response coverage to every machine
- Insurance documentation — the administrative burden of recovery is real and time-consuming
Springthrough's documented response and recovery time exceeded 200 hours, and that doesn't include client staff time or the hours invested by the forensics and legal teams. Real-world incidents are messier and longer than the tidy report written afterward suggests.
Five Lessons That Apply to Every Organization
1. Have a plan before the attack — and make sure your IT partner knows it.
This organization had documented response procedures. That mattered. It focused those opening minutes and gave the team a shared framework for decision-making when the pressure was highest. If you don't have a plan, build one now. If you have one, make sure it's current and that your IT partner has been part of reviewing it.
2. Immediate containment saves you money and time.
Every minute a threat actor has access is another minute of potential lateral movement and data exfiltration. The steps you can take before forensics arrives — disabling accounts, isolating systems, restricting traffic — are not technical heroics. They're operational discipline. They require prior planning and knowing what to do.
3. Understand the forensics constraint.
Once your forensics firm is engaged, they control the pace of recovery. You will want to rebuild systems. You may not be able to yet. This is not a failure of the response — it's how the process is supposed to work. Building that expectation into your planning and communication with leadership matters.
4. Recovery is slower than you expect — plan for it.
Locations may reopen quickly in a limited fashion, but full operational recovery takes weeks. In this case, one business day of full closure was followed by weeks of rebuilds, restores, and security hardening. Set realistic expectations with leadership, don't declare victory too early, and make sure you have the staffing to sustain recovery work alongside normal operations.
5. Plan for key staff being unavailable — and have a vendor who can scale.
Multi-week recoveries don't pause for life. Key internal staff may be out for medical leave, family emergencies, or simply because they can't sustain crisis-mode hours indefinitely. In this incident, that's exactly what happened — and having a vendor who already knew the environment and could scale resources up without a ramp-up period was decisive. Think about what your organization does when your single IT person is out in the middle of a crisis. The answer can't be "we'll figure it out then."
This Is What We Do
Springthrough's role in this incident was what it always is: show up, take ownership, and do the work. We were on-site, in the calls, working with the insurer, rebuilding servers, and documenting the path from crisis to stability — more than 200 hours of it.
That's not something you can outsource to someone who shows up for the first time when the lights go out.
Don't wait for a Sunday morning to find out who answers the phone.
If your organization has an incident response plan that hasn't been opened in two years, or doesn't have one at all, let's fix that before you need it.
Pressure-test your incident response plan.

Springthrough provides Managed IT Services and vCIO services to organizations across the Midwest. Our team is based in Grand Rapids, Michigan.
