
Change Healthcare: the attack that exposed the data of 190 million patients and cost $22 million in ransom

The largest health data breach in American history is a wake-up call for any organization that depends on digital systems to operate
May 6, 2026

When the system stops, life stops too

In February 2024, the ransomware group ALPHV/BlackCat attacked Change Healthcare, a subsidiary of UnitedHealth Group and the largest health payment processor in the United States. The result was devastating: critical systems were unavailable for weeks, preventing hospitals, pharmacies, and clinics from processing prescriptions, authorizing procedures, and receiving payments from insurers. According to a Reuters report from January 2025, the incident exposed data of 190 million patients, becoming the largest health data breach in American history.

UnitedHealth Group paid approximately $22 million in ransom to the criminal group, an amount publicly confirmed by the company itself. Even so, the data was exposed and total operational losses approached $2.2 billion, according to a report published by BleepingComputer. Paying the ransom did not guarantee the immediate recovery of the systems, did not eliminate the already exfiltrated data, and did not erase the impact on patients, healthcare providers, and business partners connected to the platform.

The case brutally illustrates a reality that many managers still underestimate: the interruption of digital systems is not just an IT problem. It is an operational and financial crisis and, in sectors like healthcare, a direct threat to the continuity of essential services. Organizations of all sizes and sectors are exposed. The question is not whether an attack can happen to your company, but whether your structure is prepared to withstand, detect, and recover when it does.

The purpose of this article is not to speculate on what occurred internally at Change Healthcare. The technical details of the incident are not fully public. What we can, and should, do is use this case as a starting point for an honest analysis of the vectors that attacks of this nature tend to exploit and the layers of protection that any organization can implement.


The vectors that attacks like this tend to exploit

Although the internal details of the incident are not public, ransomware attacks conducted by organized groups like ALPHV/BlackCat typically start with compromised or weak credentials. Initial access to corporate environments often occurs via credentials leaked in other incidents, purchased on underground forums, or obtained through brute force on remote access services exposed to the internet. A password reused by an employee, VPN access without multi-factor authentication, or a service account with excessive permissions is enough for an attacker to establish an initial presence within the environment. From there, lateral movement, privilege escalation, and ransomware deployment are relatively predictable phases for groups with resources and experience.
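
To make this concrete, here is a minimal sketch of what detecting brute force against an exposed remote access service can look like: flag any source IP that accumulates too many failed logins in a short window. The log shape, thresholds, and addresses are illustrative, not taken from the incident:

    from collections import defaultdict
    from datetime import datetime, timedelta

    THRESHOLD = 10                 # failed attempts...
    WINDOW = timedelta(minutes=5)  # ...within this window

    def find_bruteforce(events):
        """events: iterable of (timestamp, source_ip, success) login records."""
        recent = defaultdict(list)
        flagged = set()
        for ts, ip, success in sorted(events):
            if success:
                continue
            window = [t for t in recent[ip] if ts - t <= WINDOW]
            window.append(ts)
            recent[ip] = window
            if len(window) >= THRESHOLD:
                flagged.add(ip)
        return flagged

    if __name__ == "__main__":
        now = datetime.now()
        demo = [(now + timedelta(seconds=i), "203.0.113.7", False) for i in range(12)]
        print(find_bruteforce(demo))   # {'203.0.113.7'}

In practice this logic lives in a SIEM or in the access service itself, feeding automatic lockout or MFA step-up rules rather than a print statement.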

Another recurring vector in large-scale attacks is the absence of proactive monitoring. Industry research, such as Mandiant's M-Trends report, indicates that the average dwell time of an attacker within a corporate environment before detection can exceed 16 days. During this period, criminals map the network, identify critical systems, disable security tools, and strategically position ransomware to maximize impact. Without intelligent alerts and continuous behavior analysis, this movement goes unnoticed until the damage is already done.
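
As an illustration of what "continuous behavior analysis" means in practice, the sketch below flags a workstation that suddenly contacts far more internal hosts than its historical baseline, one common sign of network mapping before lateral movement. Host names, baselines, and the multiplier are hypothetical:

    from collections import defaultdict

    def lateral_movement_alerts(connections, baselines, factor=5):
        """connections: (source_host, dest_host) pairs observed today.
        baselines: dict of source_host -> typical number of distinct peers."""
        peers = defaultdict(set)
        for src, dst in connections:
            peers[src].add(dst)
        return {src: len(dsts) for src, dsts in peers.items()
                if len(dsts) > factor * baselines.get(src, 1)}

    # A workstation that normally talks to 3 hosts suddenly reaches 40:
    print(lateral_movement_alerts(
        [("ws-042", f"10.0.0.{i}") for i in range(1, 41)],
        baselines={"ws-042": 3},
    ))   # -> {'ws-042': 40}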

The third critical vector in incidents with prolonged impact is the absence of effective network segmentation. When all systems in an organization are interconnected without well-defined access controls between segments, a single compromised entry point can give the attacker free access to the entire infrastructure. Backup systems accessible over the same network as production servers, for example, are priority targets, as criminals know that eliminating backups dramatically increases the pressure to pay the ransom.
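
One way to make segmentation enforceable is to treat it as an explicit allow-list and audit real traffic against it. The sketch below is a simplified illustration of that idea; segment names and rules are hypothetical:

    # Explicit allow-list of traffic between network segments; everything
    # not listed is a violation.
    ALLOWED = {
        ("workstations", "app-servers"),
        ("app-servers", "databases"),
        # Deliberately absent: any route into "backup" -- the backup system
        # pulls data itself and is never reachable from production.
    }

    def violations(flows, segment_of):
        """flows: (src_host, dst_host) pairs; segment_of: host -> segment."""
        return [(src, dst) for src, dst in flows
                if (segment_of[src], segment_of[dst]) not in ALLOWED]

    seg = {"ws-01": "workstations", "db-01": "databases", "bkp-01": "backup"}
    print(violations([("ws-01", "db-01"), ("ws-01", "bkp-01")], seg))
    # -> both flows violate policy and should be blocked (and alerted on)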


What can be done to protect your infrastructure

The first and most urgent layer of protection is to ensure that your organization has endpoint detection and response (EDR) coverage on all devices. EDR solutions monitor anomalous behavior in real time, not just signatures of known threats. This means that even new malware, never seen before, can be detected by the behavior pattern it exhibits: attempts to escalate privileges, lateral movement, mass access to files, or communication with suspicious external servers. The difference between traditional antivirus and an EDR solution is analogous to the difference between a security camera that records and a monitoring center that reacts.
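
As a simplified illustration of one such behavioral rule, the sketch below flags any process that rewrites an unusually large number of files in a short burst, the typical footprint of an encryption run. The event shape and thresholds are hypothetical, not those of any specific EDR product:

    from collections import defaultdict
    from datetime import datetime, timedelta

    def mass_write_alerts(file_events, threshold=200, window=timedelta(seconds=30)):
        """file_events: (timestamp, pid, path) write/rename events."""
        recent = defaultdict(list)
        alerts = set()
        for ts, pid, _path in sorted(file_events):
            bucket = [t for t in recent[pid] if ts - t <= window]
            bucket.append(ts)
            recent[pid] = bucket
            if len(bucket) >= threshold:
                alerts.add(pid)   # candidate for automatic process isolation
        return alerts

    now = datetime.now()
    burst = [(now + timedelta(milliseconds=i), 4242, f"/docs/f{i}.xlsx") for i in range(250)]
    print(mass_write_alerts(burst))   # {4242}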

The second non-negotiable layer is an isolated, encrypted, and regularly tested backup. A backup that has never been tested is not a backup; it is hope. Backups connected to the same network as production systems are often the first to be destroyed by modern ransomware. The proper architecture calls for immutable copies stored in completely isolated environments, with documented periodic restoration tests and a clearly defined RTO (Recovery Time Objective). Knowing exactly how long it takes for your operation to come back online after a disaster is information that every manager should have at their fingertips.
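
As one concrete, assumed way to implement immutability, the sketch below uses Amazon S3 Object Lock in compliance mode via the boto3 library; any WORM-capable storage achieves the same effect. Bucket and key names are illustrative, and the bucket must have been created with Object Lock enabled:

    from datetime import datetime, timedelta, timezone
    import boto3

    s3 = boto3.client("s3")

    def store_immutable(bucket, key, data, retain_days=30):
        # COMPLIANCE mode: nobody, including administrators, can delete or
        # overwrite this object version until the retention date passes.
        s3.put_object(
            Bucket=bucket,
            Key=key,
            Body=data,
            ObjectLockMode="COMPLIANCE",
            ObjectLockRetainUntilDate=datetime.now(timezone.utc)
            + timedelta(days=retain_days),
        )

    # store_immutable("example-backup-vault", "db/2024-02-21.dump.gz", dump_bytes)

An attacker who compromises the backup credentials can still read these copies, but cannot encrypt or delete them, which removes the main lever of pressure to pay.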

The third layer is behavioral and cultural: continuous user training combined with multi-factor authentication (MFA) for all critical access. The human factor remains one of the main entry points in security incidents. Regular phishing simulations, awareness programs, and clear credential usage policies significantly reduce the attack surface. Combined with MFA, these measures create a barrier that makes credential compromise much harder to exploit, even when an employee's password is captured.
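
To show why MFA raises the bar so much, here is a minimal sketch of time-based one-time passwords (TOTP), using the pyotp library (pip install pyotp). The secret is generated on the fly for demonstration; in production it is provisioned once per user and stored encrypted:

    import pyotp

    secret = pyotp.random_base32()   # shared once with the user's authenticator app
    totp = pyotp.TOTP(secret)

    code = totp.now()                # the 6-digit code the app displays
    print("Current code:", code)
    print("Valid?", totp.verify(code))   # server-side check -> True

    # A stolen password alone is useless: the code rotates every 30 seconds,
    # so the attacker also needs the user's enrolled device.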


Questions that every decision-maker should ask themselves now

1. Would my backups really work in a disaster like this? How long would it take for my operation to be back online?

2. Does my team have the right tools to identify and block an attack like this before it becomes a disaster? How am I investing in preparing my technical team?

3. How long would my company survive without access to systems and files?

1. Would my backups really work in a disaster like this? How long would it take for my operation to be back online?

Most organizations believe they have functional backups until the moment they need to use them. Having files copied to some server is not the same as having an operational continuity strategy. A robust managed backup policy includes immutable copies in multiple destinations, including environments completely isolated from the production network, end-to-end encryption, and, most importantly, restoration tests performed at a documented frequency. The RTO, the time required to restore critical systems, and the RPO, the maximum acceptable window of data loss, need to be known and validated goals, not optimistic estimates made in the heat of a crisis.
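
A restore drill is what turns RTO and RPO from estimates into measured numbers. The sketch below is a skeleton for such a drill; restore_backup and verify_integrity are placeholders for your own tooling, and the targets are illustrative:

    import time
    from datetime import datetime, timedelta, timezone

    RTO_TARGET_HOURS = 4   # illustrative targets, agreed with the business
    RPO_TARGET_HOURS = 1

    def restore_drill(backup_taken_at, restore_backup, verify_integrity):
        started = time.monotonic()
        restore_backup()                      # full restore into an isolated test environment
        assert verify_integrity(), "restored data failed integrity checks"
        rto = (time.monotonic() - started) / 3600
        rpo = (datetime.now(timezone.utc) - backup_taken_at).total_seconds() / 3600
        print(f"Measured RTO: {rto:.2f} h (target {RTO_TARGET_HOURS} h)")
        print(f"Worst-case RPO: {rpo:.2f} h (target {RPO_TARGET_HOURS} h)")
        return rto <= RTO_TARGET_HOURS and rpo <= RPO_TARGET_HOURS

    if __name__ == "__main__":
        ok = restore_drill(
            backup_taken_at=datetime.now(timezone.utc) - timedelta(minutes=30),
            restore_backup=lambda: time.sleep(1),   # stand-in for the real restore
            verify_integrity=lambda: True,          # stand-in for checksum/row-count checks
        )
        print("Drill passed:", ok)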

A managed IT provider with this capability not only performs backups: it continuously validates whether restoration would work, monitors the integrity of the copies, and ensures that isolated environments are up to date. The right question is not "do we have a backup?", but "how many hours would it take for our operation to be back online if all our servers were encrypted right now?".

2. Does my team have the right tools to identify and block an attack before it causes the disaster?

Sophisticated attacks rarely happen in minutes. They unfold over days or weeks inside the environment, in slow and calculated movements designed to avoid detection. An internal IT team without tools for 24/7 monitoring, EDR, and event correlation is unlikely to detect this movement before the ransomware is triggered. Not because the team is incompetent, but because the right tools and the capacity for continuous analysis require investment and specialization beyond the scope of most internal teams.

Investing in the preparation of the technical team means ensuring access to advanced detection platforms, clear alert response processes, and ideally, support from a SOC (Security Operations Center) that operates outside of business hours. An attack initiated in the early hours of Friday, when the internal team is unavailable, can turn into a complete disaster by Monday morning. Continuous management of patches and vulnerabilities is also part of this preparation: 60% of breaches involve vulnerabilities for which a fix was already available at the time of the attack, according to data from the Ponemon Institute.
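
The patch-gap idea can itself be automated: compare what is installed against the minimum fixed versions published in advisories. In the sketch below the advisory data is hypothetical; in practice it comes from a vulnerability feed or scanner, and the same principle applies to operating systems and appliances, not just Python packages:

    from importlib.metadata import distributions

    MIN_FIXED = {                    # package -> first fixed version (hypothetical)
        "requests": "2.31.0",
        "cryptography": "42.0.0",
    }

    def parse(version):
        return tuple(int(p) for p in version.split(".")[:3] if p.isdigit())

    for dist in distributions():
        name = (dist.metadata["Name"] or "").lower()
        if name in MIN_FIXED and parse(dist.version) < parse(MIN_FIXED[name]):
            print(f"PATCH GAP: {name} {dist.version} < {MIN_FIXED[name]}")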

3. How long would my company survive without access to systems and files?

This is perhaps the most honest question a manager can ask themselves. In the case of Change Healthcare, the disruption lasted weeks and affected not only the targeted organization but an entire chain of partners and customers who depended on its systems. Does your company have a documented and tested incident response plan? Would your employees know what to do in the first hours after detecting an attack? Is there a clear procedure for isolating compromised systems without paralyzing the entire operation?

An effective incident response plan defines roles, responsibilities, communication flows, and technical procedures before a crisis occurs. It is tested through simulations (tabletop exercises) and updated regularly. Without it, every hour of downtime costs not only lost revenue but also decisions made under pressure, without criteria or coordination. The operational resilience of an organization is built long before the attack.


If your company does not yet have an integrated, layered protection strategy, consider conducting a Strategic IT Assessment, with no obligation, to identify vulnerabilities before they become headlines.
