Attackers Don’t Break In.
They Log In.  

Over 74% of breaches involve post-authentication access. - Verizon DBIR

Post-Authentication Data Security (PADS) by FenixPyre

What Makes PADS Unavoidable Today

Traditional security protects the "room," but the "jewelry" is being carried out the front door by people with keys.
FenixPyre doesn't replace the door; it ensures the valuables stay protected even after it's opened.

Identity is Proven to Fail

Phishing, MFA fatigue, and token replay make credential compromise a matter of when, not if. Access is no longer a proxy for trust.

Data Outpaces Controls

Files move across SaaS, clouds, and unmanaged devices. Once access is granted, data slips beyond environment-based controls and becomes implicitly trusted.

Objective: Monetization

Attackers want data that is easy to access and monetize. They want readable, portable files they can exfiltrate, ransom, and sell. Security must follow the objective.

78% Increase

THE VOLUME GAP

U.S. data compromises jumped 78% in a single year (2023), hitting an all-time high despite record security investments.

74%

THE HUMAN ELEMENT

Of breaches involve stolen credentials, phishing, or simple human error. - Verizon DBIR

$4.88M

AVERAGE BREACH COST

The 2024 global average cost of a data breach, a record high for the industry.

What is PADS by FenixPyre

FenixPyre extends identity and access-based security by applying cryptographic protection directly to the DATA itself to ensure it remains secure even after access is granted.

Persistent, Data-Centric Encryption

Encryption is applied directly to the data itself - FIPS 140-2 validated, AES-256 protection that persists wherever files go.

Context-Aware Access Control

Access is enforced dynamically based on identity, role, location, and device - reducing exposure even after access is granted.

Application-Agnostic Protection

Any file. Any application. From Office documents to CAD and engineering tools, data stays protected without changing how users work.

Overlay, Not Replacement

Deploys on top of existing permission systems like NTFS and cloud IAM - no parallel access models to manage.

Seamless Access Everywhere

Encrypted data works transparently across local devices, network shares, and cloud platforms - no disruption, no retraining.

Continuous Visibility & Enforcement

Every file access is logged and streamed to your SIEM for real-time monitoring, analytics, and insider risk detection.
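The "persists wherever files go" claim above can be made concrete. FenixPyre's actual file format and key service are proprietary and not described here; the sketch below is a generic illustration of data-centric encryption, with a toy SHA-256 keystream standing in for FIPS-validated AES-256 and every name (`seal`, `open_sealed`, the policy fields) invented for the example.

```python
import hashlib
import json

def _keystream_xor(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """Toy SHA-256 counter-mode stream cipher. Stands in for AES-256-GCM;
    a real deployment would use a FIPS-validated AES implementation."""
    out = bytearray()
    for offset in range(0, len(data), 32):
        pad = hashlib.sha256(key + nonce + offset.to_bytes(8, "big")).digest()
        out.extend(c ^ p for c, p in zip(data[offset:offset + 32], pad))
    return bytes(out)

def seal(plaintext: bytes, key: bytes, policy: dict) -> dict:
    """Produce a self-describing envelope: the policy travels with the
    ciphertext, so it is enforceable wherever the file is copied."""
    nonce = hashlib.sha256(json.dumps(policy, sort_keys=True).encode()).digest()[:12]
    return {
        "policy": policy,  # e.g. allowed users and devices
        "nonce": nonce.hex(),
        "ciphertext": _keystream_xor(key, nonce, plaintext).hex(),
    }

def open_sealed(envelope: dict, key: bytes, context: dict) -> bytes:
    """Release plaintext only if the caller's context satisfies the
    policy embedded in the file itself."""
    policy = envelope["policy"]
    if context.get("user") not in policy["users"]:
        raise PermissionError("user not authorized for this file")
    if context.get("device") not in policy["devices"]:
        raise PermissionError("unmanaged device: file stays ciphertext")
    nonce = bytes.fromhex(envelope["nonce"])
    return _keystream_xor(key, nonce, bytes.fromhex(envelope["ciphertext"]))
```

The point of the shape is that an exfiltrated copy of the envelope carries only the policy and ciphertext; unless the key holder agrees that the caller's context satisfies the embedded policy, the file never becomes plaintext.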

CORE PHILOSOPHY

"PADS keeps data protected whenever and wherever it’s used, regardless of how access was obtained."

INCIDENT ANALYSIS LOG

VULNERABILITY: IMPLICIT TRUST IN AUTHENTICATED SESSIONS

DEFENSE OUTCOME

FenixPyre Neutralization: 100%

TARGET ORGANIZATION
BREACH METHOD
WHAT FAILED
THE PADS DIFFERENCE

Nike

Corporate network compromise

Authentication controls worked
Systems encrypted data at rest
No mass malware required for access
Activity resembled legitimate file access
Files decrypted when accessed/exported

Exported documents remain policy-bound encrypted artifacts.
Exfiltrated design files become unusable outside approved environments.
IP theft attempts produce encrypted, non-exploitable data.

Uber

MFA Fatigue

MFA Bombing / Social Engineering

MFA was satisfied
Zero Trust trusted the session
Endpoint tools saw no malware
Data was vulnerable in the session

Sensitive internal files stay encrypted.
Access evaluated at the file level.
Access ≠ exfiltration.

Conduent

Unauthorized system access

Authentication and internal access controls functioned
Data encrypted at rest in storage systems
Authorized sessions decrypted sensitive files automatically
Bulk document exports not prevented

Sensitive files remain encrypted and policy-bound after access.
Exfiltrated datasets remain unreadable outside approved environments.
Data remains encrypted and unusable for extortion or resale.

Waymo

Legitimate Employee Access

Insider Threat

Authentication succeeded
Employee had legitimate access
No malware or exploit
Encryption at rest worked
Files readable and exportable once accessed

Exported design files remain policy-bound encrypted artifacts.
Copying IP to personal devices produces unusable encrypted files.
Insider access ≠ IP exfiltration.

MOVEit Transfer

Zero-day exploitation

Perimeter security and IAM worked
Files encrypted at rest on server
Files decrypted during transfer
Large file exports were treated as normal operations

Exported files remain policy-bound and encrypted even after leaving the platform.
Mass file exfiltration produces unusable encrypted artifacts.
File transfer systems may be operationally compromised, but data remains protected.

These companies did everything right by today's standards, until access was granted. Security trusted the session. Data became usable. PADS by FenixPyre enforces protection at the data layer, so authentication alone is never enough.

The Business Impact of PADS

PADS fundamentally changes what a breach means, technically and financially.

Security spending aims to make breaches less likely. PADS makes breaches less meaningful. That's the difference between managing risk and neutralizing its worst outcome.

Exfiltrated files remain encrypted, unreadable, worthless.

Network compromise no longer becomes data loss.

No regulatory trigger. No extortion leverage. No disclosure obligation.

The breach happened - but nothing was exposed.

One control. Five risk classes neutralized.

Credential theft. Insider misuse. Third-party exposure. SaaS abuse. Supply chain compromise.

Demonstrate provable data containment to insurers. Better underwriting terms. Fewer exclusions. Less claim friction.

Integrates above your existing stack - IAM, Zero Trust, EDR, DLP. No rip-and-replace. No workflow disruption. No write-offs.

Prove persistent data protection across CMMC, HIPAA, GLBA, ISO, and NIST. No workflow redesign. No reclassification projects. Audit-ready by default.

Cybersecurity spending keeps rising. So do breach losses.

PADS changes that equation - by protecting the asset attackers actually want, not just the perimeter around it.

Zero Trust succeeded. It just stopped one step too early.

Before your next Zero Trust investment - read what it still can't protect.

Your DLP is working exactly as designed.

That's the problem - and why more DLP spend won't fix it.

Plug In. Don't Rip Out.

PADS integrates cleanly into your existing environment within hours and without disruption. It sits above your stack - not inside it.

VALUE IN DAYS NOT MONTHS

UNIVERSAL COMPATIBILITY

Identity Providers

Okta, Azure AD (Entra ID), Ping Identity

Cloud Storage & SaaS

M365, SharePoint, OneDrive, Box, Dropbox

Complex Data Types

Native support for all files, including heavy CAD

Seamlessly deployable on-prem or in the cloud. Security that moves with your data, not against your users.

ZERO FRICTION GUARANTEE

No re-architecture
No data migration
No IAM policy changes
No workflow disruption

The Pattern Is Familiar

Network Evolution

Perimeter → Zero Trust

Threat Evolution

AV → EDR

Data Evolution

DLP → PADS

Featured On The Blog

Data Protection

Apr 17, 2026

The Duty of Care Gap: Why Today's Breach Litigation Standard Was Built for Yesterday's Attack

In the week of April 1 through April 7, 2026, five class action lawsuits were filed against Mercor, a $10 billion AI training startup serving OpenAI, Anthropic, and Meta. Five lawsuits in seven days. Each one built around the same fundamental argument - that Mercor failed to implement adequate security measures to protect the sensitive data of more than 40,000 contractors whose personal information, professional work product, and identifying documents were stolen in one of the most consequential data breaches of 2026.

The plaintiffs are not wrong that a failure occurred. The breach was real. The harm is real. The stolen data - 939 gigabytes of proprietary source code, 3 terabytes of video interview recordings and identity verification documents, a 211 gigabyte user database, internal communications, and AI training methodologies that Y Combinator CEO Garry Tan described as representing billions in value and a major national security issue - is now in the hands of attackers who obtained it through a cascading supply chain attack that harvested legitimate credentials from a compromised open source dependency.

The lawsuits are right that Mercor failed. They are wrong about what that failure actually was. And in being wrong about that, they are asking for a legal remedy built on a standard of care argument that - even if fully satisfied - would not have protected a single file when the credentials were compromised.

That is not a minor procedural deficiency. It is a fundamental misidentification of the duty that was breached. And it matters enormously - not just for the 40,000 contractors who deserve meaningful remedy, but for every organization that will read the Mercor settlement, implement its required controls, and believe they have met their obligation to protect the people whose data they hold.

They will not have. And the next breach will prove it.

The Standard of Care Argument the Lawsuits Are Building

To understand why the lawsuits are asking for the wrong fix, it is necessary to understand precisely what legal standard they are invoking and where that standard falls short.

Data breach class actions in the United States are predominantly built on negligence theory. To succeed on a negligence claim, a plaintiff must establish that the defendant owed a duty of care, that the defendant breached that duty, that the breach caused the plaintiff's harm, and that the plaintiff suffered cognizable damages.

The duty of care in data breach cases has been progressively defined by courts, regulators, and compliance frameworks over the past two decades. The FTC has enforcement authority over unfair or deceptive data security practices. The SEC has specific guidance for registered investment advisers and technology companies on data protection obligations. State attorneys general have brought actions under consumer protection statutes. Courts have increasingly recognized an implicit duty to protect sensitive personal data commensurate with the nature of the data held and the reasonable expectations of the people who provided it.

What has emerged from this body of law, regulation, and enforcement is a standard of care built almost entirely around access layer controls. The duty as courts and regulators currently understand it is a duty to prevent unauthorized access. Implement MFA. Segment networks. Monitor for anomalous activity. Rotate credentials. Conduct regular security audits. Encrypt data at rest and in transit.

The Mercor lawsuits invoke exactly this standard. The Gill complaint alleges failure to implement MFA, failure to limit access to PII, failure to monitor systems, failure to rotate passwords, and failure to encrypt sensitive data during storage and transmission. It is a textbook recitation of the access layer standard of care as it currently exists in data breach litigation doctrine.

And here is the legal problem that nobody in any of the five courtrooms is currently confronting:

That standard of care - even fully satisfied - would not have prevented the harm the plaintiffs suffered. Because the harm did not originate from a failure of access layer controls. It originated from a failure at the data layer. And the legal doctrine has not yet caught up to that distinction.

The Encryption Allegation Points at the Right Problem and Then Misses It

Among all the allegations in the Mercor complaints, the failure to encrypt sensitive data during storage and transmission is the one that comes closest to identifying the actual duty that was breached. It points toward the right problem. But the way it is framed - listed alongside MFA and password rotation as one item among several access layer improvements - reveals that the plaintiff's attorneys understand encryption as a storage security measure rather than as a fundamentally different category of data protection obligation.

That distinction is not semantic. It is the difference between a remedy that changes the outcome for 40,000 contractors and a remedy that produces a more expensive breach with identical consequences.

Encryption at rest means data sitting in a database or storage system is encrypted when it is not being accessed. Encryption in transit means data moving between systems is encrypted as it travels. Both are legitimate and important security controls, and both are widely recognized components of the current standard of care. But both are rendered completely ineffective the moment an attacker obtains valid credentials. When a user authenticates through the normal access pathway, the system decrypts the data for them; it cannot distinguish a legitimate user from an attacker holding stolen credentials, and the encryption that was supposed to protect the data dissolves on contact with a valid authenticated session.

This means that in the exact breach scenario the Mercor lawsuits describe - an attacker authenticating successfully with stolen credentials and accessing files through the authorized decryption pathway - both forms of encryption the complaint demands would have been fully satisfied and would have protected nothing. The files would still have been usable. The exfiltration would still have proceeded. The harm would still have flowed to 40,000 contractors.
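The failure mode is easy to see in code. The sketch below is not any vendor's implementation; it is a minimal model of encryption at rest as typically deployed, with a trivial XOR keystream standing in for real storage encryption and all names hypothetical.

```python
class Session:
    """A valid authenticated session. Nothing in it records how the
    credentials were obtained."""
    def __init__(self, user: str, authenticated: bool):
        self.user = user
        self.authenticated = authenticated

def storage_decrypt(key: bytes, ciphertext: bytes) -> bytes:
    # Stand-in for the storage layer's transparent decryption
    # (XOR keystream; symmetric, so the same call also encrypts).
    stream = (key * (len(ciphertext) // len(key) + 1))[:len(ciphertext)]
    return bytes(c ^ k for c, k in zip(ciphertext, stream))

def read_file(session: Session, ciphertext: bytes, storage_key: bytes) -> bytes:
    """Encryption at rest as typically deployed: decrypt for any
    authenticated session."""
    if not session.authenticated:
        raise PermissionError("no valid session")
    # The controls the complaints demand end here. A session built from
    # stolen credentials is indistinguishable from the legitimate user's,
    # so the plaintext is released either way.
    return storage_decrypt(storage_key, ciphertext)

# A token harvested from a compromised CI pipeline yields exactly this:
attacker_session = Session(user="alice", authenticated=True)
```

Nothing in `read_file` can fail for the attacker, because the only check it performs is the one the attacker has already passed.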

The lawsuits are demanding a standard of care that has already been implicitly satisfied by the mechanism of the attack itself. And demanding it more rigorously produces no meaningful benefit to the people the litigation is supposed to protect.

The Duty That Was Actually Breached

If the current standard of care - even fully implemented - would not have changed the outcome, the legal question becomes what duty would have. What obligation, if discharged, would have rendered the breach consequence-free for the 40,000 contractors who are now plaintiffs?

The answer is precise and it points to a duty that existing doctrine has not yet adequately articulated: the duty to protect data at the file layer after authentication succeeds.

This is the Post Authentication Data Security duty. It is distinct from and more demanding than the access layer duty that current doctrine recognizes. It is not a duty to prevent unauthorized access - though that duty exists and matters. It is a duty to ensure that data remains protected even when access succeeds, whether that access was legitimately obtained or achieved through credential theft, supply chain compromise, insider misuse, or any other vector that produces a valid authenticated session.

The distinction maps directly onto the facts of the Mercor breach. The attackers authenticated successfully. Every access control performed exactly as designed. The breach did not occur at the access layer - it occurred at the data layer, where no protection existed to govern what happened to files after authentication succeeded.

Under the current standard of care doctrine, Mercor's failure is characterized as an access layer failure - insufficient MFA, inadequate monitoring, poor credential hygiene. Those characterizations may be legally valid but they are factually incomplete. The more precise and more legally significant failure was the absence of file layer protection that would have rendered the authenticated access consequence-free regardless of who held the credentials.

The duty to protect data at the file layer after authentication succeeds is the duty the Mercor lawsuits are gesturing toward but failing to name. And naming it precisely is the most important legal contribution the Mercor litigation could make to the evolution of data breach doctrine.

Why the Current Standard of Care Is Structurally Insufficient

The cybersecurity industry has known for years that stolen credentials are the single biggest vulnerability in the modern security stack. This is not a controversial position. Verizon's Data Breach Investigations Report has identified compromised credentials as the leading cause of breaches for nearly a decade running. IBM's Cost of a Data Breach Report consistently ranks stolen credentials as both the most common and most expensive attack vector. Every major security framework - NIST, ISO 27001, HITRUST - includes extensive controls around identity and access precisely because the industry understands that when credentials are compromised, everything built around them collapses.

The cybersecurity industry has known this. It has known it for a long time. And it has continued to build and sell architectures that are fundamentally dependent on the integrity of those same credentials - producing a decade of breach reports confirming the problem while simultaneously recommending the same access layer controls that the breach reports prove are insufficient.

That failure has a direct legal consequence. Courts and regulators developing the standard of care in data breach cases have done what courts and regulators reasonably do - they have looked to the security industry for guidance on what constitutes reasonable practice. The standard of care that has emerged reflects the industry consensus those courts and regulators found when they looked: a perimeter-centric, access-focused framework that treats credential integrity as the primary and in many cases sufficient protection for sensitive data.

The doctrine is not wrong on its own terms. It accurately reflects what the industry told courts and regulators was adequate. The problem is that the industry's own data has been contradicting that consensus for years - and the legal standard has had no mechanism to update itself in response. The result is a standard of care that courts apply in good faith, that organizations implement in good faith, and that leaves sensitive unstructured files fully exposed to the primary attack vector the industry itself has identified as the leading cause of breaches for nearly a decade.

That is not a gap in legal reasoning. It is a gap between legal doctrine and technical reality - and it is a gap that the Mercor breach has rendered impossible to ignore.

The Mercor breach is the most precise possible illustration of that gap. The attack chain began with a compromised GitHub Actions workflow in an open source vulnerability scanner. It harvested credentials through a malicious dependency executing in a CI/CD pipeline. It used those credentials to authenticate as legitimate users. It accessed and exfiltrated files that the authenticated session was authorized to access. Every step of that chain operated entirely within the parameters of a security architecture that meets the current standard of care.

The standard of care that the Mercor lawsuits are invoking - the standard that Mercor allegedly failed to meet - would not have detected or prevented any step of that chain after the initial credential harvest. Because the standard is designed around preventing unauthorized access and the attack succeeded by achieving authorized access with stolen credentials.

A standard of care that cannot address the primary attack vector in the industry's own breach data is not a standard that adequately defines the duty organizations owe to the people whose data they hold.

What the Evolved Standard of Care Looks Like

The legal evolution that the Mercor lawsuits should be driving - but are not yet articulating - is a standard of care that extends the duty of protection beyond the access layer to the data layer itself.

Under an evolved standard the duty is not satisfied by encrypting data at rest and in transit. Those controls protect data from passive interception and storage compromise. They do not protect data from authenticated access using stolen credentials. They do not protect files from exfiltration by a session that the system has recognized and authorized. They are necessary components of a complete security posture but they are not sufficient to discharge the duty of care owed to people whose most sensitive personal and professional information is held in unstructured files.

The evolved standard requires file layer protection - encryption that travels with the file itself, that governs usability independent of the access layer, that remains in force regardless of what credentials were used to obtain access, and that renders the file unusable to any recipient who cannot demonstrate, at the moment of access, that they are the authorized user in the authorized context for which access was intended.
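The "at the moment of access" requirement can be sketched as a policy service consulted on every file open rather than once at login, which is also what makes revocation after exfiltration possible. All names below are hypothetical; this is a model of the evaluation loop, not a product API.

```python
from dataclasses import dataclass, field

@dataclass
class FilePolicy:
    file_id: str
    allowed_users: set
    allowed_device_states: set = field(default_factory=lambda: {"managed"})
    revoked: bool = False

class PolicyService:
    """Decision point consulted at every file open, not only at login."""
    def __init__(self):
        self._policies = {}

    def register(self, policy: FilePolicy) -> None:
        self._policies[policy.file_id] = policy

    def revoke(self, file_id: str) -> None:
        # Takes effect at the next access attempt for every copy of the
        # file, wherever that copy now lives.
        self._policies[file_id].revoked = True

    def may_open(self, file_id: str, user: str, device_state: str) -> bool:
        p = self._policies[file_id]
        return (not p.revoked
                and user in p.allowed_users
                and device_state in p.allowed_device_states)
```

Because every open is a fresh decision against current policy, a file copied to a personal device yesterday can be made unreadable today - something no login-time control can do.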

This is Post Authentication Data Security applied as a legal duty rather than a security recommendation. It is the control that, had it been in place at Mercor, would have changed the outcome completely.

The attackers authenticated successfully. They accessed the files. They exfiltrated the files. And the files were ciphertext. Not because the authentication failed. Not because the access was detected and blocked. But because the files themselves were protected in a way that made the authenticated access consequence-free for every contractor whose data was taken.

Under an evolved standard of care that recognized this duty, Mercor's failure was not inadequate MFA or insufficient password rotation. It was that the company held 40,000 people's most sensitive data in unprotected files that were fully usable to anyone who obtained valid credentials - and in a world where credential theft through supply chain compromise is the industry's leading breach vector, holding sensitive data in unprotected files is itself the breach of duty.

The Delve Scandal Proves the Point

The Mercor breach did not happen in isolation. It happened simultaneously with the exposure of Delve Technologies - the GRC automation startup that had issued compliance certifications for LiteLLM, the open source AI proxy whose compromise enabled the credential harvest that reached Mercor. Those certifications were, according to the whistleblower who exposed the company, industrialized fiction. Pre-populated attestations. Certifications issued without independent verification of the controls they purported to certify.

The convergence of these two stories is not incidental. It is the most powerful possible illustration of the gap between certified compliance and actual data protection that sits at the heart of the standard of care problem.

Mercor had compliance certifications. LiteLLM had compliance certifications. Those certifications validated access controls, security processes, and organizational security practices against the current standard of care. And none of it protected a single file when the credentials were compromised.

This is the standard of care problem rendered in its starkest form. The compliance framework the lawsuits are demanding Mercor should have met is a framework designed to certify access controls. It has no mechanism for certifying what happens to files after access succeeds. It validates the door. It has nothing to say about the files behind the door when someone walks through with a stolen key.

The Delve scandal did not create this problem. It exposed it. The problem existed in every legitimately certified organization whose sensitive files are protected only by the access controls that a valid authenticated session bypasses by definition. The certification confirms the lock works. It says nothing about the readability of what is inside when the lock is opened with a stolen key.

Post Authentication Data Security provides the protection that certification cannot - because it is not a process control that can be attested to. It is a technical control that either renders files unusable or does not. There is no compliance theater version of file layer encryption. The files are either protected or they are not. And that binary self-executing reality is precisely what the evolved standard of care should require.

The Regulatory Safe Harbor Argument

The legal implications of file layer protection extend beyond negligence theory into the regulatory framework that governs breach notification and penalty - and here the argument for an evolved standard of care becomes most immediately actionable for organizations deciding right now how to protect the files they hold.

Most data breach notification laws are triggered by the exposure of usable, readable personal data. GDPR Article 34 explicitly states that notification to affected individuals is not required when data was encrypted and rendered unintelligible to unauthorized parties. HIPAA's Safe Harbor provision categorizes encrypted breached data as a non-reportable event. California's CCPA, New York's SHIELD Act, and most equivalent state frameworks include explicit encryption safe harbors that reduce or eliminate notification obligations when the stolen data was encrypted and remained ciphertext.

These safe harbors already exist in the regulatory framework. They already recognize that encrypted data that cannot be read does not produce the harm that breach notification laws are designed to address. They are the regulatory system's implicit acknowledgment of the principle that Post Authentication Data Security makes explicit - that what matters for data protection purposes is not whether the data was accessed but whether it was usable when it was taken.

The Mercor lawsuits are built on the premise that contractor data was compromised in a readable form. Under the regulatory safe harbor framework that already exists, file layer encrypted data that is exfiltrated but unusable does not meet the threshold for mandatory notification. The breach event that generates the legal obligation does not occur. The five lawsuits have no viable plaintiff because the harm the plaintiffs allege - exposure of readable personal data to criminal actors who can exploit it - has not occurred.

The safe harbor framework is the regulatory system pointing toward the evolved standard of care that litigation doctrine has not yet fully articulated. It already recognizes that encryption at the data layer changes the legal character of a breach. The doctrinal evolution required is to extend that recognition from a regulatory safe harbor into an affirmative duty - a standard of care that requires file layer protection not merely as a mitigating factor but as a component of the baseline obligation owed to people whose sensitive data is held in unstructured files.

What the Mercor Lawsuits Should Be Arguing

The most important legal contribution the Mercor litigation could make is to reframe the standard of care claim around the duty that was actually breached rather than the duty that existing doctrine recognizes.

The complaint should not lead with failure to implement MFA or failure to rotate passwords. Those are real failures and they belong in the complaint. But they are not the failure that made 40,000 contractors vulnerable to years of identity theft risk. The failure that did that was holding sensitive unstructured files - files containing Social Security numbers, identity documents, video recordings, and proprietary work product - without file layer protection that would have rendered those files unreadable to anyone who took them regardless of what credentials they used.

The encryption allegation in the current complaint points toward this duty but frames it as a storage security failure. The stronger and more legally significant framing is a failure of Post Authentication Data Security - a failure to protect files at the data layer in a way that maintains protection after authentication succeeds, independent of credential integrity, independent of access layer controls, independent of whether the session that accessed the files was legitimate or the product of supply chain credential theft.

That framing advances data breach doctrine in a meaningful direction. It creates a legal framework that actually maps onto the threat environment the industry's own data describes - a world in which credential compromise is the leading attack vector and access layer controls are necessary but insufficient to discharge the duty of care owed to the people whose data is at risk.

It also creates a remedy that would actually change the outcome. Not a settlement requiring better MFA and more rigorous password rotation that leaves 40,000 people's files just as usable the next time valid credentials are stolen. A standard that requires file layer protection - protection that holds when everything else fails, protection that renders credential theft consequence-free for the people whose data was taken.

The Conversation the Industry and the Legal Community Must Have Together

The Mercor lawsuits will settle. The settlement will specify controls. The controls will reflect the current standard of care. And the current standard of care will remain a decade behind the threat environment it is supposed to address.

Unless the legal community starts asking the question that the complaints are currently missing.

Not whether Mercor had adequate access controls. Whether Mercor discharged its duty to protect the files its contractors trusted it to hold - protect them in a way that maintains that protection after authentication succeeds, that holds when credentials are stolen, that renders the breach consequence-free for the people whose data is taken regardless of how the attacker obtained access.

That is the standard the threat environment demands. That is the standard the regulatory safe harbor framework is already gesturing toward. That is the standard the evolved duty of care in data breach litigation needs to articulate.

Post Authentication Data Security is not the standard of care today. It is the standard of care the Mercor breach demonstrates is necessary - and the standard that the legal community, the security industry, and the organizations that hold sensitive unstructured files have a shared obligation to establish before the next breach proves the same point at the same cost to the same people who had no choice but to trust that the files they handed over would be protected when it mattered most.

The five lawsuits filed in seven days are the most powerful available argument for why that conversation cannot wait.

FenixPyre is purpose-built to close the Post Authentication Data Security gap for unstructured data - ensuring that files remain protected at the data layer regardless of how access was obtained. In a world where supply chain attacks make credential theft an inevitability, file layer protection is not a security enhancement. It is the evolved standard of care the modern threat environment demands.


Data Protection

Mar 23, 2026

When Accenture Reports a 127% Surge in Dark Web Insider Recruitment, It’s Time to Rethink Data Security

Accenture’s Cyber Intelligence team recently published research that should alarm every CISO and board member: insider threats facilitated through dark web ecosystems are escalating at an unprecedented rate.

The numbers are stark:

  • 69% increase in insiders offering access (2025 vs. 2024)

  • 127% surge in hackers actively recruiting insiders (vs. 2022)

As Ryan Whelan, Accenture’s Global Head of Cyber Intelligence, explains:

“The insider economy is now principally designed to support early-stage intrusions, with criminal gangs increasingly relying on insiders to bypass cyber defenses.”

This is not theoretical.

Dark web posts explicitly name targets:

  • Coinbase

  • Binance

  • Kraken

  • Gemini

  • Accenture

  • Genpact

  • Spotify

  • Netflix

…and dozens more across financial services, consulting, and technology.

The going rate?

  • $3,000–$15,000 for initial access

  • $25,000 for 37 million cryptocurrency exchange records

The Real Implication of Accenture’s Findings

What this research makes clear - when taken to its logical conclusion - is this:

Managing insider risk requires more than governing access. It requires governing how data is used after access is granted.

This is the role of Post-Authentication Data Security (PADS).

PADS is a security layer that governs how data can be used after access is granted - enforcing policy at the moment of data interaction, not just at authentication.
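One way to picture "enforcing policy at the moment of data interaction" is as a decision function evaluated on every read or export, not only at login. The sketch below is illustrative only - the fields, actions, and thresholds are hypothetical examples, not a FenixPyre API:

```python
from dataclasses import dataclass

@dataclass
class DataRequest:
    user: str
    action: str            # e.g. "read" or "bulk_export" (illustrative actions)
    device_managed: bool   # contextual signal available at time of use
    records_requested: int

def evaluate(req: DataRequest) -> str:
    """Return 'allow', 'step_up', or 'deny' for one data interaction.

    The checks and the 1,000-record threshold are placeholder policy,
    shown only to make the post-authentication decision point concrete.
    """
    # Bulk extraction from an unmanaged device is refused outright,
    # even though the session itself is fully authenticated.
    if req.action == "bulk_export" and not req.device_managed:
        return "deny"
    # Unusually large reads trigger additional verification.
    if req.records_requested > 1000:
        return "step_up"
    return "allow"
```

The point of the sketch is where the function runs: at the moment data is used, after authentication has already succeeded.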

What Accenture’s Research Makes Clear

Accenture’s findings highlight a structural shift in threat dynamics:

  • Insiders provide initial access and credentials (30% of cases)

  • Perimeter defenses are bypassed entirely

  • Activity appears legitimate - because it is legitimate

  • Security controls defer by design once authentication succeeds

Whelan emphasizes lifecycle controls:

  • Stronger hiring and identity verification

  • Role separation and least privilege

  • Immediate access revocation during offboarding

  • Monitoring for pre-departure activity

  • Behavioral analytics and insider threat programs

These are essential.

They reduce the likelihood that insider threats emerge - or go undetected.

But they also reveal something deeper:

Even with these controls, an authenticated user can still use data in ways that are indistinguishable from legitimate activity.

Where Existing Controls End - and Why the Gap Exists

When a recruited insider acts, the cybersecurity stack behaves exactly as designed:

  • Identity is verified

  • Access is authorized

  • Permissions are correctly applied

  • Activity aligns with role expectations

  • Monitoring systems observe “normal” behavior

From the system’s perspective:

Everything is working correctly.

And that is precisely the problem.

Because “working correctly” still allows data to be:

  • Queried

  • Downloaded

  • Copied

  • Transferred

  • Sold

Nothing is bypassed.
Nothing is broken.
No control is technically evaded.

The attack succeeds because:

The security stack is architected to stop at authentication.

Whelan’s findings reinforce this reality:

Attackers are not defeating controls - they are operating within the boundary those controls were designed to trust.

The Architectural Limitation

Modern security is built to answer one question:

Who should have access?

It is not built to answer:

What should an authenticated user be allowed to do with data - right now, in this context?

This is why insider recruitment is so effective.

Existing controls - IAM, Zero Trust, SIEM, DLP, UEBA - are optimized for:

  • Preventing unauthorized access

  • Detecting abnormal behavior

They are not designed to stop:

Authorized, normal-looking misuse of data

This is not a failure of execution.

It is a limitation of architecture.

The Missing Layer: Post-Authentication Data Security (PADS)

Accenture’s framework focuses on managing insider risk across the employee lifecycle.

PADS extends that framework into the data interaction lifecycle.

If traditional controls answer:

  • Who should have access?

  • When should access be granted or revoked?

  • Is behavior anomalous?

PADS answers:

  • What should this user be able to do with the data they can access?

  • Is this specific use of data appropriate in this context?

This is not a replacement for insider threat programs.

It is the layer that ensures their effectiveness - even when insiders act within expected patterns.

Why This Matters in the Insider Economy

The insider recruitment model works because it exploits a core assumption:

Authenticated access implies legitimate use.

Accenture’s research shows attackers are deliberately targeting that assumption.

They recruit insiders because:

  • Access is already granted

  • Activity blends into normal workflows

  • Detection becomes significantly harder

PADS shifts control from access to data usage.

What Changes When Data Is Governed After Access

In a PADS-enabled environment:

  • Access still functions as designed

  • Authorized users still perform legitimate work

But:

  • Bulk extraction can be restricted or challenged

  • Sensitive data use can trigger contextual controls

  • Data remains protected - even outside the system

  • Actions - not just identities - are evaluated in real time

This means even if:

  • An insider is recruited

  • Credentials are valid

  • Behavior appears normal

The outcome changes.

Data is no longer freely extractable and usable simply because access was granted.

Aligning With Accenture’s Recommendations - And Extending Them

Whelan’s recommendations create a strong foundation:

  • Strengthen hiring and identity verification

  • Enforce role separation and least privilege

  • Revoke access immediately during offboarding

  • Monitor for behavioral anomalies

  • Expand insider threat intelligence

All of these aim to:

Prevent trusted individuals from using legitimate access to cause harm

But traditional implementations approach this indirectly.

They:

  • Limit access scope

  • Attempt to detect misuse

  • Reduce opportunity over time

They do not directly control:

What happens to data at the moment it is used

Where Traditional Controls Fall Short

Objective | Traditional Approach | Limitation
Prevent malicious insiders | Pre-employment screening | Cannot prevent post-hire recruitment
Limit exposure | RBAC / PoLP | Broad access still exists within roles
Stop access at risk | Offboarding | Reactive - after decision point
Detect misuse | UEBA / monitoring | Requires deviation from "normal"
Identify targeting | Threat intelligence | Does not stop insider action

These controls rely on:

  • Predicting intent

  • Detecting anomalies

  • Acting after signals appear

In insider recruitment scenarios:

Those signals may never appear in time.

How PADS Delivers the Outcome Directly

Objective | PADS Capability | Outcome
Limit insider impact | Data usability governance | Controls actions within valid access
Prevent extraction | Contextual policy enforcement | Evaluates intent at time of use
Reduce detection reliance | Real-time controls | No need for "abnormal" behavior
Mitigate insider risk | Persistent data protection | Exfiltrated data is unusable
Contain breaches | Outcome-based enforcement | Prevents usable data loss

PADS operates where risk actually materializes:

The moment data is accessed and used

The Strategic Implication: An Architectural Fault Line

Accenture classifies insider threats as a medium-frequency, high-impact strategic risk.

But the deeper implication is this:

Insider risk is not an edge case - it is a consequence of how cybersecurity is designed.

Whelan’s findings expose a critical assumption:

Once a user is authenticated, risk is sufficiently managed.

That assumption no longer holds.

Modern architecture treats:

  • Authentication as the boundary of trust

Everything beyond that boundary is governed by:

  • Permissions

  • Expected behavior

  • Post-event detection

Not by real-time control of data itself.

This is the fault line.

The Bottom Line

Accenture’s findings don’t just highlight the rise of insider threats - they expose a fundamental flaw in modern cybersecurity:

The assumption that risk ends when access is granted.

In reality:

That is where risk begins.

The Verizon DBIR reinforces this:

  • 74% of breaches involve the human element

  • Occurring within legitimate, authenticated sessions

No controls are bypassed.
No systems are broken.

Attackers simply operate inside the boundary the stack was designed to trust.

Whelan’s recommendations strengthen identity and access.

But they also point to a deeper truth:

Without governing how data is used after access is granted, the problem remains unsolved.

That is what Post-Authentication Data Security (PADS) delivers.

It shifts security from:

  • Controlling entry

To:

  • Controlling outcome

Because in today’s threat landscape:

Access is no longer the boundary of risk. Data usage is.

Resources

  • Accenture Cyber Intelligence Report: Insider Threat Escalation (2025)

  • What is PADS - The definition, category map, and how PADS completes the security model

  • Why PADS now - The forces driving post-authentication data theft

Final Thought

Every employee with access to sensitive data is a recruitment target.

Traditional security stops at authentication.

That’s exactly where the insider economy starts.

Data Protection

Mar 23, 2026

When IBM X-Force Says "Post-Auth is the New Perimeter," People Should Take Note

Ryan Anschutz, North America Leader for IBM X-Force Incident Response, recently published an article that deserves more attention than a typical LinkedIn post receives.

It started, as the best security lessons often do, with something completely mundane.

Ryan needed to export a list of event attendees. The UI had no export button. So, he opened browser developer tools, looked at what the application was doing behind the scenes, and scripted the authenticated API calls to extract everything he needed.

No exploits. No bypasses. No stolen credentials.

His conclusion: "The application worked exactly as designed. That's the part worth sitting with."

That sentence is the entire post-authentication data security (PADS) problem stated as plainly as it can be stated.

WHAT RYAN'S EXPORT TASK ACTUALLY DEMONSTRATES 

What Ryan described is not a vulnerability. It is not a misconfiguration. It is not a failure of any control. 

It is what happens when an authenticated session is trusted completely. When the backend extends full data usability to anyone holding a valid credential, with no evaluation of whether that trust should extend to bulk extraction, rapid pagination, or automated API calls at a scale no human would produce manually.
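A hypothetical sketch of the pattern Ryan describes - replaying the application's own authenticated calls and walking the pagination. The endpoint shape, the response format, and the in-memory stand-in for the backend are all invented for illustration; a real script would simply replay the session cookie or bearer token with an HTTP client:

```python
# Fake backend standing in for an authenticated API the browser already
# calls. Keeps the sketch self-contained and runnable.
FAKE_DB = [{"id": i, "email": f"user{i}@example.com"} for i in range(250)]

def fetch_page(cursor: int, page_size: int = 100) -> dict:
    """Stand-in for one authenticated, authorized API request."""
    page = FAKE_DB[cursor:cursor + page_size]
    next_cursor = cursor + page_size if cursor + page_size < len(FAKE_DB) else None
    return {"items": page, "next": next_cursor}

def extract_all() -> list:
    """Walk the pagination exactly as the UI would - just without the UI."""
    records, cursor = [], 0
    while cursor is not None:
        resp = fetch_page(cursor)
        records.extend(resp["items"])
        cursor = resp["next"]
    return records
```

Every request in this loop is exactly what the UI would have sent, minus the UI - which is why nothing in the access stack objects while the complete dataset leaves.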

The application's authentication worked. Its authorization worked. Its session management worked. Every control functioned exactly as designed.

And a complete dataset was extracted in minutes.

This is what the Verizon Data Breach Investigations Report is describing when it notes that 74% of breaches involve the human element - stolen credentials, phishing, and simple human error. It is not that attackers are bypassing authentication. It is that they have learned to operate inside the trust that authentication grants, and once inside that trust, there is almost nothing designed to evaluate whether specific data should be usable at a specific moment, under specific conditions, at a specific volume.

As Ryan puts it: "Attackers don't care about your UI. They care about what the backend will trust."  

RYAN'S QUESTION IS THE RIGHT QUESTION 

Ryan's bottom line for IR teams is worth quoting directly: 

"The question is not, 'Did MFA work?' The real question is, 'What did the backend trust after MFA succeeded?' That is the perimeter now."

This reframe - from "did authentication succeed" to "what did the system trust after authentication succeeded" - is precisely the shift that Post-Authentication Data Security (PADS) represents as a security category.

Traditional security architecture is built to answer the first question. The foundational layers - firewalls, IAM, MFA, Zero Trust - are designed to evaluate whether a given identity or session should be granted access in the first place. They operate on the principle that authentication and authorization are the primary security boundaries.

DLP represents the industry's first major attempt to address what happens after authentication. It monitors data movement and attempts to prevent sensitive information from leaving the organization through unauthorized channels. This is critical and valuable.

But Ryan's GraphQL example exposes the limitation: DLP is designed to detect abnormal data movement, not to govern normal data use.

The session was appropriately granted. The API calls were legitimate. The data access was authorized. The pagination pattern, if throttled to human speed, would appear normal. No unauthorized egress channel was used, just standard API responses over HTTPS.

DLP's fundamental assumption is that if data access appears normal, it probably is normal.

This is exactly the assumption that Ryan's example breaks. An attacker who understands how the backend evaluates "normal" can operate entirely within those parameters while extracting complete datasets.

The actions that followed authentication were indistinguishable from legitimate use. And no control in the stack, including DLP, was designed to ask whether bulk data extraction should be permitted even when the session was valid and the behavior appeared normal.

His observation cuts to the core of the problem: "After authentication, everything becomes the real perimeter, and most defenses still aren't built around that truth."

DLP monitors the perimeter. But when the attacker operates inside what the system considers normal authenticated behavior, there is no perimeter event to detect. 

WHAT COULD HAVE CHANGED THE OUTCOME

Ryan identifies several controls that could have interrupted the extraction: 

• Session tokens bound to device or browser context

• Behavioral rate limiting that notices no human paginates this fast

• Authorization enforced at the API layer, not assumed via the UI

• Step-up authentication for bulk or sensitive data access

• Short session lifetimes with frequent token rotation

• API-level telemetry that shows actual query behavior, not just page views
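The second item on that list - behavioral rate limiting that notices no human paginates this fast - might be sketched as a sliding-window check per session. The window size and threshold below are arbitrary placeholders, not tuned values:

```python
from collections import deque

class PaginationRateCheck:
    """Flag a session whose page requests arrive faster than any human clicks.

    max_pages and window_seconds are illustrative placeholders; real
    thresholds would be tuned per application and endpoint.
    """
    def __init__(self, max_pages: int = 10, window_seconds: float = 5.0):
        self.max_pages = max_pages
        self.window = window_seconds
        self.timestamps = deque()

    def record_request(self, now: float) -> bool:
        """Return True if this request is allowed, False if it should be
        throttled or challenged."""
        self.timestamps.append(now)
        # Drop timestamps that have aged out of the sliding window.
        while self.timestamps and now - self.timestamps[0] > self.window:
            self.timestamps.popleft()
        return len(self.timestamps) <= self.max_pages
```

Note the dependency this shares with every control on the list: it fires only when the behavior is measurably unusual.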

These recommendations map directly to what PADS delivers as a category:



IBM X-Force Recommendation | PADS Capability | How It Changes the Outcome
Session tokens bound to device/browser context | Contextual session management | Sessions can't be replayed from different devices or environments, even with valid credentials
Behavioral rate limiting | Anomaly detection & policy enforcement | Automated extraction at scale triggers real-time intervention before data leaves
Authorization enforced at API layer, not assumed via UI | Data-layer access controls | Backend enforces what data can be accessed regardless of how the request arrives
Step-up authentication for bulk access | Dynamic risk-based authentication | High-volume data access requires additional verification even for authenticated users
Short session lifetimes with frequent token rotation | Session governance | Limits window of opportunity for credential replay or session hijacking
API-level telemetry showing actual query behavior | Data interaction visibility | Surfaces what's actually happening at the data layer, not just what the UI suggests

WHERE DETECTION ALONE FALLS SHORT

Ryan's recommendations represent the access-control and behavioral-detection responses to the post-authentication problem. They are valuable and necessary.

But his list implicitly identifies their shared limitation: they all depend on detecting that something unusual is happening. Rate limiting notices unusual pagination speed. Behavioral monitoring notices unusual query patterns. Step-up authentication notices unusual data volume.

What happens when the extraction isn't unusual - when an attacker paginates at human speed, extracts data gradually over days, and operates within the behavioral thresholds that monitoring tools consider normal?

This is the scenario that Post-Authentication Data Security addresses at a more fundamental level. Rather than detecting unusual behavior and interrupting it, PADS governs data usability at the data layer itself. The question is not "does this behavior look suspicious?" It is "should this data be usable, under these conditions, for this action, to this destination?"

In a PADS model, data remains cryptographically protected and is only made usable at the moment of legitimate use - meaning extraction alone no longer equals compromise.

When data is protected at the layer Ryan is describing - the layer where the backend decides what an authenticated session can actually do with the data it accesses - the extraction scenario changes fundamentally.

The attacker can script the API calls. They can walk the pagination. They can extract every file in the repository.

They just can't read any of it.
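The "extract everything, read nothing" outcome rests on ordinary symmetric encryption applied at the file layer, with key release tied to legitimate use. A toy illustration using only the Python standard library, running HMAC-SHA256 in counter mode as a keystream - a sketch of the principle, not production cryptography:

```python
import hashlib
import hmac
import secrets

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a keystream by running HMAC-SHA256 in counter mode."""
    out, counter = b"", 0
    while len(out) < length:
        out += hmac.new(key, nonce + counter.to_bytes(8, "big"),
                        hashlib.sha256).digest()
        counter += 1
    return out[:length]

def encrypt_file(key: bytes, plaintext: bytes) -> bytes:
    """XOR the plaintext with the keystream; prepend the random nonce."""
    nonce = secrets.token_bytes(16)
    ct = bytes(p ^ k for p, k in zip(plaintext,
                                     keystream(key, nonce, len(plaintext))))
    return nonce + ct

def decrypt_file(key: bytes, blob: bytes) -> bytes:
    """Recover the plaintext - but only with the correct key."""
    nonce, ct = blob[:16], blob[16:]
    return bytes(c ^ k for c, k in zip(ct, keystream(key, nonce, len(ct))))
```

In a real deployment the key would be released only when contextual checks pass; scripted extraction then yields exactly what the attacker in Ryan's scenario ends up holding - unreadable bytes.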

THE BOTTOM LINE

Ryan's conclusion deserves to be repeated:

"The question is not, 'Did MFA work?' The real question is, 'What did the backend trust after MFA succeeded?' That is the perimeter now."

Every control you currently own is designed to answer the first question.

Almost none are designed to answer the second.

That gap between authentication and data protection is where 74% of breaches now operate.

Post-auth is the new perimeter. And as Ryan's article demonstrates, most defenses still aren't built around that truth.

Post-Authentication Data Security is the category that changes that.

RESOURCES

Ryan Anschutz's original article: https://www.ibm.com/think/x-force/post-auth-new-perimeter

What is PADS - The definition, the category map, and how PADS completes the security model existing tools leave unfinished.

Why PADS Now - The three forces that made post-authentication data theft the dominant threat.

Every tool you own stops at login. That's exactly where attackers start. 

Data Protection

Apr 17, 2026

The Duty of Care Gap: Why Today's Breach Litigation Standard Was Built for Yesterday's Attack

In the week of April 1 through April 7, 2026, five class action lawsuits were filed against Mercor, a $10 billion AI training startup serving OpenAI, Anthropic, and Meta. Five lawsuits in seven days. Each one built around the same fundamental argument - that Mercor failed to implement adequate security measures to protect the sensitive data of more than 40,000 contractors whose personal information, professional work product, and identifying documents were stolen in one of the most consequential data breaches of 2026.

The plaintiffs are not wrong that a failure occurred. The breach was real. The harm is real. The stolen data - 939 gigabytes of proprietary source code, 3 terabytes of video interview recordings and identity verification documents, a 211 gigabyte user database, internal communications, and AI training methodologies that Y Combinator CEO Garry Tan described as representing billions in value and a major national security issue - is now in the hands of attackers who obtained it through a cascading supply chain attack that harvested legitimate credentials from a compromised open source dependency.

The lawsuits are right that Mercor failed. They are wrong about what that failure actually was. And in being wrong about that, they are asking for a legal remedy built on a standard of care argument that - even if fully satisfied - would not have protected a single file when the credentials were compromised.

That is not a minor procedural deficiency. It is a fundamental misidentification of the duty that was breached. And it matters enormously - not just for the 40,000 contractors who deserve meaningful remedy, but for every organization that will read the Mercor settlement, implement its required controls, and believe they have met their obligation to protect the people whose data they hold.

They will not have. And the next breach will prove it.

The Standard of Care Argument the Lawsuits Are Building

To understand why the lawsuits are asking for the wrong fix, it is necessary to understand precisely what legal standard they are invoking and where that standard falls short.

Data breach class actions in the United States are predominantly built on negligence theory. To succeed on a negligence claim, a plaintiff must establish that the defendant owed a duty of care, that the defendant breached that duty, that the breach caused the plaintiff's harm, and that the plaintiff suffered cognizable damages.

The duty of care in data breach cases has been progressively defined by courts, regulators, and compliance frameworks over the past two decades. The FTC has enforcement authority over unfair or deceptive data security practices. The SEC has specific guidance for registered investment advisers and technology companies on data protection obligations. State attorneys general have brought actions under consumer protection statutes. Courts have increasingly recognized an implicit duty to protect sensitive personal data commensurate with the nature of the data held and the reasonable expectations of the people who provided it.

What has emerged from this body of law, regulation, and enforcement is a standard of care built almost entirely around access layer controls. The duty as courts and regulators currently understand it is a duty to prevent unauthorized access. Implement MFA. Segment networks. Monitor for anomalous activity. Rotate credentials. Conduct regular security audits. Encrypt data at rest and in transit.

The Mercor lawsuits invoke exactly this standard. The Gill complaint alleges failure to implement MFA, failure to limit access to PII, failure to monitor systems, failure to rotate passwords, and failure to encrypt sensitive data during storage and transmission. It is a textbook recitation of the access layer standard of care as it currently exists in data breach litigation doctrine.

And here is the legal problem that nobody in any of the five courtrooms is currently confronting:

That standard of care - even fully satisfied - would not have prevented the harm the plaintiffs suffered. Because the harm did not originate from a failure of access layer controls. It originated from a failure at the data layer. And the legal doctrine has not yet caught up to that distinction.

The Encryption Allegation Points at the Right Problem and Then Misses It

Among all the allegations in the Mercor complaints, the failure to encrypt sensitive data during storage and transmission is the one that comes closest to identifying the actual duty that was breached. It points toward the right problem. But the way it is framed - listed alongside MFA and password rotation as one item among several access layer improvements - reveals that the plaintiffs' attorneys understand encryption as a storage security measure rather than as a fundamentally different category of data protection obligation.

That distinction is not semantic. It is the difference between a remedy that changes the outcome for 40,000 contractors and a remedy that produces a more expensive breach with identical consequences.

Encryption at rest means data sitting in a database or storage system is encrypted when it is not being accessed. Encryption in transit means data moving between systems is encrypted as it travels. Both are legitimate and important security controls, and both are widely recognized components of the current standard of care. But both are rendered completely ineffective the moment an attacker obtains valid credentials. When a user authenticates through the normal access pathway, the system decrypts the data for them; it cannot distinguish between a legitimate user and an attacker holding stolen credentials, and the encryption that was supposed to protect the data dissolves on contact with a valid authenticated session.
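That dissolution can be made concrete with a minimal model. Encryption at rest has exactly one signal to act on - whether authentication succeeded - while file-layer protection can condition key release on the context of the access itself. The function names and the specific checks below are hypothetical illustrations:

```python
def decrypt():
    """Stand-in for key release plus decryption of a stored file."""
    return "contractor identity document"

def read_record_at_rest(session_valid: bool):
    """Encryption at rest: the only gate is whether authentication succeeded."""
    return decrypt() if session_valid else None

def read_record_file_layer(session_valid: bool, device_managed: bool,
                           usual_location: bool):
    """File-layer protection: key release also depends on context at the
    moment of access. The two context checks here are example signals."""
    if session_valid and device_managed and usual_location:
        return decrypt()
    return None

# An attacker holding stolen-but-valid credentials, on an unmanaged
# machine in an unusual location:
assert read_record_at_rest(True) == "contractor identity document"  # readable
assert read_record_file_layer(True, False, False) is None           # stays ciphertext
```

The first function is indifferent to how the valid session was obtained; the second draws the distinction the complaint's encryption allegation fails to draw.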

This means that in the exact breach scenario the Mercor lawsuits describe - an attacker authenticating successfully with stolen credentials and accessing files through the authorized decryption pathway - both forms of encryption the complaint demands would have been fully satisfied and would have protected nothing. The files would still have been usable. The exfiltration would still have proceeded. The harm would still have flowed to 40,000 contractors.
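The transparency of at-rest encryption to an authenticated session can be sketched in a few lines. This is a deliberately simplified toy model, not any real storage API - every name in it is invented. The point is only that the decrypt step keys off the session, not off who is actually holding the credentials.

```python
# Toy model of encryption at rest. All names and values are illustrative.
USERS = {"alice": "correct-horse-battery-staple"}  # credential store
VAULT = {"contractors.db": "<ciphertext>"}          # encrypted at rest

def login(user, password):
    # Authentication checks only the credential, not who typed it.
    return user if USERS.get(user) == password else None

def read_file(session, name):
    if session is None:
        raise PermissionError("no session")
    # Any valid session gets transparent decryption. The store cannot tell
    # a legitimate user from an attacker replaying stolen credentials.
    return VAULT[name].replace("<ciphertext>", "<plaintext>")

# A legitimate login and a stolen-credential login are indistinguishable:
victim   = read_file(login("alice", "correct-horse-battery-staple"), "contractors.db")
attacker = read_file(login("alice", "correct-horse-battery-staple"), "contractors.db")
assert victim == attacker == "<plaintext>"
```

Both calls succeed, and both return plaintext, because the control being demanded in the complaints was never designed to distinguish between them.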

The lawsuits are demanding a standard of care that has already been implicitly satisfied by the mechanism of the attack itself. And demanding it more rigorously produces no meaningful benefit to the people the litigation is supposed to protect.

The Duty That Was Actually Breached

If the current standard of care - even fully implemented - would not have changed the outcome, the legal question becomes what duty would have. What obligation, if discharged, would have rendered the breach consequence-free for the 40,000 contractors who are now plaintiffs?

The answer is precise and it points to a duty that existing doctrine has not yet adequately articulated: the duty to protect data at the file layer after authentication succeeds.

This is the Post Authentication Data Security duty. It is distinct from and more demanding than the access layer duty that current doctrine recognizes. It is not a duty to prevent unauthorized access - though that duty exists and matters. It is a duty to ensure that data remains protected even when access succeeds, whether that access was legitimately obtained or achieved through credential theft, supply chain compromise, insider misuse, or any other vector that produces a valid authenticated session.

The distinction maps directly onto the facts of the Mercor breach. The attackers authenticated successfully. Every access control performed exactly as designed. The breach did not occur at the access layer - it occurred at the data layer, where no protection existed to govern what happened to files after authentication succeeded.

Under the current standard of care doctrine, Mercor's failure is characterized as an access layer failure - insufficient MFA, inadequate monitoring, poor credential hygiene. Those characterizations may be legally valid but they are factually incomplete. The more precise and more legally significant failure was the absence of file layer protection that would have rendered the authenticated access consequence-free regardless of who held the credentials.

The duty to protect data at the file layer after authentication succeeds is the duty the Mercor lawsuits are gesturing toward but failing to name. And naming it precisely is the most important legal contribution the Mercor litigation could make to the evolution of data breach doctrine.

Why the Current Standard of Care Is Structurally Insufficient

The cybersecurity industry has known for years that stolen credentials are the single biggest vulnerability in the modern security stack. This is not a controversial position. Verizon's Data Breach Investigations Report has identified compromised credentials as the leading cause of breaches for nearly a decade running. IBM's Cost of a Data Breach Report consistently ranks stolen credentials as both the most common and most expensive attack vector. Every major security framework - NIST, ISO 27001, HITRUST - includes extensive controls around identity and access precisely because the industry understands that when credentials are compromised, everything built around them collapses.

The cybersecurity industry has known this. It has known it for a long time. And it has continued to build and sell architectures that are fundamentally dependent on the integrity of those same credentials - producing a decade of breach reports confirming the problem while simultaneously recommending the same access layer controls that the breach reports prove are insufficient.

That failure has a direct legal consequence. Courts and regulators developing the standard of care in data breach cases have done what courts and regulators reasonably do - they have looked to the security industry for guidance on what constitutes reasonable practice. The standard of care that has emerged reflects the industry consensus those courts and regulators found when they looked: a perimeter-centric, access-focused framework that treats credential integrity as the primary, and in many cases sufficient, protection for sensitive data.

The doctrine is not wrong on its own terms. It accurately reflects what the industry told courts and regulators was adequate. The problem is that the industry's own data has been contradicting that consensus for years - and the legal standard has had no mechanism to update itself in response. The result is a standard of care that courts apply in good faith, that organizations implement in good faith, and that leaves sensitive unstructured files fully exposed to the primary attack vector the industry itself has identified as the leading cause of breaches for nearly a decade.

That is not a gap in legal reasoning. It is a gap between legal doctrine and technical reality - and it is a gap that the Mercor breach has rendered impossible to ignore.

The Mercor breach is the most precise possible illustration of that gap. The attack chain began with a compromised GitHub Actions workflow in an open source vulnerability scanner. It harvested credentials through a malicious dependency executing in a CI/CD pipeline. It used those credentials to authenticate as legitimate users. It accessed and exfiltrated files that the authenticated session was authorized to access. Every step of that chain operated entirely within the parameters of a security architecture that meets the current standard of care.

The standard of care that the Mercor lawsuits are invoking - the standard that Mercor allegedly failed to meet - would not have detected or prevented any step of that chain after the initial credential harvest. Because the standard is designed around preventing unauthorized access and the attack succeeded by achieving authorized access with stolen credentials.

A standard of care that cannot address the primary attack vector in the industry's own breach data is not a standard that adequately defines the duty organizations owe to the people whose data they hold.

What the Evolved Standard of Care Looks Like

The legal evolution that the Mercor lawsuits should be driving - but are not yet articulating - is a standard of care that extends the duty of protection beyond the access layer to the data layer itself.

Under an evolved standard the duty is not satisfied by encrypting data at rest and in transit. Those controls protect data from passive interception and storage compromise. They do not protect data from authenticated access using stolen credentials. They do not protect files from exfiltration by a session that the system has recognized and authorized. They are necessary components of a complete security posture but they are not sufficient to discharge the duty of care owed to people whose most sensitive personal and professional information is held in unstructured files.

The evolved standard requires file layer protection - encryption that travels with the file itself, that governs usability independent of the access layer, that remains in force regardless of what credentials were used to obtain access, and that renders the file unusable to any recipient who cannot demonstrate, at the moment of access, that they are the authorized user in the authorized context for which access was intended.
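In code terms, the difference is where the decryption key lives and when it is released. A minimal sketch of the file-layer model the evolved standard describes - an architectural illustration with invented names and a trivial policy, not FenixPyre's implementation:

```python
# File-layer protection sketch: the file is ciphertext everywhere, and the
# key is released per access only after a contextual policy check.
# All names, values, and the policy itself are illustrative.
FILE_KEYS = {"contractors.db": "key-123"}   # held by a separate key service

def policy_allows(user, device, context):
    # Evaluated at the moment of access, independent of how login happened.
    return user == "alice" and device == "managed-laptop" and context == "work-hours"

def open_file(blob, name, user, device, context):
    if not policy_allows(user, device, context):
        return blob                      # stays ciphertext: unusable
    key = FILE_KEYS[name]
    return f"<decrypted with {key}>"     # stand-in for real decryption

ciphertext = "<ciphertext>"
# Stolen credentials on an unmanaged machine: the file travels, but stays dark.
assert open_file(ciphertext, "contractors.db", "alice", "attacker-vm", "3am") == "<ciphertext>"
# The authorized user in the authorized context still works normally.
assert "decrypted" in open_file(ciphertext, "contractors.db", "alice", "managed-laptop", "work-hours")
```

The design choice that matters is that the deny path returns the blob unchanged: exfiltration still "succeeds," but what leaves is ciphertext.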

This is Post Authentication Data Security applied as a legal duty rather than a security recommendation. It is the control that, had it been in place at Mercor, would have changed the outcome completely.

The attackers authenticated successfully. They accessed the files. They exfiltrated the files. And the files were ciphertext. Not because the authentication failed. Not because the access was detected and blocked. But because the files themselves were protected in a way that made the authenticated access consequence-free for every contractor whose data was taken.

Under an evolved standard of care that recognized this duty, Mercor's failure was not inadequate MFA or lax password rotation. It was that Mercor held 40,000 people's most sensitive data in unprotected files that were fully usable to anyone who obtained valid credentials - and in a world where credential theft through supply chain compromise is the industry's leading breach vector, holding sensitive data in unprotected files is itself the breach of duty.

The Delve Scandal Proves the Point

The Mercor breach did not happen in isolation. It happened simultaneously with the exposure of Delve Technologies - the GRC automation startup that had issued compliance certifications for LiteLLM, the open source AI proxy whose compromise enabled the credential harvest that reached Mercor. Those certifications were, according to the whistleblower who exposed the company, industrialized fiction. Pre-populated attestations. Certifications issued without independent verification of the controls they purported to certify.

The convergence of these two stories is not incidental. It is the most powerful possible illustration of the gap between certified compliance and actual data protection that sits at the heart of the standard of care problem.

Mercor had compliance certifications. LiteLLM had compliance certifications. Those certifications validated access controls, security processes, and organizational security practices against the current standard of care. And none of it protected a single file when the credentials were compromised.

This is the standard of care problem rendered in its starkest form. The compliance framework the lawsuits are demanding Mercor should have met is a framework designed to certify access controls. It has no mechanism for certifying what happens to files after access succeeds. It validates the door. It has nothing to say about the files behind the door when someone walks through with a stolen key.

The Delve scandal did not create this problem. It exposed it. The problem existed in every legitimately certified organization whose sensitive files are protected only by the access controls that a valid authenticated session bypasses by definition. The certification confirms the lock works. It says nothing about the readability of what is inside when the lock is opened with a stolen key.

Post Authentication Data Security provides the protection that certification cannot - because it is not a process control that can be attested to. It is a technical control that either renders files unusable or does not. There is no compliance theater version of file layer encryption. The files are either protected or they are not. And that binary self-executing reality is precisely what the evolved standard of care should require.

The Regulatory Safe Harbor Argument

The legal implications of file layer protection extend beyond negligence theory into the regulatory framework that governs breach notification and penalty - and here the argument for an evolved standard of care becomes most immediately actionable for organizations deciding right now how to protect the files they hold.

Most data breach notification laws are triggered by the exposure of usable, readable personal data. GDPR Article 34 explicitly states that notification to affected individuals is not required when the data was encrypted and rendered unintelligible to unauthorized parties. HIPAA's Safe Harbor provision treats the breach of encrypted data as a non-reportable event. California's CCPA, New York's SHIELD Act, and most equivalent state frameworks include explicit encryption safe harbors that reduce or eliminate notification obligations when the stolen data was encrypted and remained ciphertext.

These safe harbors already exist in the regulatory framework. They already recognize that encrypted data that cannot be read does not produce the harm that breach notification laws are designed to address. They are the regulatory system's implicit acknowledgment of the principle that Post Authentication Data Security makes explicit - that what matters for data protection purposes is not whether the data was accessed but whether it was usable when it was taken.

The Mercor lawsuits are built on the premise that contractor data was compromised in a readable form. Under the regulatory safe harbor framework that already exists, file layer encrypted data that is exfiltrated but unusable does not meet the threshold for mandatory notification. The breach event that generates the legal obligation does not occur. The five lawsuits have no viable plaintiff because the harm the plaintiffs allege - exposure of readable personal data to criminal actors who can exploit it - has not occurred.

The safe harbor framework is the regulatory system pointing toward the evolved standard of care that litigation doctrine has not yet fully articulated. It already recognizes that encryption at the data layer changes the legal character of a breach. The doctrinal evolution required is to extend that recognition from a regulatory safe harbor into an affirmative duty - a standard of care that requires file layer protection not merely as a mitigating factor but as a component of the baseline obligation owed to people whose sensitive data is held in unstructured files.

What the Mercor Lawsuits Should Be Arguing

The most important legal contribution the Mercor litigation could make is to reframe the standard of care claim around the duty that was actually breached rather than the duty that existing doctrine recognizes.

The complaint should not lead with failure to implement MFA or failure to rotate passwords. Those are real failures and they belong in the complaint. But they are not the failure that made 40,000 contractors vulnerable to years of identity theft risk. The failure that did that was holding sensitive unstructured files - files containing Social Security numbers, identity documents, video recordings, and proprietary work product - without file layer protection that would have rendered those files unreadable to anyone who took them regardless of what credentials they used.

The encryption allegation in the current complaint points toward this duty but frames it as a storage security failure. The stronger and more legally significant framing is a failure of Post Authentication Data Security - a failure to protect files at the data layer in a way that maintains protection after authentication succeeds, independent of credential integrity, independent of access layer controls, independent of whether the session that accessed the files was legitimate or the product of supply chain credential theft.

That framing advances data breach doctrine in a meaningful direction. It creates a legal framework that actually maps onto the threat environment the industry's own data describes - a world in which credential compromise is the leading attack vector and access layer controls are necessary but insufficient to discharge the duty of care owed to the people whose data is at risk.

It also creates a remedy that would actually change the outcome. Not a settlement requiring better MFA and more rigorous password rotation that leaves 40,000 people's files just as usable the next time valid credentials are stolen. A standard that requires file layer protection - protection that holds when everything else fails, protection that renders credential theft consequence-free for the people whose data was taken.

The Conversation the Industry and the Legal Community Must Have Together

The Mercor lawsuits will settle. The settlement will specify controls. The controls will reflect the current standard of care. And the current standard of care will remain a decade behind the threat environment it is supposed to address.

Unless the legal community starts asking the question that the complaints are currently missing.

Not whether Mercor had adequate access controls. Whether Mercor discharged its duty to protect the files its contractors trusted it to hold - protect them in a way that maintains that protection after authentication succeeds, that holds when credentials are stolen, that renders the breach consequence-free for the people whose data is taken regardless of how the attacker obtained access.

That is the standard the threat environment demands. That is the standard the regulatory safe harbor framework is already gesturing toward. That is the standard the evolved duty of care in data breach litigation needs to articulate.

Post Authentication Data Security is not the standard of care today. It is the standard of care the Mercor breach demonstrates is necessary - and the standard that the legal community, the security industry, and the organizations that hold sensitive unstructured files have a shared obligation to establish before the next breach proves the same point at the same cost to the same people who had no choice but to trust that the files they handed over would be protected when it mattered most.

The five lawsuits filed in seven days are the most powerful available argument for why that conversation cannot wait.

FenixPyre is purpose-built to close the Post Authentication Data Security gap for unstructured data - ensuring that files remain protected at the data layer regardless of how access was obtained. In a world where supply chain attacks make credential theft an inevitability, file layer protection is not a security enhancement. It is the evolved standard of care the modern threat environment demands.


Data Protection

Mar 23, 2026

When Accenture Reports a 127% Surge in Dark Web Insider Recruitment, It’s Time to Rethink Data Security

Accenture’s Cyber Intelligence team recently published research that should alarm every CISO and board member: insider threats facilitated through dark web ecosystems are escalating at an unprecedented rate.

The numbers are stark:

  • 69% increase in insiders offering access (2025 vs. 2024)

  • 127% surge in hackers actively recruiting insiders (vs. 2022)

As Ryan Whelan, Accenture’s Global Head of Cyber Intelligence, explains:

“The insider economy is now principally designed to support early-stage intrusions, with criminal gangs increasingly relying on insiders to bypass cyber defenses.”

This is not theoretical.

Dark web posts explicitly name targets:

  • Coinbase

  • Binance

  • Kraken

  • Gemini

  • Accenture

  • Genpact

  • Spotify

  • Netflix

…and dozens more across financial services, consulting, and technology.

The going rate?

  • $3,000–$15,000 for initial access

  • $25,000 for 37 million cryptocurrency exchange records

The Real Implication of Accenture’s Findings

What this research makes clear - when taken to its logical conclusion - is this:

Managing insider risk requires more than governing access. It requires governing how data is used after access is granted.

This is the role of Post-Authentication Data Security (PADS).

PADS is a security layer that governs how data can be used after access is granted - enforcing policy at the moment of data interaction, not just at authentication.
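Concretely, "enforcing policy at the moment of data interaction" means every action on the data - not just the login - is a decision point. A hypothetical per-interaction guard, with invented action names and thresholds chosen purely for illustration:

```python
# Hypothetical per-interaction guard: the decision is made per action,
# not once at authentication. Actions and limits are illustrative.
def decide(user, action, record_count):
    if action == "view" and record_count <= 50:
        return "allow"                  # routine work proceeds untouched
    if action == "export":
        return "challenge"              # e.g. step-up verification
    if record_count > 10_000:
        return "deny"                   # bulk pull, even by a valid session
    return "allow"

assert decide("insider", "view", 10) == "allow"
assert decide("insider", "export", 10) == "challenge"
assert decide("insider", "query", 37_000_000) == "deny"
```

The same authenticated user gets three different answers depending on what they are trying to do - which is exactly the question the access layer never asks.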

What Accenture’s Research Makes Clear

Accenture’s findings highlight a structural shift in threat dynamics:

  • Insiders provide initial access and credentials (30% of cases)

  • Perimeter defenses are bypassed entirely

  • Activity appears legitimate - because it is legitimate

  • Security controls defer by design once authentication succeeds

Whelan emphasizes lifecycle controls:

  • Stronger hiring and identity verification

  • Role separation and least privilege

  • Immediate access revocation during offboarding

  • Monitoring for pre-departure activity

  • Behavioral analytics and insider threat programs

These are essential.

They reduce the likelihood that insider threats emerge - or go undetected.

But they also reveal something deeper:

Even with these controls, an authenticated user can still use data in ways that are indistinguishable from legitimate activity.

Where Existing Controls End - and Why the Gap Exists

When a recruited insider acts, the cybersecurity stack behaves exactly as designed:

  • Identity is verified

  • Access is authorized

  • Permissions are correctly applied

  • Activity aligns with role expectations

  • Monitoring systems observe “normal” behavior

From the system’s perspective:

Everything is working correctly.

And that is precisely the problem.

Because “working correctly” still allows data to be:

  • Queried

  • Downloaded

  • Copied

  • Transferred

  • Sold

Nothing is bypassed.
Nothing is broken.
No control is technically evaded.

The attack succeeds because:

The security stack is architected to stop at authentication.

Whelan’s findings reinforce this reality:

Attackers are not defeating controls - they are operating within the boundary those controls were designed to trust.

The Architectural Limitation

Modern security is built to answer one question:

Who should have access?

It is not built to answer:

What should an authenticated user be allowed to do with data - right now, in this context?

This is why insider recruitment is so effective.

Existing controls - IAM, Zero Trust, SIEM, DLP, UEBA - are optimized for:

  • Preventing unauthorized access

  • Detecting abnormal behavior

They are not designed to stop:

Authorized, normal-looking misuse of data

This is not a failure of execution.

It is a limitation of architecture.

The Missing Layer: Post-Authentication Data Security (PADS)

Accenture’s framework focuses on managing insider risk across the employee lifecycle.

PADS extends that framework into the data interaction lifecycle.

If traditional controls answer:

  • Who should have access?

  • When should access be granted or revoked?

  • Is behavior anomalous?

PADS answers:

  • What should this user be able to do with the data they can access?

  • Is this specific use of data appropriate in this context?

This is not a replacement for insider threat programs.

It is the layer that ensures their effectiveness - even when insiders act within expected patterns.

Why This Matters in the Insider Economy

The insider recruitment model works because it exploits a core assumption:

Authenticated access implies legitimate use.

Accenture’s research shows attackers are deliberately targeting that assumption.

They recruit insiders because:

  • Access is already granted

  • Activity blends into normal workflows

  • Detection becomes significantly harder

PADS shifts control from access → to data usage.

What Changes When Data Is Governed After Access

In a PADS-enabled environment:

  • Access still functions as designed

  • Authorized users still perform legitimate work

But:

  • Bulk extraction can be restricted or challenged

  • Sensitive data use can trigger contextual controls

  • Data remains protected - even outside the system

  • Actions - not just identities - are evaluated in real time

This means even if:

  • An insider is recruited

  • Credentials are valid

  • Behavior appears normal

The outcome changes.

Data is no longer freely extractable and usable simply because access was granted.
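The "bulk extraction can be restricted" point above reduces to keeping per-session state and evaluating volume in real time. A toy counter, with an invented threshold - a sketch of the idea, not a production control:

```python
from collections import defaultdict

# Toy volume governor: valid sessions keep working, but extraction volume
# is bounded in real time. The limit is invented for illustration.
LIMIT = 100  # records per session window

downloaded = defaultdict(int)

def fetch(session, n_records):
    if downloaded[session] + n_records > LIMIT:
        return "blocked"                 # or challenged / flagged for review
    downloaded[session] += n_records
    return "ok"

assert fetch("s1", 40) == "ok"
assert fetch("s1", 40) == "ok"
assert fetch("s1", 40) == "blocked"      # third pull crosses the limit
assert fetch("s2", 40) == "ok"           # other sessions are unaffected
```

No anomaly detection is involved: the third request is blocked not because it looks abnormal, but because the policy bounds the outcome.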

Aligning With Accenture’s Recommendations - And Extending Them

Whelan’s recommendations create a strong foundation:

  • Strengthen hiring and identity verification

  • Enforce role separation and least privilege

  • Revoke access immediately during offboarding

  • Monitor for behavioral anomalies

  • Expand insider threat intelligence

All of these aim to:

Prevent trusted individuals from using legitimate access to cause harm

But traditional implementations approach this indirectly.

They:

  • Limit access scope

  • Attempt to detect misuse

  • Reduce opportunity over time

They do not directly control:

What happens to data at the moment it is used

Where Traditional Controls Fall Short

Objective | Traditional Approach | Limitation

  • Prevent malicious insiders | Pre-employment screening | Cannot prevent post-hire recruitment

  • Limit exposure | RBAC / PoLP | Broad access still exists within roles

  • Stop access at risk | Offboarding | Reactive - after decision point

  • Detect misuse | UEBA / monitoring | Requires deviation from “normal”

  • Identify targeting | Threat intelligence | Does not stop insider action

These controls rely on:

  • Predicting intent

  • Detecting anomalies

  • Acting after signals appear

In insider recruitment scenarios:

Those signals may never appear in time.

How PADS Delivers the Outcome Directly

Objective | PADS Capability | Outcome

  • Limit insider impact | Data usability governance | Controls actions within valid access

  • Prevent extraction | Contextual policy enforcement | Evaluates intent at time of use

  • Reduce detection reliance | Real-time controls | No need for “abnormal” behavior

  • Mitigate insider risk | Persistent data protection | Exfiltrated data is unusable

  • Contain breaches | Outcome-based enforcement | Prevents usable data loss

PADS operates where risk actually materializes:

The moment data is accessed and used

The Strategic Implication: An Architectural Fault Line

Accenture classifies insider threats as a medium-frequency, high-impact strategic risk.

But the deeper implication is this:

Insider risk is not an edge case - it is a consequence of how cybersecurity is designed.

Whelan’s findings expose a critical assumption:

Once a user is authenticated, risk is sufficiently managed.

That assumption no longer holds.

Modern architecture treats authentication as the boundary of trust.

Everything beyond that boundary is governed by:

  • Permissions

  • Expected behavior

  • Post-event detection

Not by real-time control of data itself.

This is the fault line.

The Bottom Line

Accenture’s findings don’t just highlight the rise of insider threats - they expose a fundamental flaw in modern cybersecurity:

The assumption that risk ends when access is granted.

In reality:

That is where risk begins.

The Verizon DBIR reinforces this:

  • 74% of breaches involve the human element

  • Occurring within legitimate, authenticated sessions

No controls are bypassed.
No systems are broken.

Attackers simply operate inside the boundary the stack was designed to trust.

Whelan’s recommendations strengthen identity and access.

But they also point to a deeper truth:

Without governing how data is used after access is granted, the problem remains unsolved.

That is what Post-Authentication Data Security (PADS) delivers.

It shifts security from:

  • Controlling entry

To:

  • Controlling outcome

Because in today’s threat landscape:

Access is no longer the boundary of risk. Data usage is.

Resources

  • Accenture Cyber Intelligence Report: Insider Threat Escalation (2025)

  • What is PADS - The definition, category map, and how PADS completes the security model

  • Why PADS now - The forces driving post-authentication data theft

Final Thought

Every employee with access to sensitive data is a recruitment target.

Traditional security stops at authentication.

That’s exactly where the insider economy starts.

Data Protection

Apr 17, 2026

The Duty of Care Gap: Why Today's Breach Litigation Standard Was Built for Yesterday's Attack

In the week of April 1 through April 7, 2026, five class action lawsuits were filed against Mercor, a $10 billion AI training startup serving OpenAI, Anthropic, and Meta. Five lawsuits in seven days. Each one built around the same fundamental argument - that Mercor failed to implement adequate security measures to protect the sensitive data of more than 40,000 contractors whose personal information, professional work product, and identifying documents were stolen in one of the most consequential data breaches of 2026.

The plaintiffs are not wrong that a failure occurred. The breach was real. The harm is real. The stolen data - 939 gigabytes of proprietary source code, 3 terabytes of video interview recordings and identity verification documents, a 211 gigabyte user database, internal communications, and AI training methodologies that Y Combinator CEO Garry Tan described as representing billions in value and a major national security issue - is now in the hands of attackers who obtained it through a cascading supply chain attack that harvested legitimate credentials from a compromised open source dependency.

The lawsuits are right that Mercor failed. They are wrong about what that failure actually was. And in being wrong about that, they are asking for a legal remedy built on a standard of care argument that - even if fully satisfied - would not have protected a single file when the credentials were compromised.

That is not a minor procedural deficiency. It is a fundamental misidentification of the duty that was breached. And it matters enormously - not just for the 40,000 contractors who deserve meaningful remedy, but for every organization that will read the Mercor settlement, implement its required controls, and believe they have met their obligation to protect the people whose data they hold.

They will not have. And the next breach will prove it.

The Standard of Care Argument the Lawsuits Are Building

To understand why the lawsuits are asking for the wrong fix, it is necessary to understand precisely what legal standard they are invoking and where that standard falls short.

Data breach class actions in the United States are predominantly built on negligence theory. To succeed on a negligence claim a plaintiff must establish that the defendant owed a duty of care, that the defendant breached that duty, that the breach caused the plaintiff's harm, and that the plaintiff suffered cognizable damages.

The duty of care in data breach cases has been progressively defined by courts, regulators, and compliance frameworks over the past two decades. The FTC has enforcement authority over unfair or deceptive data security practices. The SEC has specific guidance for registered investment advisers and technology companies on data protection obligations. State attorneys general have brought actions under consumer protection statutes. Courts have increasingly recognized an implicit duty to protect sensitive personal data commensurate with the nature of the data held and the reasonable expectations of the people who provided it.

What has emerged from this body of law, regulation, and enforcement is a standard of care built almost entirely around access layer controls. The duty as courts and regulators currently understand it is a duty to prevent unauthorized access. Implement MFA. Segment networks. Monitor for anomalous activity. Rotate credentials. Conduct regular security audits. Encrypt data at rest and in transit.

The Mercor lawsuits invoke exactly this standard. The Gill complaint alleges failure to implement MFA, failure to limit access to PII, failure to monitor systems, failure to rotate passwords, and failure to encrypt sensitive data during storage and transmission. It is a textbook recitation of the access layer standard of care as it currently exists in data breach litigation doctrine.

And here is the legal problem that nobody in any of the five courtrooms is currently confronting:

That standard of care - even fully satisfied - would not have prevented the harm the plaintiffs suffered. Because the harm did not originate from a failure of access layer controls. It originated from a failure at the data layer. And the legal doctrine has not yet caught up to that distinction.

The Encryption Allegation Points at the Right Problem and Then Misses It

Among all the allegations in the Mercor complaints, the failure to encrypt sensitive data during storage and transmission is the one that comes closest to identifying the actual duty that was breached. It points toward the right problem. But the way it is framed - listed alongside MFA and password rotation as one item among several access layer improvements - reveals that the plaintiff's attorneys understand encryption as a storage security measure rather than as a fundamentally different category of data protection obligation.

That distinction is not semantic. It is the difference between a remedy that changes the outcome for 40,000 contractors and a remedy that produces a more expensive breach with identical consequences.

Encryption at rest means data sitting in a database or storage system is encrypted when it is not being accessed. Encryption in transit means data moving between systems is encrypted as it travels. Both are legitimate and important security controls, and both are widely recognized components of the current standard of care. But both are rendered ineffective the moment an attacker obtains valid credentials. When a user authenticates through the normal access pathway, the system decrypts the data for them; it cannot distinguish a legitimate user from an attacker holding stolen credentials, and the encryption that was supposed to protect the data dissolves on contact with a valid authenticated session.

This means that in the exact breach scenario the Mercor lawsuits describe - an attacker authenticating successfully with stolen credentials and accessing files through the authorized decryption pathway - both forms of encryption the complaint demands would have been fully satisfied and would have protected nothing. The files would still have been usable. The exfiltration would still have proceeded. The harm would still have flowed to 40,000 contractors.
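That failure mode can be sketched in a few lines of Python. Everything here is hypothetical and illustrative - the class names are invented, and the XOR keystream is a toy stand-in for real at-rest encryption, not something to use in practice - but the architectural point survives the simplification: a store that encrypts transparently at rest decrypts for any valid session, so stolen credentials receive exactly what the legitimate owner receives.

```python
import hashlib
import secrets

def toy_cipher(key: bytes, data: bytes) -> bytes:
    """Symmetric XOR keystream -- a toy stand-in for real at-rest encryption."""
    stream, counter = b"", 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

class TransparentAtRestStore:
    """Encrypts on write, decrypts on read -- for ANY authenticated session."""
    def __init__(self):
        self._key = secrets.token_bytes(32)   # key is held by the storage layer itself
        self._blobs = {}

    def put(self, name: str, plaintext: bytes) -> None:
        self._blobs[name] = toy_cipher(self._key, plaintext)

    def get(self, name: str, session_authenticated: bool) -> bytes:
        if not session_authenticated:
            raise PermissionError("no valid session")
        # The store cannot tell stolen credentials from legitimate ones:
        # every valid session receives plaintext.
        return toy_cipher(self._key, self._blobs[name])

store = TransparentAtRestStore()
store.put("contractor.pdf", b"SSN: 123-45-6789")
# An attacker who phished valid credentials gets exactly what the owner gets:
leaked = store.get("contractor.pdf", session_authenticated=True)
```

Encryption at rest is fully satisfied in this sketch - the bytes on disk are ciphertext - yet the exfiltrated result is plaintext. The control defends against disk theft, not credential theft.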

The lawsuits are demanding a standard of care that has already been implicitly satisfied by the mechanism of the attack itself. And demanding it more rigorously produces no meaningful benefit to the people the litigation is supposed to protect.

The Duty That Was Actually Breached

If the current standard of care - even fully implemented - would not have changed the outcome, the legal question becomes what duty would have. What obligation, if discharged, would have rendered the breach consequence-free for the 40,000 contractors who are now plaintiffs?

The answer is precise and it points to a duty that existing doctrine has not yet adequately articulated: the duty to protect data at the file layer after authentication succeeds.

This is the Post-Authentication Data Security duty. It is distinct from and more demanding than the access layer duty that current doctrine recognizes. It is not a duty to prevent unauthorized access - though that duty exists and matters. It is a duty to ensure that data remains protected even when access succeeds, whether that access was legitimately obtained or achieved through credential theft, supply chain compromise, insider misuse, or any other vector that produces a valid authenticated session.

The distinction maps directly onto the facts of the Mercor breach. The attackers authenticated successfully. Every access control performed exactly as designed. The breach did not occur at the access layer - it occurred at the data layer, where no protection existed to govern what happened to files after authentication succeeded.

Under the current standard of care doctrine, Mercor's failure is characterized as an access layer failure - insufficient MFA, inadequate monitoring, poor credential hygiene. Those characterizations may be legally valid but they are factually incomplete. The more precise and more legally significant failure was the absence of file layer protection that would have rendered the authenticated access consequence-free regardless of who held the credentials.

The duty to protect data at the file layer after authentication succeeds is the duty the Mercor lawsuits are gesturing toward but failing to name. And naming it precisely is the most important legal contribution the Mercor litigation could make to the evolution of data breach doctrine.

Why the Current Standard of Care Is Structurally Insufficient

The cybersecurity industry has known for years that stolen credentials are the single biggest vulnerability in the modern security stack. This is not a controversial position. Verizon's Data Breach Investigations Report has identified compromised credentials as the leading cause of breaches for nearly a decade running. IBM's Cost of a Data Breach Report consistently ranks stolen credentials as both the most common and most expensive attack vector. Every major security framework - NIST, ISO 27001, HITRUST - includes extensive controls around identity and access precisely because the industry understands that when credentials are compromised, everything built around them collapses.

The cybersecurity industry has known this. It has known it for a long time. And it has continued to build and sell architectures that are fundamentally dependent on the integrity of those same credentials - producing a decade of breach reports confirming the problem while simultaneously recommending the same access layer controls that the breach reports prove are insufficient.

That failure has a direct legal consequence. Courts and regulators developing the standard of care in data breach cases have done what courts and regulators reasonably do - they have looked to the security industry for guidance on what constitutes reasonable practice. The standard of care that has emerged reflects the industry consensus those courts and regulators found when they looked: a perimeter-centric, access-focused framework that treats credential integrity as the primary - and in many cases sufficient - protection for sensitive data.

The doctrine is not wrong on its own terms. It accurately reflects what the industry told courts and regulators was adequate. The problem is that the industry's own data has been contradicting that consensus for years - and the legal standard has had no mechanism to update itself in response. The result is a standard of care that courts apply in good faith, that organizations implement in good faith, and that leaves sensitive unstructured files fully exposed to the primary attack vector the industry itself has identified as the leading cause of breaches for nearly a decade.

That is not a gap in legal reasoning. It is a gap between legal doctrine and technical reality - and it is a gap that the Mercor breach has rendered impossible to ignore.

The Mercor breach is the most precise possible illustration of that gap. The attack chain began with a compromised GitHub Actions workflow in an open source vulnerability scanner. It harvested credentials through a malicious dependency executing in a CI/CD pipeline. It used those credentials to authenticate as legitimate users. It accessed and exfiltrated files that the authenticated session was authorized to access. Every step of that chain operated entirely within the parameters of a security architecture that meets the current standard of care.

The standard of care that the Mercor lawsuits are invoking - the standard that Mercor allegedly failed to meet - would not have detected or prevented any step of that chain after the initial credential harvest. Because the standard is designed around preventing unauthorized access and the attack succeeded by achieving authorized access with stolen credentials.

A standard of care that cannot address the primary attack vector in the industry's own breach data is not a standard that adequately defines the duty organizations owe to the people whose data they hold.

What the Evolved Standard of Care Looks Like

The legal evolution that the Mercor lawsuits should be driving - but are not yet articulating - is a standard of care that extends the duty of protection beyond the access layer to the data layer itself.

Under an evolved standard the duty is not satisfied by encrypting data at rest and in transit. Those controls protect data from passive interception and storage compromise. They do not protect data from authenticated access using stolen credentials. They do not protect files from exfiltration by a session that the system has recognized and authorized. They are necessary components of a complete security posture but they are not sufficient to discharge the duty of care owed to people whose most sensitive personal and professional information is held in unstructured files.

The evolved standard requires file layer protection - encryption that travels with the file itself, that governs usability independent of the access layer, that remains in force regardless of what credentials were used to obtain access, and that renders the file unusable to any recipient who cannot demonstrate, at the moment of access, that they are the authorized user in the authorized context for which access was intended.
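One common way to realize that requirement - sketched here with invented names and the same toy cipher, purely as an illustration of the pattern rather than any particular product's implementation - is envelope encryption with policy-gated key release: each file is ciphertext wherever it travels, and its per-file data key is released only after a usage-time context check that runs on every open, after authentication has already succeeded.

```python
import hashlib
import secrets

def toy_cipher(key: bytes, data: bytes) -> bytes:
    """XOR keystream -- illustration only; a real system would use a vetted AEAD."""
    stream, counter = b"", 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

class KeyService:
    """Releases a file's data key only if the access context passes policy."""
    def __init__(self, policy):
        self._policy = policy     # callable: context dict -> bool, checked on EVERY open
        self._keys = {}           # file_id -> data key; the key never travels with the file

    def new_key(self, file_id: str) -> bytes:
        self._keys[file_id] = secrets.token_bytes(32)
        return self._keys[file_id]

    def release(self, file_id: str, context: dict) -> bytes:
        if not self._policy(context):
            raise PermissionError("context failed policy; file stays ciphertext")
        return self._keys[file_id]

# Hypothetical policy: managed device AND expected geography.
svc = KeyService(lambda ctx: ctx.get("device") == "managed" and ctx.get("geo") == "US")

blob = toy_cipher(svc.new_key("w2.pdf"), b"SSN: 123-45-6789")  # what actually sits in storage

# A legitimate open passes the context check and recovers plaintext:
opened = toy_cipher(svc.release("w2.pdf", {"device": "managed", "geo": "US"}), blob)
```

Exfiltrating `blob` with stolen credentials moves only ciphertext; without a key release that passes the context check, the file is unusable to whoever holds it.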

This is Post-Authentication Data Security applied as a legal duty rather than a security recommendation. It is the control that, had it been in place at Mercor, would have changed the outcome completely.

The attackers authenticated successfully. They accessed the files. They exfiltrated the files. And the files were ciphertext. Not because the authentication failed. Not because the access was detected and blocked. But because the files themselves were protected in a way that made the authenticated access consequence-free for every contractor whose data was taken.

Under an evolved standard of care that recognized this duty, Mercor's failure is not that it lacked adequate MFA or rotated passwords too infrequently. It is that it held 40,000 people's most sensitive data in unprotected files that were fully usable to anyone who obtained valid credentials - and in a world where credential theft through supply chain compromise is the industry's leading breach vector, holding sensitive data in unprotected files is itself the breach of duty.

The Delve Scandal Proves the Point

The Mercor breach did not happen in isolation. It happened simultaneously with the exposure of Delve Technologies - the GRC automation startup that had issued compliance certifications for LiteLLM, the open source AI proxy whose compromise enabled the credential harvest that reached Mercor. Those certifications were, according to the whistleblower who exposed the company, industrialized fiction. Pre-populated attestations. Certifications issued without independent verification of the controls they purported to certify.

The convergence of these two stories is not incidental. It is the most powerful possible illustration of the gap between certified compliance and actual data protection that sits at the heart of the standard of care problem.

Mercor had compliance certifications. LiteLLM had compliance certifications. Those certifications validated access controls, security processes, and organizational security practices against the current standard of care. And none of it protected a single file when the credentials were compromised.

This is the standard of care problem rendered in its starkest form. The compliance framework the lawsuits fault Mercor for failing to meet is a framework designed to certify access controls. It has no mechanism for certifying what happens to files after access succeeds. It validates the door. It has nothing to say about the files behind the door when someone walks through with a stolen key.

The Delve scandal did not create this problem. It exposed it. The problem existed in every legitimately certified organization whose sensitive files are protected only by the access controls that a valid authenticated session bypasses by definition. The certification confirms the lock works. It says nothing about the readability of what is inside when the lock is opened with a stolen key.

Post-Authentication Data Security provides the protection that certification cannot - because it is not a process control that can be attested to. It is a technical control that either renders files unusable or does not. There is no compliance-theater version of file layer encryption. The files are either protected or they are not. And that binary, self-executing reality is precisely what the evolved standard of care should require.

The Regulatory Safe Harbor Argument

The legal implications of file layer protection extend beyond negligence theory into the regulatory framework that governs breach notification and penalty - and here the argument for an evolved standard of care becomes most immediately actionable for organizations deciding right now how to protect the files they hold.

Most data breach notification laws are triggered by the exposure of usable, readable personal data. GDPR Article 34 explicitly states that notification to affected individuals is not required when data was encrypted and rendered unintelligible to unauthorized parties. HIPAA's Safe Harbor provision categorizes encrypted breached data as a non-reportable event. California's CCPA, New York's SHIELD Act, and most equivalent state frameworks include explicit encryption safe harbors that reduce or eliminate notification obligations when the stolen data was encrypted and therefore remained ciphertext.

These safe harbors already exist in the regulatory framework. They already recognize that encrypted data that cannot be read does not produce the harm that breach notification laws are designed to address. They are the regulatory system's implicit acknowledgment of the principle that Post-Authentication Data Security makes explicit - that what matters for data protection purposes is not whether the data was accessed but whether it was usable when it was taken.

The Mercor lawsuits are built on the premise that contractor data was compromised in readable form. Under the regulatory safe harbor framework that already exists, file layer encrypted data that is exfiltrated but unusable does not meet the threshold for mandatory notification. Had Mercor's files carried that protection, the breach event that generates the legal obligation would never have occurred - and the five lawsuits would have no viable plaintiff, because the harm the plaintiffs allege, exposure of readable personal data to criminal actors who can exploit it, would not have happened.

The safe harbor framework is the regulatory system pointing toward the evolved standard of care that litigation doctrine has not yet fully articulated. It already recognizes that encryption at the data layer changes the legal character of a breach. The doctrinal evolution required is to extend that recognition from a regulatory safe harbor into an affirmative duty - a standard of care that requires file layer protection not merely as a mitigating factor but as a component of the baseline obligation owed to people whose sensitive data is held in unstructured files.

What the Mercor Lawsuits Should Be Arguing

The most important legal contribution the Mercor litigation could make is to reframe the standard of care claim around the duty that was actually breached rather than the duty that existing doctrine recognizes.

The complaint should not lead with failure to implement MFA or failure to rotate passwords. Those are real failures and they belong in the complaint. But they are not the failure that made 40,000 contractors vulnerable to years of identity theft risk. The failure that did that was holding sensitive unstructured files - files containing Social Security numbers, identity documents, video recordings, and proprietary work product - without file layer protection that would have rendered those files unreadable to anyone who took them regardless of what credentials they used.

The encryption allegation in the current complaint points toward this duty but frames it as a storage security failure. The stronger and more legally significant framing is a failure of Post-Authentication Data Security - a failure to protect files at the data layer in a way that maintains protection after authentication succeeds, independent of credential integrity, independent of access layer controls, independent of whether the session that accessed the files was legitimate or the product of supply chain credential theft.

That framing advances data breach doctrine in a meaningful direction. It creates a legal framework that actually maps onto the threat environment the industry's own data describes - a world in which credential compromise is the leading attack vector and access layer controls are necessary but insufficient to discharge the duty of care owed to the people whose data is at risk.

It also creates a remedy that would actually change the outcome. Not a settlement requiring better MFA and more rigorous password rotation that leaves 40,000 people's files just as usable the next time valid credentials are stolen. A standard that requires file layer protection - protection that holds when everything else fails, protection that renders credential theft consequence-free for the people whose data was taken.

The Conversation the Industry and the Legal Community Must Have Together

The Mercor lawsuits will settle. The settlement will specify controls. The controls will reflect the current standard of care. And the current standard of care will remain a decade behind the threat environment it is supposed to address.

Unless the legal community starts asking the question that the complaints are currently missing.

Not whether Mercor had adequate access controls. Whether Mercor discharged its duty to protect the files its contractors trusted it to hold - protect them in a way that maintains that protection after authentication succeeds, that holds when credentials are stolen, that renders the breach consequence-free for the people whose data is taken regardless of how the attacker obtained access.

That is the standard the threat environment demands. That is the standard the regulatory safe harbor framework is already gesturing toward. That is the standard the evolved duty of care in data breach litigation needs to articulate.

Post-Authentication Data Security is not the standard of care today. It is the standard of care the Mercor breach demonstrates is necessary - and the standard that the legal community, the security industry, and the organizations that hold sensitive unstructured files have a shared obligation to establish before the next breach proves the same point at the same cost to the same people who had no choice but to trust that the files they handed over would be protected when it mattered most.

The five lawsuits filed in seven days are the most powerful available argument for why that conversation cannot wait.

FenixPyre is purpose-built to close the Post Authentication Data Security gap for unstructured data - ensuring that files remain protected at the data layer regardless of how access was obtained. In a world where supply chain attacks make credential theft an inevitability, file layer protection is not a security enhancement. It is the evolved standard of care the modern threat environment demands.


Data Protection

Mar 23, 2026

When Accenture Reports a 127% Surge in Dark Web Insider Recruitment, It’s Time to Rethink Data Security

Accenture’s Cyber Intelligence team recently published research that should alarm every CISO and board member: insider threats facilitated through dark web ecosystems are escalating at an unprecedented rate.

The numbers are stark:

  • 69% increase in insiders offering access (2025 vs. 2024)

  • 127% surge in hackers actively recruiting insiders (vs. 2022)

As Ryan Whelan, Accenture’s Global Head of Cyber Intelligence, explains:

“The insider economy is now principally designed to support early-stage intrusions, with criminal gangs increasingly relying on insiders to bypass cyber defenses.”

This is not theoretical.

Dark web posts explicitly name targets:

  • Coinbase

  • Binance

  • Kraken

  • Gemini

  • Accenture

  • Genpact

  • Spotify

  • Netflix

…and dozens more across financial services, consulting, and technology.

The going rate?

  • $3,000–$15,000 for initial access

  • $25,000 for 37 million cryptocurrency exchange records

The Real Implication of Accenture’s Findings

What this research makes clear - when taken to its logical conclusion - is this:

Managing insider risk requires more than governing access. It requires governing how data is used after access is granted.

This is the role of Post-Authentication Data Security (PADS).

PADS is a security layer that governs how data can be used after access is granted - enforcing policy at the moment of data interaction, not just at authentication.

What Accenture’s Research Makes Clear

Accenture’s findings highlight a structural shift in threat dynamics:

  • Insiders provide initial access and credentials (30% of cases)

  • Perimeter defenses are bypassed entirely

  • Activity appears legitimate - because it is legitimate

  • Security controls stand down by design once authentication succeeds

Whelan emphasizes lifecycle controls:

  • Stronger hiring and identity verification

  • Role separation and least privilege

  • Immediate access revocation during offboarding

  • Monitoring for pre-departure activity

  • Behavioral analytics and insider threat programs

These are essential.

They reduce the likelihood that insider threats emerge - or go undetected.

But they also reveal something deeper:

Even with these controls, an authenticated user can still use data in ways that are indistinguishable from legitimate activity.

Where Existing Controls End - and Why the Gap Exists

When a recruited insider acts, the cybersecurity stack behaves exactly as designed:

  • Identity is verified

  • Access is authorized

  • Permissions are correctly applied

  • Activity aligns with role expectations

  • Monitoring systems observe “normal” behavior

From the system’s perspective:

Everything is working correctly.

And that is precisely the problem.

Because “working correctly” still allows data to be:

  • Queried

  • Downloaded

  • Copied

  • Transferred

  • Sold

Nothing is bypassed.
Nothing is broken.
No control is technically evaded.

The attack succeeds because:

The security stack is architected to stop at authentication.

Whelan’s findings reinforce this reality:

Attackers are not defeating controls - they are operating within the boundary those controls were designed to trust.

The Architectural Limitation

Modern security is built to answer one question:

Who should have access?

It is not built to answer:

What should an authenticated user be allowed to do with data - right now, in this context?

This is why insider recruitment is so effective.

Existing controls - IAM, Zero Trust, SIEM, DLP, UEBA - are optimized for:

  • Preventing unauthorized access

  • Detecting abnormal behavior

They are not designed to stop:

Authorized, normal-looking misuse of data

This is not a failure of execution.

It is a limitation of architecture.

The Missing Layer: Post-Authentication Data Security (PADS)

Accenture’s framework focuses on managing insider risk across the employee lifecycle.

PADS extends that framework into the data interaction lifecycle.

If traditional controls answer:

  • Who should have access?

  • When should access be granted or revoked?

  • Is behavior anomalous?

PADS answers:

  • What should this user be able to do with the data they can access?

  • Is this specific use of data appropriate in this context?

This is not a replacement for insider threat programs.

It is the layer that ensures their effectiveness - even when insiders act within expected patterns.

Why This Matters in the Insider Economy

The insider recruitment model works because it exploits a core assumption:

Authenticated access implies legitimate use.

Accenture’s research shows attackers are deliberately targeting that assumption.

They recruit insiders because:

  • Access is already granted

  • Activity blends into normal workflows

  • Detection becomes significantly harder

PADS shifts control from access to data usage.

What Changes When Data Is Governed After Access

In a PADS-enabled environment:

  • Access still functions as designed

  • Authorized users still perform legitimate work

But:

  • Bulk extraction can be restricted or challenged

  • Sensitive data use can trigger contextual controls

  • Data remains protected - even outside the system

  • Actions - not just identities - are evaluated in real time

This means even if:

  • An insider is recruited

  • Credentials are valid

  • Behavior appears normal

The outcome changes.

Data is no longer freely extractable and usable simply because access was granted.
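That shift can be sketched as a small usage-time policy engine - all names and thresholds here are hypothetical, a minimal illustration of the pattern rather than any vendor's actual enforcement logic. The engine evaluates each data action, not just the identity behind it, and can allow, challenge, or deny within a perfectly valid session.

```python
from dataclasses import dataclass

@dataclass
class UsageContext:
    user: str
    action: str            # e.g. "read", "download", "export"
    record_count: int
    device_managed: bool

class UsagePolicyEngine:
    """Evaluates each data action inside an already-authenticated session."""
    def __init__(self, bulk_threshold: int = 1000):
        self.bulk_threshold = bulk_threshold
        self._pulled = {}  # records extracted per user in the current window

    def decide(self, ctx: UsageContext) -> str:
        # Authentication is assumed -- that question was settled at login.
        if not ctx.device_managed:
            return "deny"                  # data never leaves policy scope
        total = self._pulled.get(ctx.user, 0) + ctx.record_count
        self._pulled[ctx.user] = total
        if total > self.bulk_threshold:
            return "challenge"             # step-up verification on bulk extraction
        return "allow"

engine = UsagePolicyEngine(bulk_threshold=1000)
engine.decide(UsageContext("alice", "read", 5, device_managed=True))        # routine work
engine.decide(UsageContext("alice", "export", 50_000, device_managed=True))  # bulk pull
```

A recruited insider's credentials stay valid throughout; what changes is that a bulk export no longer looks like any other authorized read.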

Aligning With Accenture’s Recommendations - And Extending Them

Whelan’s recommendations create a strong foundation:

  • Strengthen hiring and identity verification

  • Enforce role separation and least privilege

  • Revoke access immediately during offboarding

  • Monitor for behavioral anomalies

  • Expand insider threat intelligence

All of these aim to:

Prevent trusted individuals from using legitimate access to cause harm

But traditional implementations approach this indirectly.

They:

  • Limit access scope

  • Attempt to detect misuse

  • Reduce opportunity over time

They do not directly control:

What happens to data at the moment it is used

Where Traditional Controls Fall Short

| Objective                  | Traditional Approach     | Limitation                             |
|----------------------------|--------------------------|----------------------------------------|
| Prevent malicious insiders | Pre-employment screening | Cannot prevent post-hire recruitment   |
| Limit exposure             | RBAC / PoLP              | Broad access still exists within roles |
| Stop access at risk        | Offboarding              | Reactive - after decision point        |
| Detect misuse              | UEBA / monitoring        | Requires deviation from “normal”       |
| Identify targeting         | Threat intelligence      | Does not stop insider action           |

These controls rely on:

  • Predicting intent

  • Detecting anomalies

  • Acting after signals appear

In insider recruitment scenarios:

Those signals may never appear in time.

How PADS Delivers the Outcome Directly

| Objective                 | PADS Capability               | Outcome                               |
|---------------------------|-------------------------------|---------------------------------------|
| Limit insider impact      | Data usability governance     | Controls actions within valid access  |
| Prevent extraction        | Contextual policy enforcement | Evaluates intent at time of use       |
| Reduce detection reliance | Real-time controls            | No need for “abnormal” behavior       |
| Mitigate insider risk     | Persistent data protection    | Exfiltrated data is unusable          |
| Contain breaches          | Outcome-based enforcement     | Prevents usable data loss             |

PADS operates where risk actually materializes:

The moment data is accessed and used

The Strategic Implication: An Architectural Fault Line

Accenture classifies insider threats as a medium-frequency, high-impact strategic risk.

But the deeper implication is this:

Insider risk is not an edge case - it is a consequence of how cybersecurity is designed.

Whelan’s findings expose a critical assumption:

Once a user is authenticated, risk is sufficiently managed.

That assumption no longer holds.

Modern architecture treats:

  • Authentication as the boundary of trust

Everything beyond that boundary is governed by:

  • Permissions

  • Expected behavior

  • Post-event detection

Not by real-time control of data itself.

This is the fault line.

The Bottom Line

Accenture’s findings don’t just highlight the rise of insider threats - they expose a fundamental flaw in modern cybersecurity:

The assumption that risk ends when access is granted.

In reality:

That is where risk begins.

The Verizon DBIR reinforces this:

  • 74% of breaches involve the human element

  • Occurring within legitimate, authenticated sessions

No controls are bypassed.
No systems are broken.

Attackers simply operate inside the boundary the stack was designed to trust.

Whelan’s recommendations strengthen identity and access.

But they also point to a deeper truth:

Without governing how data is used after access is granted, the problem remains unsolved.

That is what Post-Authentication Data Security (PADS) delivers.

It shifts security from:

  • Controlling entry

To:

  • Controlling outcome

Because in today’s threat landscape:

Access is no longer the boundary of risk. Data usage is.

Resources

  • Accenture Cyber Intelligence Report: Insider Threat Escalation (2025)

  • What is PADS - The definition, category map, and how PADS completes the security model

  • Why PADS now - The forces driving post-authentication data theft

Final Thought

Every employee with access to sensitive data is a recruitment target.

Traditional security stops at authentication.

That’s exactly where the insider economy starts.

Secure, out of the box

Every tool you own stops at login. That's exactly where attackers start.

PADS turns authentication compromise into a harmless, contained incident, not a breach.


© 2018-2026 FenixPyre Inc, All rights reserved
