LangChain Core Vulnerability Exploitation CVE-2025-68664
BLUF
A critical flaw in the LangChain framework (CVE-2025-68664) allows unauthenticated attackers to extract secrets via serialization injection.
Executive Cost Summary
This cost analysis was developed by the CyberDax team using expert judgment and assisted analytical tools to support clarity and consistency.
For organizations affected by CVE-2025-68664 exploitation or suspected abuse:
Low-end total cost: $850,000 – $1.1M
(Rapid patching, limited exposure, no evidence of downstream misuse)
Typical expected range: $1M – $2.5M
(Secret rotation, service disruption, customer communications, and assurance activities)
Upper-bound realistic scenarios: $3M – $4.5M
(Chained access via exposed credentials, regulatory scrutiny, prolonged investigations)
Key cost driver:
Costs are driven less by immediate outage and more by loss of trust in AI application boundaries. The need to re-establish assurance around secret handling, validate that no secondary systems were accessed, and demonstrate improved AI governance significantly amplifies financial and operational impact.
Potential Affected Sectors
· Technology
· AI/Machine Learning companies
· Organizations that use the LangChain framework
· Potentially critical infrastructure
Potential Affected Countries
Global
Date of First Reported Activity
· December 26, 2025
Date of Last Reported Activity Update
· December 26, 2025
CVE-2025-68664
CVSS 3.1
· (9.3) AV:N/AC:L/PR:N/UI:N/S:C/C:H/I:L/A:N
Nessus ID
· There is no Nessus ID available for CVE-2025-68664 at this time
Is this on the KEV list?
· No
Patching/Mitigation Data:
Patch Release Date
December 26, 2025
Link to patch information
· hxxps://www.upwind.io/feed/cve-2025-68664-langchain-serialization-injection
Mitigation
· Upgrade LangChain
o Update langchain-core to a patched version; the fix was released on December 26, 2025.
· Disable Secret Resolution
o Turn off the ability to resolve environment variables during deserialization by default; enable it only when dealing with completely trusted data sources.
· Input Validation
o Carefully validate any serialized data to ensure it is not influenced by untrusted user input before deserialization (a minimal sketch follows this list).
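The following is a minimal sketch of the second and third mitigations, assuming a JSON transport; the secrets_map and secrets_from_env keywords are assumed to be available on load() in your installed langchain-core version, and the helper names are illustrative rather than part of the library.

# Sketch: refuse untrusted payloads that carry LangChain serialization markers,
# and avoid resolving secrets from the process environment during load().
import json
from langchain_core.load import load

LC_MARKER_KEYS = {"lc", "lc_secrets"}

def contains_lc_markers(obj) -> bool:
    # Recursively check a decoded JSON structure for LangChain marker keys.
    if isinstance(obj, dict):
        if LC_MARKER_KEYS & obj.keys():
            return True
        return any(contains_lc_markers(v) for v in obj.values())
    if isinstance(obj, list):
        return any(contains_lc_markers(v) for v in obj)
    return False

def safe_load(raw_json: str):
    data = json.loads(raw_json)
    if contains_lc_markers(data):
        raise ValueError("refusing to deserialize untrusted LangChain markers")
    # An empty secrets_map plus secrets_from_env=False keeps the reviver from
    # pulling values out of os.environ; older releases may not accept the flag.
    return load(data, secrets_map={}, secrets_from_env=False)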
APT Names
· There are no APT groups associated with CVE-2025-68664 at this time
Associated Criminal Organization Names
· There are no criminal organizations associated with CVE-2025-68664 at this time
IOCs
· There have been no IOCs publicly released at this time.
Tools Used in Campaign
Exploit Techniques
Prompt Injection
· This is the primary delivery mechanism. Attackers use crafted prompts to trick an LLM into producing a response that contains the internal LangChain marker key lc. When this malicious output is subsequently serialized by the application, it becomes an "armed" object ready to be triggered upon deserialization.
Secret Exfiltration Payloads
· Exploits typically focus on the lc_secrets key within the injected dictionary. This allows attackers to automatically extract environment variables such as AWS/Azure credentials, database connection strings, and LLM API keys (e.g., OpenAI or Anthropic keys) when the data is reloaded.
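For orientation, the fragment below is a hypothetical reconstruction of what such an injected object can look like, modeled on LangChain's documented serialization format rather than on a captured exploit; the targeted variable name is illustrative only.

# Hypothetical injected fragment modeled on LangChain's serialization format.
# If it survives the dumps()/load() round trip, the reviver may resolve the
# named environment variable and place its value into the rebuilt object.
injected = {
    "lc": 1,
    "type": "secret",
    "id": ["OPENAI_API_KEY"],  # any environment variable name can be targeted
}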
Arbitrary Module Loading
· Because the vulnerability bypasses the intended class allowlist in langchain-core/load/load.py, attackers can instantiate arbitrary Python global modules. This can escalate from secret theft to Remote Code Execution (RCE) depending on the libraries available in the local environment.
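A constructor-type payload uses the same envelope; the sketch below is again hypothetical, with the import path chosen only to illustrate the format.

# Hypothetical constructor-type fragment. On vulnerable versions, load() walks
# the "id" path to import and instantiate the named class with attacker-chosen
# kwargs; which classes are reachable depends on the local environment.
injected = {
    "lc": 1,
    "type": "constructor",
    "id": ["langchain", "prompts", "prompt", "PromptTemplate"],  # illustrative
    "kwargs": {"template": "{x}", "input_variables": ["x"]},
}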
Specialized Research & Audit Tools
Penligent.ai (Deep Audit Technology)
· Used for dynamic sandbox verification and semantic-aware auditing. It identifies vulnerable paths by parsing the Abstract Syntax Tree (AST) to see where user data flows into dumps() or load().
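A simplified illustration of that idea (not Penligent's implementation) is to walk a file's AST and flag call sites of the serialization helpers so a reviewer can check whether untrusted data reaches them.

# Minimal AST sweep: list call sites of dumps()/dumpd()/load()/loads() in a
# source file so each one can be reviewed for untrusted inputs.
import ast
import sys

TARGETS = {"dumps", "dumpd", "load", "loads"}

def find_serialization_calls(path: str) -> None:
    with open(path, encoding="utf-8") as f:
        tree = ast.parse(f.read(), filename=path)
    for node in ast.walk(tree):
        if isinstance(node, ast.Call):
            func = node.func
            name = func.attr if isinstance(func, ast.Attribute) else getattr(func, "id", None)
            if name in TARGETS:
                print(f"{path}:{node.lineno}: call to {name}()")

if __name__ == "__main__":
    for source_file in sys.argv[1:]:
        find_serialization_calls(source_file)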
ZoomEye
· Researchers have used this cyberspace search engine to identify more than 300 exposed LangChain instances that could be susceptible to remote exploitation.
Cyata Security Research
· The original discovery team provided a proof-of-concept (PoC) flow demonstrating how attacker-controlled LLM outputs (like additional_kwargs or response_metadata) can be used to compromise the host environment.
Reachable Attack Surfaces
· Components that manage the following LangChain features are the most likely to appear in exploit chains:
Caching & Tracing
· Systems that store and reload past AI generations.
Message Histories
· Databases or tools that persist chat logs and later reconstruct them using vulnerable load() functions (see the sketch after this list).
Streaming APIs
· Orchestration layers that move structured data between tools and agents in real-time.
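The pattern below sketches why persisted message histories (and, by extension, caches and traces) are a reachable surface, assuming the common dumpd()/load() round trip; the store and identifiers are illustrative.

# Illustrative round trip: a chat message containing model output is persisted
# and later rebuilt with load(). If that output carried injected "lc" markers,
# the reload step is where they would be interpreted.
import json
from langchain_core.load import dumpd, load
from langchain_core.messages import AIMessage

store = {}  # stand-in for a cache, trace sink, or message-history database

def persist(session_id: str, message: AIMessage) -> None:
    store[session_id] = json.dumps(dumpd(message))  # serialize for storage

def restore(session_id: str) -> AIMessage:
    return load(json.loads(store[session_id]))      # deserialization happens here

persist("demo", AIMessage(content="model output, possibly attacker-influenced"))
print(restore("demo"))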
TTPs:
· T864 (Serialization Injection): Main technique for secret extraction.
· T1552.001 (Unsecured Credentials: Environment Variables): Targeting secrets from environment variables.
Malware Names
· There has not been any malware associated with CVE-2025-68664 at this time
Potential Rules / suggested hunts
As a reminder, these are indicator rules / potential hunts and are likely to be noisy. For best results, perform reviews via data models.
Suricata
Detect the presence of the internal "lc": 1 key when paired with other serialization indicators like "type": "secret" or "type": "constructor".
alert http any any -> $HOME_NET any (msg:"ET EXPLOIT Possible LangChain Serialization Injection (CVE-2025-68664)"; content:"\"lc\": 1"; http_client_body; content:"\"type\": \"secret\""; http_client_body; reference:cve,2025-68664; classtype:attempted-admin; sid:202568664; rev:1;)
SentinelOne
Hunt for the LangChain application process (e.g., python.exe) performing unexpected actions.
Sample Query (Deep Visibility):
// Detect unusual child processes from the Python application
ProcessName = "python.exe" AND ChildProcessName IN ("cmd.exe", "powershell.exe", "sh", "bash")
// Detect access to sensitive environment variable stores or registry keys
ProcessName = "python.exe" AND RegistryKeyPath CONTAINS "Environment"
Alternative Hunt
Look for instances where python.exe makes sudden outbound network connections to unknown external IPs immediately following a large data ingress, which may indicate secret exfiltration.
Splunk
Hunt for Serialized Markers
Search for the raw "lc": 1 key in logs from web servers, application tracers, or LLM interaction histories.
index=app_logs sourcetype=json
| search "\"lc\": 1" AND ("\"type\": \"secret\"" OR "\"type\": \"constructor\"")
| table _time, host, source, client_ip, user_agent, raw_payload
Anomalous Secret Access
If you log application-level access to environment variables, hunt for spikes in access to OPENAI_API_KEY, AWS_SECRET_ACCESS_KEY, or other high-value secrets originating from non-standard parts of your application logic.
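One minimal way to generate that telemetry, assuming your application reads secrets via os.getenv, is sketched below; direct os.environ lookups would need separate instrumentation, and the key names are examples.

# Sketch: wrap os.getenv so reads of high-value keys are logged for hunting.
# The secret values themselves are never written to the log.
import logging
import os

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("env-access")

SENSITIVE = {"OPENAI_API_KEY", "AWS_SECRET_ACCESS_KEY", "ANTHROPIC_API_KEY"}
_original_getenv = os.getenv

def audited_getenv(key, default=None):
    if key in SENSITIVE:
        log.info("sensitive env read: %s", key)
    return _original_getenv(key, default)

os.getenv = audited_getenv  # note: os.environ["..."] reads bypass this hook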
Delivery Method
· Exploitation of exposed LangChain API endpoints.
Email Samples
Not applicable.
References
Cyata AI
hxxps://cyata.ai/blog/langgrinch-langchain-core-cve-2025-68664/
CISA
hxxps://www.cisa.gov/known-exploited-vulnerabilities-catalog
SOC Radar
hxxps://socradar.io/blog/cve-2025-68664-langchain-flaw-secret-extraction/
Wiz IO
hxxps://www.wiz.io/vulnerability-database/cve/cve-2025-23317
NVD
hxxps://nvd.nist.gov/vuln/detail/CVE-2025-68664
UpWind
hxxps://www.upwind.io/feed/cve-2025-68664-langchain-serialization-injection
GitHub
hxxps://github.com/advisories/GHSA-c67j-w6g6-q2cm