
LangChain Vulnerable to Template Injection via Attribute Access in Prompt Templates

High severity GitHub Reviewed Published Nov 19, 2025 in langchain-ai/langchain • Updated Nov 20, 2025

Package

langchain-core (pip)

Affected versions

>= 1.0.0, <= 1.0.6
<= 0.3.79

Patched versions

1.0.7
0.3.80

Description

Context

A template injection vulnerability exists in LangChain's prompt template system that allows attackers to access Python object internals through template syntax. This vulnerability affects applications that accept untrusted template strings (not just template variables) in ChatPromptTemplate and related prompt template classes.

Templates allow attribute access (.) and indexing ([]) but not method invocation (()).

The combination of attribute access and indexing may enable exploitation depending on which objects are passed to templates. When template variables are simple strings (the common case), the impact is limited. However, when using MessagesPlaceholder with chat message objects, attackers can traverse through object attributes and dictionary lookups (e.g., __globals__) to reach sensitive data such as environment variables.

The vulnerability specifically requires that applications accept template strings (the structure) from untrusted sources, not just template variables (the data). Most applications either do not use templates or else use hardcoded templates and are not vulnerable.

Affected Components

  • langchain-core package
  • Template formats:
    • F-string templates (template_format="f-string") - Vulnerability fixed
    • Mustache templates (template_format="mustache") - Defensive hardening
    • Jinja2 templates (template_format="jinja2") - Defensive hardening

Impact

Attackers who can control template strings (not just template variables) can:

  • Access Python object attributes and internal properties via attribute traversal
  • Extract sensitive information from object internals (e.g., __class__, __globals__)
  • Potentially escalate to more severe attacks depending on the objects passed to templates
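To illustrate the traversal primitive independently of LangChain, the sketch below uses plain `str.format`, which resolves the same field expressions as f-string templates. `fetch_secret` is a stand-in function defined here purely for demonstration; any module-level function reachable from a template would do.

```python
import os  # imported so it appears in this module's __globals__


def fetch_secret() -> str:
    """Stand-in for any module-level function passed to a template."""
    return "demo"


# Attribute access plus [] indexing is enough to walk from a function
# object to its module globals; no call syntax () is required.
leaked = "{f.__globals__[os]}".format(f=fetch_secret)
# leaked now contains the repr of the os module; one more step, e.g.
# {f.__globals__[os].environ}, would expose environment variables.
```

This is why blocking method calls alone is insufficient: attribute and index traversal can still reach sensitive objects.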

Attack Vectors

1. F-string Template Injection

Before Fix:

from langchain_core.prompts import ChatPromptTemplate

malicious_template = ChatPromptTemplate.from_messages(
    [("human", "{msg.__class__.__name__}")],
    template_format="f-string"
)

# Note that this requires passing a placeholder variable for "msg.__class__.__name__".
result = malicious_template.invoke({"msg": "foo", "msg.__class__.__name__": "safe_placeholder"})
# Previously returned
# >>> result.messages[0].content
# >>> 'str'

2. Mustache Template Injection

Before Fix:

from langchain_core.prompts import ChatPromptTemplate
from langchain_core.messages import HumanMessage

msg = HumanMessage("Hello")

# Attacker controls the template string
malicious_template = ChatPromptTemplate.from_messages(
    [("human", "{{question.__class__.__name__}}")],
    template_format="mustache"
)

result = malicious_template.invoke({"question": msg})
# Previously returned: "HumanMessage" (getattr() exposed internals)

3. Jinja2 Template Injection

Before Fix:

from langchain_core.prompts import ChatPromptTemplate
from langchain_core.messages import HumanMessage

msg = HumanMessage("Hello")

# Attacker controls the template string
malicious_template = ChatPromptTemplate.from_messages(
    [("human", "{{question.parse_raw}}")],
    template_format="jinja2"
)

result = malicious_template.invoke({"question": msg})
# Could access non-dunder attributes/methods on objects

Root Cause

  1. F-string templates: The implementation used Python's string.Formatter().parse() to extract variable names from template strings. This method returns the complete field expression, including attribute access syntax:
    from string import Formatter
    
    template = "{msg.__class__} and {x}"
    print([var_name for (_, var_name, _, _) in Formatter().parse(template)])
    # Returns: ['msg.__class__', 'x']
    The extracted names were not validated to ensure they were simple identifiers. As a result, template strings containing attribute traversal and indexing expressions (e.g., {obj.__class__.__name__} or {obj.method.__globals__[os]}) were accepted and subsequently evaluated during formatting. While f-string templates do not support method calls with (), they do support [] indexing, which could allow traversal through dictionaries like __globals__ to reach sensitive objects.
  2. Mustache templates: By design, used getattr() as a fallback to support accessing attributes on objects (e.g., {{user.name}} on a User object). However, as defensive hardening, we decided to restrict traversal to dict, list, and tuple types (and their subclasses), since untrusted templates could otherwise exploit attribute access to reach internal properties like __class__ on arbitrary objects.
  3. Jinja2 templates: Jinja2's default SandboxedEnvironment blocks dunder attributes (e.g., __class__) but permits access to other attributes and methods on objects. While Jinja2 templates in LangChain are typically used with trusted template strings, as a defense-in-depth measure, we've restricted the environment to block all attribute and method access on objects passed to templates.
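The f-string fix can be sketched as an identifier check on the names returned by `Formatter().parse` (a simplified illustration of the validation described above, not the exact patch; `extract_variables` is a hypothetical helper):

```python
import keyword
from string import Formatter


def extract_variables(template: str) -> list[str]:
    """Collect variable names, rejecting anything that is not a bare
    Python identifier (so {obj.attr}, {obj[0]}, {obj.__class__} fail)."""
    names = []
    for _, field, _, _ in Formatter().parse(template):
        if field is None:  # literal-only segment, no field
            continue
        if not field.isidentifier() or keyword.iskeyword(field):
            raise ValueError(f"Invalid variable name: {field!r}")
        names.append(field)
    return names
```

With this check in place, `extract_variables("{question}")` succeeds while `extract_variables("{msg.__class__}")` raises `ValueError` at template creation time.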

Who Is Affected?

High Risk Scenarios

You are affected if your application:

  • Accepts template strings from untrusted sources (user input, external APIs, databases)
  • Dynamically constructs prompt templates based on user-provided patterns
  • Allows users to customize or create prompt templates

Example vulnerable code:

# User controls the template string itself
user_template_string = request.json.get("template")  # DANGEROUS

prompt = ChatPromptTemplate.from_messages(
    [("human", user_template_string)],
    template_format="mustache"
)

result = prompt.invoke({"data": sensitive_object})

Low/No Risk Scenarios

You are NOT affected if:

  • Template strings are hardcoded in your application code
  • Template strings come only from trusted, controlled sources
  • Users can only provide values for template variables, not the template structure itself

Example safe code:

# Template is hardcoded - users only control variables
prompt = ChatPromptTemplate.from_messages(
    [("human", "User question: {question}")],  # SAFE
    template_format="f-string"
)

# User input only fills the 'question' variable
result = prompt.invoke({"question": user_input})

The Fix

F-string Templates

F-string templates had a clear vulnerability where attribute access syntax was exploitable. We've added strict validation to prevent this:

  • Added validation to enforce that variable names must be valid Python identifiers
  • Rejects syntax like {obj.attr}, {obj[0]}, or {obj.__class__}
  • Only allows simple variable names: {variable_name}
# After fix - these are rejected at template creation time
ChatPromptTemplate.from_messages(
    [("human", "{msg.__class__}")],  # ValueError: Invalid variable name
    template_format="f-string"
)

Mustache Templates (Defensive Hardening)

As defensive hardening, we've restricted what Mustache templates support to reduce the attack surface:

  • Replaced getattr() fallback with strict type checking
  • Only allows traversal into dict, list, and tuple types
  • Blocks attribute access on arbitrary Python objects
# After hardening - attribute access returns empty string
prompt = ChatPromptTemplate.from_messages(
    [("human", "{{msg.__class__}}")],
    template_format="mustache"
)
result = prompt.invoke({"msg": HumanMessage("test")})
# Returns: "" (access blocked)
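The hardened lookup can be approximated as follows (a simplified sketch under the rules described above, not the actual implementation; `mustache_lookup` is a hypothetical helper):

```python
def mustache_lookup(scope, dotted_name: str):
    """Resolve a dotted mustache name, only traversing dict/list/tuple.

    There is no getattr() fallback, so attribute access on arbitrary
    Python objects is blocked and resolves to an empty string.
    """
    for part in dotted_name.split("."):
        if isinstance(scope, dict):
            scope = scope.get(part, "")
        elif isinstance(scope, (list, tuple)) and part.isdigit():
            scope = scope[int(part)]
        else:
            return ""  # blocked: no attribute access on objects
    return scope
```

Under these rules, `mustache_lookup({"user": {"name": "Ada"}}, "user.name")` still resolves normally, while `mustache_lookup({"msg": object()}, "msg.__class__")` returns an empty string.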

Jinja2 Templates (Defensive Hardening)

As defensive hardening, we've significantly restricted Jinja2 template capabilities:

  • Introduced _RestrictedSandboxedEnvironment that blocks ALL attribute/method access
  • Only allows simple variable lookups from the context dictionary
  • Raises SecurityError on any attribute access attempt
# After hardening - all attribute access is blocked
prompt = ChatPromptTemplate.from_messages(
    [("human", "{{msg.content}}")],
    template_format="jinja2"
)
# Raises SecurityError: Access to attributes is not allowed
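The restricted environment can be sketched by overriding the attribute and item hooks on Jinja2's `SandboxedEnvironment` (an illustration in the spirit of `_RestrictedSandboxedEnvironment`; the actual implementation may differ):

```python
from jinja2.exceptions import SecurityError
from jinja2.sandbox import SandboxedEnvironment


class RestrictedEnv(SandboxedEnvironment):
    """Allow only simple variable lookups from the context dictionary."""

    def getattr(self, obj, attribute):
        # Called for {{ x.attr }} syntax.
        raise SecurityError("Access to attributes is not allowed")

    def getitem(self, obj, argument):
        # Called for {{ x['key'] }} syntax.
        raise SecurityError("Access to attributes is not allowed")
```

With this environment, `{{greeting}}` renders normally, while `{{greeting.upper}}` raises `SecurityError` at render time.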

Important Recommendation: Due to the expressiveness of Jinja2 and the difficulty of fully sandboxing it, we recommend reserving Jinja2 templates for trusted sources only. If you need to accept template strings from untrusted users, use f-string or mustache templates with the new restrictions instead.

While we've hardened the Jinja2 implementation, the nature of templating engines makes comprehensive sandboxing challenging. The safest approach is to only use Jinja2 templates when you control the template source.

Important Reminder: Many applications do not need prompt templates. Templates are useful for variable substitution and dynamic logic (if statements, loops, conditionals). However, if you're building a chatbot or conversational application, you can often work directly with message objects (e.g., HumanMessage, AIMessage, ToolMessage) without templates. Direct message construction avoids template-related security concerns entirely.

Remediation

Immediate Actions

  1. Audit your code for any locations where template strings come from untrusted sources
  2. Update to the patched version of langchain-core
  3. Review template usage to ensure separation between template structure and user data

Best Practices

  • Consider if you need templates at all - Many applications can work directly with message objects (HumanMessage, AIMessage, etc.) without templates
  • Reserve Jinja2 for trusted sources - Only use Jinja2 templates when you fully control the template content

References

@eyurtsev eyurtsev published to langchain-ai/langchain Nov 19, 2025
Published to the GitHub Advisory Database Nov 20, 2025
Reviewed Nov 20, 2025
Last updated Nov 20, 2025

Severity

High

CVSS overall score

This score calculates overall vulnerability severity from 0 to 10 and is based on the Common Vulnerability Scoring System (CVSS).

CVSS v4 base metrics

Exploitability Metrics
Attack Vector Network
Attack Complexity Low
Attack Requirements Present
Privileges Required None
User interaction None
Vulnerable System Impact Metrics
Confidentiality High
Integrity Low
Availability None
Subsequent System Impact Metrics
Confidentiality None
Integrity None
Availability None

CVSS:4.0/AV:N/AC:L/AT:P/PR:N/UI:N/VC:H/VI:L/VA:N/SC:N/SI:N/SA:N

EPSS score

Exploit Prediction Scoring System (EPSS)

This score estimates the probability of this vulnerability being exploited within the next 30 days. Data provided by FIRST.
(13th percentile)

Weaknesses

Improper Neutralization of Special Elements Used in a Template Engine

The product uses a template engine to insert or process externally-influenced input, but it does not neutralize or incorrectly neutralizes special elements or syntax that can be interpreted as template expressions or other code directives when processed by the engine. Learn more on MITRE.

CVE ID

CVE-2025-65106

GHSA ID

GHSA-6qv9-48xg-fc7f
