logo

452 Prompt injection


Description

The application builds LLM prompts using external input, but the structure fails to distinguish the line between user input and system instructions.


Impact

The consequences are entirely contextual, depending on the system that the model is integrated into. Leakage of sensitive information, remote code execution, execution of unauthorized actions, etc.


Recommendation

- Ensure proper sanitization of user-controllable input. - User-controllable input must be identified as untrusted and potentially dangerous. - Separate instructions from user input. - Audit interactions. - The model could be fine-tuned to better control and neutralize potentially dangerous inputs. - Limit critical functions exposed to the model.


Threat

Authenticated attacker from the Internet.


Expected Remediation Time

60 minutes.


Score 4.0

Default score using CVSS 4.0. It may change depending on the context of the src.

Base 4.0

  • Attack vector: N
  • Attack complexity: L
  • Attack Requirements: N
  • Privileges required: L
  • User interaction: N
  • Confidentiality (VC): N
  • Integrity (VI): L
  • Availability (VA): N
  • Confidentiality (SC): N
  • Integrity (SI): N
  • Availability (SA): N

Threat 4.0

  • Exploit maturity: U

Requirements


Last updated

2025/07/10