An attack in which the attacker inserts malicious content into, for instance, information likely to be included in a response to a prompt; when retrieved, the malicious content causes the AI model to behave in unexpected or harmful ways See Prompt Injection Attack and Passive Prompt Injection Attack