AI agent security: taxonomy, status, and future

Dec. 2024


In the following Google Doc, we categorize and summarize recent papers on the security risks of LLM-enabled AI agents. We use this document as both a literature review and a paper tracker. We also discuss how realistic different attacks are (e.g., most direct prompt injection attacks are not realistic) and outline potential research directions.

LLM-enabled agent systems safety and security

@article{guo2024llmSec, 
  title   = {LLM-enabled agent systems safety and security},
  author  = {Guo, Wenbo and Nie, Yuzhou},
  journal = {henrygwb.github.io},
  year    = {2024},
  url     = {https://henrygwb.github.io/posts/agent_security.htm}
}