Pentesting AI
2 notes
Notes
- Linkshttps://arcanum-sec.github.io/arc_pi_taxonomy/
- Pentesting AI / LLM ApplicationsAttacking LLM-backed applications. Prompt injection (direct and indirect), jailbreaks, tool/function-call abuse, RAG and vector DB poisoning, model supply-chain attacks, training data extraction, agent escape, OWASP LLM Top 10 with concrete payloads and curl-runnable PoCs.