home / skills / pluginagentmarketplace / custom-plugin-ai-red-teaming

pluginagentmarketplace/custom-plugin-ai-red-teaming

AI Red Teaming Plugin Development

25 skills
GitHub

Sponsored

automated-testing

pluginagentmarketplace/custom-plugin-ai-red-teaming

1
This skill automates AI security testing within CI/CD pipelines, enabling continuous protection by integrating injections, jailbreak, safety, and privacy
red-team-reporting

pluginagentmarketplace/custom-plugin-ai-red-teaming

1
This skill generates professional red-team security reports with executive summaries, findings, remediation tracking, and compliance mappings to stakeholders.
llm-jailbreaking

pluginagentmarketplace/custom-plugin-ai-red-teaming

1
This skill analyzes and tests LLM safety boundaries using jailbreaking techniques to reveal vulnerabilities and improve defenses.
testing-methodologies

pluginagentmarketplace/custom-plugin-ai-red-teaming

1
This skill analyzes AI security testing methodologies to help you identify vulnerabilities, prioritize threats, and create actionable remediation plans.
vulnerability-discovery

pluginagentmarketplace/custom-plugin-ai-red-teaming

1
This skill helps identify and prioritize LLM vulnerabilities through threat modeling, attack surface analysis, and OWASP LLM 2025 mapping.
adversarial-training

pluginagentmarketplace/custom-plugin-ai-red-teaming

1
This skill strengthens model robustness by training with adversarial examples and attack simulations to withstand data poisoning and misinformation.
code-injection

pluginagentmarketplace/custom-plugin-ai-red-teaming

1
This skill identifies and mitigates code injection vulnerabilities in AI systems by testing prompt-to-code, tool exploitation, and template injection vectors.
continuous-monitoring

pluginagentmarketplace/custom-plugin-ai-red-teaming

1
This skill enables real-time detection of adversarial attacks and model drift in production AI systems, reducing risk and downtime.
model-inversion

pluginagentmarketplace/custom-plugin-ai-red-teaming

1
This skill helps assess and mitigate privacy risks from model inversion by identifying membership inference, data extraction, and gradient leakage
prompt-hacking

pluginagentmarketplace/custom-plugin-ai-red-teaming

1
This skill analyzes advanced prompt-hacking techniques to bolster defenses against injection and multi-turn manipulation in AI systems.
responsible-disclosure

pluginagentmarketplace/custom-plugin-ai-red-teaming

1
This skill helps you implement responsible disclosure practices for AI vulnerabilities, coordinating with vendors, timelines, and bug bounty programs to
rag-exploitation

pluginagentmarketplace/custom-plugin-ai-red-teaming

1
This skill helps assess and exploit retrieval-augmented generation systems by identifying knowledge base poisoning, retrieval manipulation, and context
input-output-guardrails

pluginagentmarketplace/custom-plugin-ai-red-teaming

1
This skill enforces multi-layer input-output guardrails to filter malicious inputs, redact PII, and block unsafe outputs for safer AI interactions.
model-extraction

pluginagentmarketplace/custom-plugin-ai-red-teaming

1
This skill analyzes and catalogs potential model extraction vulnerabilities to help you strengthen defenses and assess exposure across APIs and embeddings.
adversarial-examples

pluginagentmarketplace/custom-plugin-ai-red-teaming

1
This skill generates adversarial inputs and edge cases to stress-test LLM robustness and reveal failure modes across linguistic, numerical, and format
certifications-training

pluginagentmarketplace/custom-plugin-ai-red-teaming

1
This skill helps you build AI security expertise through certifications, CTFs, and structured training for a professional security career.
data-poisoning

pluginagentmarketplace/custom-plugin-ai-red-teaming

1
This skill tests AI training pipelines for data poisoning vulnerabilities, evaluating attack vectors and monitoring resilience across datasets and fine-tuning.
defense-implementation

pluginagentmarketplace/custom-plugin-ai-red-teaming

1
This skill helps you implement production-ready defenses for LLM security by validating inputs, filtering outputs, and enforcing safe prompts.
infrastructure-security

pluginagentmarketplace/custom-plugin-ai-red-teaming

1
This skill helps secure AI/ML infrastructure by protecting API, model storage, and compute resources with defense-in-depth practices.
prompt-injection

pluginagentmarketplace/custom-plugin-ai-red-teaming

1
This skill helps you assess LLM prompt injection resilience by executing structured tests and generating actionable mitigation recommendations.
red-team-frameworks

pluginagentmarketplace/custom-plugin-ai-red-teaming

1
This skill helps you orchestrate automated AI red teaming using PyRIT, garak, Counterfit, ART, and TextAttack to identify vulnerabilities.
safety-filter-bypass

pluginagentmarketplace/custom-plugin-ai-red-teaming

1
This skill helps you test and strengthen AI safety filters by simulating bypass techniques and guiding responsible disclosure.
secure-deployment

pluginagentmarketplace/custom-plugin-ai-red-teaming

1
This skill helps you deploy AI/ML models securely by enforcing defense-in-depth, zero-trust, and rigorous pre-deployment to runtime protections.