This skill helps you detect harmful content in text and images using Azure AI Content Safety in Python, with blocklist management and severity analysis.
```bash
npx playbooks add skill sickn33/antigravity-awesome-skills --skill azure-ai-contentsafety-py
```
---
name: azure-ai-contentsafety-py
description: |
  Azure AI Content Safety SDK for Python. Use for detecting harmful content in text and images with multi-severity classification.
  Triggers: "azure-ai-contentsafety", "ContentSafetyClient", "content moderation", "harmful content", "text analysis", "image analysis".
package: azure-ai-contentsafety
---
# Azure AI Content Safety SDK for Python
Detect harmful user-generated and AI-generated content in applications.
## Installation
```bash
pip install azure-ai-contentsafety
```
## Environment Variables
```bash
CONTENT_SAFETY_ENDPOINT=https://<resource>.cognitiveservices.azure.com
CONTENT_SAFETY_KEY=<your-api-key>
```
## Authentication
### API Key
```python
from azure.ai.contentsafety import ContentSafetyClient
from azure.core.credentials import AzureKeyCredential
import os
client = ContentSafetyClient(
    endpoint=os.environ["CONTENT_SAFETY_ENDPOINT"],
    credential=AzureKeyCredential(os.environ["CONTENT_SAFETY_KEY"])
)
```
### Entra ID
```python
import os

from azure.ai.contentsafety import ContentSafetyClient
from azure.identity import DefaultAzureCredential  # requires: pip install azure-identity

client = ContentSafetyClient(
    endpoint=os.environ["CONTENT_SAFETY_ENDPOINT"],
    credential=DefaultAzureCredential()
```
## Analyze Text
```python
import os

from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions, TextCategory
from azure.core.credentials import AzureKeyCredential

client = ContentSafetyClient(
    os.environ["CONTENT_SAFETY_ENDPOINT"],
    AzureKeyCredential(os.environ["CONTENT_SAFETY_KEY"])
)

request = AnalyzeTextOptions(text="Your text content to analyze")
response = client.analyze_text(request)

# Check the severity reported for each harm category
for category in [TextCategory.HATE, TextCategory.SELF_HARM,
                 TextCategory.SEXUAL, TextCategory.VIOLENCE]:
    result = next((r for r in response.categories_analysis
                   if r.category == category), None)
    if result:
        print(f"{category}: severity {result.severity}")
```
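Service calls raise `azure.core.exceptions.HttpResponseError` on failures such as an invalid key or oversized input, so it is worth wrapping analysis calls. A minimal sketch, reusing the `client` and `request` from the block above:
```python
from azure.core.exceptions import HttpResponseError

try:
    response = client.analyze_text(request)
except HttpResponseError as e:
    # Surface the service error instead of failing silently;
    # e.error (when present) carries the service's error code and message.
    if e.error:
        print(f"Analyze text failed: {e.error.code} - {e.error.message}")
    else:
        print(f"Analyze text failed: {e}")
    raise
```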
## Analyze Image
```python
import os

from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeImageOptions, ImageData
from azure.core.credentials import AzureKeyCredential

client = ContentSafetyClient(
    os.environ["CONTENT_SAFETY_ENDPOINT"],
    AzureKeyCredential(os.environ["CONTENT_SAFETY_KEY"])
)

# From file: pass the raw bytes; the SDK base64-encodes them for the request
with open("image.jpg", "rb") as f:
    request = AnalyzeImageOptions(image=ImageData(content=f.read()))

response = client.analyze_image(request)

for result in response.categories_analysis:
    print(f"{result.category}: severity {result.severity}")
```
### Image from URL
```python
from azure.ai.contentsafety.models import AnalyzeImageOptions, ImageData
request = AnalyzeImageOptions(
    image=ImageData(blob_url="https://example.com/image.jpg")
)
response = client.analyze_image(request)
```
## Text Blocklist Management
### Create Blocklist
```python
import os

from azure.ai.contentsafety import BlocklistClient
from azure.ai.contentsafety.models import TextBlocklist
from azure.core.credentials import AzureKeyCredential

blocklist_client = BlocklistClient(
    os.environ["CONTENT_SAFETY_ENDPOINT"],
    AzureKeyCredential(os.environ["CONTENT_SAFETY_KEY"])
)

blocklist = TextBlocklist(
    blocklist_name="my-blocklist",
    description="Custom terms to block"
)
result = blocklist_client.create_or_update_text_blocklist(
    blocklist_name="my-blocklist",
    options=blocklist
)
```
### Add Block Items
```python
from azure.ai.contentsafety.models import AddOrUpdateTextBlocklistItemsOptions, TextBlocklistItem
items = AddOrUpdateTextBlocklistItemsOptions(
    blocklist_items=[
        TextBlocklistItem(text="blocked-term-1"),
        TextBlocklistItem(text="blocked-term-2")
    ]
)
result = blocklist_client.add_or_update_blocklist_items(
    blocklist_name="my-blocklist",
    options=items
)
```
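To confirm the items landed, you can page through them with the `BlocklistClient` listing method. A short sketch, reusing the `blocklist_client` from above (verify the method name against your installed SDK version):
```python
# List the items currently stored in the blocklist
for item in blocklist_client.list_text_blocklist_items(blocklist_name="my-blocklist"):
    print(f"{item.blocklist_item_id}: {item.text}")
```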
### Analyze with Blocklist
```python
from azure.ai.contentsafety.models import AnalyzeTextOptions
request = AnalyzeTextOptions(
    text="Text containing blocked-term-1",
    blocklist_names=["my-blocklist"],
    halt_on_blocklist_hit=True
)
response = client.analyze_text(request)

if response.blocklists_match:
    for match in response.blocklists_match:
        print(f"Blocked: {match.blocklist_item_text}")
```
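When terms become stale, you can remove individual items by ID or delete the whole blocklist. A sketch using the SDK's removal options, assuming `item_id` was captured from the listing shown earlier (again, confirm the names against your installed version):
```python
from azure.ai.contentsafety.models import RemoveTextBlocklistItemsOptions

# Remove a specific item by its ID (as returned when listing items)
blocklist_client.remove_blocklist_items(
    blocklist_name="my-blocklist",
    options=RemoveTextBlocklistItemsOptions(blocklist_item_ids=[item_id])
)

# Or drop the entire blocklist
blocklist_client.delete_text_blocklist(blocklist_name="my-blocklist")
```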
## Severity Levels
Text analysis returns 4 severity levels (0, 2, 4, 6) by default. For 8 levels (0-7):
```python
from azure.ai.contentsafety.models import AnalyzeTextOptions, AnalyzeTextOutputType
request = AnalyzeTextOptions(
    text="Your text",
    output_type=AnalyzeTextOutputType.EIGHT_SEVERITY_LEVELS
)
```
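The request is sent the same way as before; severities then come back on the 0-7 scale, which allows tighter thresholds. A brief continuation, reusing the `client` from earlier:
```python
response = client.analyze_text(request)
for result in response.categories_analysis:
    # With EIGHT_SEVERITY_LEVELS, severity is an integer from 0 to 7
    print(f"{result.category}: severity {result.severity}")
```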
## Harm Categories
| Category | Description |
|----------|-------------|
| `Hate` | Attacks based on identity (race, religion, gender, etc.) |
| `Sexual` | Sexual content, relationships, anatomy |
| `Violence` | Physical harm, weapons, injury |
| `SelfHarm` | Self-injury, suicide, eating disorders |
## Severity Scale
| Level | Text label | Image label | Meaning |
|-------|------------|-------------|---------|
| 0 | Safe | Safe | No harmful content |
| 2 | Low | Low | Mild references |
| 4 | Medium | Medium | Moderate content |
| 6 | High | High | Severe content |
## Client Types
| Client | Purpose |
|--------|---------|
| `ContentSafetyClient` | Analyze text and images |
| `BlocklistClient` | Manage custom blocklists |
## Best Practices
1. **Use blocklists** for domain-specific terms
2. **Set severity thresholds** appropriate for your use case (see the sketch after this list)
3. **Handle multiple categories** — content can be harmful in multiple ways
4. **Use halt_on_blocklist_hit** for immediate rejection
5. **Log analysis results** for audit and improvement
6. **Consider 8-severity mode** for finer-grained control
7. **Pre-moderate AI outputs** before showing to users
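To tie several of these practices together, the sketch below applies per-category thresholds to a text analysis result, checks blocklist hits, and logs the outcome. The threshold values and the `moderate_text` helper are illustrative, not part of the SDK, and `"my-blocklist"` assumes the blocklist created earlier:
```python
import logging

from azure.ai.contentsafety.models import AnalyzeTextOptions, TextCategory

logger = logging.getLogger("moderation")

# Illustrative thresholds: reject at or above these severities (default 4-level scale)
THRESHOLDS = {
    TextCategory.HATE: 2,
    TextCategory.SELF_HARM: 2,
    TextCategory.SEXUAL: 4,
    TextCategory.VIOLENCE: 4,
}

def moderate_text(client, text: str) -> bool:
    """Return True if the text is acceptable, False if it should be rejected."""
    request = AnalyzeTextOptions(
        text=text,
        blocklist_names=["my-blocklist"],
        halt_on_blocklist_hit=True,
    )
    response = client.analyze_text(request)

    # Immediate rejection on any blocklist hit
    if response.blocklists_match:
        logger.info("Rejected: blocklist hit %s",
                    [m.blocklist_item_text for m in response.blocklists_match])
        return False

    # Reject if any category meets or exceeds its threshold
    for result in response.categories_analysis:
        limit = THRESHOLDS.get(result.category)
        if limit is not None and result.severity >= limit:
            logger.info("Rejected: %s severity %s", result.category, result.severity)
            return False

    logger.info("Accepted")
    return True
```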
This skill provides a Python client wrapper for Azure AI Content Safety to detect and classify harmful content in text and images. It exposes ContentSafetyClient and BlocklistClient functionality to analyze multi-category harm, manage custom blocklists, and control severity granularity. Use it to pre-moderate user-generated or AI-generated content and enforce policy-driven rejection or escalation flows.
The skill authenticates to an Azure Content Safety resource using an API key or Entra ID and sends AnalyzeTextOptions or AnalyzeImageOptions requests. Responses return per-category severity scores (Hate, Sexual, Violence, SelfHarm) and optional blocklist matches; image analysis accepts raw image bytes (sent base64-encoded) or blob URLs. You can configure output severity granularity (4- or 8-level scales), supply blocklists to halt on matches, and read the category analysis to make acceptance/rejection decisions.
## FAQ

**Which authentication methods are supported?**
API key via `AzureKeyCredential` and Entra ID via `DefaultAzureCredential` are both supported.

**Can I analyze images from a URL?**
Yes. Image analysis accepts either image bytes (base64-encoded by the SDK) or a `blob_url` pointing to the image.