This skill analyzes text with the HumanizerAI API to determine AI authorship and present a clear, actionable score for publishing confidence.
```
npx playbooks add skill humanizerai/agent-skills --skill detect-ai
```

Review the files below or copy the command above to add this skill to your agents.
---
name: detect-ai
description: Analyze text to detect if it was written by AI. Returns a score from 0-100 with detailed metrics. Use when checking content before publishing or submitting.
user-invocable: true
argument-hint: [text to analyze]
allowed-tools: WebFetch
---
# Detect AI Content
Analyze text to determine if it was written by AI using the HumanizerAI API.
## How It Works
When the user invokes `/detect-ai`, you should:
1. Extract the text from $ARGUMENTS
2. Call the HumanizerAI API to analyze the text
3. Present the results in a clear, actionable format
## API Call
Make a POST request to `https://humanizerai.com/api/v1/detect`:
```
Authorization: Bearer $HUMANIZERAI_API_KEY
Content-Type: application/json

{
  "text": "<user's text>"
}
```
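For illustration, here is a minimal sketch of the same request in TypeScript using `fetch`. The endpoint, headers, and payload come from the block above; the function name `detectAi` and the use of Node's `process.env` are assumptions made for the sketch, since the skill itself issues the request through the WebFetch tool.

```typescript
// Minimal sketch: POST the text to the HumanizerAI detect endpoint.
// Assumes the API key is exposed via the HUMANIZERAI_API_KEY env var.
async function detectAi(text: string): Promise<unknown> {
  const response = await fetch("https://humanizerai.com/api/v1/detect", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.HUMANIZERAI_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ text }),
  });
  if (!response.ok) {
    throw new Error(`HumanizerAI API returned ${response.status}`);
  }
  return response.json();
}
```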
## API Response Format
The API returns JSON like this:
```json
{
  "score": {
    "overall": 82,
    "perplexity": 96,
    "burstiness": 15,
    "readability": 23,
    "satPercent": 3,
    "simplicity": 35,
    "ngramScore": 8,
    "averageSentenceLength": 21
  },
  "wordCount": 82,
  "sentenceCount": 4,
  "verdict": "ai"
}
```
**IMPORTANT:** The main AI score is `score.overall` (not `score` directly). This is the score to display to the user.
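As a sketch of that response shape, the interfaces below mirror the JSON sample, and the helper makes the point above explicit: the headline number is the nested `score.overall`. Only the `"ai"` verdict appears in the sample, so other verdict values are left open.

```typescript
// Mirrors the documented response; field names come from the sample above.
interface DetectionScore {
  overall: number; // the headline 0-100 AI score to display
  perplexity: number;
  burstiness: number;
  readability: number;
  satPercent: number;
  simplicity: number;
  ngramScore: number;
  averageSentenceLength: number;
}

interface DetectionResponse {
  score: DetectionScore;
  wordCount: number;
  sentenceCount: number;
  verdict: string; // "ai" in the sample; other values are not documented here
}

// The value to report is the nested overall score, not the score object itself.
function overallScore(result: DetectionResponse): number {
  return result.score.overall;
}
```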
## Present Results Like This
```
## AI Detection Results
**Score:** [score.overall]/100 ([verdict])
**Words Analyzed:** [wordCount]
### Metrics
- Perplexity: [score.perplexity]
- Burstiness: [score.burstiness]
- Readability: [score.readability]
- N-gram Score: [score.ngramScore]
### Recommendation
[Based on score.overall, suggest whether to humanize]
```
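A sketch of building that output from a parsed response, reusing the `DetectionResponse` interface from the previous sketch; the `recommendation` argument is an assumption, to be filled in from the score bands in the next section.

```typescript
// Render the results template above from a parsed detection response.
function formatResults(result: DetectionResponse, recommendation: string): string {
  return [
    "## AI Detection Results",
    `**Score:** ${result.score.overall}/100 (${result.verdict})`,
    `**Words Analyzed:** ${result.wordCount}`,
    "### Metrics",
    `- Perplexity: ${result.score.perplexity}`,
    `- Burstiness: ${result.score.burstiness}`,
    `- Readability: ${result.score.readability}`,
    `- N-gram Score: ${result.score.ngramScore}`,
    "### Recommendation",
    recommendation,
  ].join("\n");
}
```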
## Score Interpretation (use score.overall)
- 0-20: Human-written content
- 21-40: Likely human, minor AI patterns
- 41-60: Mixed signals, could be either
- 61-80: Likely AI-generated
- 81-100: Highly likely AI-generated
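The thresholds in the sketch below are copied from the list above; the recommendation wording is illustrative, not prescribed by the API.

```typescript
// Map score.overall to the interpretation bands listed above.
function interpretScore(overall: number): string {
  if (overall <= 20) return "Human-written content";
  if (overall <= 40) return "Likely human, minor AI patterns";
  if (overall <= 60) return "Mixed signals, could be either";
  if (overall <= 80) return "Likely AI-generated";
  return "Highly likely AI-generated";
}

// Illustrative recommendation: suggest humanizing when the score leans AI.
function recommend(overall: number): string {
  return overall > 60
    ? "Consider humanizing this content before publishing."
    : "No humanization needed based on this score.";
}
```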
## Error Handling
If the API call fails:
1. Check if HUMANIZERAI_API_KEY is set
2. Suggest the user get an API key at https://humanizerai.com
3. Provide the error message for debugging
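Putting the steps together, a hedged sketch that assumes the key lives in the `HUMANIZERAI_API_KEY` environment variable and reuses `detectAi`, `formatResults`, and `recommend` from the earlier sketches; the fallback messages are illustrative.

```typescript
// Wrap the API call with the error-handling steps listed above.
async function detectWithFallback(text: string): Promise<string> {
  if (!process.env.HUMANIZERAI_API_KEY) {
    return "HUMANIZERAI_API_KEY is not set. Get an API key at https://humanizerai.com, export it, and retry.";
  }
  try {
    const result = (await detectAi(text)) as DetectionResponse; // from the request sketch
    return formatResults(result, recommend(result.score.overall));
  } catch (error) {
    // Surface the raw error message so the user can debug the failure.
    return `HumanizerAI detection failed: ${(error as Error).message}`;
  }
}
```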
This skill analyzes text to determine the likelihood it was written by an AI and returns a 0–100 score with detailed metrics. It calls the HumanizerAI detection endpoint and presents a concise verdict, per-metric breakdown, and a recommendation on whether to humanize the content. Use it to validate drafts before publishing or submission.
The skill extracts the supplied text, sends it to the HumanizerAI detect API, and parses the JSON response. It displays the main overall score (0–100), verdict, word and sentence counts, and component metrics like perplexity, burstiness, readability, and n-gram score. If the API call fails, it checks the configured API key and surfaces actionable error guidance.
**What does the overall score mean?**
The overall score (0–100) indicates the likelihood of AI authorship: lower scores suggest human-written text, higher scores indicate likely AI-generated content. Use the score ranges for interpretation and review the detailed metrics for context.

**What should I do if the API call fails?**
First verify that HUMANIZERAI_API_KEY is configured. If the key is present, capture the error message returned by the API and retry. If you need a key, get one at https://humanizerai.com.