
detect-ai skill


This skill analyzes text with the HumanizerAI API to estimate the likelihood of AI authorship and presents a clear, actionable 0–100 score you can use to gauge publishing confidence.

npx playbooks add skill humanizerai/agent-skills --skill detect-ai

Review the files below or copy the command above to add this skill to your agents.

Files (1)
SKILL.md
1.9 KB
---
name: detect-ai
description: Analyze text to detect if it was written by AI. Returns a score from 0-100 with detailed metrics. Use when checking content before publishing or submitting.
user-invocable: true
argument-hint: [text to analyze]
allowed-tools: WebFetch
---

# Detect AI Content

Analyze text to determine if it was written by AI using the HumanizerAI API.

## How It Works

When the user invokes `/detect-ai`, you should:

1. Extract the text from $ARGUMENTS
2. Call the HumanizerAI API to analyze the text
3. Present the results in a clear, actionable format

## API Call

Make a POST request to `https://humanizerai.com/api/v1/detect`:

```
Authorization: Bearer $HUMANIZERAI_API_KEY
Content-Type: application/json

{
  "text": "<user's text>"
}
```
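The request above can be sketched in Python with only the standard library. This is illustrative: the skill itself uses the WebFetch tool, and the `build_detect_request` helper name is ours, not part of the API.

```python
import json
import os
import urllib.request

def build_detect_request(text: str) -> urllib.request.Request:
    """Build the POST request for the HumanizerAI detect endpoint."""
    api_key = os.environ.get("HUMANIZERAI_API_KEY", "")
    return urllib.request.Request(
        "https://humanizerai.com/api/v1/detect",
        data=json.dumps({"text": text}).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Sending it requires a valid key in HUMANIZERAI_API_KEY:
# with urllib.request.urlopen(build_detect_request("Some draft text")) as resp:
#     result = json.load(resp)
```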

## API Response Format

The API returns JSON like this:

```json
{
  "score": {
    "overall": 82,
    "perplexity": 96,
    "burstiness": 15,
    "readability": 23,
    "satPercent": 3,
    "simplicity": 35,
    "ngramScore": 8,
    "averageSentenceLength": 21
  },
  "wordCount": 82,
  "sentenceCount": 4,
  "verdict": "ai"
}
```

**IMPORTANT:** The main AI score is `score.overall` (not `score` directly). This is the score to display to the user.
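A minimal sketch of pulling the headline values out of a parsed response (the `extract_score` helper name is ours, for illustration):

```python
def extract_score(response: dict) -> tuple[int, str]:
    """Return (overall AI score, verdict) from a parsed detect response."""
    # The headline number is nested under score.overall, not a top-level score.
    return response["score"]["overall"], response["verdict"]
```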

## Present Results Like This

```
## AI Detection Results

**Score:** [score.overall]/100 ([verdict])
**Words Analyzed:** [wordCount]

### Metrics
- Perplexity: [score.perplexity]
- Burstiness: [score.burstiness]
- Readability: [score.readability]
- N-gram Score: [score.ngramScore]

### Recommendation
[Based on score.overall, suggest whether to humanize]
```
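One way to render the template above from a parsed response, as a sketch (the `format_report` helper is illustrative; the recommendation line is left to the agent):

```python
def format_report(r: dict) -> str:
    """Render the detection report template from a parsed API response."""
    s = r["score"]
    return (
        "## AI Detection Results\n\n"
        f"**Score:** {s['overall']}/100 ({r['verdict']})\n"
        f"**Words Analyzed:** {r['wordCount']}\n\n"
        "### Metrics\n"
        f"- Perplexity: {s['perplexity']}\n"
        f"- Burstiness: {s['burstiness']}\n"
        f"- Readability: {s['readability']}\n"
        f"- N-gram Score: {s['ngramScore']}\n"
    )
```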

## Score Interpretation (use score.overall)

- 0-20: Human-written content
- 21-40: Likely human, minor AI patterns
- 41-60: Mixed signals, could be either
- 61-80: Likely AI-generated
- 81-100: Highly likely AI-generated
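The bands above map directly to a lookup; a sketch (helper name is ours):

```python
def interpret(overall: int) -> str:
    """Map an overall score (0-100) to its interpretation band."""
    if overall <= 20:
        return "Human-written content"
    if overall <= 40:
        return "Likely human, minor AI patterns"
    if overall <= 60:
        return "Mixed signals, could be either"
    if overall <= 80:
        return "Likely AI-generated"
    return "Highly likely AI-generated"
```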

## Error Handling

If the API call fails:
1. Check if HUMANIZERAI_API_KEY is set
2. Suggest the user get an API key at https://humanizerai.com
3. Provide the error message for debugging
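The three steps above can be sketched as a wrapper that turns failures into actionable guidance. This is an assumption-laden illustration (the `detect_with_guidance` helper and its `send` callback are ours), not part of the skill:

```python
import os
import urllib.error

def detect_with_guidance(send):
    """Run the API call via send() and surface actionable guidance on failure."""
    # Step 1: check the key before calling at all.
    if not os.environ.get("HUMANIZERAI_API_KEY"):
        # Step 2: point the user at where to get a key.
        return "HUMANIZERAI_API_KEY is not set. Get an API key at https://humanizerai.com"
    try:
        return send()
    except urllib.error.HTTPError as e:
        # Step 3: pass the raw error through for debugging.
        return f"API error {e.code}: {e.reason}. Verify the key and retry."
```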

Overview

This skill analyzes text to determine the likelihood it was written by an AI and returns a 0–100 score with detailed metrics. It calls the HumanizerAI detection endpoint and presents a concise verdict, per-metric breakdown, and a recommendation on whether to humanize the content. Use it to validate drafts before publishing or submission.

How this skill works

The skill extracts the supplied text, sends it to the HumanizerAI detect API, and parses the JSON response. It displays the main overall score (0–100), verdict, word and sentence counts, and component metrics like perplexity, burstiness, readability, and n-gram score. If the API call fails, it checks the configured API key and surfaces actionable error guidance.

When to use it

  • Before publishing blog posts, articles, or marketing copy to ensure authenticity.
  • When reviewing academic or professional submissions for potential AI-generated content.
  • During editorial workflows to decide whether content needs humanization or rewriting.
  • As part of moderation pipelines to flag likely AI-generated contributions.
  • Before client delivery to verify originality and alignment with style guidelines.

Best practices

  • Provide a representative sample of the text (not just a single sentence) for reliable scoring.
  • Treat the overall score as an indicator, not definitive proof; review metrics and context.
  • Combine the detection score with manual review for sensitive or high-stakes content.
  • Store the report with the analyzed text and timestamp for audit and follow-up.
  • If API errors occur, confirm the HUMANIZERAI_API_KEY is set and retry with the full error message.

Example use cases

  • A content editor scans a draft article to decide if it needs humanization before publication.
  • A university reviewer checks a student submission for AI-like patterns before investigation.
  • A marketing team vets user-generated content for authenticity prior to featuring it.
  • A platform flags suspicious forum posts for human moderation based on high AI scores.

FAQ

What does the overall score mean?

The overall score (0–100) indicates likelihood of AI authorship: lower scores suggest human-written text, higher scores indicate likely AI-generated content. Use the score ranges for interpretation and review the detailed metrics for context.

What should I do if the API call fails?

First verify that HUMANIZERAI_API_KEY is configured. If the key is present, capture the error message returned by the API and retry. If you need a key, get one at https://humanizerai.com.