
perplexity-cli skill


This skill enables AI-powered searches from the terminal using the Perplexity CLI, with pro mode and JSON output for scripting.

This is most likely a fork of the perplexity-cli skill from neversight.

```bash
npx playbooks add skill quantmind-br/skills --skill perplexity-cli
```

Review the files below or copy the command above to add this skill to your agents.

Files (1): SKILL.md (2.3 KB)
---
name: perplexity-cli
description: |
  CLI interface for Perplexity AI. Perform AI-powered searches, queries, and research directly from terminal.
  Use when user mentions Perplexity, AI search, web research, or needs to query AI models like GPT, Claude, Grok, Gemini.
  Commands: query.
compatibility: |
  Requires perplexity CLI installed. Verify with `perplexity --help`.
  Config location: ~/.perplexity-cli/
allowed-tools: Bash(perplexity:*), Read
---

# Perplexity CLI Skill

## Overview

Perplexity CLI is a command-line interface for Perplexity AI that allows AI-powered searches directly from the terminal with support for multiple models, streaming output, and file attachments.

## Prerequisites

```bash
# Verify installation
perplexity --help
```
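If the CLI may be absent (for example in CI), a small guard can fail fast instead of producing confusing downstream errors. A minimal sketch using only shell built-ins; `require_cmd` is a hypothetical helper name:

```shell
# require_cmd NAME: succeed only if NAME is an executable command on PATH.
require_cmd() {
  command -v "$1" >/dev/null 2>&1
}

# Warn (rather than abort) when the perplexity CLI is missing.
require_cmd perplexity || echo "perplexity CLI not found; install it first" >&2
```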

## Quick Reference

| Command | Description |
|---------|-------------|
| `perplexity "query" --mode pro --json` | Search with pro mode and JSON output |

| Mode | Flag | Description |
|------|------|-------------|
| pro | `--mode pro` | Deep search with reasoning (default) |

## Common Operations

### Basic Query
```bash
perplexity "What is quantum computing?" --mode pro --json
```

### Read Query from File, Save Response
```bash
perplexity -f question.md -o answer.md --mode pro --json
```

### Query with Sources 
```bash
perplexity "Climate research" --sources web,scholar --mode pro --json
```

## All Flags

| Flag | Short | Description |
|------|-------|-------------|
| `--json` | | Output in JSON format (REQUIRED for scripts) |
| `--mode` | | Search mode |
| `--sources` | `-s` | Sources: web,scholar,social |
| `--language` | `-l` | Response language (e.g., en-US, pt-BR) |
| `--file` | `-f` | Read query from file |
| `--output` | `-o` | Save response to file |

## Best Practices

1. **ALWAYS use `--mode pro --json`** for all queries (pro mode with JSON output)
2. **DO NOT use `--model` flag** - model is configured by the user in config
3. Use `-f` and `-o` flags for batch processing

## Piping and Scripting

```bash
# Pipe query from stdin (JSON output)
echo "What is Go?" | perplexity --mode pro --json

# Use in scripts (JSON output REQUIRED)
RESPONSE=$(perplexity "Quick answer" --mode pro --json 2>/dev/null)

# Batch processing (JSON output)
while IFS= read -r q; do
  perplexity "$q" --mode pro -o "answers/$(printf '%s' "$q" | md5sum | cut -c1-8).md" --json
done < questions.txt
```
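When embedding the CLI in larger scripts, it helps to keep stdout (the JSON payload) separate from stderr and to check the exit status explicitly, rather than discarding stderr outright. A sketch; `run_json` is a hypothetical wrapper name and works for any command:

```shell
# run_json CMD ARGS...: run a command, print its stdout only when it succeeds,
# otherwise surface its stderr as a diagnostic. Intended for wrapping
# `perplexity ... --mode pro --json` so failures never pollute parsed output.
run_json() {
  local out err rc=0
  err=$(mktemp)
  if out=$("$@" 2>"$err"); then
    printf '%s\n' "$out"
  else
    rc=$?
    echo "query failed: $(cat "$err")" >&2
  fi
  rm -f "$err"
  return "$rc"
}

# Real use (requires the CLI):
# run_json perplexity "Quick answer" --mode pro --json
```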

Overview

This skill provides a command-line interface to Perplexity AI for fast, AI-powered searches and research directly from your terminal. It supports multiple models, streaming output, file attachments, and structured JSON output for automation. Use it to run single queries, batch jobs, or integrate AI search into shell scripts and pipelines.

How this skill works

The CLI accepts free-text queries or reads queries from files and sends them to Perplexity AI using the configured model. It supports flags for search mode, source selection (web, scholar, social), language, and output destination, and can stream results to stdout or write structured JSON to files. Designed for scripting, the tool returns machine-readable JSON when requested and can be piped into other shell commands for batch processing.

When to use it

  • Run ad-hoc AI searches from a terminal or remote shell where a browser is not practical.
  • Automate research tasks or QA workflows that need structured AI responses (JSON) for downstream processing.
  • Batch-process a list of questions read from a file or pipeline answers to other tools.
  • Integrate Perplexity answers into CI jobs, data pipelines, or monitoring scripts.
  • Quickly fetch citations and web/scholar sources for research summaries.

Best practices

  • Always request JSON output for scripts and automation (--json) to ensure stable parsing.
  • Use pro search mode (--mode pro) for deeper reasoning and more complete responses.
  • Avoid changing model flags in the CLI; manage model selection in your Perplexity configuration instead.
  • Use -f to read queries from files and -o to save responses for reproducible batch runs.
  • Pipe stdin and handle stderr when embedding the CLI in shell scripts to keep output clean.
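The -f/-o practice above can be sketched as a reproducible batch run. `batch_answers` is a hypothetical helper; the CLI name is passed as an argument so the loop itself can be exercised without the real tool:

```shell
# batch_answers CLI: for every questions/*.md, invoke CLI with -f/-o so each
# question file gets a matching answer file in answers/.
batch_answers() {
  local cli=$1 qfile afile
  mkdir -p answers
  for qfile in questions/*.md; do
    [ -e "$qfile" ] || continue   # no question files yet: nothing to do
    afile="answers/$(basename "$qfile")"
    "$cli" -f "$qfile" -o "$afile" --mode pro --json
  done
}

# Real use: batch_answers perplexity
```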

Example use cases

  • One-off research: perplexity "What is quantum computing?" --mode pro --json for a quick, cited summary.
  • Batch answers: while IFS= read -r q; do perplexity "$q" -o "answers/$(printf '%s' "$q" | md5sum | cut -c1-8).md" --mode pro --json; done < questions.txt
  • CI integration: run a nightly job that queries domain changes and writes JSON results for alerting.
  • File-driven queries: perplexity -f question.md -o answer.md --mode pro --json to keep question/answer pairs in version control.
  • Piping into other tools: echo "Quick answer" | perplexity --mode pro --json | jq '.' to extract fields programmatically.
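The 8-character filename stems in the batch examples come from an md5 prefix of the question text; that pattern can be factored into a small helper (requires `md5sum` from GNU coreutils; `printf` avoids the trailing newline that `echo` would hash):

```shell
# slug TEXT: stable 8-character filename stem derived from the md5 of TEXT,
# a variant of the md5sum | cut -c1-8 pattern in the batch examples.
slug() {
  printf '%s' "$1" | md5sum | cut -c1-8
}

# slug "What is Go?"  -> an 8-char hex stem, identical across runs
```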

FAQ

Do I need a specific model flag to choose GPT, Claude, or others?

No. The CLI uses the model configured in your Perplexity account or local config; do not pass a model flag in the CLI.

Is JSON required?

JSON is required for reliable scripting and batch processing. Use --json when your workflow depends on parsing fields programmatically.
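When parsing that JSON, note that the field names depend on the CLI version and are not documented here; `answer` below is a placeholder. A minimal extractor sketch that uses python3 to avoid a jq dependency:

```shell
# extract_field NAME: read JSON on stdin and print the named top-level field
# (empty string if absent). "answer" is a placeholder field name.
extract_field() {
  python3 -c 'import json, sys; print(json.load(sys.stdin).get(sys.argv[1], ""))' "$1"
}

# Real use (requires the CLI):
# perplexity "What is Go?" --mode pro --json | extract_field answer
```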