openai-api skill

/skills/openai-api

This skill helps you access and reference official OpenAI API documentation to answer API usage questions.

npx playbooks add skill bankkroll/skills-builder --skill openai-api

Review the files below or copy the command above to add this skill to your agents.

Files (1)
SKILL.md
785 B
---
name: "openai-api"
description: "Scraped from https://docs.openai.com/ Source: https://docs.openai.com."
---

# OpenAI API

> Official documentation: https://docs.openai.com

## Overview

This skill provides comprehensive documentation for the OpenAI API.

**Total references:** 0 files (~0 tokens)

## Reference Files

Load only the reference files relevant to the user's question:

## Usage Guidelines

1. **Identify relevant sections** - Match the user's question to the appropriate reference file(s)
2. **Load minimally** - Only read files directly relevant to the question to conserve context
3. **Cite sources** - Reference specific sections when answering
4. **Combine knowledge** - For complex questions, you may need multiple reference files

### When to use each reference:

Overview

This skill provides structured, searchable documentation for the OpenAI API, distilled into practical guidance and examples. It bundles endpoint descriptions, authentication patterns, model usage, request/response formats, and common code snippets to help you integrate OpenAI services quickly. The content is concise and focused on actionable tasks developers face when working with the API.

How this skill works

The skill inspects official OpenAI documentation and extracts the most commonly used API calls, authentication flows, error handling, and parameters for models. It organizes information by topic (authentication, completions, chat, embeddings, files, fine-tuning, and rate limits) and supplies minimal example code in Python. For any question, it selects the relevant section and returns focused guidance, parameters to change, and sample payloads.

When to use it

  • Starting a new integration with the OpenAI API and needing quick setup instructions
  • Choosing the right model or endpoint for a given use case (chat, completion, embedding, or fine-tuning)
  • Troubleshooting authentication, rate limits, or common API errors
  • Preparing requests with appropriate headers, payload shapes, and best-practice parameters
  • Converting conceptual prompts into concrete API calls and sample code (see the sketch after this list)
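For example, a conceptual prompt such as "summarize this text" maps onto a chat completions call roughly as sketched below. This is a minimal sketch assuming the official openai Python package (v1+) and an OPENAI_API_KEY environment variable; the model name is a placeholder, not a recommendation.

```python
# Minimal sketch: turning a conceptual prompt into a chat completions call.
# Assumes the official `openai` Python package (v1+) and OPENAI_API_KEY set
# in the environment; the model name below is a placeholder.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

document_text = "OpenAI exposes chat, embedding, and file endpoints over HTTPS..."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; pick the model that fits your use case
    messages=[
        {"role": "system", "content": "You are a concise technical assistant."},
        {"role": "user", "content": "Summarize the following text:\n" + document_text},
    ],
    temperature=0.2,  # lower values give more deterministic output
)

print(response.choices[0].message.content)
```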

Best practices

  • Use short, explicit prompts and iterate with temperature/top_p to tune outputs
  • Prefer streaming for low-latency user experiences and batch embeddings for throughput
  • Handle rate limits and transient errors with exponential backoff and idempotency where applicable (sketched after this list)
  • Keep API keys secret and rotate them regularly; restrict scopes and IPs when possible
  • Sanitize and validate user input before sending to the API to avoid prompt injection and unsafe content
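As a concrete illustration of the backoff recommendation above, here is a minimal retry wrapper. It assumes the official openai Python package (v1+), which exposes RateLimitError and APIConnectionError at the top level; tune the attempt count and delays for your own workload.

```python
# Minimal sketch: exponential backoff with jitter around an API call.
# Assumes the official `openai` Python package (v1+).
import random
import time

from openai import APIConnectionError, OpenAI, RateLimitError

client = OpenAI()

def create_with_backoff(max_retries=5, **kwargs):
    """Call chat.completions.create, retrying transient failures."""
    for attempt in range(max_retries):
        try:
            return client.chat.completions.create(**kwargs)
        except (RateLimitError, APIConnectionError):
            if attempt == max_retries - 1:
                raise
            # Exponential backoff with jitter: ~1s, 2s, 4s, ... plus noise.
            time.sleep(2 ** attempt + random.random())

response = create_with_backoff(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": "Ping"}],
)
```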

Example use cases

  • Build a chat-based assistant using the chat completions endpoint with context and system messages
  • Generate embeddings for semantic search and vector similarity retrieval (see the sketch after this list)
  • Create summarization pipelines by combining prompt templates with model completions
  • Fine-tune a model on domain-specific data for improved relevance and tone
  • Process and transcribe audio or files using the appropriate API endpoints
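The embeddings use case above can be sketched as follows: embed the documents and the query, then rank by cosine similarity. A minimal sketch assuming the official openai Python package (v1+); the embedding model name is a placeholder.

```python
# Minimal sketch: embeddings for semantic search via cosine similarity.
# Assumes the official `openai` Python package (v1+); the embedding model
# name is a placeholder.
import math

from openai import OpenAI

client = OpenAI()

documents = [
    "How to rotate an API key",
    "Streaming chat responses to the browser",
    "Handling rate limits with exponential backoff",
]

def embed(texts):
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return [item.embedding for item in resp.data]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

doc_vectors = embed(documents)
query_vector = embed(["What do I do when I hit a 429?"])[0]

# Rank documents by similarity to the query and print the best match.
ranked = sorted(zip(documents, doc_vectors), key=lambda dv: cosine(query_vector, dv[1]), reverse=True)
print(ranked[0][0])
```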

FAQ

Which authentication method does the skill document?

It documents API key authentication using the Authorization header (Bearer token) and recommends secure storage and rotation.
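At the HTTP level, that looks roughly like the sketch below, using the requests library against the standard https://api.openai.com/v1/chat/completions route; the model name is a placeholder. Read the key from the environment rather than hard-coding it.

```python
# Minimal sketch: API key authentication with a Bearer token.
# The key is read from the environment; never hard-code it in source.
import os

import requests

api_key = os.environ["OPENAI_API_KEY"]

response = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    },
    json={
        "model": "gpt-4o-mini",  # placeholder model name
        "messages": [{"role": "user", "content": "Hello"}],
    },
    timeout=30,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```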

How do I choose between chat and completion endpoints?

Use chat endpoints for multi-turn conversational flows and structured system/user messages; use completion endpoints for single-turn text generation or legacy integrations.
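In code, the difference is mainly the payload shape: chat takes a list of role-tagged messages, while the legacy completions endpoint takes a single prompt string. A minimal sketch assuming the official openai Python package (v1+); both model names are placeholders.

```python
# Minimal sketch: chat endpoint (role-tagged messages) vs. legacy completions
# endpoint (single prompt string). Model names are placeholders.
from openai import OpenAI

client = OpenAI()

# Chat: multi-turn flows with structured system/user messages.
chat = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder
    messages=[
        {"role": "system", "content": "You answer in one sentence."},
        {"role": "user", "content": "What is an embedding?"},
    ],
)
print(chat.choices[0].message.content)

# Legacy completions: single-turn free-form text generation.
legacy = client.completions.create(
    model="gpt-3.5-turbo-instruct",  # placeholder legacy-style model
    prompt="Explain an embedding in one sentence.",
)
print(legacy.choices[0].text)
```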