skill-seekers

This skill helps you generate deployable LLM skills from documentation, codebases, and repositories, accelerating skill creation.

npx playbooks add skill bahayonghang/my-claude-code-settings --skill skill-seekers

Run the command above to add this skill to your agents.

# API Reference: upload_skill.py

**Language**: Python

**Source**: `src/skill_seekers/cli/upload_skill.py`

---

## Functions

### upload_skill_api(package_path, target = 'claude', api_key = None)

Upload skill package to LLM platform

Args:
    package_path: Path to skill package file
    target: Target platform ('claude', 'gemini', 'openai')
    api_key: Optional API key (otherwise read from environment)

Returns:
    tuple: (success, message)

**Parameters**:

| Name | Type | Default | Description |
|------|------|---------|-------------|
| package_path | - | - | Path to the skill package file |
| target | - | 'claude' | Target platform: 'claude', 'gemini', or 'openai' |
| api_key | - | None | Optional API key; read from the environment when omitted |

**Returns**: `tuple` of `(success, message)`
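
A minimal usage sketch, assuming a placeholder package path and an environment variable of your own naming for the key; the import path is inferred from the source location noted above:

```python
import os

from skill_seekers.cli.upload_skill import upload_skill_api

# "dist/my-skill.zip" is a placeholder for the generated package; the key is
# read here from an environment variable you control and passed explicitly.
success, message = upload_skill_api(
    "dist/my-skill.zip",
    target="claude",
    api_key=os.environ["ANTHROPIC_API_KEY"],
)
if not success:
    raise RuntimeError(f"Upload failed: {message}")
```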



### main()

**Returns**: (none)


Overview

This skill generates production-ready LLM skills by analyzing documentation, codebases, and GitHub repositories. It extracts intents, prompts, metadata, and implementation hooks, then packages the result for deployment to popular LLM platforms. The tool also includes an upload helper to publish packages to targets like Claude, Gemini, or OpenAI.

How this skill works

The skill scans source documentation, source code, and repository structure to detect commands, functions, usage examples, and configuration. It synthesizes intent descriptions, example prompts, slot definitions, and integration stubs, then assembles a packaged skill artifact. A provided upload helper (upload_skill_api) sends the package to a chosen LLM platform, using a supplied API key or an environment variable when available.
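
For the final step of that pipeline, here is a minimal sketch of the environment-based upload; the package path is a placeholder for whatever artifact the generation step produces:

```python
from skill_seekers.cli.upload_skill import upload_skill_api

# No api_key is passed, so the helper is expected to read the key from the
# environment; export the appropriate platform key before running (the exact
# variable name depends on the helper's implementation).
success, message = upload_skill_api("build/skill-package.zip", target="gemini")
print(message if success else f"Upload failed: {message}")
```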

When to use it

  • Converting a library, CLI tool, or documentation set into a usable LLM skill quickly.
  • Preparing skills for deployment to Claude, Gemini, or OpenAI-compatible platforms.
  • Automating extraction of intents and prompts from large codebases or READMEs.
  • Standardizing metadata and packaging for skill marketplaces.
  • Prototyping assistant capabilities from existing GitHub projects.

Best practices

  • Keep repository docs and examples small, focused, and well-structured to improve extraction accuracy.
  • Annotate functions and commands with short descriptions and example usage for clearer intent detection.
  • Validate generated prompts and sample outputs manually before publishing.
  • Store API keys securely in environment variables rather than hard-coding.
  • Test the uploaded package on a staging platform before releasing to production.

Example use cases

  • Convert a CLI tool into a chat skill that exposes commands as assistant actions.
  • Create a help-desk skill by extracting FAQs and troubleshooting steps from project docs.
  • Package a data-processing library with example prompts so the assistant can call functions.
  • Automate bulk skill generation for multiple GitHub repos in a monorepo.
  • Publish a skill that wraps deployment scripts to let an assistant orchestrate releases.

FAQ

Which LLM platforms are supported for upload?

The upload helper supports the targets 'claude', 'gemini', and 'openai'. Choose the platform by passing it as the target parameter when calling the upload API.

How is the API key provided for uploads?

You can pass an API key directly to the upload helper or let it read the key from an environment variable for safer handling.
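
For illustration, the two options might look like this; the environment variable name shown in the explicit case is an assumption, and the variable the helper itself reads depends on its implementation:

```python
import os

from skill_seekers.cli.upload_skill import upload_skill_api

# Option 1: pass the key explicitly, sourced from wherever you store secrets.
upload_skill_api("dist/my-skill.zip", target="openai",
                 api_key=os.environ["OPENAI_API_KEY"])

# Option 2: omit api_key and let the helper read it from the environment.
upload_skill_api("dist/my-skill.zip", target="openai")
```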