This skill helps you load, fine-tune, and run inference with pre-trained transformer models across NLP, vision, and audio tasks.

npx playbooks add skill microck/ordinary-claude-skills --skill transformers

Review the files below or copy the command above to add this skill to your agents.

Files (6): SKILL.md (4.8 KB)
---
name: transformers
description: This skill should be used when working with pre-trained transformer models for natural language processing, computer vision, audio, or multimodal tasks. Use for text generation, classification, question answering, translation, summarization, image classification, object detection, speech recognition, and fine-tuning models on custom datasets.
---

# Transformers

## Overview

The Hugging Face Transformers library provides access to thousands of pre-trained models for tasks across NLP, computer vision, audio, and multimodal domains. Use this skill to load models, perform inference, and fine-tune on custom data.

## Installation

Install transformers and core dependencies:

```bash
uv pip install torch transformers datasets evaluate accelerate
```

For vision tasks, add:
```bash
uv pip install timm pillow
```

For audio tasks, add:
```bash
uv pip install librosa soundfile
```

## Authentication

Many models on the Hugging Face Hub require authentication. Set up access:

```python
from huggingface_hub import login
login()  # Follow prompts to enter token
```

Or set the token as an environment variable:
```bash
export HF_TOKEN="your_token_here"
```

Get tokens at: https://huggingface.co/settings/tokens

## Quick Start

Use the Pipeline API for fast inference without manual configuration:

```python
from transformers import pipeline

# Text generation
generator = pipeline("text-generation", model="gpt2")
result = generator("The future of AI is", max_new_tokens=50)

# Text classification
classifier = pipeline("text-classification")
result = classifier("This movie was excellent!")

# Question answering
qa = pipeline("question-answering")
result = qa(question="What is AI?", context="AI is artificial intelligence...")
```

## Core Capabilities

### 1. Pipelines for Quick Inference

Use for simple, optimized inference across many tasks. Supports text generation, classification, NER, question answering, summarization, translation, image classification, object detection, audio classification, and more.

**When to use**: Quick prototyping, simple inference tasks, no custom preprocessing needed.

See `references/pipelines.md` for comprehensive task coverage and optimization.
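
As a quick illustration of going beyond the pipeline defaults, the sketch below picks an explicit model, places it on the first GPU, and batches inputs. `facebook/bart-large-cnn` is just one example summarization checkpoint; any compatible Hub model can be substituted.

```python
from transformers import pipeline

# Explicit model choice, GPU placement, and batched inputs
# (facebook/bart-large-cnn is one example summarization checkpoint).
summarizer = pipeline("summarization", model="facebook/bart-large-cnn", device=0)  # device=0 -> first GPU; omit for CPU

articles = [
    "Transformers provides thousands of pre-trained models for text, vision, and audio tasks...",
    "The pipeline API wraps preprocessing, the model forward pass, and postprocessing...",
]
summaries = summarizer(articles, batch_size=2, max_length=60, min_length=10)
for s in summaries:
    print(s["summary_text"])
```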

### 2. Model Loading and Management

Load pre-trained models with fine-grained control over configuration, device placement, and precision.

**When to use**: Custom model initialization, advanced device management, model inspection.

See `references/models.md` for loading patterns and best practices.
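
A minimal sketch of fine-grained loading, using `bert-base-uncased` purely as an example checkpoint; `device_map="auto"` relies on the `accelerate` package installed above.

```python
import torch
from transformers import AutoConfig, AutoModelForSequenceClassification

# Inspect the configuration before downloading weights
config = AutoConfig.from_pretrained("bert-base-uncased")
print(config.hidden_size, config.num_attention_heads)

# Load in half precision with automatic device placement (requires accelerate)
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased",
    num_labels=2,
    torch_dtype=torch.float16,
    device_map="auto",
)
model.eval()  # switch to inference mode
```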

### 3. Text Generation

Generate text with LLMs using various decoding strategies (greedy, beam search, sampling) and control parameters (temperature, top-k, top-p).

**When to use**: Creative text generation, code generation, conversational AI, text completion.

See `references/generation.md` for generation strategies and parameters.
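
A minimal sketch of sampling-based decoding, using `gpt2` only as a small example checkpoint; swap in any causal LM and tune the parameters for your task.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("Once upon a time", return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=60,
    do_sample=True,                       # sampling instead of greedy decoding
    temperature=0.8,                      # lower -> more deterministic
    top_k=50,                             # restrict to the 50 most likely tokens
    top_p=0.95,                           # nucleus sampling
    num_return_sequences=2,
    pad_token_id=tokenizer.eos_token_id,  # gpt2 has no dedicated pad token
)
for seq in outputs:
    print(tokenizer.decode(seq, skip_special_tokens=True))
```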

### 4. Training and Fine-Tuning

Fine-tune pre-trained models on custom datasets using the Trainer API with automatic mixed precision, distributed training, and logging.

**When to use**: Task-specific model adaptation, domain adaptation, improving model performance.

See `references/training.md` for training workflows and best practices.
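
The Trainer expects tokenized datasets. The sketch below shows one way to prepare them, using the IMDB dataset and DistilBERT purely as illustrative choices; Pattern 3 under Common Patterns shows the Trainer call itself.

```python
from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Illustrative dataset and checkpoint; substitute your own.
dataset = load_dataset("imdb")
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

def tokenize(batch):
    # Pad/truncate so every example has the same length
    return tokenizer(batch["text"], padding="max_length", truncation=True)

tokenized = dataset.map(tokenize, batched=True)
train_dataset = tokenized["train"].shuffle(seed=42).select(range(2000))  # small subset for quick runs

model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased", num_labels=2)
```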

### 5. Tokenization

Convert text to tokens and token IDs for model input, with padding, truncation, and special token handling.

**When to use**: Custom preprocessing pipelines, understanding model inputs, batch processing.

See `references/tokenizers.md` for tokenization details.
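
A short sketch of batch tokenization with padding and truncation, assuming `bert-base-uncased` as the example checkpoint.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

batch = ["A short sentence.", "A much longer sentence that will be truncated if it exceeds the limit."]
encoded = tokenizer(
    batch,
    padding=True,       # pad to the longest sequence in the batch
    truncation=True,    # cut off anything beyond max_length
    max_length=32,
    return_tensors="pt",
)
print(encoded["input_ids"].shape)    # (2, sequence_length)
print(encoded["attention_mask"][0])  # 1 = real token, 0 = padding
print(tokenizer.convert_ids_to_tokens(encoded["input_ids"][0]))  # includes [CLS] and [SEP]
```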

## Common Patterns

### Pattern 1: Simple Inference
For straightforward tasks, use pipelines:
```python
pipe = pipeline("task-name", model="model-id")
output = pipe(input_data)
```

### Pattern 2: Custom Model Usage
For advanced control, load model and tokenizer separately:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("model-id")
model = AutoModelForCausalLM.from_pretrained("model-id", device_map="auto")

inputs = tokenizer("text", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
result = tokenizer.decode(outputs[0])
```

### Pattern 3: Fine-Tuning
For task adaptation, use Trainer:
```python
from transformers import Trainer, TrainingArguments

training_args = TrainingArguments(
    output_dir="./results",
    num_train_epochs=3,
    per_device_train_batch_size=8,
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
)

trainer.train()
```

## Reference Documentation

For detailed information on specific components:
- **Pipelines**: `references/pipelines.md` - All supported tasks and optimization
- **Models**: `references/models.md` - Loading, saving, and configuration
- **Generation**: `references/generation.md` - Text generation strategies and parameters
- **Training**: `references/training.md` - Fine-tuning with Trainer API
- **Tokenizers**: `references/tokenizers.md` - Tokenization and preprocessing

Overview

This skill provides practical guidance and examples for using the Hugging Face Transformers library to load, run, and fine-tune pre-trained transformer models across NLP, vision, audio, and multimodal tasks. It focuses on fast inference with pipelines, precise model loading and device management, tokenization, text generation, and training workflows for custom datasets. Use it to accelerate prototyping and production integration of transformer-based models.

How this skill works

The skill explains core patterns: using pipeline abstractions for one-line inference, loading model and tokenizer pairs for fine-grained control, and using the Trainer API for fine-tuning with mixed precision and distributed training. It covers authentication for Hub models, installing task-specific dependencies, decoding strategies for generation, and tokenization practices to prepare inputs for model inference and training.

When to use it

  • Quick prototyping or demoing with minimal code using pipeline APIs
  • Production inference where device placement and precision need control
  • Fine-tuning a pre-trained model on a custom dataset for higher task accuracy
  • Implementing text generation, classification, QA, summarization, translation, or multimodal inference
  • Working with vision or audio transformer models that require extra libraries

Best practices

  • Prefer pipelines for simple tasks and fast iteration; switch to model+tokenizer for custom preprocessing or advanced control
  • Authenticate to the Hugging Face Hub when using gated or private models and cache tokens in environment variables
  • Use device_map and torch_dtype settings to optimize memory and throughput on GPUs and other accelerators
  • Apply padding, truncation, and attention masks consistently across training and inference to avoid mismatched inputs
  • When fine-tuning, start with smaller learning rates, use mixed precision, and monitor validation metrics to prevent overfitting (see the sketch after this list)
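
A minimal sketch of conservative fine-tuning settings along these lines; the exact values are illustrative starting points, not recommendations from the library.

```python
from transformers import TrainingArguments

# Illustrative starting points; tune for your model, dataset, and hardware.
training_args = TrainingArguments(
    output_dir="./finetune",
    learning_rate=2e-5,              # small learning rate for pre-trained weights
    num_train_epochs=3,
    per_device_train_batch_size=16,
    weight_decay=0.01,
    warmup_ratio=0.1,                # gentle warmup stabilizes early training
    fp16=True,                       # mixed precision on supported GPUs
    logging_steps=50,
    save_strategy="epoch",
)
```

Pass these arguments to Trainer together with an eval_dataset and call trainer.evaluate() to track validation metrics between runs.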

Example use cases

  • Generate creative or instructional text with decoding parameters (temperature, top-k, top-p) for controllable outputs
  • Build a question-answering API that uses a context and pipeline('question-answering') for fast responses
  • Fine-tune a pre-trained classifier on a domain-specific dataset using Trainer and resume training checkpoints
  • Run image classification or object detection with vision transformers and dependencies like timm and pillow
  • Transcribe audio using transformer-based ASR pipelines after installing librosa and soundfile (sketched below)
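
For instance, a speech-recognition pipeline can be set up as follows; `openai/whisper-small` is one example ASR checkpoint and the audio path is hypothetical.

```python
from transformers import pipeline

# Requires the audio extras (librosa, soundfile) installed above.
asr = pipeline("automatic-speech-recognition", model="openai/whisper-small")

result = asr("meeting_recording.wav")  # hypothetical path to a local audio file
print(result["text"])
```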

FAQ

Do I always need to authenticate to load models?

Public models load without a token; gated or private Hub models require a Hugging Face token set via login() or an environment variable.

When should I use pipelines vs manual model/tokenizer loading?

Use pipelines for fast, opinionated inference with defaults. Load model+tokenizer when you need custom preprocessing, control over device_map, or access to model internals for advanced tasks.