
scientific-pkg-transformers skill

/skills/scientific-pkg-transformers

This skill helps you leverage pre-trained transformer models for text, vision, and audio tasks with easy loading, fine-tuning, and inference.

This is most likely a fork of the transformers skill from microck.
npx playbooks add skill jackspace/claudeskillz --skill scientific-pkg-transformers

Review the files below or copy the command above to add this skill to your agents.

Files (3)

SKILL.md (4.8 KB)
---
name: transformers
description: This skill should be used when working with pre-trained transformer models for natural language processing, computer vision, audio, or multimodal tasks. Use for text generation, classification, question answering, translation, summarization, image classification, object detection, speech recognition, and fine-tuning models on custom datasets.
---

# Transformers

## Overview

The Hugging Face Transformers library provides access to thousands of pre-trained models for tasks across NLP, computer vision, audio, and multimodal domains. Use this skill to load models, perform inference, and fine-tune on custom data.

## Installation

Install transformers and core dependencies:

```bash
uv pip install torch transformers datasets evaluate accelerate
```

For vision tasks, add:
```bash
uv pip install timm pillow
```

For audio tasks, add:
```bash
uv pip install librosa soundfile
```

## Authentication

Many models on the Hugging Face Hub require authentication. Set up access:

```python
from huggingface_hub import login
login()  # Follow prompts to enter token
```

Or set the `HF_TOKEN` environment variable, which `huggingface_hub` reads automatically:
```bash
export HF_TOKEN="your_token_here"
```

Get tokens at: https://huggingface.co/settings/tokens

## Quick Start

Use the Pipeline API for fast inference without manual configuration:

```python
from transformers import pipeline

# Text generation
generator = pipeline("text-generation", model="gpt2")
result = generator("The future of AI is", max_new_tokens=50)

# Text classification
classifier = pipeline("text-classification")
result = classifier("This movie was excellent!")

# Question answering
qa = pipeline("question-answering")
result = qa(question="What is AI?", context="AI is artificial intelligence...")
```

## Core Capabilities

### 1. Pipelines for Quick Inference

Use for simple, optimized inference across many tasks. Supports text generation, classification, NER, question answering, summarization, translation, image classification, object detection, audio classification, and more.

**When to use**: Quick prototyping, simple inference tasks, no custom preprocessing needed.

See `references/pipelines.md` for comprehensive task coverage and optimization.
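
Beyond text, the same one-line interface covers vision and audio tasks. A brief illustrative sketch follows; the checkpoints `facebook/detr-resnet-50` and `openai/whisper-tiny` and the file paths are example choices, not prescribed by this skill:

```python
from transformers import pipeline

# Object detection on a local image (needs the pillow/timm extras above).
detector = pipeline("object-detection", model="facebook/detr-resnet-50")
detections = detector("photo.jpg")  # accepts a path, URL, or PIL.Image

# Speech recognition on a local audio file (needs the librosa/soundfile extras above).
asr = pipeline("automatic-speech-recognition", model="openai/whisper-tiny")
transcript = asr("speech.wav")
```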

### 2. Model Loading and Management

Load pre-trained models with fine-grained control over configuration, device placement, and precision.

**When to use**: Custom model initialization, advanced device management, model inspection.

See `references/models.md` for loading patterns and best practices.
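
A minimal loading sketch, assuming half precision and automatic device placement are wanted; the checkpoint `distilbert-base-uncased-finetuned-sst-2-english` is only an example:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "distilbert-base-uncased-finetuned-sst-2-english"  # example checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision to cut memory use
    device_map="auto",          # let accelerate place weights on available devices
)
model.eval()  # disable dropout for inference
```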

### 3. Text Generation

Generate text with LLMs using various decoding strategies (greedy, beam search, sampling) and control parameters (temperature, top-k, top-p).

**When to use**: Creative text generation, code generation, conversational AI, text completion.

See `references/generation.md` for generation strategies and parameters.
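
A sketch of the two main decoding strategies, using `gpt2` purely as a small public example model:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("The future of AI is", return_tensors="pt")

# Sampling: diverse output, controlled by temperature and nucleus (top-p) filtering.
sampled = model.generate(**inputs, do_sample=True, temperature=0.8, top_p=0.9, max_new_tokens=50)

# Beam search: more deterministic, higher-likelihood output.
beamed = model.generate(**inputs, num_beams=4, max_new_tokens=50)

print(tokenizer.decode(sampled[0], skip_special_tokens=True))
print(tokenizer.decode(beamed[0], skip_special_tokens=True))
```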

### 4. Training and Fine-Tuning

Fine-tune pre-trained models on custom datasets using the Trainer API with automatic mixed precision, distributed training, and logging.

**When to use**: Task-specific model adaptation, domain adaptation, improving model performance.

See `references/training.md` for training workflows and best practices.
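
A condensed fine-tuning sketch for a text-classification setup; the `imdb` dataset, the `distilbert-base-uncased` checkpoint, and the subset sizes are illustrative choices, not requirements:

```python
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    DataCollatorWithPadding,
    Trainer,
    TrainingArguments,
)

dataset = load_dataset("imdb")  # example dataset with "text" and "label" columns
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=256)

tokenized = dataset.map(tokenize, batched=True)

model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2
)

args = TrainingArguments(
    output_dir="./results",
    num_train_epochs=1,
    per_device_train_batch_size=8,
    fp16=True,  # mixed precision, assuming a compatible GPU
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"].shuffle(seed=42).select(range(2000)),
    eval_dataset=tokenized["test"].select(range(500)),
    data_collator=DataCollatorWithPadding(tokenizer),  # pad each batch dynamically
)

trainer.train()
```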

### 5. Tokenization

Convert text to tokens and token IDs for model input, with padding, truncation, and special token handling.

**When to use**: Custom preprocessing pipelines, understanding model inputs, batch processing.

See `references/tokenizers.md` for tokenization details.
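
A short sketch of batch tokenization with padding and truncation; `bert-base-uncased` is an arbitrary example tokenizer:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # example tokenizer

texts = ["A short sentence.", "A much longer sentence that may be cut off at max_length."]

encoded = tokenizer(
    texts,
    padding=True,       # pad to the longest sequence in the batch
    truncation=True,    # drop tokens beyond max_length
    max_length=32,
    return_tensors="pt",
)

print(encoded["input_ids"].shape)                                # (batch_size, sequence_length)
print(tokenizer.convert_ids_to_tokens(encoded["input_ids"][0]))  # shows [CLS]/[SEP]/[PAD] tokens
```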

## Common Patterns

### Pattern 1: Simple Inference
For straightforward tasks, use pipelines:
```python
pipe = pipeline("task-name", model="model-id")
output = pipe(input_data)
```

### Pattern 2: Custom Model Usage
For advanced control, load model and tokenizer separately:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("model-id")
model = AutoModelForCausalLM.from_pretrained("model-id", device_map="auto")

inputs = tokenizer("text", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
result = tokenizer.decode(outputs[0], skip_special_tokens=True)
```

### Pattern 3: Fine-Tuning
For task adaptation, use Trainer:
```python
from transformers import Trainer, TrainingArguments

training_args = TrainingArguments(
    output_dir="./results",
    num_train_epochs=3,
    per_device_train_batch_size=8,
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
)

trainer.train()
```

## Reference Documentation

For detailed information on specific components:
- **Pipelines**: `references/pipelines.md` - All supported tasks and optimization
- **Models**: `references/models.md` - Loading, saving, and configuration
- **Generation**: `references/generation.md` - Text generation strategies and parameters
- **Training**: `references/training.md` - Fine-tuning with Trainer API
- **Tokenizers**: `references/tokenizers.md` - Tokenization and preprocessing

Overview

This skill provides a practical interface to Hugging Face Transformers for working with pre-trained transformer models across NLP, vision, audio, and multimodal tasks. Use it to load models, run fast inference with pipelines, perform controlled text generation, and fine-tune models on custom datasets. It emphasizes common patterns for quick prototyping and advanced custom workflows.

How this skill works

Use the Pipeline API for one-line inference across tasks like text generation, classification, question answering, summarization, translation, image classification, object detection, and audio tasks. For advanced control, load tokenizers and models separately, manage devices and precision, and call model.generate or Trainer for fine-tuning. Authentication is handled via huggingface_hub tokens when accessing restricted hub models.

When to use it

  • Quick prototypes or demos where you need rapid inference with minimal configuration.
  • Building production inference for classification, QA, summarization, or translation tasks.
  • Fine-tuning a pre-trained model on custom datasets to improve domain-specific performance.
  • Working with image or audio models for classification, detection, or speech recognition.
  • Experimenting with decoding strategies for controlled text generation and creative output.

Best practices

  • Start with pipelines for development speed; move to manual model/tokenizer loading when you need performance tuning.
  • Authenticate with a Hugging Face token for private or gated models and set device_map or use accelerate for multi-GPU.
  • Use mixed precision (fp16) and batch inputs to reduce memory and increase throughput during inference and training; see the sketch after this list.
  • Control generation with temperature, top-k/top-p, and max_length; prefer sampling for diversity and beam search for precision.
  • Tokenize and pad/truncate consistently across training and inference to avoid length mismatch issues.
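
The mixed-precision, batching, and generation-control points above can be combined roughly as follows. This is a sketch rather than part of the skill itself; gpt2 is only a small example checkpoint, and fp16 assumes a GPU is available.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2", padding_side="left")  # left-pad for causal LMs
tokenizer.pad_token = tokenizer.eos_token  # gpt2 defines no pad token by default

model = AutoModelForCausalLM.from_pretrained(
    "gpt2",
    torch_dtype=torch.float16 if torch.cuda.is_available() else torch.float32,
    device_map="auto",
)

prompts = ["The future of AI is", "In machine learning, attention means"]
inputs = tokenizer(prompts, return_tensors="pt", padding=True).to(model.device)

outputs = model.generate(
    **inputs,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
    max_new_tokens=40,
    pad_token_id=tokenizer.pad_token_id,
)

for text in tokenizer.batch_decode(outputs, skip_special_tokens=True):
    print(text)
```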

Example use cases

  • Generate marketing copy or code snippets using text-generation pipelines and generation parameters.
  • Build a QA microservice that answers user questions with a question-answering pipeline and context indexing.
  • Fine-tune a sentiment classifier on labeled customer reviews using the Trainer API.
  • Run image classification or object detection for an automated inspection pipeline using vision models.
  • Transcribe audio with a pre-trained speech recognition model and post-process tokens into readable text.

FAQ

Do I always need a Hugging Face token?

No—many models are public. A token is required for private or gated models on the Hub and for rate-limited operations.

When should I use pipelines vs manual model loading?

Use pipelines for fast prototyping and simple inference. Load models/tokenizers manually when you need custom preprocessing, device control, or fine-grained generation/training settings.