
This skill helps you configure and validate Mistral AI CI/CD pipelines with GitHub Actions and automated tests.

npx playbooks add skill jeremylongshore/claude-code-plugins-plus-skills --skill mistral-ci-integration

Review the files below or copy the command above to add this skill to your agents.

Files (1)
SKILL.md
7.8 KB
---
name: mistral-ci-integration
description: |
  Configure Mistral AI CI/CD integration with GitHub Actions and testing.
  Use when setting up automated testing, configuring CI pipelines,
  or integrating Mistral AI tests into your build process.
  Trigger with phrases like "mistral CI", "mistral GitHub Actions",
  "mistral automated tests", "CI mistral".
allowed-tools: Read, Write, Edit, Bash(gh:*)
version: 1.0.0
license: MIT
author: Jeremy Longshore <[email protected]>
---

# Mistral AI CI Integration

## Overview
Set up CI/CD pipelines for Mistral AI integrations with automated testing.

## Prerequisites
- GitHub repository with Actions enabled
- Mistral AI test API key
- npm/pnpm project configured

## Instructions

### Step 1: Create GitHub Actions Workflow

Create `.github/workflows/mistral-integration.yml`:

```yaml
name: Mistral AI Integration Tests

on:
  push:
    branches: [main, develop]
  pull_request:
    branches: [main]

env:
  NODE_VERSION: '20'

jobs:
  lint-and-type:
    name: Lint & Type Check
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - uses: actions/setup-node@v4
        with:
          node-version: ${{ env.NODE_VERSION }}
          cache: 'npm'

      - run: npm ci

      - name: Type Check
        run: npm run typecheck

      - name: Lint
        run: npm run lint

  unit-tests:
    name: Unit Tests
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - uses: actions/setup-node@v4
        with:
          node-version: ${{ env.NODE_VERSION }}
          cache: 'npm'

      - run: npm ci

      - name: Run Unit Tests
        run: npm test -- --coverage

      - name: Upload Coverage
        uses: codecov/codecov-action@v4
        with:
          files: ./coverage/lcov.info

  integration-tests:
    name: Integration Tests
    runs-on: ubuntu-latest
    needs: [lint-and-type, unit-tests]
    # Only run on main branch or manual trigger
    if: github.ref == 'refs/heads/main' || github.event_name == 'workflow_dispatch'
    env:
      MISTRAL_API_KEY: ${{ secrets.MISTRAL_API_KEY }}
    steps:
      - uses: actions/checkout@v4

      - uses: actions/setup-node@v4
        with:
          node-version: ${{ env.NODE_VERSION }}
          cache: 'npm'

      - run: npm ci

      - name: Run Integration Tests
        run: npm run test:integration
        timeout-minutes: 10

      - name: Upload Test Results
        if: always()
        uses: actions/upload-artifact@v4
        with:
          name: test-results
          path: test-results/
```

### Step 2: Configure Secrets

```bash
# Add API key to repository secrets
gh secret set MISTRAL_API_KEY --body "your-test-api-key"

# Verify secret is set
gh secret list
```
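
If the secret is missing or still set to a placeholder, it is better to fail fast with one clear message than to let every integration test error out individually. A minimal preflight sketch (the `checkMistralEnv` helper and the `scripts/check-mistral-env.ts` path are hypothetical, not part of the Mistral SDK):

```typescript
// scripts/check-mistral-env.ts (hypothetical helper; adjust the path to your repo)
// Fails fast with a clear message when the CI secret is missing or obviously wrong.

export function checkMistralEnv(
  env: Record<string, string | undefined> = process.env,
): string {
  const apiKey = env.MISTRAL_API_KEY;
  if (!apiKey || apiKey.trim() === '') {
    throw new Error(
      'MISTRAL_API_KEY is not set. Add it with: gh secret set MISTRAL_API_KEY',
    );
  }
  // Mistral keys are opaque strings; only reject obvious placeholder values.
  if (apiKey === 'your-test-api-key') {
    throw new Error('MISTRAL_API_KEY still contains the placeholder value.');
  }
  return apiKey;
}
```

You could invoke this from a Vitest `globalSetup` file or a pre-test npm script so the integration job aborts before any API calls are made.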

### Step 3: Create Integration Test File

```typescript
// tests/integration/mistral.integration.test.ts
import { describe, it, expect, beforeAll } from 'vitest';
import { Mistral } from '@mistralai/mistralai';

describe('Mistral AI Integration', () => {
  let client: Mistral;

  beforeAll(() => {
    const apiKey = process.env.MISTRAL_API_KEY;
    if (!apiKey) {
      throw new Error('MISTRAL_API_KEY required for integration tests');
    }
    client = new Mistral({ apiKey });
  });

  it('should list available models', async () => {
    const models = await client.models.list();

    expect(models.data).toBeDefined();
    expect(models.data?.length).toBeGreaterThan(0);

    const modelIds = models.data?.map(m => m.id) || [];
    expect(modelIds).toContain('mistral-small-latest');
  });

  it('should complete a chat request', async () => {
    const response = await client.chat.complete({
      model: 'mistral-small-latest',
      messages: [
        { role: 'user', content: 'Reply with exactly: Integration test passed' }
      ],
      maxTokens: 20,
    });

    expect(response.choices).toBeDefined();
    expect(response.choices?.[0]?.message?.content).toContain('Integration test');
    expect(response.usage?.totalTokens).toBeGreaterThan(0);
  });

  it('should handle streaming responses', async () => {
    const stream = await client.chat.stream({
      model: 'mistral-small-latest',
      messages: [
        { role: 'user', content: 'Count from 1 to 3' }
      ],
      maxTokens: 20,
    });

    const chunks: string[] = [];
    for await (const event of stream) {
      const content = event.data?.choices?.[0]?.delta?.content;
      if (content) {
        chunks.push(content);
      }
    }

    expect(chunks.length).toBeGreaterThan(0);
    expect(chunks.join('')).toBeTruthy();
  });

  it('should generate embeddings', async () => {
    const response = await client.embeddings.create({
      model: 'mistral-embed',
      inputs: ['Hello world'],
    });

    expect(response.data).toBeDefined();
    expect(response.data?.[0]?.embedding?.length).toBe(1024);
  });
});
```

### Step 4: Create Vitest Integration Config

```typescript
// vitest.integration.config.ts
import { defineConfig } from 'vitest/config';

export default defineConfig({
  test: {
    globals: true,
    environment: 'node',
    include: ['tests/integration/**/*.test.ts'],
    testTimeout: 60000, // 60s for API calls
    hookTimeout: 30000,
    retry: 2, // Retry flaky tests
    reporters: ['verbose', 'junit'],
    outputFile: {
      junit: 'test-results/junit.xml',
    },
  },
});
```

### Step 5: Add Package Scripts

```json
{
  "scripts": {
    "test": "vitest run",
    "test:watch": "vitest",
    "test:integration": "vitest run --config vitest.integration.config.ts",
    "test:coverage": "vitest run --coverage",
    "typecheck": "tsc --noEmit",
    "lint": "eslint src tests --ext .ts"
  }
}
```

## Output
- Automated test pipeline
- PR checks configured
- Coverage reports uploaded
- Integration tests on main branch

## Error Handling
| Issue | Cause | Solution |
|-------|-------|----------|
| Secret not found | Missing configuration | Add secret via `gh secret set` |
| Tests timeout | Slow API | Increase timeout or mock |
| Auth failures | Invalid key | Check secret value |
| Rate limiting | Too many tests | Add delays or reduce test count |
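
For the rate-limiting row above, one common mitigation is to wrap live API calls in a retry helper with exponential backoff. A generic sketch (the `withRetry` name and delay values are illustrative, not part of the Mistral SDK):

```typescript
// Generic retry helper with exponential backoff, useful around rate-limited
// API calls in integration tests. The delay doubles on each failed attempt.
export async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 500,
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (i < attempts - 1) {
        // Wait 500ms, 1000ms, 2000ms, ... between attempts.
        await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** i));
      }
    }
  }
  throw lastError;
}
```

In a test you might write `await withRetry(() => client.chat.complete({ ... }))` so a transient 429 does not fail the run.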

## Examples

### Release Workflow with Mistral Validation

```yaml
# .github/workflows/release.yml
name: Release

on:
  push:
    tags: ['v*']

jobs:
  validate:
    runs-on: ubuntu-latest
    env:
      MISTRAL_API_KEY: ${{ secrets.MISTRAL_API_KEY_PROD }}
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: '20'
      - run: npm ci

      - name: Verify Mistral Integration
        run: npm run test:integration

      - name: Build
        run: npm run build

      - name: Publish
        run: npm publish
        env:
          NODE_AUTH_TOKEN: ${{ secrets.NPM_TOKEN }}
```

### PR Comment with Test Results

```yaml
- name: Comment PR with Results
  if: github.event_name == 'pull_request'
  uses: actions/github-script@v7
  with:
    script: |
      const fs = require('fs');
      const coverage = fs.readFileSync('coverage/coverage-summary.json', 'utf8');
      const data = JSON.parse(coverage);

      // Build the body without indented template-literal lines:
      // leading spaces would make GitHub render the table as a code block.
      const body = [
        '## Test Results',
        '',
        '| Metric | Coverage |',
        '|--------|----------|',
        `| Lines | ${data.total.lines.pct}% |`,
        `| Functions | ${data.total.functions.pct}% |`,
        `| Branches | ${data.total.branches.pct}% |`,
      ].join('\n');

      github.rest.issues.createComment({
        owner: context.repo.owner,
        repo: context.repo.repo,
        issue_number: context.issue.number,
        body,
      });
```

### Branch Protection Rules

```yaml
# Configure via GitHub UI or API
required_status_checks:
  strict: true
  contexts:
    - "Lint & Type Check"
    - "Unit Tests"
    - "Integration Tests"
```

## Resources
- [GitHub Actions Documentation](https://docs.github.com/en/actions)
- [Vitest Documentation](https://vitest.dev/)
- [Mistral AI API Reference](https://docs.mistral.ai/api/)

## Next Steps
For deployment patterns, see `mistral-deploy-integration`.

## Overview

This skill configures CI/CD integration for Mistral AI using GitHub Actions and Vitest-based integration tests. It provides a ready-made workflow, test templates, and guidance on secrets, timeouts, and reporting so Mistral API checks run as part of PRs and releases. Use it to ensure AI-driven features are validated in automated pipelines.

## How this skill works

The skill adds a GitHub Actions workflow that runs linting, type checks, unit tests, and gated integration tests that use a Mistral API key from repository secrets. The integration tests exercise model listing, chat completions, streaming behavior, and embeddings via the official Mistral client. Test artifacts and coverage are uploaded for PR visibility and release validation.

## When to use it

- When you want automated Mistral API validation on pull requests and main-branch merges.
- When adding integration tests that call Mistral endpoints (models, chat, embeddings, streaming).
- When enforcing quality gates (lint, typecheck, unit tests) before running live API tests.
- When releasing packages and you need a production Mistral key validation step.
- When you want test reports and coverage uploaded for PR comments and CI dashboards.

## Best practices

- Store `MISTRAL_API_KEY` in repository or environment secrets and reference it in workflows.
- Run live integration tests only on main or via manual dispatch to limit cost and avoid rate limits.
- Use reasonable timeout and retry settings for flaky network/API calls (increase Vitest timeouts).
- Upload coverage and JUnit artifacts so CI can surface results and enable branch protection checks.
- Mock or stub heavy calls in unit tests to keep feedback loops fast; reserve live calls for integration jobs.
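
The mocking advice above can be as simple as dependency injection: have the code under test accept any object exposing the client subset it calls, then pass a stub in unit tests so nothing hits the network. A minimal sketch (the `summarize` function and the stub shape are illustrative, mirroring only the calls your code actually makes):

```typescript
// Code under test depends on a narrow interface, not the concrete SDK class,
// so unit tests can pass a stub instead of a real Mistral client.
interface ChatClient {
  chat: {
    complete(args: {
      model: string;
      messages: { role: string; content: string }[];
    }): Promise<{ choices?: { message?: { content?: string } }[] }>;
  };
}

export async function summarize(client: ChatClient, text: string): Promise<string> {
  const response = await client.chat.complete({
    model: 'mistral-small-latest',
    messages: [{ role: 'user', content: `Summarize: ${text}` }],
  });
  return response.choices?.[0]?.message?.content ?? '';
}

// A stub that returns a canned response; no API key or network needed.
export const stubClient: ChatClient = {
  chat: {
    complete: async () => ({
      choices: [{ message: { content: 'stubbed summary' } }],
    }),
  },
};
```

Keep the interface mirroring only the calls you make, and the same `summarize` function works with a real client in the integration job.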

## Example use cases

- Add a mistral-integration GitHub Actions workflow that runs on pushes and PRs and gates deploys.
- Create integration tests that list models, run chat completions, stream responses, and generate embeddings.
- Use a release workflow that validates Mistral connectivity with a production API key before publishing.
- Post PR comments with a coverage summary and test metrics using a small script and the coverage artifact.
- Protect the main branch by requiring Lint & Type Check, Unit Tests, and Integration Tests as required status checks.

## FAQ

**What secrets are required?**

Set `MISTRAL_API_KEY` in GitHub repository secrets. Use a separate production key for release validation if needed.

**How do I avoid rate limiting in CI?**

Limit integration runs to main or manual triggers, add delays between tests when necessary, and mock calls in unit tests.