
This skill helps you implement comprehensive observability for Clay by configuring metrics, tracing, logging, and alerts across integrations.

npx playbooks add skill jeremylongshore/claude-code-plugins-plus-skills --skill clay-observability

Review the files below or copy the command above to add this skill to your agents.

---
name: clay-observability
description: |
  Set up comprehensive observability for Clay integrations with metrics, traces, and alerts.
  Use when implementing monitoring for Clay operations, setting up dashboards,
  or configuring alerting for Clay integration health.
  Trigger with phrases like "clay monitoring", "clay metrics",
  "clay observability", "monitor clay", "clay alerts", "clay tracing".
allowed-tools: Read, Write, Edit
version: 1.0.0
license: MIT
author: Jeremy Longshore <[email protected]>
---

# Clay Observability

## Overview
Set up comprehensive observability for Clay integrations.

## Prerequisites
- Prometheus or compatible metrics backend
- OpenTelemetry SDK installed
- Grafana or similar dashboarding tool
- AlertManager configured

## Metrics Collection

### Key Metrics
| Metric | Type | Description |
|--------|------|-------------|
| `clay_requests_total` | Counter | Total API requests |
| `clay_request_duration_seconds` | Histogram | Request latency |
| `clay_errors_total` | Counter | Error count by type |
| `clay_rate_limit_remaining` | Gauge | Rate limit headroom |

### Prometheus Metrics

```typescript
import { Registry, Counter, Histogram, Gauge } from 'prom-client';

const registry = new Registry();

// Total Clay API requests, labeled by operation and outcome.
const requestCounter = new Counter({
  name: 'clay_requests_total',
  help: 'Total Clay API requests',
  labelNames: ['method', 'status'],
  registers: [registry],
});

// Request latency distribution in seconds; buckets span 50 ms to 5 s.
const requestDuration = new Histogram({
  name: 'clay_request_duration_seconds',
  help: 'Clay request duration in seconds',
  labelNames: ['method'],
  buckets: [0.05, 0.1, 0.25, 0.5, 1, 2.5, 5],
  registers: [registry],
});

// Errors grouped by error type.
const errorCounter = new Counter({
  name: 'clay_errors_total',
  help: 'Clay errors by type',
  labelNames: ['error_type'],
  registers: [registry],
});

// Rate limit headroom (see the table above); update it from the
// rate-limit headers returned on each Clay response.
const rateLimitRemaining = new Gauge({
  name: 'clay_rate_limit_remaining',
  help: 'Remaining Clay API rate limit',
  registers: [registry],
});
```

### Instrumented Client

```typescript
async function instrumentedRequest<T>(
  method: string,
  operation: () => Promise<T>
): Promise<T> {
  const timer = requestDuration.startTimer({ method });

  try {
    const result = await operation();
    requestCounter.inc({ method, status: 'success' });
    return result;
  } catch (error: any) {
    requestCounter.inc({ method, status: 'error' });
    errorCounter.inc({ error_type: error.code || 'unknown' });
    throw error;
  } finally {
    timer();
  }
}
```
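
A usage sketch for the wrapper above, assuming a hypothetical `clayClient.enrichPerson` method; substitute whichever Clay client call you are actually instrumenting:

```typescript
// Hypothetical Clay client call wrapped with metrics instrumentation.
const enriched = await instrumentedRequest('enrichPerson', () =>
  clayClient.enrichPerson({ email: '[email protected]' })
);
```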

## Distributed Tracing

### OpenTelemetry Setup

```typescript
import { trace, SpanStatusCode } from '@opentelemetry/api';

const tracer = trace.getTracer('clay-client');

async function tracedClayCall<T>(
  operationName: string,
  operation: () => Promise<T>
): Promise<T> {
  return tracer.startActiveSpan(`clay.${operationName}`, async (span) => {
    try {
      const result = await operation();
      span.setStatus({ code: SpanStatusCode.OK });
      return result;
    } catch (error: any) {
      span.setStatus({ code: SpanStatusCode.ERROR, message: error.message });
      span.recordException(error);
      throw error;
    } finally {
      span.end();
    }
  });
}
```
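
Spans created through `@opentelemetry/api` are no-ops until a tracer provider with an exporter is registered at startup. A minimal bootstrap sketch, assuming Node.js with the `@opentelemetry/sdk-node` package and an OTLP/HTTP collector (adjust the endpoint for your environment):

```typescript
import { NodeSDK } from '@opentelemetry/sdk-node';
import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-http';

// Register a tracer provider and span exporter before any Clay calls run.
const sdk = new NodeSDK({
  serviceName: 'clay-client',
  traceExporter: new OTLPTraceExporter({
    // Default OTLP/HTTP traces path; point this at your collector.
    url: 'http://localhost:4318/v1/traces',
  }),
});

sdk.start();
```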

## Logging Strategy

### Structured Logging

```typescript
import pino from 'pino';

const logger = pino({
  name: 'clay',
  level: process.env.LOG_LEVEL || 'info',
});

function logClayOperation(
  operation: string,
  data: Record<string, any>,
  duration: number
) {
  logger.info({
    service: 'clay',
    operation,
    duration_ms: duration,
    ...data,
  });
}
```
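
A usage sketch for the helper above, timing a hypothetical Clay call and logging both the success and failure paths:

```typescript
const start = Date.now();
try {
  // Hypothetical Clay client call; replace with your actual operation.
  await clayClient.enrichPerson({ email: '[email protected]' });
  logClayOperation('enrichPerson', { status: 'success' }, Date.now() - start);
} catch (error: any) {
  logger.error({
    service: 'clay',
    operation: 'enrichPerson',
    duration_ms: Date.now() - start,
    error_type: error.code || 'unknown',
  });
  throw error;
}
```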

## Alert Configuration

### Prometheus Alerting Rules

```yaml
# clay_alerts.yaml
groups:
  - name: clay_alerts
    rules:
      - alert: ClayHighErrorRate
        expr: |
          sum(rate(clay_errors_total[5m])) /
          sum(rate(clay_requests_total[5m])) > 0.05
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "Clay error rate > 5%"

      - alert: ClayHighLatency
        expr: |
          histogram_quantile(0.95,
            sum by (le) (rate(clay_request_duration_seconds_bucket[5m]))
          ) > 2
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "Clay P95 latency > 2s"

      - alert: ClayDown
        expr: up{job="clay"} == 0
        for: 1m
        labels:
          severity: critical
        annotations:
          summary: "Clay integration is down"
```

## Dashboard

### Grafana Panel Queries

```json
{
  "panels": [
    {
      "title": "Clay Request Rate",
      "targets": [{
        "expr": "rate(clay_requests_total[5m])"
      }]
    },
    {
      "title": "Clay Latency P50/P95/P99",
      "targets": [
        { "expr": "histogram_quantile(0.5, sum by (le) (rate(clay_request_duration_seconds_bucket[5m])))" },
        { "expr": "histogram_quantile(0.95, sum by (le) (rate(clay_request_duration_seconds_bucket[5m])))" },
        { "expr": "histogram_quantile(0.99, sum by (le) (rate(clay_request_duration_seconds_bucket[5m])))" }
      ]
    }
  ]
}
```

## Instructions

### Step 1: Set Up Metrics Collection
Implement Prometheus counters, histograms, and gauges for key operations.

### Step 2: Add Distributed Tracing
Integrate OpenTelemetry for end-to-end request tracing.
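
One way to combine Steps 1 and 2 is to nest the wrappers defined earlier, so every Clay call produces a span, a request count, and a latency observation. A sketch, assuming `tracedClayCall` and `instrumentedRequest` from the sections above are in scope:

```typescript
// Compose tracing (outer) and metrics (inner) around a single Clay operation.
async function observedClayCall<T>(
  name: string,
  operation: () => Promise<T>
): Promise<T> {
  return tracedClayCall(name, () => instrumentedRequest(name, operation));
}
```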

### Step 3: Configure Structured Logging
Set up JSON logging with consistent field names.

### Step 4: Create Alert Rules
Define Prometheus alerting rules for error rates and latency.

## Output
- Metrics collection enabled
- Distributed tracing configured
- Structured logging implemented
- Alert rules deployed

## Error Handling
| Issue | Cause | Solution |
|-------|-------|----------|
| Missing metrics | No instrumentation | Wrap client calls |
| Trace gaps | Missing propagation | Check context headers (see the sketch below) |
| Alert storms | Wrong thresholds | Tune alert rules |
| High cardinality | Too many labels | Reduce label values |
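
For the "Trace gaps" row, the usual cause is outgoing HTTP requests that do not carry W3C trace-context headers. A minimal propagation sketch using the OpenTelemetry API, assuming a fetch-capable runtime and a hypothetical Clay endpoint URL:

```typescript
import { context, propagation } from '@opentelemetry/api';

// Inject the active trace context (traceparent/tracestate) into outgoing
// headers so downstream services join the same trace.
async function clayFetch(url: string, init: RequestInit = {}): Promise<Response> {
  const headers: Record<string, string> = { ...(init.headers as Record<string, string>) };
  propagation.inject(context.active(), headers);
  return fetch(url, { ...init, headers });
}
```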

## Examples

### Quick Metrics Endpoint
```typescript
// Assumes an Express `app` and the `registry` created in the metrics section.
app.get('/metrics', async (req, res) => {
  res.set('Content-Type', registry.contentType);
  res.send(await registry.metrics());
});
```

## Resources
- [Prometheus Best Practices](https://prometheus.io/docs/practices/naming/)
- [OpenTelemetry Documentation](https://opentelemetry.io/docs/)
- [Clay Observability Guide](https://docs.clay.com/observability)

## Next Steps
For incident response, see `clay-incident-runbook`.

Overview

This skill helps you set up end-to-end observability for Clay integrations, covering metrics, distributed traces, structured logs, dashboards, and alerting. It codifies recommended Prometheus metrics, OpenTelemetry tracing patterns, JSON logging conventions, and example Prometheus alert rules. Use it to make Clay operations measurable, debuggable, and alertable.

How this skill works

Instrument application code to emit Prometheus metrics (counters, histograms, gauges) for Clay API calls and errors. Add OpenTelemetry spans around Clay operations to capture latency and failures across services. Emit consistent structured JSON logs for context-rich events. Export metrics to Prometheus, traces to an OpenTelemetry collector/back end, and visualize with Grafana while Prometheus Alertmanager drives alerts.

When to use it

  • When you need visibility into Clay API request rates, latencies, and errors.
  • When troubleshooting cross-service issues that involve Clay integrations.
  • When you want alerts for high error rates, high latency, or service downtime.
  • When building dashboards to track SLA and usage trends for Clay operations.
  • When onboarding new Clay integrations and needing standardized telemetry.

Best practices

  • Instrument every client call with a request counter and a latency histogram to track success/failure and P50/P95/P99 latency.
  • Keep label cardinality low: use stable values like method and status, avoid per-user or per-request identifiers.
  • Propagate tracing context across HTTP/RPC boundaries so spans form complete traces.
  • Log structured JSON with consistent fields: service, operation, duration_ms, and error types.
  • Tune alert thresholds on staging first to avoid alert storms; use short evaluation windows for availability alerts and longer windows for rate-based alerts.

Example use cases

  • Expose /metrics endpoint and scrape with Prometheus to collect clay_requests_total, clay_request_duration_seconds, and clay_errors_total.
  • Wrap Clay client calls in an instrumented function that starts a histogram timer, increments counters, and records error types.
  • Add OpenTelemetry spans around high-latency operations to correlate traces with logs and metrics during incidents.
  • Create Grafana dashboards showing request rate, latency percentiles, and error rate over time.
  • Deploy Prometheus alert rules for ClayHighErrorRate, ClayHighLatency, and ClayDown to notify on-call teams.

FAQ

What metrics should I prioritize first?

Start with total requests, request latency histogram, and error counts. These cover availability, performance, and correctness.

How do I avoid high-cardinality metric issues?

Limit label sets to low-cardinality values (method, status). Drop or aggregate labels that include user IDs or request IDs.
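
One common approach, sketched below, is to normalize dynamic segments out of any value before it becomes a label; the helper name and patterns are illustrative:

```typescript
// Collapse ID-bearing path segments into a fixed template before using the
// value as a metric label, e.g. '/tables/12345/rows' -> '/tables/:id/rows'.
function normalizeRoute(path: string): string {
  return path
    .replace(/\/[0-9a-f-]{36}(?=\/|$)/gi, '/:id') // UUID segments
    .replace(/\/\d+(?=\/|$)/g, '/:id');           // numeric ID segments
}
```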