
This skill helps you design, deploy, and manage AI agents on DigitalOcean Gradient AI with GPU-backed workflows and knowledge bases.

To add this skill to your agents:

```shell
npx playbooks add skill bobmatnyc/claude-mpm-skills --skill digitalocean-agentic-cloud
```

---
name: digitalocean-agentic-cloud
description: DigitalOcean Gradient AI agentic cloud and AI platform for building, training, and deploying AI agents on GPU infrastructure with foundation models, knowledge bases, and agent routes. Use when planning or operating AI agents on DigitalOcean.
progressive_disclosure:
  entry_point:
    summary: "DigitalOcean Gradient AI agentic cloud and AI platform for building, training, and deploying AI agents on GPU infrastructure with foundation models, knowledge bases, and agent routes."
    when_to_use:
      - "When building or deploying AI agents on DigitalOcean"
      - "When selecting Gradient AI for GPU-backed inference"
      - "When designing agent workflows with knowledge bases and routes"
    quick_start:
      - "Choose Gradient AI Agentic Cloud or the Gradient AI Platform"
      - "Select foundation models and GPU resources"
      - "Attach knowledge bases and define agent routes"
      - "Deploy agents and monitor usage"
  token_estimate:
    entry: 90-110
    full: 3000-4200
---
# DigitalOcean Agentic Cloud Skill


## Overview

DigitalOcean Gradient AI provides managed infrastructure for building and deploying AI agents. Use Agentic Cloud for end-to-end agent workflows and the AI Platform for GPU-powered agent deployment.

## Gradient AI Agentic Cloud

- Build, train, and deploy AI agents on managed infrastructure.
- Use managed resources to run agent workloads without manual GPU orchestration.

## Gradient AI Platform

- Use GPU-powered infrastructure for AI agents and inference.
- Combine foundation models with knowledge bases.
- Configure agent routes to direct traffic and workflows.

## Agent Workflow

- Select the target model and compute profile.
- Prepare datasets and knowledge bases.
- Define agent routes and inference behavior.
- Deploy agents and observe runtime metrics.
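The workflow above can be sketched against DigitalOcean's REST API. The `/v2/gen-ai/agents` path and the field names below are assumptions based on DigitalOcean's GenAI API conventions; verify them against the current API reference before relying on them.

```python
import json
import os
import urllib.request

# Endpoint path is an assumption -- confirm against the current API reference.
API_BASE = "https://api.digitalocean.com/v2/gen-ai"


def build_agent_payload(name, model_uuid, instruction, knowledge_base_uuids=()):
    """Assemble a request body for creating an agent.

    Field names mirror DigitalOcean GenAI API conventions but are not
    guaranteed to match the current schema.
    """
    return {
        "name": name,
        "model_uuid": model_uuid,
        "instruction": instruction,
        "knowledge_base_uuid": list(knowledge_base_uuids),
        "region": "tor1",  # hypothetical region choice
        "project_id": os.environ.get("DO_PROJECT_ID", ""),
    }


def create_agent(token, payload):
    """POST the payload to the (assumed) agents endpoint."""
    req = urllib.request.Request(
        f"{API_BASE}/agents",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Separating payload construction from the HTTP call keeps the schema-sensitive part easy to adjust once the real field names are confirmed.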

## Integration Considerations

- Use object or block storage for datasets and artifacts.
- Align deployment with VPC and access controls.
- Track costs and usage in projects.
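On the cost-tracking point, DigitalOcean exposes account billing over its REST API. The sketch below assumes the `/v2/customers/my/balance` endpoint and its `month_to_date_usage` field; confirm both against the current billing API docs.

```python
import json
import urllib.request


def fetch_balance(token):
    """GET the account balance from the (assumed) billing endpoint."""
    req = urllib.request.Request(
        "https://api.digitalocean.com/v2/customers/my/balance",
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


def month_to_date(balance):
    """Extract month-to-date usage; the API returns dollar amounts as strings."""
    return float(balance["month_to_date_usage"])
```

A periodic check like this can feed a simple alert when GPU spend in a project crosses a threshold.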

## Complementary Skills

When using this skill, consider these related skills (if deployed):

- **digitalocean-storage**: Spaces, Volumes, and NFS for datasets.
- **digitalocean-compute**: GPU Droplets or Kubernetes for adjacent workloads.
- **digitalocean-management**: Monitoring and project organization.

*Note: Complementary skills are optional. This skill is fully functional without them.*

## Resources

**DigitalOcean Docs**:
- Gradient AI Agentic Cloud: https://docs.digitalocean.com/products/gradient-ai-agentic-cloud/
- Gradient AI Platform: https://docs.digitalocean.com/products/gradient-ai-platform/

## About This Skill

This skill describes DigitalOcean Gradient AI Agentic Cloud and Gradient AI Platform for building, training, and deploying AI agents on managed GPU infrastructure. It explains how to combine foundation models, knowledge bases, and agent routes to create scalable agent workflows. Use it for planning, operating, and optimizing agent deployments on DigitalOcean.

## How This Skill Works

The skill outlines the end-to-end agent workflow: choose a foundation model and compute profile, attach datasets and knowledge bases, define agent routes and inference behavior, then deploy and monitor agents on managed GPU infrastructure. It highlights integration points such as object/block storage for artifacts, VPC and access control alignment, and cost/usage tracking tied to DigitalOcean projects.

## When to Use It

- Designing or deploying AI agents that need managed GPU infrastructure and scaling.
- Setting up agent workflows that combine foundation models with external knowledge bases.
- Selecting compute profiles and GPU-backed inference for production agents.
- Coordinating storage, networking, and access controls for agent datasets and artifacts.
- Monitoring runtime metrics and managing costs for agent deployments.

## Best Practices

- Choose the appropriate foundation model and GPU profile based on inference latency and cost targets.
- Store datasets and artifacts in Spaces or block storage to simplify access and persistence.
- Define clear agent routes to separate responsibilities and control traffic flow between agents and tools.
- Use VPCs and IAM-like access controls to isolate workloads and secure knowledge bases.
- Monitor usage and set budgets or alerts to track GPU costs and project spend.
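To make the cost-target advice concrete, here is a small budgeting helper. The hourly rates are hypothetical placeholders, not real DigitalOcean pricing:

```python
# Hypothetical hourly rates in USD; real Gradient/GPU pricing varies by
# profile and region -- check DigitalOcean's pricing page.
HOURLY_RATES = {"h100": 6.74, "l40s": 2.21, "cpu-inference": 0.12}


def monthly_cost(profile, hours_per_day, days=30):
    """Rough monthly cost for one scheduled or always-on instance."""
    return HOURLY_RATES[profile] * hours_per_day * days


def within_budget(profile, hours_per_day, budget, days=30):
    """True if the profile fits the monthly budget at the given duty cycle."""
    return monthly_cost(profile, hours_per_day, days) <= budget
```

Comparing profiles this way before deploying makes the latency-versus-cost trade-off explicit, and scheduling (lower `hours_per_day`) often matters as much as the profile choice.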

## Example Use Cases

- Deploying multi-step conversational agents that consult a document knowledge base during inference.
- Training and fine-tuning agents on GPU instances, then promoting models to platform-backed inference.
- Routing user requests to specialized agents (e.g., extraction, summarization, search) using agent routes.
- Running large-batch inference jobs for analytics pipelines with managed GPU scaling.
- Integrating agent outputs with storage and downstream services within a DigitalOcean project.
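The routing use case can be illustrated with a minimal keyword-based selector. On Gradient AI, route decisions are configured and executed by the platform itself; this standalone sketch (with hypothetical agent names) only demonstrates the pattern:

```python
# Hypothetical route table mapping request keywords to specialized agents.
ROUTES = {
    "extract": "extraction-agent",
    "summarize": "summarization-agent",
    "search": "search-agent",
}
DEFAULT_AGENT = "general-agent"


def select_route(request_text):
    """Pick a downstream agent by keyword match, falling back to a default.

    Gradient's hosted agent routes make this decision server-side; this
    function only illustrates the separation-of-responsibilities idea.
    """
    text = request_text.lower()
    for keyword, agent in ROUTES.items():
        if keyword in text:
            return agent
    return DEFAULT_AGENT
```

In production, the equivalent logic lives in the platform's route configuration, so each specialized agent keeps a narrow instruction set and knowledge base.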

## FAQ

**Do I need other DigitalOcean services to use Agentic Cloud?**

No. Agentic Cloud is fully functional on its own, but using Spaces for storage, Droplets/Kubernetes for adjacent compute, and project management tools can simplify integrations.

**How do I control cost when running GPU-backed agents?**

Select lower-cost compute profiles for noncritical workloads, schedule or autoscale instances, and enable project-level usage alerts to track spend.