This skill generates production-ready auto-scaling configurations for Kubernetes and infrastructure, enabling dynamic scaling, high availability, and secure, cost-efficient resource use.
Run this command to add the skill to your agents:

```
npx playbooks add skill jeremylongshore/claude-code-plugins-plus-skills --skill auto-scaling-configurator
```
---
name: configuring-auto-scaling-policies
description: |
  This skill configures auto-scaling policies for applications and infrastructure. It generates production-ready configurations based on user requirements, implementing best practices for scalability and security. Use this skill when the user requests help with auto-scaling setup, high availability, or dynamic resource allocation, specifically mentioning terms like "auto-scaling," "HPA," "scaling policies," or "dynamic scaling." This skill provides complete configuration code for various platforms.
---
## Overview
This skill empowers Claude to create and configure auto-scaling policies tailored to specific application and infrastructure needs. It streamlines the process of setting up dynamic resource allocation, ensuring optimal performance and resilience.
## How It Works
1. **Requirement Gathering**: Claude analyzes the user's request to understand the specific auto-scaling requirements, including target metrics (CPU, memory, etc.), scaling thresholds, and desired platform.
2. **Configuration Generation**: Based on the gathered requirements, Claude generates a production-ready auto-scaling configuration, incorporating best practices for security and scalability. This includes HPA configurations, scaling policies, and necessary infrastructure setup code.
3. **Code Presentation**: Claude presents the generated configuration code to the user, ready for deployment.
## When to Use This Skill
This skill activates when you need to:
- Configure auto-scaling for a Kubernetes deployment.
- Set up dynamic scaling policies based on CPU or memory utilization.
- Implement high availability and fault tolerance through auto-scaling.
## Examples
### Example 1: Scaling a Web Application
User request: "I need to configure auto-scaling for my web application in Kubernetes based on CPU utilization. Scale up when CPU usage exceeds 70%."
The skill will:
1. Analyze the request and identify the need for a Kubernetes HPA configuration.
2. Generate an HPA configuration file that scales the web application based on CPU utilization, with a target threshold of 70%.
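A minimal `autoscaling/v2` HPA manifest matching this request might look like the sketch below; the Deployment name `web-app` and the replica bounds are illustrative assumptions, not values the skill prescribes:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-app          # assumed Deployment name
  minReplicas: 2           # keep at least two replicas for availability
  maxReplicas: 10          # illustrative upper bound
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out above 70% average CPU
```

Apply it with `kubectl apply -f hpa.yaml`. Note that the target Deployment must declare CPU requests (utilization is computed against them) and metrics-server must be running in the cluster.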
### Example 2: Scaling Infrastructure Based on Load
User request: "Configure auto-scaling for my infrastructure to handle peak loads during business hours. Scale up based on the number of incoming requests."
The skill will:
1. Analyze the request and determine the need for infrastructure-level auto-scaling policies.
2. Generate configuration code for scaling the infrastructure based on the number of incoming requests, considering peak load times.
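One way to express request-based scaling in Kubernetes is an HPA driven by a Pods metric such as requests per second. The sketch below assumes a custom-metrics pipeline (for example, Prometheus Adapter) already exposes a per-pod metric named `http_requests_per_second`; all names and numbers here are illustrative:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: api-rps-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: api              # assumed Deployment name
  minReplicas: 3           # baseline capacity for business hours
  maxReplicas: 30          # illustrative peak-load ceiling
  metrics:
    - type: Pods
      pods:
        metric:
          name: http_requests_per_second   # assumed custom metric
        target:
          type: AverageValue
          averageValue: "100"  # scale out when pods average >100 req/s
```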
## Best Practices
- **Monitoring**: Ensure proper monitoring is in place to track the performance metrics used for auto-scaling decisions.
- **Threshold Setting**: Carefully choose scaling thresholds to avoid excessive scaling or under-provisioning.
- **Testing**: Thoroughly test the auto-scaling configuration to ensure it behaves as expected under various load conditions.
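The threshold and cooldown advice above maps onto the `behavior` field of an `autoscaling/v2` HPA. A conservative sketch, with illustrative values, that scales up quickly but slows scale-down to avoid flapping:

```yaml
# Fragment of an autoscaling/v2 HorizontalPodAutoscaler spec
behavior:
  scaleDown:
    stabilizationWindowSeconds: 300   # wait 5 minutes before scaling down
    policies:
      - type: Percent
        value: 50                     # remove at most 50% of replicas per minute
        periodSeconds: 60
  scaleUp:
    stabilizationWindowSeconds: 0     # react immediately to load spikes
    policies:
      - type: Pods
        value: 4                      # add at most 4 pods per minute
        periodSeconds: 60
```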
## Integration
This skill can be used in conjunction with other DevOps plugins to automate the entire deployment pipeline, from code generation to infrastructure provisioning.
## FAQ
**Which platforms do you support for auto-scaling configurations?**
I generate configurations for Kubernetes (HPA/VPA/Cluster Autoscaler), major cloud providers' autoscaling groups, and generic IaC snippets for Terraform or CloudFormation.
**How do you ensure the generated policies are production-ready?**
Generated configs use conservative defaults, include monitoring and cooldown settings, enforce secure roles where applicable, and come with testing guidance and deployment notes to validate behavior.
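As an example of the VPA support mentioned above, a minimal VerticalPodAutoscaler manifest might look like this. It assumes the VPA operator is installed in the cluster; the Deployment name is illustrative:

```yaml
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: web-app-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-app        # assumed Deployment name
  updatePolicy:
    updateMode: "Auto"   # VPA applies recommendations by evicting and recreating pods
```

Note that VPA in `Auto` mode and an HPA scaling on CPU should not target the same Deployment's CPU metric, as the two controllers will fight each other.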