This skill helps optimize Rust performance and memory usage by applying caching, parallelization, and async strategies to compute-intensive tasks.
npx playbooks add skill cacr92/wereply --skill rust-optimization
---
name: rust-optimization
description: Use when the user asks for Rust performance optimization, caching strategies, parallel computation, memory optimization, or linear-programming acceleration.
---
# Rust Optimization Skill
## Scope
- Rust performance and memory optimization
- Caching and parallel-computation strategies
- Accelerating linear programming and numerical computation
## Critical Rules
- Cache high-frequency static data first; use `moka`
- Run CPU-bound tasks with `rayon` or `tokio::task::spawn_blocking`
- Avoid needless `clone`s; prefer borrowed slices and references
- Never make blocking calls in an async context
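The borrow-over-clone rule can be sketched as follows. `Material` and `total_price` are hypothetical names for illustration; the point is that the function borrows a slice, so callers never need to clone the collection:

```rust
// Hypothetical material record for illustration.
struct Material {
    price: f64,
}

// Borrow a slice instead of taking ownership; callers keep their Vec
// and pay no clone or allocation cost.
fn total_price(materials: &[Material]) -> f64 {
    materials.iter().map(|m| m.price).sum()
}

fn main() {
    let materials = vec![Material { price: 10.0 }, Material { price: 5.5 }];
    // The Vec is only borrowed and remains usable afterwards.
    let total = total_price(&materials);
    assert!((total - 15.5).abs() < f64::EPSILON);
}
```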
## Quick Templates
### Moka cache
```rust
use moka::future::Cache;
use std::time::Duration;

/// Async cache for material records, keyed by material ID.
pub struct MaterialCache {
    cache: Cache<String, crate::material::material::Material>,
}

impl MaterialCache {
    pub fn new() -> Self {
        Self {
            cache: Cache::builder()
                .max_capacity(1000)                       // bound memory use
                .time_to_live(Duration::from_secs(3600))  // expire entries after 1 h
                .build(),
        }
    }
}
```
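The idea behind the template above can be shown with only the standard library. This is a simplified, single-threaded stand-in for what moka provides; `TtlCache` is a hypothetical type for illustration, and real code should use moka, which adds concurrency, capacity-based eviction, and async support:

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

// Minimal TTL cache sketch: each entry remembers when it was inserted.
struct TtlCache<V> {
    ttl: Duration,
    entries: HashMap<String, (Instant, V)>,
}

impl<V: Clone> TtlCache<V> {
    fn new(ttl: Duration) -> Self {
        Self { ttl, entries: HashMap::new() }
    }

    fn insert(&mut self, key: String, value: V) {
        self.entries.insert(key, (Instant::now(), value));
    }

    fn get(&self, key: &str) -> Option<V> {
        self.entries.get(key).and_then(|(inserted, v)| {
            // Serve only entries younger than the TTL.
            (inserted.elapsed() < self.ttl).then(|| v.clone())
        })
    }
}

fn main() {
    let mut cache = TtlCache::new(Duration::from_secs(3600));
    cache.insert("steel".to_string(), 42.0_f64);
    assert_eq!(cache.get("steel"), Some(42.0));
    assert_eq!(cache.get("missing"), None);
}
```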
### Rayon parallelism
```rust
use rayon::prelude::*;

// par_iter splits the work across rayon's thread pool automatically.
let totals: Vec<f64> = materials
    .par_iter()
    .map(|m| m.price)
    .collect();
```
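What rayon automates can be sketched by hand with scoped threads: split the input into chunks, sum each chunk on its own thread, and combine the partial results. `parallel_sum` is a hypothetical helper for illustration; rayon's work-stealing scheduler does this far better in practice:

```rust
use std::thread;

// Manual data parallelism: chunk the slice and sum each chunk on its own
// scoped thread. `workers` must be > 0.
fn parallel_sum(prices: &[f64], workers: usize) -> f64 {
    let chunk = prices.len().div_ceil(workers).max(1);
    thread::scope(|s| {
        let handles: Vec<_> = prices
            .chunks(chunk)
            .map(|c| s.spawn(move || c.iter().sum::<f64>()))
            .collect();
        // Combine the per-chunk partial sums.
        handles.into_iter().map(|h| h.join().unwrap()).sum()
    })
}

fn main() {
    let prices = [1.0, 2.0, 3.0, 4.0];
    assert_eq!(parallel_sum(&prices, 2), 10.0);
}
```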
### Offloading blocking computation
```rust
// Run the CPU-heavy computation on Tokio's blocking thread pool so the
// async worker threads stay free to make progress.
let result = tokio::task::spawn_blocking(move || heavy_calc(input))
    .await?;
```
## Optimization Tips
- Narrow the data set before running bulk queries
- Cache frequently computed values to avoid recomputation
- Share read-only configuration via `Arc`; guard mutable state with `tokio::sync` primitives
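The `Arc` tip above can be sketched with standard-library threads. `Config` is a hypothetical read-only settings struct; each worker gets a cheap reference-counted handle rather than a deep copy:

```rust
use std::sync::Arc;
use std::thread;

// Hypothetical read-only configuration shared across workers.
struct Config {
    batch_size: usize,
}

fn main() {
    let config = Arc::new(Config { batch_size: 64 });
    let handles: Vec<_> = (0..4)
        .map(|_| {
            // Arc::clone copies a pointer and bumps a refcount; the
            // Config itself is never duplicated.
            let cfg = Arc::clone(&config);
            thread::spawn(move || cfg.batch_size * 2)
        })
        .collect();
    for h in handles {
        assert_eq!(h.join().unwrap(), 128);
    }
}
```

In async code the same pattern applies, with `tokio::sync::RwLock` or `Mutex` wrapped in the `Arc` when the state must also be mutable.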
## Checklist
- [ ] Repeated computations that could be cached have been identified
- [ ] CPU-bound tasks are parallelized
- [ ] Async paths contain no blocking calls
- [ ] Every `clone` is explicit and necessary
## FAQ

**When should I use moka vs a custom cache?**
Use moka for most in-memory caching needs with TTL and capacity bounds; write a custom cache only for unusual eviction policies or cross-process consistency.

**Is rayon safe to use inside an async Tokio runtime?**
Yes, but reserve rayon for CPU-bound pure computation. In async code, never block the runtime's worker threads; use `spawn_blocking` for mixed workloads.