
huashu-research skill

/huashu-research

This skill performs structured web research, creates a persistent notes file, and saves updates incrementally to prevent loss from chat interruptions.

npx playbooks add skill alchaincyf/huashu-skills --skill huashu-research

Review the files below or copy the command above to add this skill to your agents.

Files (1)
SKILL.md
2.8 KB
---
name: huashu-research
description: Structured web-research workflow that saves findings to a file incrementally, so nothing is lost if the session is cut off. Use this skill when the user says "research", "search for information", "look this up for me", "learn about", or "latest information".
---

# Research Skill

A structured web-research workflow. Core goal: persist research findings in real time so work is not lost if the session is cut off.

## When to Use

- Background research before writing an article
- Learning about a new product, technology, or release
- Gathering competitor information or industry news
- Any information-gathering task that requires multiple WebSearch calls

## Workflow

### Step 1: Create the Research File Immediately
- Create the file before running any searches
- Path: `_knowledge_base/research-<topic>-<YYYYMMDD>.md`
- Initial content: research goal, key questions, expected output

```markdown
# [Topic] Research Notes

Research date: YYYY-MM-DD
Research goal: [one-sentence statement]

## Key Questions
1. [Question 1]
2. [Question 2]
3. [Question 3]

## Findings

(filled in incrementally during research)

## Source List

(appended after each search)
```
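Step 1 above can be sketched in Python (the function name is illustrative; the path layout and section headings follow the template):

```python
from datetime import date
from pathlib import Path

def create_research_file(topic: str, questions: list[str], goal: str,
                         base: str = "_knowledge_base") -> Path:
    """Create the dated research-notes file before any searching starts."""
    path = Path(base) / f"research-{topic}-{date.today():%Y%m%d}.md"
    path.parent.mkdir(parents=True, exist_ok=True)
    numbered = "\n".join(f"{i}. {q}" for i, q in enumerate(questions, 1))
    path.write_text(
        f"# {topic} Research Notes\n\n"
        f"Research date: {date.today():%Y-%m-%d}\n"
        f"Research goal: {goal}\n\n"
        f"## Key Questions\n{numbered}\n\n"
        "## Findings\n\n(filled in incrementally)\n\n"
        "## Source List\n\n(appended after each search)\n",
        encoding="utf-8",
    )
    return path
```

Because the file exists before the first search, even an immediately interrupted session leaves the goal and questions on disk.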

### Step 2: Search and Save Incrementally
- After every WebSearch, immediately append the findings to the file
- Attach the source URL and date to each finding
- Follow the source-priority rules (see SHARED-RULES.md)
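A minimal sketch of the per-search append (the helper name and entry format are illustrative, not part of the spec):

```python
from datetime import date
from pathlib import Path

def append_finding(path: Path, finding: str, url: str) -> None:
    """Append one finding with its source URL and date, right after the search."""
    with path.open("a", encoding="utf-8") as f:
        f.write(f"- {finding} (source: {url}, {date.today():%Y-%m-%d})\n")
```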

### Step 3: Stage Summaries
- After every 3 searches, save a "stage summary" in the file
- Format: `### Stage Summary (Round N)` followed by the current key findings
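The every-three-searches cadence can be sketched as a small helper (names are illustrative):

```python
def maybe_stage_summary(path: str, round_no: int,
                        key_findings: list[str], every: int = 3) -> bool:
    """Persist a stage-summary section after every `every` search rounds."""
    if round_no % every != 0:
        return False
    with open(path, "a", encoding="utf-8") as f:
        f.write(f"\n### Stage Summary (Round {round_no})\n")
        for item in key_findings:
            f.write(f"- {item}\n")
    return True
```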

### Step 4: Final Brief
When the research ends, organize the file into a structured brief:

```markdown
## Research Conclusions

### Key Facts
1. [Fact 1] (source: URL)
2. [Fact 2] (source: URL)

### Source List
| Source | URL | Published | Credibility |
|--------|-----|-----------|-------------|
| ... | ... | ... | high/medium/low |

### Open Questions
- [points that still need verification]

### Writing Suggestions
- [suggestions for the subsequent writing, based on the findings]
```
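As a sketch, the source table in the brief could be rendered from collected entries (the function name and dict keys are illustrative assumptions):

```python
def sources_table(sources: list[dict]) -> str:
    """Render collected sources as the Markdown table used in the final brief."""
    rows = [
        "| Source | URL | Published | Credibility |",
        "|--------|-----|-----------|-------------|",
    ]
    rows += [
        f"| {s['name']} | {s['url']} | {s['date']} | {s['cred']} |"
        for s in sources
    ]
    return "\n".join(rows)
```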

## Key Principles

- **Create the file before searching**: ensures even the first search result is saved
- **Save incrementally, not at the end**: append immediately after each search
- **Separate research from writing**: this Skill only does research; it does not start a draft
- **Label credibility**: distinguish first-hand information (official) from second-hand (media/community)
- **Ignore outdated sources**: Zhihu/Baidu content (pre-2025), marketing puff pieces

## Relationship to Other Skills

- After research is done, the user can trigger /选题生成 (topic generation) to set the writing direction
- The research file serves as input material for the subsequent writing
- If findings are worth keeping long-term, save them to the matching _knowledge_base category directory

## Output Locations

- Research notes: `_knowledge_base/research-<topic>-<YYYYMMDD>.md`
- Long-term knowledge: `_knowledge_base/<category>/<topic>-<YYYYMM>.md`
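Both path conventions can be sketched as helpers (function names are illustrative; the layouts match the patterns above):

```python
from datetime import date

def research_note_path(topic: str) -> str:
    """Dated research-notes path: _knowledge_base/research-<topic>-<YYYYMMDD>.md"""
    return f"_knowledge_base/research-{topic}-{date.today():%Y%m%d}.md"

def knowledge_path(category: str, topic: str) -> str:
    """Long-term knowledge path: _knowledge_base/<category>/<topic>-<YYYYMM>.md"""
    return f"_knowledge_base/{category}/{topic}-{date.today():%Y%m}.md"
```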

**Last updated**: 2026-02-06

---

> **By 花叔 (Huashu)** | AI Native Coder · Indie Developer
> WeChat Official Account "花叔" | 300K+ followers | AI tools and productivity
> Notable work: 小猫补光灯 (Cat Fill Light, App Store Paid Top 1) · 《一本书玩转DeepSeek》(Mastering DeepSeek in One Book)

Overview

This skill implements a structured web-research workflow that ensures findings are saved incrementally to files so work is never lost if a session is interrupted. It focuses on durable, auditable research notes and clear handoff to later writing tasks. Use this skill whenever the user asks to research, search for information, or get updates.

How this skill works

Before any web searches, the skill creates a dated research file with the research goal, key questions, and an empty findings section. After every search it appends the findings, source URL, and date to the file, and it adds a stage summary after every few searches. At the end it produces a structured brief with key facts, a source table, outstanding verification items, and writing suggestions.

When to use it

  • Preparing background research for an article or report
  • Checking recent product, technology, or release updates
  • Gathering competitor or industry intelligence
  • Any task requiring multiple web searches and persistent notes
  • When you want research output saved immediately to disk

Best practices

  • Always create the research file before the first search to avoid data loss
  • Append each discovery with URL, date, and a short confidence label (high/medium/low)
  • Add a stage summary after roughly every three searches to capture progress
  • Keep research and drafting separate: this skill collects evidence rather than producing final drafts
  • Move long-term, high-value findings into the knowledge base classification folder

Example use cases

  • Early-stage research for a 1500-word article on a new AI model
  • Monitoring competitor product pages and press releases over a week
  • Collecting and verifying facts for a product announcement brief
  • Compiling sources and evidence before triggering a topic-generation skill
  • Creating a persistent evidence file to hand off to editors or co-authors

FAQ

What file format and path does the skill use?

Research notes are saved as Markdown files under _knowledge_base with names like research-<topic>-<YYYYMMDD>.md; long-term items move to categorized month files.

How often are stage summaries created?

A stage summary is saved after roughly every three search iterations, or earlier if a logical milestone is reached.

How should I rate source credibility?

Prefer primary/official sources first, then reputable media. Mark each source high/medium/low based on origin, date, and independence.