
evaluating-new-technology skill


This skill helps you evaluate emerging technologies by clarifying the problem first, assessing maturity, and blending build and buy into a flexible architecture.

npx playbooks add skill refoundai/lenny-skills --skill evaluating-new-technology

Review the files below or copy the command above to add this skill to your agents.

Files (2)
SKILL.md
4.2 KB
---
name: evaluating-new-technology
description: Help users evaluate emerging technologies. Use when someone is assessing new tools, making build vs buy decisions, evaluating AI vendors, or deciding on technical architecture.
---

# Evaluating New Technology

Help the user evaluate emerging technologies using frameworks from 22 product leaders who have made critical technology decisions at companies from Google to Shopify.

## How to Help

When the user asks for help evaluating technology:

1. **Start with the problem** - Clarify what problem they're solving before discussing tools
2. **Assess maturity** - Determine if the technology is stable enough for their use case
3. **Consider build and buy** - Help them find the right mix rather than forcing a binary choice
4. **Plan for change** - Design for modularity since the landscape will shift

## Core Principles

### Tools solve problems, not the reverse
Austin Hay: "I have this adage I always say, which is tools are just meant to solve problems. And the problem set for marketing technologists and business technologists is you focus on the tools." Always define the problem and the people involved before selecting a system or tool.

### Build AND buy, not build vs buy
Austin Hay: "Build and buy as opposed to build versus buy. Build and buy means that both of you can win." Buy tools to handle 90% of standard functionality and build the 'cool' 10% that is unique to your business.

### Evaluate mental bandwidth, not just dollars
Dhanji R. Prasanna: "The savings and costs that there might be in replacing a vendor tool by something you build in-house is probably not worth it in the mental bandwidth that you've lost." Focus technical bandwidth on core competencies, not on recreating vendor tools.

### Update your priors constantly
Aparna Chennapragada: "The models couldn't do some things one year ago. My impression of it from trying it a few months ago - that prior needs to be updated. The baby just grew up to be a 15-year-old in a month." Re-test assumptions about what technology can do every few months.
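
One lightweight way to operationalize this is a small set of capability probes that get re-run on a schedule, so "the model can't do X" stays a dated, testable claim rather than a stale memory. A minimal TypeScript sketch with the model call stubbed; the probe set and `callModel` are illustrative assumptions, not any specific client:

```typescript
// Illustrative sketch: a tiny capability-probe harness. The model call is
// stubbed; swap in a real client and re-run every few months.
type Probe = {
  name: string;
  prompt: string;
  passes: (output: string) => boolean;
};

// Stand-in for any real model client.
async function callModel(prompt: string): Promise<string> {
  return "stubbed output";
}

const probes: Probe[] = [
  {
    name: "structured-json",
    prompt: 'Return {"ok": true} as JSON, nothing else.',
    passes: (o) => {
      try { return JSON.parse(o).ok === true; } catch { return false; }
    },
  },
  {
    name: "arithmetic",
    prompt: "What is 17 * 23? Answer with the number only.",
    passes: (o) => o.trim() === "391",
  },
];

async function main() {
  for (const p of probes) {
    const output = await callModel(p.prompt);
    console.log(`${p.name}: ${p.passes(output) ? "PASS" : "FAIL"}`);
  }
}

main();
```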

### Bet on abstraction layers
Asha Sharma: "You really need to bet on a platform or some app server type layer that allows you to swap things in and out and not really be beholden to any one technology." Invest in modularity as the AI stack evolves.
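
A minimal sketch of such a layer in TypeScript; the `ModelProvider` interface and provider classes are illustrative assumptions, not any specific SDK. The point is that application code depends only on the interface, so swapping vendors touches one place:

```typescript
// Illustrative sketch: application code depends on this interface, not on
// any vendor SDK, so a provider can be swapped in one place.
interface ModelProvider {
  complete(prompt: string): Promise<string>;
}

// Hypothetical adapters; each would wrap a real vendor SDK internally.
class HostedProvider implements ModelProvider {
  async complete(prompt: string): Promise<string> {
    return `hosted-response-to: ${prompt}`; // real SDK call goes here
  }
}

class LocalProvider implements ModelProvider {
  async complete(prompt: string): Promise<string> {
    return `local-response-to: ${prompt}`; // self-hosted model call goes here
  }
}

// Swapping vendors becomes a config change, not a rewrite.
const useLocal = false;
const provider: ModelProvider = useLocal ? new LocalProvider() : new HostedProvider();

async function summarize(text: string): Promise<string> {
  return provider.complete(`Summarize: ${text}`);
}

summarize("quarterly report").then(console.log);
```

The same shape applies to embeddings, vector stores, or agent frameworks: anything likely to be replaced as the stack evolves deserves an interface.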

### AI guardrails don't work
Sander Schulhoff: "AI guardrails do not work. If someone is determined enough to trick GPT-5, they're going to deal with that guardrail. When these guardrail providers say 'We catch everything,' that's a complete lie." Be skeptical of AI security vendor claims.

### Use the tools yourself
Dhanji R. Prasanna: "I would say really try and use these tools yourself. We learn a lot about how our own workflow can change." Solve a specific, personal problem with new tools to understand their true strengths.

### Context drives AI value
Jeanne Grosser: "Because this whole space is so nascent, often your own esoteric context, your content, your workflow is really key to unlocking the power of the agent." For AI agents, building internally often beats buying.
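
A hedged illustration of why: the differentiating work in an internal agent is usually assembling your own context, not the model call itself. Everything below (`fetchAccountHistory`, the prompt shape) is hypothetical:

```typescript
// Hypothetical sketch: the differentiating work is assembling your own
// context; the model call itself is a commodity.
type Ticket = { id: string; body: string };

// Stand-in for a real internal data source (CRM, warehouse, wiki, etc.).
async function fetchAccountHistory(accountId: string): Promise<string[]> {
  return ["2024-01: renewed annual plan", "2024-03: filed billing dispute"];
}

// Stand-in for any model call, bought or built.
async function callModel(prompt: string): Promise<string> {
  return `draft-based-on:\n${prompt}`;
}

async function draftReply(ticket: Ticket, accountId: string): Promise<string> {
  const history = await fetchAccountHistory(accountId);
  const prompt = [
    "You are a support agent. Use the account history below.",
    ...history.map((line) => `- ${line}`),
    `Ticket ${ticket.id}: ${ticket.body}`,
  ].join("\n");
  return callModel(prompt);
}

draftReply({ id: "T-42", body: "Why was I charged twice?" }, "acct-7").then(console.log);
```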

## Questions to Help Users

- "What specific problem are you trying to solve with this technology?"
- "Is this technology stable enough for production, or still experimental?"
- "What's the mental bandwidth cost of building vs maintaining a vendor relationship?"
- "When did you last test your assumptions about what this technology can do?"
- "How will you swap this out if something better comes along?"
- "Have you actually used this tool to solve a real problem yourself?"

## Common Mistakes to Flag

- **Tool bias** - Picking tools because you've used them before, not because they solve the problem
- **Binary build vs buy thinking** - Missing the opportunity to buy 90% and build the strategic 10%
- **Outdated priors** - Making decisions based on what technology couldn't do six months ago
- **Vendor lock-in** - Betting on specific tools without an abstraction layer for future flexibility
- **Trusting security marketing** - Believing AI guardrail vendors who claim to 'catch everything'

## Deep Dive

For all 27 insights from 22 guests, see `references/guest-insights.md`

## Related Skills

- AI Product Strategy
- Building with LLMs
- Platform Strategy
- Vibe Coding

Overview

This skill helps product leaders and engineers evaluate emerging technologies and make pragmatic build-and-buy decisions. It combines frameworks from experienced practitioners, focusing on problem definition, maturity assessment, and modular architecture. The goal is actionable guidance that reduces risk and preserves optionality as the tech landscape changes.

How this skill works

I start by clarifying the real problem, user needs, and success metrics before discussing tools. Then I assess the technology’s maturity, maintenance and mental-bandwidth costs, and vendor claims. Finally, I recommend a hybrid approach—buying commodity capabilities and building unique differentiators—while proposing abstraction layers and swap-out plans.

When to use it

  • Choosing whether to build an internal system or buy a vendor product
  • Evaluating new AI models, vendors, or agent platforms for production
  • Designing system architecture that must remain flexible over time
  • Reassessing assumptions after rapid advances in capabilities
  • Selecting tools that will affect team bandwidth and product velocity

Best practices

  • Always start with the problem and desired outcomes, not the tech
  • Prefer buy-for-90% + build-for-10% when it preserves differentiation
  • Measure technical and mental-bandwidth costs, not just dollars
  • Layer abstractions so you can swap vendors or models later
  • Re-test priors with hands-on experiments every few months
  • Be skeptical of absolute security/guardrail claims from vendors

Example use cases

  • Deciding whether to integrate a third-party LLM or build a custom model
  • Choosing an AI agent platform while keeping the option to replace it
  • Assessing a vendor’s security guarantees and operational claims
  • Planning a migration strategy that minimizes vendor lock-in
  • Running a quick hands-on pilot to reveal real workflow impact

FAQ

How do I decide build vs buy quickly?

Define the core differentiator you must own, estimate vendor coverage for the rest, and compare ongoing mental-bandwidth and maintenance costs, not just upfront price.
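
One way to make that comparison concrete is a quick scoring pass. The factors and weights in this TypeScript sketch are illustrative assumptions to adapt, not a validated model; the value is forcing bandwidth and maintenance into the same calculation as price:

```typescript
// Illustrative sketch: score a build option against a buy option.
// Factors and weights are assumptions to adapt, not a validated model.
type Option = {
  name: string;
  upfrontCost: number;        // one-time, in $k
  annualMaintenance: number;  // recurring, in $k
  bandwidthDrain: number;     // 0-10, engineering attention consumed
  differentiation: number;    // 0-10, how much owning this sets you apart
};

function score(o: Option): number {
  // Penalize cost and attention; reward owning a true differentiator.
  return o.differentiation * 10 - o.upfrontCost - o.annualMaintenance * 3 - o.bandwidthDrain * 5;
}

const build: Option = { name: "build", upfrontCost: 200, annualMaintenance: 80, bandwidthDrain: 7, differentiation: 9 };
const buy: Option = { name: "buy", upfrontCost: 20, annualMaintenance: 40, bandwidthDrain: 2, differentiation: 3 };

console.log([build, buy].sort((a, b) => score(b) - score(a))[0].name);
```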

How often should I re-evaluate a technology?

Re-test key assumptions every 2–6 months for fast-moving areas like AI; for more stable tech, revisit annually or when your product needs change.