81 changes: 81 additions & 0 deletions content/linkedin-sdd-ralph-starter-pt-br.md
# LinkedIn: Spec Driven Development with ralph-starter

## Format
- Long-form LinkedIn article
- Language: Brazilian Portuguese
- Tone: Professional, direct, with practical examples
- Audience: Brazilian devs, tech leads, CTOs

---

## Title

Spec Driven Development: why you should stop telling your AI agent "build me a CRUD"

---

## Body

Over the past few months I have watched a quiet shift in how devs use AI agents to write code.

In the beginning, everyone did the same thing: open the chat, type "create an authentication API," pray, and hope the result made sense. Sometimes it worked. Most of the time it didn't.

The problem was never the agent. The problem was the specification -- or rather, the lack of one.

This pattern has a name now: Spec Driven Development (SDD).

The idea is simple: before any code is written, you write a clear spec. Not a 50-page document. A 10-20 line spec that says exactly what needs to be built, how to validate it, and what the acceptance criteria are.
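To make that concrete, here is a hypothetical spec of that size, written to a file from the shell. The feature, file name, and requirement wording are all illustrative, not from any real project:

```shell
# Illustrative only: a complete spec in well under 20 lines.
mkdir -p specs
cat > specs/password-reset.md <<'EOF'
# Password reset via email

## Proposal
Users forget passwords; support load confirms the need.

## Requirements
- The API MUST expose POST /auth/reset-request accepting an email.
- Reset tokens MUST expire after 30 minutes.
- The response SHOULD be identical for unknown emails.

## Acceptance criteria
- Given a registered email, When a reset is requested, Then a token email is sent.
- Given an expired token, When it is used, Then the API returns 410.
EOF
wc -l < specs/password-reset.md
```

Thirteen lines, and an agent already knows the endpoint, the security constraints, and how to prove it works.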

Three tools are gaining traction in this space:

- **OpenSpec** (Fission AI) -- a lightweight, tool-agnostic framework. You create an openspec/ folder with proposal.md, design.md, tasks.md, and specs using RFC 2119 keywords (SHALL, MUST, SHOULD).

- **Spec-Kit** (GitHub) -- heavier, with 5 phases (constitution, specification, planning, tasks, implementation). Good for large projects.

- **Kiro** (AWS) -- a full IDE with integrated agents. Powerful, but locked into the AWS ecosystem.

I built ralph-starter precisely to close this gap. It pulls specs from anywhere -- GitHub Issues, Linear, Notion, Figma, OpenSpec -- and runs autonomous coding loops until the task is complete.

The flow looks like this:

```
Spec -> Implementation plan -> Agent codes -> Lint/Build/Tests -> On failure, feed the error back -> Repeat -> Commit + PR
```

In v0.5.0 we added native OpenSpec support:

```bash
# Read specs from an OpenSpec directory
ralph-starter run --from openspec:minha-feature

# Validate spec completeness before running
ralph-starter run --from openspec:minha-feature --spec-validate

# List available specs
ralph-starter spec list

# Validate every spec in the project
ralph-starter spec validate
```

The `--spec-validate` flag checks whether your spec has:
- A proposal/rationale section (the why)
- RFC 2119 keywords (SHALL, MUST)
- Acceptance criteria (Given/When/Then)
- Design and tasks

It returns a score from 0 to 100. If the spec is incomplete, ralph-starter warns you before burning tokens.

The practical result: clear specs = fewer iterations = lower cost = better PRs.

I used to spend 5 loops and $3+ resolving a poorly specified task. Now I spend 3 minutes writing a good spec, and 2 loops get it done. The cost drops to ~$0.50.

If you are using any AI coding agent -- Claude Code, Cursor, Copilot, whatever -- start writing specs. Seriously. It is the biggest productivity lever you will find this year.

ralph-starter is open source, MIT licensed:
https://github.com/multivmlabs/ralph-starter

---

## Hashtags
#SpecDrivenDevelopment #AICoding #OpenSource #DevTools #ralphstarter #OpenSpec #SDD
120 changes: 120 additions & 0 deletions content/twitter-thread-sdd-ralph-starter.md
# Twitter/X Thread: Spec Driven Development with ralph-starter

## Instructions
- Post as a thread (not a single tweet)
- Each tweet under 280 characters
- Include code screenshots where noted

---

## Tweet 1 (Hook)

Spec Driven Development is eating AI coding.

OpenSpec, Spec-Kit, Kiro -- everyone's building spec frameworks now.

Here's why specs matter more than prompts, and how ralph-starter fits in:

---

## Tweet 2 (The problem)

The #1 mistake with AI coding agents:

"Add authentication to the app"

3 words. Zero context. The agent guesses everything. You spend 5 iterations fixing what a 10-line spec would've nailed in 2.

---

## Tweet 3 (What is SDD)

Spec Driven Development = write a clear spec BEFORE the agent touches code.

Not a 50-page doc. A focused spec:
- What to build (proposal)
- How to build it (design)
- How to verify it (acceptance criteria)

10-20 lines. 3 minutes to write.

---

## Tweet 4 (The landscape)

Three SDD tools gaining traction:

OpenSpec -- lightweight, tool-agnostic, fluid phases
Spec-Kit -- GitHub's heavyweight 5-phase framework
Kiro -- AWS's full IDE with built-in agents

Each has tradeoffs. None connects to your existing workflow.

---

## Tweet 5 (ralph-starter's angle)

ralph-starter takes a different approach:

Your specs already live in GitHub Issues, Linear tickets, Notion docs, Figma files.

Why rewrite them? Pull from where they are, run autonomous loops until done.

```
ralph-starter run --from github --project myorg/repo
ralph-starter run --from openspec:my-feature
```

---

## Tweet 6 (New: OpenSpec + spec-validate)

Just shipped in v0.5.0:

Native OpenSpec support + spec validation.

```
ralph-starter spec validate
ralph-starter run --from openspec:auth --spec-validate
```

Checks for RFC 2119 keywords (SHALL/MUST), acceptance criteria, design sections. Scores 0-100.

Low score = bad spec = wasted tokens.

---

## Tweet 7 (The numbers)

Before specs: 5 loops, $3+, wrong output
After specs: 2 loops, ~$0.50, correct output

The spec IS the leverage. Not the model. Not the prompt engineering. The spec.

---

## Tweet 8 (Multi-agent)

ralph-starter works with any agent:

- Claude Code
- Cursor
- Codex CLI
- OpenCode
- Amp (Sourcegraph)

No lock-in. No IDE requirement. CLI that runs anywhere.

---

## Tweet 9 (CTA)

ralph-starter is open source, MIT licensed.

Pull specs from GitHub/Linear/Notion/Figma/OpenSpec.
Run autonomous coding loops.
Ship faster.

https://github.com/multivmlabs/ralph-starter

Star it if SDD resonates.
125 changes: 125 additions & 0 deletions docs/blog/2026-04-08-spec-driven-development-ralph-starter.md
---
slug: spec-driven-development-ralph-starter
title: Spec Driven Development with ralph-starter
authors: [ruben]
tags: [ralph-starter, sdd, openspec, specs, workflow]
description: How ralph-starter brings Spec Driven Development to any AI coding agent, with native OpenSpec support, spec validation, and multi-source spec fetching.
image: /img/blog/sdd-ralph-starter.png
---

Comment on lines +1 to +9

🟡 The blog post references an OG image, /img/blog/sdd-ralph-starter.png, that doesn't exist in docs/static/img/blog/. The result is a broken og:image meta tag, so social preview images will fail when the post is shared on Twitter/X, LinkedIn, etc.

What the bug is and how it manifests

The blog post front matter at line 7 contains image: /img/blog/sdd-ralph-starter.png. Docusaurus uses this field to populate the <meta property="og:image"> tag in the generated HTML. When a user shares the blog URL on LinkedIn, Twitter, or any other platform that reads Open Graph tags, the social card will attempt to load the missing image and display a broken or empty preview.

The specific code path that triggers it

Docusaurus reads the image key from the blog post front matter and injects it into the HTML <head> as <meta property="og:image" content="https://your-site.com/img/blog/sdd-ralph-starter.png" />. The static file docs/static/img/blog/sdd-ralph-starter.png is the expected source, but it was never added. The verifier confirmed the directory contains 14 other blog images but not sdd-ralph-starter.png.

Why existing code doesn't prevent it

The Docusaurus config has onBrokenMarkdownImages: 'warn' (not 'throw'), so the build will succeed without surfacing this as a hard error. The front matter image field is not validated against the static file system at build time in the same way broken markdown links are. The PR's own test plan also marks "Docs build passes" as unchecked ([ ]), indicating this area was not fully verified before opening the PR.

What the impact would be

Any share of this blog post URL on social media will render with a broken preview image. Given the PR explicitly adds social content (LinkedIn article, Twitter thread) intended to promote this exact blog post, the broken OG image undercuts the social sharing effort the PR is designed to support.

How to fix it

Add the missing file: create or export a suitable cover image and save it to docs/static/img/blog/sdd-ralph-starter.png. Alternatively, change the image field in the front matter to reference an existing image such as /img/blog/specs-new-code.png or /img/blog/validation-driven-dev.png.

Step-by-step proof

  1. Open docs/blog/2026-04-08-spec-driven-development-ralph-starter.md, line 7: image: /img/blog/sdd-ralph-starter.png
  2. Check docs/static/img/blog/: files present are ai-agents-comparison.png, auto-mode-github.png, claude-code-setup.png, cost-tracking.png, figma-to-code.png, connect-your-tools.png, first-ralph-loop.png, linear-workflow.png, ralph-wiggum-technique.png, validation-driven-dev.png, specs-new-code.png, vs-manual.png, why-autonomous-coding.png, why-i-built-ralph-starter.png — no sdd-ralph-starter.png.
  3. Build docs → succeeds with no hard error (warn-only config).
  4. Open the published blog post and view page source → <meta property="og:image" content=".../img/blog/sdd-ralph-starter.png" /> points to a 404.
  5. Paste the URL into the Twitter Card Validator or LinkedIn Post Inspector → broken/missing thumbnail.

Spec Driven Development is the biggest shift in AI coding since agents learned to run tests. Here is how ralph-starter fits in.

<!-- truncate -->

## The problem with "just prompt it"

Most people use AI coding agents the same way: type a sentence, hit enter, hope for the best. "Add user auth." "Fix the sidebar." Three words and vibes.

I did this for weeks. The agent would generate something that looked plausible but missed what I actually wanted. I blamed the tool, but the problem was me. I was not giving it enough context.

Then I started writing specs -- not essays, just 10-20 lines describing what I actually wanted, how to verify it, and where things should go. The difference was night and day. 2 loops instead of 5. $0.50 instead of $3. Correct output instead of close-but-wrong.

This pattern has a name now: **Spec Driven Development (SDD)**.

## The SDD landscape

Three frameworks are leading the SDD conversation:

| Tool | Philosophy | Lock-in |
|------|-----------|---------|
| **OpenSpec** (Fission AI) | Lightweight, fluid, tool-agnostic | None |
| **Spec-Kit** (GitHub) | Heavyweight, rigid 5-phase gates | GitHub ecosystem |
| **Kiro** (AWS) | Full IDE with built-in agents | AWS account required |

OpenSpec organizes specs into changes with `proposal.md`, `design.md`, `tasks.md`, and requirement specs using RFC 2119 keywords (SHALL, MUST, SHOULD). It is the lightest of the three.

Spec-Kit enforces five phases: constitution, specification, plan, tasks, implement. Thorough but heavy.

Kiro bundles everything into a VS Code fork with agent hooks and EARS notation. Powerful but locked to AWS.
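As a concrete sketch of the OpenSpec shape described above, a change could be scaffolded like this. The change name `add-auth`, the file contents, and the exact directory layout are illustrative assumptions, not authoritative OpenSpec output:

```shell
# Sketch of an OpenSpec-style change directory; contents are illustrative.
mkdir -p openspec/changes/add-auth

printf '%s\n' '# Why' 'Sessions are unauthenticated; we need login.' \
  > openspec/changes/add-auth/proposal.md
printf '%s\n' '# Design' 'JWT access tokens with a 15-minute expiry.' \
  > openspec/changes/add-auth/design.md
printf '%s\n' '- [ ] Add /login endpoint' '- [ ] Add token middleware' \
  > openspec/changes/add-auth/tasks.md
printf '%s\n' 'The API MUST reject requests without a valid token.' \
  > openspec/changes/add-auth/spec.md

ls openspec/changes/add-auth
```

Each file answers one question: why (proposal), how (design), in what steps (tasks), and against which requirements (spec).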

## Where ralph-starter fits

ralph-starter takes a different angle: **your specs already exist somewhere**.

They are in GitHub Issues. Linear tickets. Notion docs. Figma designs. OpenSpec directories. Why rewrite them in a new format?

ralph-starter pulls specs from where they already live:

```bash
# From GitHub issues
ralph-starter run --from github --project myorg/myrepo --label "ready"

# From OpenSpec directories
ralph-starter run --from openspec:add-auth

# From Linear tickets
ralph-starter run --from linear --project "Mobile App"

# From a Notion doc
ralph-starter run --from notion --project "https://notion.so/spec-abc123"
```

Then it runs autonomous loops: build context, spawn agent, collect output, run validation (lint/build/test), commit, repeat until done.
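The loop above can be modeled in a few lines of shell. This is a simplified sketch, not ralph-starter's actual implementation; `run_agent` and `validate` are hypothetical stand-ins for spawning your coding agent and running lint/build/test:

```shell
# Simplified model of the autonomous loop -- not the real implementation.
run_agent() { echo "agent working on: $1"; touch done.flag; }  # hypothetical stand-in
validate()  { [ -f done.flag ]; }        # stand-in for lint/build/test

max_iterations=5
for i in $(seq 1 "$max_iterations"); do
  run_agent "implement the spec"
  if validate; then
    # In the real tool this is where the commit (and optionally PR) happens.
    echo "validated on iteration $i"
    break
  fi
  # On failure, the validation errors are fed back into the next
  # iteration's context so the agent can correct course.
done
```

The key design choice is that validation failures do not abort the loop; they become input for the next iteration.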

## New in v0.5.0: OpenSpec + spec validation

We just shipped native OpenSpec support and a spec validator:

```bash
# List all OpenSpec changes in the project
ralph-starter spec list

# Validate spec completeness (0-100 score)
ralph-starter spec validate

# Validate before running -- stops if spec is too thin
ralph-starter run --from openspec:my-feature --spec-validate
```

The validator checks for:
- Proposal or rationale section (why are we building this?)
- RFC 2119 keywords (SHALL, MUST -- formal requirements)
- Given/When/Then acceptance criteria (testable conditions)
- Design section (how to build it)
- Task breakdown (implementation steps)

A spec scoring below 40/100 gets flagged before the agent starts. This saves tokens on underspecified work.
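A rough model of that check fits in a few lines of shell. The grep patterns, point values, and the 40-point threshold below are assumptions for illustration, not ralph-starter's actual scoring algorithm:

```shell
# Illustrative scoring sketch mirroring the checklist above;
# weights and threshold are assumptions, not the real algorithm.
cat > my-spec.md <<'EOF'
# Add auth

The API MUST reject unauthenticated requests.
EOF

score_spec() {
  spec="$1"; score=0
  grep -qiE '^## *(proposal|why|rationale)' "$spec" && score=$((score + 20))
  grep -qE  '(SHALL|MUST)'                  "$spec" && score=$((score + 20))
  grep -qE  'Given.*When.*Then'             "$spec" && score=$((score + 20))
  grep -qiE '^## *design'                   "$spec" && score=$((score + 20))
  grep -qE  '^- \[ \]'                      "$spec" && score=$((score + 20))
  echo "$score"
}

score=$(score_spec my-spec.md)
echo "score: $score/100"
if [ "$score" -lt 40 ]; then
  echo "spec too thin -- flag it before spending tokens"
fi
```

The sample spec above only earns points for its MUST keyword, so it gets flagged before any tokens are spent on it.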

## The new spec command

`ralph-starter spec` gives you a CLI for spec operations:

```bash
# Validate all specs in the project
ralph-starter spec validate

# List available specs (auto-detects OpenSpec, Spec-Kit, or raw)
ralph-starter spec list

# Show completeness summary
ralph-starter spec summary
```

It auto-detects whether you are using OpenSpec format, GitHub Spec-Kit format, or plain markdown specs.

## The numbers

| Metric | Without specs | With specs |
|--------|--------------|------------|
| Loops per task | 5 | 2 |
| Cost per task | ~$3.00 | ~$0.50 |
| Output accuracy | Hit or miss | Consistent |
| Time writing spec | 0 min | 3 min |

The 3 minutes spent writing a spec save 15 minutes of iteration and debugging. The spec is the leverage.

## What is next

We are working on:
- **Spec coverage tracking** -- which requirements have been implemented?
- **Spec-to-test generation** -- Given/When/Then to test stubs
- **Living specs** -- specs that update as implementation diverges

SDD is not a fad. It is the natural evolution of AI-assisted coding. The spec is the interface between human intent and machine execution. The clearer the spec, the better the output.

ralph-starter is open source, MIT licensed: [github.com/multivmlabs/ralph-starter](https://github.com/multivmlabs/ralph-starter)
52 changes: 52 additions & 0 deletions docs/docs/cli/figma.md
---
sidebar_position: 12
title: figma
description: Interactive Figma-to-code wizard
keywords: [cli, figma, wizard, design, integration]
---

# ralph-starter figma

Interactive wizard for building code from Figma designs.

## Synopsis

```bash
ralph-starter figma [options]
```

## Description

The `figma` command launches an interactive wizard that guides you through selecting a Figma file, choosing a mode (spec, tokens, components, assets, content), and running an autonomous coding loop to implement the design.

For non-interactive usage and detailed mode documentation, see [Figma Source](/docs/sources/figma).

## Options

| Option | Description | Default |
|--------|-------------|---------|
| `--commit` | Auto-commit after tasks | false |
| `--push` | Push commits to remote | false |
| `--pr` | Create pull request when done | false |
| `--validate` | Run validation after iterations | true |
| `--no-validate` | Skip validation | - |
| `--max-iterations <n>` | Maximum loop iterations | 50 |
| `--agent <name>` | Agent to use | auto-detect |

## Examples

```bash
# Launch the interactive wizard
ralph-starter figma

# With auto-commit and PR creation
ralph-starter figma --commit --pr

# Using a specific agent
ralph-starter figma --agent claude-code --max-iterations 10
```

## See Also

- [Figma Source](/docs/sources/figma) - Detailed mode documentation, authentication, and troubleshooting
- [run](/docs/cli/run) - Non-interactive Figma usage via `--from figma`
2 changes: 1 addition & 1 deletion docs/src/components/HeroSection/index.tsx
@@ -205,7 +205,7 @@ export default function HeroSection(): React.ReactElement {
<span className={styles.integrationLabel}>Integrations</span>
<div className={styles.integrationLogos}>
{[
- { id: 'figma' as const, to: '/docs/cli/figma', src: '/img/figma-logo.svg', alt: 'Figma' },
+ { id: 'figma' as const, to: '/docs/sources/figma', src: '/img/figma-logo.svg', alt: 'Figma' },
{ id: 'github' as const, to: '/docs/sources/github', src: '/img/github logo.webp', alt: 'GitHub' },
{ id: 'linear' as const, to: '/docs/sources/linear', src: '/img/linear.jpeg', alt: 'Linear' },
{ id: 'notion' as const, to: '/docs/sources/notion', src: '/img/notion logo.png', alt: 'Notion' },