Research Workflow

What This Is

The Research Workflow is a systematic approach to conducting and preserving marketing research that compounds over time. Unlike SaaS tools where research gets lost or overwritten, this workflow:
  • Preserves historical context (date-stamped executions)
  • Creates audit trails (strategy references research)
  • Enables temporal comparison (see how markets evolve)
  • Prevents marketing debt (no orphaned files)
  • Builds institutional memory (research accumulates)
This is temporal research architecture—designed for the reality that markets change, competitors evolve, and insights have shelf lives.

Core Concept: Temporal Research

The Problem with Traditional Research

Most research tools treat research as point-in-time snapshots:
❌ Traditional approach:
   Run research → Save to "Competitor Analysis.doc"
   Run research again → Overwrite "Competitor Analysis.doc"
   Result: Historical context lost, can't see evolution
This creates problems:
  • Can’t compare how competitor positioning changed from Q1 to Q4
  • Lost insights when files are overwritten
  • No way to see if market trends are accelerating or reversing
  • Research becomes orphaned (disconnected from strategy)

The Vibeflow Solution: Date-Stamped Executions

✅ Vibeflow approach:
   Run research → /research/competitor-analysis/execution/2025-10-20/
   Run research again → /research/competitor-analysis/execution/2025-10-25/
   Result: Both preserved, can compare evolution
This enables:
  • Historical comparison (October vs November competitor positioning)
  • Trend analysis (is the market moving toward us or away?)
  • Verifiable claims (strategy footnotes reference specific research)
  • Institutional memory (nothing is lost)
Key insight: Research is temporal by nature. The architecture should reflect this.

The Three-Folder Pattern

Every research domain follows a consistent Input → Process → Output structure:
/research/{domain}/
├── RESEARCH.md           ← Progressive disclosure guide (entry point)
├── /data/                ← INPUT: Raw materials provided
├── /execution/           ← PROCESS: Date-stamped research runs
└── /exports/             ← OUTPUT: Final deliverables

1. /data/ (Input)

Purpose: Store raw materials that research will analyze

Contains:
  • Customer interview transcripts
  • Survey data (CSV, JSON)
  • Competitor documents
  • Market reports (PDFs)
  • User feedback
  • Any source material
Characteristics:
  • Static (doesn’t change during research)
  • Organized by type or source
  • Referenced by execution runs
Example structure:
/research/customer-insight/data/
├── /interviews/
│   ├── customer-001-transcript.md
│   ├── customer-002-transcript.md
│   └── customer-003-transcript.md
├── /surveys/
│   └── q4-satisfaction-survey.csv
└── /feedback/
    └── support-tickets-oct-2025.json

2. /execution/ (Process)

Purpose: Date-stamped research runs (where the work happens)

Contains:
  • PLAN.md (research approach)
  • TODO.md (progress tracking)
  • Working notes
  • Analysis files
  • Findings documents
Naming convention: /execution/{YYYY-MM-DD}/

Characteristics:
  • Temporal (each run is date-stamped)
  • Complete (includes plan, process, findings)
  • Preserves context (notes show reasoning)
  • Never overwritten (new dates for new runs)
Example structure:
/research/customer-insight/execution/
├── /2025-10-20/          ← First research run
│   ├── PLAN.md
│   ├── TODO.md
│   ├── notes.md
│   └── findings.md
├── /2025-11-15/          ← Second research run (1 month later)
│   ├── PLAN.md
│   ├── TODO.md
│   ├── comparison-to-oct.md  ← Shows evolution
│   └── findings.md
└── /2025-12-10/          ← Third run
    └── ...
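One practical payoff of the YYYY-MM-DD convention: ISO dates sort chronologically as plain strings, so finding the most recent run needs no date parsing. A minimal Python sketch, assuming only the folder layout above (the `latest_run` helper name is hypothetical):

```python
from pathlib import Path

def latest_run(domain_dir: Path) -> Path | None:
    """Return the most recent execution run.

    YYYY-MM-DD folder names sort chronologically as strings,
    so a plain sort is enough.
    """
    execution = domain_dir / "execution"
    runs = sorted(d for d in execution.iterdir() if d.is_dir())
    return runs[-1] if runs else None

print(latest_run(Path("research/customer-insight")))
# e.g. research/customer-insight/execution/2025-12-10
```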

3. /exports/ (Output)

Purpose: Polished, client-ready deliverables

Contains:
  • Final reports
  • Presentation decks
  • Executive summaries
  • Data visualizations
  • Deliverable assets
Characteristics:
  • Polished (client-facing quality)
  • Versioned by source execution
  • May reference multiple execution runs
  • Formats: PDF, PPTX, MD
Example structure:
/research/customer-insight/exports/
├── customer-insight-report-2025-10-20.pdf
├── customer-insight-report-2025-11-15.pdf
├── executive-summary-q4-2025.md
└── trend-analysis-oct-dec-2025.pdf  ← Compares multiple runs

RESEARCH.md (Progressive Disclosure)

Location: /research/{domain}/RESEARCH.md

Purpose: Entry point that guides agents (and humans) to relevant research

Standard RESEARCH.md Template

# Research: [Domain Name]

**Domain:** [Brief description of what this research covers]
**Last Updated:** [Date]

---

## Overview

[1-2 paragraphs explaining what this research domain covers, why it matters, and how it informs strategy]

## Current State

**Latest execution:** [Link to most recent execution run]
**Status:** [Active | Archived | In Progress]
**Key findings:** [Bullet points from latest run]

---

## Research Runs

### Active

- **[2025-11-15]** - [Brief description] - [Link to execution]
  - Focus: [What this run investigated]
  - Status: [In Progress | Complete]

### Historical

- **[2025-10-20]** - [Brief description] - [Link to execution]
  - Focus: [What this run investigated]
  - Key insight: [Main finding]

---

## Data Sources

**Location:** `/research/{domain}/data/`

**Available data:**
- [Data type 1] - [Description] - [Last updated]
- [Data type 2] - [Description] - [Last updated]

---

## Exported Deliverables

**Location:** `/research/{domain}/exports/`

**Available reports:**
- [Report name] - [Date] - [Link]
- [Report name] - [Date] - [Link]

---

## How This Research Informs Strategy

**Referenced in:**
- [Strategy file path] - [Which claims reference this research]
- [Strategy file path] - [Which claims reference this research]

**Audit trail example:**
[^claim-name]: Customer research, `/research/{domain}/execution/2025-10-20/findings.md:42`

---

## Related Research Domains

- [Related domain 1] - [How it connects]
- [Related domain 2] - [How it connects]

---

**For agents:** This is the navigation file. Load relevant execution runs or data as needed for your task.
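The Research Runs section of this template can be kept honest by generating its skeleton from the execution folder rather than maintaining it by hand. A sketch under that assumption (descriptions still need to be filled in manually; `run_index` is a hypothetical helper):

```python
from pathlib import Path

def run_index(domain_dir: Path) -> str:
    """Render the run list for RESEARCH.md, newest first."""
    execution = domain_dir / "execution"
    runs = sorted((d.name for d in execution.iterdir() if d.is_dir()), reverse=True)
    return "\n".join(
        f"- **[{name}]** - [Brief description] - `execution/{name}/`"
        for name in runs
    )

print(run_index(Path("research/customer-insight")))
```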

Complete Research Workflow

Step 1: Define Research Domain

Question: What are you researching?

Examples:
  • customer-insight - Understanding customer pain points, motivations, behavior
  • competitor-landscape - Competitive positioning, messaging, product features
  • category-trends - Market evolution, emerging themes, industry shifts
  • audience-psychographics - Persona development, decision-making patterns
Action: Create domain directory structure
/research/{domain}/
├── RESEARCH.md
├── /data/
├── /execution/
└── /exports/
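Scaffolding this structure is simple enough to script. A minimal sketch, assuming a repository-root working directory; `create_domain` is a hypothetical helper, not part of the workflow spec:

```python
from pathlib import Path

def create_domain(root: Path, domain: str) -> Path:
    """Scaffold the three-folder pattern for a new research domain."""
    base = root / "research" / domain
    for sub in ("data", "execution", "exports"):
        (base / sub).mkdir(parents=True, exist_ok=True)
    # Stub the navigation file so agents always have an entry point
    nav = base / "RESEARCH.md"
    if not nav.exists():
        nav.write_text(f"# Research: {domain}\n")
    return base

create_domain(Path("."), "customer-insight")
```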

Step 2: Add Data Sources

Question: What raw materials will research analyze?

Action: Add input data to the /data/ folder
/research/customer-insight/data/
├── /interviews/
│   └── [interview transcripts]
├── /surveys/
│   └── [survey data]
└── /feedback/
    └── [user feedback]
Best practices:
  • Organize by type (interviews, surveys, reports)
  • Use clear naming conventions
  • Include metadata (date, source, context)
  • Don’t modify originals (preserve as-is)
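The "don't modify originals" practice can be enforced at ingestion time: copy the source file in unchanged and record metadata beside it. A sketch; the sidecar format here is an assumption for illustration, not part of the workflow spec:

```python
import json
import shutil
from datetime import date
from pathlib import Path

def ingest(source: Path, data_dir: Path, description: str) -> None:
    """Copy a raw file into /data/ as-is and write a metadata sidecar."""
    data_dir.mkdir(parents=True, exist_ok=True)
    dest = data_dir / source.name
    shutil.copy2(source, dest)  # copy2 keeps timestamps; original untouched
    meta = {"source": description, "added": date.today().isoformat()}
    sidecar = data_dir / (source.name + ".meta.json")
    sidecar.write_text(json.dumps(meta, indent=2))

ingest(Path("customer-001-transcript.md"),
       Path("research/customer-insight/data/interviews"),
       "Q4 customer interview")
```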

Step 3: Run Research (Temporal Execution)

Invocation: Use plan/implement pattern
Marketing Architect: "plan: Analyze customer pain points from Q4 interviews"

Operations Manager creates PLAN.md in:
/research/customer-insight/execution/2025-10-21/PLAN.md

Marketing Architect: [Reviews, approves]

Marketing Architect: "implement"

Operations Manager:
  - Creates TODO.md
  - Delegates to Brand Analyst
  - Brand Analyst uses "Analyzing Qualitative Data" skill
  - Processes interview transcripts from /data/interviews/
  - Documents findings in findings.md
  - Updates TODO.md with progress
  - Returns analysis
Result:
/research/customer-insight/execution/2025-10-21/
├── PLAN.md              ← Research approach
├── TODO.md              ← Progress tracking
├── notes.md             ← Working analysis
└── findings.md          ← Final insights
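As a sketch of the "never overwritten" rule in code: a new run gets today's date and refuses to touch an existing folder. The stub contents are placeholders, not the Operations Manager's actual templates:

```python
from datetime import date
from pathlib import Path

def start_run(domain_dir: Path) -> Path:
    """Create today's execution folder with PLAN.md / TODO.md stubs."""
    run = domain_dir / "execution" / date.today().isoformat()
    if run.exists():
        raise FileExistsError(f"{run} already exists; runs are never overwritten")
    run.mkdir(parents=True)
    (run / "PLAN.md").write_text("# Plan\n\n[Research approach]\n")
    (run / "TODO.md").write_text("# TODO\n\n- [ ] Analyze data\n")
    return run

start_run(Path("research/customer-insight"))
```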

Step 4: Export Deliverables

Question: What’s the client-facing output?

Action: Create polished deliverables in /exports/

Examples:
  • Executive summary (PDF)
  • Research report (Markdown → PDF)
  • Presentation deck (PPTX)
  • Data visualizations (charts, graphs)
/research/customer-insight/exports/
└── customer-insight-report-2025-10-21.pdf

Step 5: Reference in Strategy

Question: How does this research back up brand strategy?

Action: Add footnotes in strategy files

Example:
In /strategy/messaging/value-propositions.md:

Our customers struggle with productivity tools that add complexity
instead of reducing it.[^productivity-paradox]

[^productivity-paradox]: Customer research,
`/research/customer-insight/execution/2025-10-21/findings.md:42`
This creates an audit trail:
Strategy claim
    ↓ footnote reference
Research finding
    ↓ analyzed from
Raw data (interview transcript)
Result: Strategy is defensible, verifiable, not made up.
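Because footnotes use a concrete `path:line` format, the audit trail is mechanically checkable. A sketch of such a check, assuming the footnote formatting shown above (the regex encodes that assumption):

```python
import re
from pathlib import Path

FOOTNOTE = re.compile(r"`(/research/[^`]+\.md):(\d+)`")

def check_footnotes(strategy_file: Path, repo_root: Path) -> list[str]:
    """Verify each research footnote points at an existing file and line."""
    problems = []
    for path, line in FOOTNOTE.findall(strategy_file.read_text()):
        target = repo_root / path.lstrip("/")
        if not target.exists():
            problems.append(f"missing file: {path}")
        elif int(line) > len(target.read_text().splitlines()):
            problems.append(f"{path} has no line {line}")
    return problems

print(check_footnotes(Path("strategy/messaging/value-propositions.md"), Path(".")))
```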

Step 6: Run Research Again (Temporal Comparison)

When: Markets change, new data becomes available, or time has passed

Action: Create a new dated execution run
/research/customer-insight/execution/2025-11-15/
├── PLAN.md
├── TODO.md
├── comparison-to-oct.md  ← NEW: Compare to 2025-10-21 run
└── findings.md
In comparison-to-oct.md:
# Comparison: October 2025 vs November 2025

## What Changed

**October findings:**
- Pain point: Complex onboarding (mentioned by 8/10 customers)

**November findings:**
- Pain point: Complex onboarding (mentioned by 5/10 customers)
- NEW pain point: Integration challenges (mentioned by 7/10 customers)

## Interpretation

Customer pain points are shifting from onboarding to integrations.
This suggests our onboarding improvements are working, but integration
experience is now the primary friction point.

## Recommendation

Update strategy to emphasize integration simplicity in messaging.
Result: You can see market evolution, not just point-in-time snapshots.
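The qualitative comparison still takes judgment, but the mechanical part can be scripted. A sketch that treats bullet lines in each run's findings.md as a rough item set; the file layout is from the examples above, the bullet convention is an assumption:

```python
from pathlib import Path

def bullets(run_dir: Path) -> set[str]:
    """Collect '- ' bullet lines from a run's findings.md."""
    text = (run_dir / "findings.md").read_text()
    return {line[2:].strip() for line in text.splitlines() if line.startswith("- ")}

def compare(old_run: Path, new_run: Path) -> None:
    old_items, new_items = bullets(old_run), bullets(new_run)
    print("New since last run:", sorted(new_items - old_items))
    print("No longer reported:", sorted(old_items - new_items))

base = Path("research/customer-insight/execution")
compare(base / "2025-10-21", base / "2025-11-15")
```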

Temporal Execution Patterns

Pattern 1: Periodic Research

Use case: Regular cadence (monthly, quarterly)

Example:
/research/competitor-landscape/execution/
├── /2025-10-01/  ← Q4 start
├── /2025-11-01/  ← Month 2
├── /2025-12-01/  ← Month 3
└── /2026-01-01/  ← Q1 start
Benefits:
  • Consistent intervals
  • Easy to compare period-over-period
  • Builds trend data
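Since run folders are just ISO dates, a periodic cadence amounts to a list of folder names. A small sketch generating first-of-month run names (`monthly_runs` is a hypothetical helper):

```python
from datetime import date

def monthly_runs(start: date, months: int) -> list[str]:
    """First-of-month run folder names for a periodic cadence."""
    names, year, month = [], start.year, start.month
    for _ in range(months):
        names.append(f"{year:04d}-{month:02d}-01")
        year, month = (year + 1, 1) if month == 12 else (year, month + 1)
    return names

print(monthly_runs(date(2025, 10, 1), 4))
# ['2025-10-01', '2025-11-01', '2025-12-01', '2026-01-01']
```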

Pattern 2: Event-Driven Research

Use case: Research triggered by external events

Example:
/research/competitor-landscape/execution/
├── /2025-10-15/  ← Competitor A launched new product
├── /2025-11-03/  ← Competitor B rebranded
└── /2025-12-20/  ← Industry report published
Benefits:
  • Captures market shifts as they happen
  • Context preserved (notes explain trigger)
  • Flexible timing

Pattern 3: Iterative Refinement

Use case: Research evolves with new data

Example:
/research/audience-psychographics/execution/
├── /2025-10-01/  ← Initial persona research (10 interviews)
├── /2025-10-15/  ← Additional data (20 more interviews)
└── /2025-11-01/  ← Validation research (survey of 200)
Benefits:
  • Research compounds
  • Each run builds on previous
  • Can reference earlier findings

How Research Backs Strategy

The Audit Trail Pattern

Content (Layer 1: Output)
    ↓ references
Strategy (Layer 2: Brand guidelines)
    ↓ references (footnotes)
Research (Layer 3: Insights)
    ↓ analyzed from
Raw Data (Layer 4: Source material)

Example End-to-End

1. Raw Data:
/research/customer-insight/data/interviews/customer-005.md:

"I tried 3 different productivity tools and they all made my life MORE
complicated. I just want something that works without a manual."
2. Research Finding:
/research/customer-insight/execution/2025-10-21/findings.md:42

Pattern identified: 8 out of 10 customers described existing tools as
"adding complexity" rather than reducing it. This represents a
consistent pain point across segments.
3. Strategy Claim:
/strategy/messaging/value-propositions.md:

Our customers are drowning in complex tools that promise simplicity
but deliver confusion.[^productivity-paradox]

[^productivity-paradox]: Customer research,
`/research/customer-insight/execution/2025-10-21/findings.md:42`
4. Content Output:
Blog post (generated by Content Writer):

"Tired of productivity tools that add more work to your plate?
You're not alone. We built [Product] differently—no complex setup,
no endless configuration, just the simplicity you actually need."

Why This Matters

Without audit trails:
  • ❌ Strategy is “vibes” (made up, not defensible)
  • ❌ Content is generic (AI slop)
  • ❌ No way to verify claims
  • ❌ Research disconnected from outputs
With audit trails:
  • ✅ Strategy is evidence-based
  • ✅ Content is specific and credible
  • ✅ Claims are verifiable
  • ✅ Research directly informs outputs

Progressive Disclosure in Action

Agent Workflow Example

Request: “Create blog post about our approach to simplicity”

Agent reasoning:
1. Need brand voice → Read /strategy/voice/index.md
2. Need messaging themes → Read /strategy/messaging/pillars.md
3. Pillar references research → Load footnote reference
4. Read /research/customer-insight/execution/2025-10-21/findings.md:42
5. Now have: tone + theme + research-backed claim
6. Generate content
Files loaded: 3-4 (efficient, progressive)

Result: Brand-consistent content backed by real research, not AI slop.

RESEARCH.md as Navigation

RESEARCH.md exists so agents know:
  • Which research runs are available
  • Which is most recent
  • Where to find specific data
  • How research connects to strategy
Without RESEARCH.md:
  • Agent searches/guesses
  • May load wrong execution run
  • Inefficient file access
  • More tokens wasted
With RESEARCH.md:
  • Agent reads navigation file
  • Loads exactly what’s needed
  • Efficient context usage
  • Clear audit trail
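What "agent reads navigation file" might look like mechanically, as a sketch: pull the Latest execution pointer straight out of RESEARCH.md. The field regex assumes the template wording shown earlier:

```python
import re
from pathlib import Path

def latest_from_navigation(domain_dir: Path) -> str | None:
    """Extract the 'Latest execution' pointer from RESEARCH.md."""
    text = (domain_dir / "RESEARCH.md").read_text()
    match = re.search(r"\*\*Latest execution:\*\*\s*(.+)", text)
    return match.group(1).strip() if match else None

print(latest_from_navigation(Path("research/customer-insight")))
```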

Real-World Examples

Example 1: Competitor Analysis

Setup:
/research/competitor-analysis/
├── RESEARCH.md
├── /data/
│   ├── /websites/              ← Scraped content
│   └── /marketing-materials/   ← Collected assets
├── /execution/
│   ├── /2025-10-01/            ← Q4 start
│   └── /2025-12-01/            ← Q4 end
└── /exports/
    └── competitor-landscape-q4-2025.pdf
Workflow:
1. Collect data: Scrape competitor websites, save to /data/websites/
2. Run research: "plan: Analyze competitor positioning"
3. Create execution: /execution/2025-10-01/
4. Analyze: Brand Analyst uses "Conducting Competitive Research" skill
5. Document findings: findings.md shows positioning patterns
6. Export report: Create polished PDF for /exports/
7. Reference in strategy: Footnotes in /strategy/messaging/positioning.md
8. Two months later: Run again in /execution/2025-12-01/
9. Compare: See how competitors shifted messaging
10. Update strategy: Based on new competitive landscape

Example 2: Customer Insight Research

Setup:
/research/customer-insight/
├── RESEARCH.md
├── /data/
│   ├── /interviews/        ← Transcripts
│   ├── /surveys/           ← Quant data
│   └── /support-tickets/   ← User feedback
├── /execution/
│   ├── /2025-10-15/        ← Interview analysis
│   └── /2025-11-01/        ← Survey validation
└── /exports/
    └── customer-insight-report-2025-11.pdf
Workflow:
1. Gather interviews: 10 customer transcripts → /data/interviews/
2. Run qualitative research: Analyze interviews (2025-10-15)
3. Identify patterns: Pain points, motivations, language
4. Validate with survey: Create survey based on interview themes
5. Run quantitative research: Analyze survey (2025-11-01)
6. Synthesize: Combine qual + quant insights
7. Export report: Customer persona + key findings
8. Update strategy: Voice, messaging, positioning all reference this research
9. Generate content: All content now backed by real customer language

Example 3: Category Trends

Setup:
/research/category-trends/
├── RESEARCH.md
├── /data/
│   └── /industry-reports/   ← External research
├── /execution/
│   ├── /2025-09-01/         ← Pre-launch baseline
│   ├── /2025-10-01/         ← Month 1
│   ├── /2025-11-01/         ← Month 2
│   └── /2025-12-01/         ← Month 3
└── /exports/
    └── trend-analysis-q4-2025.pdf
Workflow:
1. Monthly execution: Track industry trends over Q4
2. Each run: Web research + industry report analysis
3. Compare month-over-month: See acceleration/deceleration
4. Spot emerging themes: New keywords, concepts gaining traction
5. Update messaging: Strategy evolves with category trends
6. Quarterly report: Synthesize 3 months of evolution
7. Strategic decisions: Based on trend velocity

Integration with Plan/Implement Pattern

Research workflow embeds the plan/implement pattern:
1. Define research domain
2. Add data sources to /data/
3. "plan: Analyze [research question]"
4. PLAN.md created in /execution/2025-10-21/
5. Marketing Architect approves
6. "implement"
7. TODO.md tracks research execution
8. Findings documented
9. Export deliverables to /exports/
10. Reference in strategy (footnotes)
11. Generate content backed by research

Anti-Patterns to Avoid

❌ Overwriting Previous Research

Bad:
/research/competitor-analysis/findings.md  ← Gets overwritten each run
Good:
/research/competitor-analysis/execution/
├── /2025-10-01/findings.md
└── /2025-11-01/findings.md  ← Both preserved

❌ Research Without Strategy Connection

Bad:
Research exists in /research/ but strategy never references it
→ Orphaned research, no impact
Good:
Strategy files contain footnotes pointing to specific research findings
→ Audit trail, verifiable claims

❌ No Progressive Disclosure

Bad:
No RESEARCH.md → Agents search/guess which research to use
Good:
RESEARCH.md guides agents to relevant research runs
→ Efficient, targeted loading

❌ Mixing Input/Process/Output

Bad:
/research/competitor-analysis/
├── transcript.txt           ← Input
├── analysis-notes.md        ← Process
└── final-report.pdf         ← Output
All mixed together
Good:
/research/competitor-analysis/
├── /data/transcript.txt         ← Input
├── /execution/2025-10-21/       ← Process
└── /exports/final-report.pdf    ← Output
Clear separation

❌ Undated Execution Runs

Bad:
/research/competitor-analysis/execution/
├── /first-run/
└── /second-run/
Can't tell when these happened
Good:
/research/competitor-analysis/execution/
├── /2025-10-01/
└── /2025-11-01/
Clear temporal order
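This anti-pattern is easy to lint for: flag any execution folder whose name is not a YYYY-MM-DD date. A minimal sketch, assuming the standard layout (`undated_runs` is a hypothetical helper):

```python
import re
from pathlib import Path

DATED = re.compile(r"^\d{4}-\d{2}-\d{2}$")

def undated_runs(research_root: Path) -> list[Path]:
    """Find execution folders that break the date-stamp convention."""
    return [run
            for execution in research_root.glob("*/execution")
            for run in execution.iterdir()
            if run.is_dir() and not DATED.match(run.name)]

for bad in undated_runs(Path("research")):
    print(f"Undated run: {bad}")
```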

Success Criteria

You’re doing research correctly when:
  • ✅ Execution runs are date-stamped (YYYY-MM-DD format)
  • ✅ Three-folder pattern is followed (data/execution/exports)
  • ✅ RESEARCH.md provides navigation
  • ✅ Multiple runs exist (can compare over time)
  • ✅ Strategy footnotes reference research
  • ✅ Audit trails are complete (content → strategy → research → data)
  • ✅ Historical context is preserved (nothing overwritten)
  • ✅ Each domain has clear scope

You’re doing it wrong when:
  • ❌ Research gets overwritten (no dates)
  • ❌ Folders are mixed (input/process/output not separated)
  • ❌ No RESEARCH.md (no navigation)
  • ❌ Strategy doesn’t reference research (orphaned)
  • ❌ Only one execution run ever (not using temporal pattern)
  • ❌ Can’t tell when research was done (no dates)
  • ❌ Raw data mixed with findings

Summary

The Research Workflow is temporal research architecture that:
  • Preserves history (date-stamped executions)
  • Creates audit trails (strategy → research → data)
  • Enables comparison (see market evolution)
  • Builds institutional memory (nothing is lost)
  • Prevents marketing debt (systematic organization)
The three-folder pattern: data (input) → execution (process) → exports (output)

Key differentiator: Unlike SaaS tools where research gets lost, Vibeflow research compounds over time.

This is how you build research infrastructure that makes your brand strategy defensible, verifiable, and grounded in reality—not AI slop.