Question-Driven Specification: understand first, then build 🧠

16 April 2026 · Sandro Lain

We’ve been told for years that the main problem with software development was writing code fast enough. Now that we have tools capable of generating it in seconds, an uncomfortable truth emerges: the bottleneck was never the keyboard, it was understanding. And indeed, many projects continue to stumble at the same point as always: vague requirements, undeclared assumptions, details left to the mercy of fate.

The funny and slightly tragic thing is that AI can amplify this defect. Start with a confused request and you'll get a confused answer back, only more elegant, much faster, and far more dangerous. Essentially: the automation of misunderstanding.

Question-Driven Specification is born right here. Not as a high-sounding methodology to hang on slides, but as a very concrete discipline: instead of rushing toward the solution, you begin with questions that make the problem less opaque.

Before asking a machine to build something, we should be able to ask it what we’re missing to truly understand it.

The real defect of requirements is fake clarity 🎭

When we say a requirement is “clear,” we often mean something different: it seems clear enough for us to start. That's not the same thing. In fact, it's often the prelude to very expensive rework.

A feature described in three lines can seem harmless. Then you discover that nobody defined what happens on error, who sees what, which data is mandatory, how to handle exceptions, what performance limits exist, what security constraints were implied, and which assumptions lived peacefully in one person’s head. Code arrives later. Chaos was usually already there.

Here, individual talent doesn’t make the difference. What makes the difference is the courage to stop and say: we don’t have a specification yet, we have a provisional narrative. And as long as it remains a narrative, every implementation risks being an arbitrary translation.

It’s the same technical honesty I discussed in Unknowns-Driven Development: if you don’t make explicit what you don’t know, the project doesn’t become more solid. It just becomes more theatrical.

Questions don’t slow you down: they prevent running in the wrong direction 🧭

The central idea is simple: use questions not as an accessory to discovery, but as the engine of the specification. Not decorative questions like “any other comments?”, but questions that force you to expose gaps, ambiguities, and hidden dependencies.

Questions like these genuinely change the level of conversation:

  • what are we assuming without verifying it?
  • which edge cases are we ignoring because they spoil our optimism?
  • how do data enter, change, fail, or disappear in ways we haven’t described?
  • where are security, consistency, or performance taken for granted without actually being so?

It’s not an interrogation for sadists. It’s design hygiene. And here AI can become useful not so much because “it knows everything,” but because it can do something very practical: interrogate with patience and without getting tired. If you use it well, it doesn’t replace your thinking. It forces you to make it more explicit.

The human developer, in this scheme, remains the source of decisions and of contextual knowledge: they know the domain, weigh tradeoffs, and take responsibility for choices. The AI agent, in turn, works best when it stops posing as universally wise and becomes what it can genuinely be useful as: a detector of unknowns, gaps, and omissions. It doesn't decide for you. It shows you where you're deciding with too little material.

In this sense, it’s a natural extension of what I wrote in Communicating with Artificial Intelligence: the value doesn’t lie in getting fast answers, but in improving the quality of questions. Because a vague request isn’t democratic, it’s just inefficient.

The right question doesn’t automatically produce the right solution, but it eliminates many wrong ones. And already that, in software, is almost a luxury.

A specification born from iteration, not revelation ✍️

Question-Driven Specification starts from an assumption that’s very unheroic but very useful: at the beginning the problem is incomplete. Instead of pretending otherwise, it takes that incompleteness and transforms it into method.

In the approach I've settled on, this cycle lives in a dedicated directory, not in a chat destined to evaporate. The agent generates markdown files containing questions and explicit spaces where I can answer; I open the files, fill them in, reread, and launch the next iteration. The advantage is very concrete: the history of questions and answers persists, can be corrected, discussed, compared over time, and even versioned as a decision history.
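A questions file from one of these iterations might look like the sketch below; the headings and the `Answer:` convention are my illustration, not a prescribed format:

```markdown
## Q1 — Error handling
What should the user see when the upstream API times out: a retry, an error page, or a silent fallback?

**Answer:** _(fill in)_

## Q2 — Data retention
Are abandoned drafts persisted, and if so, for how long and under which access rules?

**Answer:** _(fill in)_
```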

This makes it harder to confuse a reasoned decision with a lucky gamble. No magic, no dashboards putting on airs. Just text, versioning, and useful friction.
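Because everything is plain text, even the useful friction can be automated with a few lines. Here is a minimal sketch, assuming the hypothetical convention that every question starts with a `## ` heading and carries a `**Answer:**` line that is either filled in or left blank:

```python
def unanswered(markdown_text: str) -> list[str]:
    """Return the headings of questions whose Answer line is still blank.

    Assumes a (hypothetical) layout: each question begins with a '## '
    heading and contains a line starting with '**Answer:**'.
    """
    open_questions: list[str] = []
    heading = None
    answered = False
    for line in markdown_text.splitlines():
        if line.startswith("## "):
            # A new question begins; record the previous one if unanswered.
            if heading is not None and not answered:
                open_questions.append(heading)
            heading = line[3:].strip()
            answered = False
        elif line.startswith("**Answer:**"):
            # Anything after the marker, besides a placeholder, counts as an answer.
            body = line[len("**Answer:**"):].strip()
            answered = bool(body) and body != "_(fill in)_"
    if heading is not None and not answered:
        open_questions.append(heading)
    return open_questions
```

Run before launching the next agent iteration, a check like this refuses to proceed while questions remain open, which is exactly the kind of friction the method wants.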

The point is that the specification isn’t “written” once and done: it’s refined progressively. With each iteration, the space for arbitrary interpretation shrinks, and documentation stops being an administrative artifact to become an operational one.

For me the most interesting effect was seeing my own limits more clearly: wrong assumptions, premature simplifications, points I thought were clear that weren't at all. This friction improved the way I design architectures of any complexity, especially when topics the team believes it already understands, just because they've come up in passing, enter the picture: responsibility, error handling, sensitive data, attack surface, audit, role segregation. Until these aspects become explicit questions, even security remains a marginal note with ambitions beyond its real space.

Markdown, here, remains almost perfect: simple, versionable, readable, collaborative. You don’t need a temple of tooling to achieve clarity. You need a container that doesn’t pretend to be more important than its contents.

If you then want to improve how you construct interactions with models, the guide on Prompt Engineering can be a good operational complement. But the point remains: the prompt is not a magic formula. It’s a form of thought design.

The real flow, seen from outside 🔁

If you want to tell it without unnecessary romanticism, the process looks roughly like this: an initial draft, a reorganization pass by the agent, a generated questions file, answers and clarifications filled in by the user, integration into the specification, a new round of questions, and a loop that continues until the document stops being vague and becomes solid enough to guide development.

    sequenceDiagram
      participant U as User
      participant A as AI Agent
      participant F as Markdown file

      U->>A: Provides initial draft of problem or feature
      A->>F: Creates or reorganizes initial document
      A->>F: Writes a questions file
      U->>F: Fills file with answers and clarifications
      U->>A: Sends answers for new iteration
      A->>F: Integrates answers and updates specification
      alt Specification not yet sufficient
        A->>F: Writes new questions file
        U->>F: Fills in new file
        U->>A: Sends new clarifications
        A->>F: Updates specification again
      else Specification sufficiently clear
        A->>F: Consolidates specification for development, review, and testing
      end

The interesting part is that the loop doesn’t serve to produce more text, but to produce less ambiguity. If blind spots emerge at every iteration, the method is working. If instead everything seems clear right away, usually it’s not design maturity: it’s optimism with good formatting.

From specification to tests, before code 🧪

When the specification reaches sufficient completeness, you can use it as a basis to have the AI generate test cases and transform them into the first concrete artifact of the work. It’s an interesting step because it shifts focus from “write the implementation right away” to “tell me first how we verify it’s correct.”

In practice, a well-done QDS lends itself very well to a TDD flow: first clarify behavior, constraints, edge cases, and error conditions; then derive tests from that specification; only then do you actually start developing. It doesn't guarantee miracles, obviously. But it greatly reduces the likelihood of writing elegant code that faithfully implements a poorly understood requirement.
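For instance, suppose a round of questions pinned down a rule like "a discount never pushes the total below zero, and an unknown coupon code fails loudly rather than silently doing nothing." The spec-derived tests might look like this sketch; `apply_coupon`, the coupon codes, and the stub implementation are hypothetical, there only to make the tests runnable:

```python
# Hypothetical behavior nailed down by the Q&A iterations:
# - "SAVE10" takes 10 off the total
# - a discount never pushes the total below zero
# - an unknown code raises instead of silently doing nothing
def apply_coupon(total: float, code: str) -> float:
    """Stub implementation, just enough to exercise the spec-derived tests."""
    discounts = {"SAVE10": 10.0}
    if code not in discounts:
        raise ValueError(f"unknown coupon code: {code}")
    return max(0.0, total - discounts[code])

def test_discount_is_applied():
    assert apply_coupon(50.0, "SAVE10") == 40.0

def test_total_never_goes_negative():
    # Edge case that surfaced only because the questions forced it into the open.
    assert apply_coupon(5.0, "SAVE10") == 0.0

def test_unknown_code_fails_loudly():
    try:
        apply_coupon(50.0, "TYPO")
        assert False, "expected a ValueError"
    except ValueError:
        pass
```

The point of the exercise is that each test traces back to an answered question, not to an assumption someone made while typing.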

AI as critical reviewer, not village oracle 🤖

One of the most common distortions in using AI is treating it as a generator of final answers. Far more useful is treating it as a critical reviewer of our blind spots.

If you ask it to propose the solution right away, it will tend to fill gaps with plausible inferences. If instead you ask it to find ambiguities, implicit assumptions, weak points, and uncovered scenarios, you're using it in a far more mature way. Not to close the reasoning quickly, but to keep it open long enough to make it reliable.

This difference matters a lot. Because a machine that completes blank spaces can seem efficient, but often delivers you apparent coherence. A machine that returns uncomfortable questions to you, instead, forces you to actually decide.

And this is where the moral part of the story enters: asking serious questions is an act of responsibility. It means giving up the shortcut of “we’ll think about it later,” which is an adorable phrase until “later” coincides with production.

When it really works, and when you’re just over-philosophizing 🧱

This approach is particularly useful when the domain is ambiguous, integrations are critical, constraints are many, or the cost of error is high. Distributed systems, products with intricate business rules, multi-tenant SaaS, regulated contexts, architectures with many external dependencies: there, well-made questions are worth more than a brilliant false start.

Of course, the opposite side also exists. If you need to validate a very small idea, throw together an exploratory prototype, or understand if a direction makes sense in a few hours, formalizing too early can become an exercise in process aesthetics. In those cases you don’t need a liturgy. You need discernment.

The practical rule, if we want to avoid pointless slogans, is this: the more it costs to get it wrong, the more it's worth interrogating before implementing. The lower the risk and the more reversible the decision, the more you can tolerate an incomplete specification and learn by doing. It's not dogma. It's proportion.

Questions don’t serve to slow everything down. They serve to slow down only where error would cost too much.

The best specification isn’t the most detailed, but the most honest ✅

Ultimately, Question-Driven Specification is not a cult of complexity. It’s an invitation to clarity. It reminds us that in software the problem is rarely the absence of solutions: those always arrive, often in industrial quantities. The problem is choosing a solution when the problem hasn’t yet been defined well enough.

In an era where we can generate code, tests, refactoring, and even architectures with almost offensive ease, the competitive advantage shifts elsewhere: understand better, articulate better, ask better. It’s not glamorous, I know. But then again, neither is rebuilding half your system after three sprints.

If you truly want to try this approach, don’t start with an overly elaborate mental framework. Start with the next ambiguous feature that comes your way. Write it in markdown. Ask the AI not to propose solutions but only to generate questions. Answer with discipline. Repeat the cycle until the specification stops looking like a promise and starts looking like a contract.

It’s less spectacular than a miraculous prompt. But it works better. And, not insignificantly, it keeps you from building software on top of intuitions dressed up as requirements.
