Top-Down vs Bottom-Up Specifications with Speckit: What Didn’t Work #1802
Replies: 5 comments 1 reply
-
In case you haven't, you can use
-
How large a specification it can handle depends quite a bit on the agent and the LLM model you are using. Your top-down approach is how I would do it anyway: not only because of agent and model restrictions, but also because I want to be able to review what was produced, and the more features you ask it to implement, the more you have to review.
-
Sounds simple, but with large projects like this I tend to start with what I can get away with: an MVP if you like, but really a minimum, almost just getting the app running in an environment. Then each thing you want to add becomes a "feature" that can have its own spec. For example: 1, build the basic homepage, including rollout to your environment. These are quite small features, but you get the idea: each stage builds on the previous, in a slice through the eventual whole system. You could get your AI to figure out the best slices through the whole system given a basic list of required features, but a spec should always be a slice through the whole system. If you slice by, say, website, backend, and database, you will find that getting things to join up correctly becomes an issue, and you don't get to see anything until the whole thing is completed.
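The slicing idea can be sketched as a feature sequence (the names below are illustrative, not from a real project):

```text
001-walking-skeleton   app boots and deploys to the target environment
002-basic-homepage     one page, one route, rollout included
003-user-signup        cuts through UI form, API endpoint, and users table
004-login-session      builds on 003; again a full vertical slice
```

Each entry is a thin vertical slice through frontend, backend, and database, so every stage produces something you can actually run and review.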
-
I use Codex with GPT-5.3-Codex. I created a technical constitution with engineering rules aligned to my architectural and technological requirements, and started with this specification. The generated output produced only a scaffolded backend structure (e.g., a Laravel project layout) without full dependency installation, environment configuration, or runtime readiness. The same behavior occurred on the frontend: the model generated architectural folders and partial code artifacts, but not a fully buildable and executable system. Should framework initialization be explicitly enforced in the constitution or plan, for example:
Additionally, should I include links to the official Laravel and Angular documentation? If so, where should they be placed?
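One way to make framework initialization explicit is a constitution rule along these lines (a hypothetical sketch; the heading and wording are not Speckit syntax, though the listed commands are standard Laravel and Angular tooling):

```markdown
## Runnable-by-default

Every feature MUST leave the repository in a runnable state:

- Backend: dependencies installed (`composer install`), `.env` created from
  `.env.example` with `php artisan key:generate` run, and `php artisan serve`
  starting without errors.
- Frontend: dependencies installed (`npm install`) and `ng serve` building
  and serving the app without errors.
- Scaffolding-only output (folders and stubs without installed dependencies)
  is a constitution violation, not a deliverable.
```

As for documentation links, they would arguably fit best in the constitution's references or in each spec's context section, so they travel with every generation step.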
-
Your experience matches a pattern I keep seeing: monolithic specs fail because the model can't tell which part is architecture, which part is a UI constraint, and which part defines the output format. When everything is one blob, the model has to guess what matters most for any given generation step.

The modular approach (001-foundation, 002-authentication, etc.) works better because each spec has a narrower scope. But even within a single spec, the same problem exists at a smaller scale. Is this part a constraint? A goal? An example of expected output? If the model can't distinguish these, it hallucinates or ignores pieces.

What's helped in my work is making the type of each instruction explicit: not just splitting by feature, but splitting by instruction type (role, constraints, output format, context, examples). Each type tells the model how to use that piece of information differently.

I built flompt around this idea. It decomposes prompts into 12 typed semantic blocks on a visual canvas and compiles them to XML. For your case, imagine each spec having explicit blocks for "architectural constraints," "expected output structure," "tech stack rules," and "success criteria" instead of mixing them all into prose. The model would then know exactly which rules to enforce during generation and which parts describe the desired result.
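To make the typed-block idea concrete, a compiled prompt might look roughly like this (a generic sketch, not flompt's actual output format; the block names and rules are illustrative):

```xml
<prompt>
  <role>Senior engineer implementing spec 002-authentication</role>
  <constraints>
    <!-- rules the model must enforce during generation -->
    <item>Use the framework's built-in auth; no custom crypto</item>
    <item>Every endpoint ships with a feature test</item>
  </constraints>
  <output_format>
    <!-- describes the desired result, not rules to enforce -->
    <item>One migration, one controller, one feature test per endpoint</item>
  </output_format>
  <success_criteria>
    <item>The full test suite passes with zero failures</item>
  </success_criteria>
</prompt>
```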
-
Recently, I experimented with Speckit to generate parts of a platform I’m building: UniContr, a full-stack system for managing university teaching contracts.
First attempt: a single global monolithic top-down specification
My initial idea was to write a specification describing the entire platform in one shot.
The first generated code was basically unusable and required refining the specification repeatedly with ad-hoc corrections.
Second attempt: top-down modular specifications (somewhat resembling a bottom-up workflow)
The second strategy was to start from the architecture and grow the system incrementally.
Instead of one large specification, I split the platform into a sequence of smaller specifications with a technical constitution:
001-unicontr-platform-foundation
002-authentication
003-user-management
004-role-permissions
005-contract-management
006-contract-documents
007-audit-logging
008-notifications
009-search-filtering
010-dashboard-reporting
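For illustration, a later spec in such a sequence might declare its scope and dependencies explicitly (a hypothetical excerpt, not actual Speckit syntax):

```markdown
# 005-contract-management

Depends on: 001-unicontr-platform-foundation, 002-authentication,
            004-role-permissions

In scope:
- CRUD for teaching contracts (model, API, admin UI)
- Contract lifecycle: draft -> submitted -> approved -> signed

Out of scope (covered by later specs):
- Document attachments (006), audit logging (007), notifications (008)
```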
The result was slightly better and more controllable. However, the first generated code is still rarely usable as-is; it usually requires follow-up refinement specifications with specific technical and graphical details.
Ultimately, AI code generation works best with well-structured prompts — I welcome any tips or experiences on using Speckit effectively in production.