
πŸ“‹ Full Specification

πŸ“‹ Context for Full Specification Work​

This prompt contains a complete set of methodologies and standards for creating detailed project specifications. It includes:

  • EARS (Easy Approach to Requirements Syntax) - structured approach to formulating requirements
  • Methodology - step-by-step processes for creating specifications
  • Requirements Phases - detailed description of requirements gathering and analysis stages
  • Design Phases - architectural planning processes
  • Task Phases - task management and prioritization
  • Steering Documents - standards and guiding principles

Use when: You need to create a complete, detailed project specification with maximum elaboration of all aspects.



πŸ“š Source Documents​


πŸ“‹ Standards and Methodological References

This section systematically organizes key industry standards, methodologies, and best practices that form the foundation for a Software Design Document (SDD). Applying these approaches ensures:

  • Clarity and unambiguity of requirements
  • High quality of architecture and documentation
  • Minimization of risks and ambiguities in development
  • Compliance with international quality standards

EARS (Easy Approach to Requirements Syntax)​

A structured approach to formulating requirements that ensures clarity, verifiability, and unambiguity through standardized templates.

Key EARS Templates​

1. WHEN (Event-Driven Requirements)​

Purpose: Describing the system’s response to specific events or triggers
Format: WHEN [event/trigger] THEN [system] SHALL [action]

Examples:

  • WHEN the user clicks the "Save" button THEN the system SHALL validate all form fields
  • WHEN a file upload exceeds 10 MB THEN the system SHALL display an error message
  • WHEN the user’s session expires THEN the system SHALL redirect to the login page

2. IF (State-Driven Requirements)​

Purpose: Describing system behavior under specific conditions
Format: IF [condition] THEN [system] SHALL [action]

Examples:

  • IF the user is not authenticated THEN the system SHALL deny access to protected resources
  • IF the database connection fails THEN the system SHALL display a maintenance message
  • IF the user has administrator privileges THEN the system SHALL display the admin panel

3. WHILE (Continuous Requirements)​

Purpose: Describing persistent system behavior while in a specified state
Format: WHILE [state] THEN [system] SHALL [continuous behavior]

Examples:

  • WHILE a file is uploading THEN the system SHALL display a progress indicator
  • WHILE the user is typing THEN the system SHALL provide real-time validation feedback
  • WHILE the system is processing a request THEN the system SHALL prevent duplicate submissions

4. WHERE (Context-Dependent Requirements)​

Purpose: Constraining a requirement to a specific context or environment
Format: WHERE [context] THEN [system] SHALL [behavior]

Examples:

  • WHERE the user is on a mobile device THEN the system SHALL use a responsive layout
  • WHERE the application runs in production mode THEN the system SHALL log errors to an external service
  • WHERE multiple users edit simultaneously THEN the system SHALL handle conflicts gracefully

EARS Application Guidelines​

| Category | Recommendations | Examples |
| --- | --- | --- |
| Syntax | Use active voice; consistently use the term "system" instead of synonyms | ❌ "Fields must be validated" βœ… "The system SHALL validate fields" |
| Specificity | Avoid vague terms; specify quantitative parameters | ❌ "Fast response" βœ… "Response time under 300 ms" |
| Structure | One requirement = one statement; clear verification criteria | ❌ "The system SHALL validate and save" βœ… Two separate requirements |

EARS Anti-Patterns​

🚫 Compound Requirements
Example: "WHEN the user enters data THEN the system SHALL validate and save"
Solution: Split into two distinct requirements with separate triggers

🚫 Ambiguous Conditions
Example: "WHEN data is entered THEN the system SHALL process"
Solution: Specify exact conditions ("WHEN all mandatory fields are filled")

🚫 Implementation Details
Example: "WHEN the form is submitted THEN the system SHALL use a REST API"
Solution: Focus on the outcome ("...the system SHALL save the data")
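The compound-requirement anti-pattern can be made concrete with a toy sketch: once the statement is split, each requirement maps to its own independently failing check. All names here (FormHandler, the "name" field) are illustrative, not taken from any real system.

```python
class FormHandler:
    """Toy stand-in for 'the system' in the examples above."""

    def __init__(self):
        self.saved = []

    def validate(self, data: dict) -> bool:
        # WHEN the user enters data THEN the system SHALL validate the input
        return bool(data.get("name"))

    def save(self, data: dict) -> None:
        # WHEN validation succeeds THEN the system SHALL save the record
        if not self.validate(data):
            raise ValueError("invalid data must not be saved")
        self.saved.append(data)


# One requirement -> one test: each check can fail on its own,
# which a compound "validate and save" requirement cannot express.
def test_validation_requirement():
    assert not FormHandler().validate({})


def test_save_requirement():
    handler = FormHandler()
    handler.save({"name": "Ada"})
    assert handler.saved == [{"name": "Ada"}]
```

Had the requirement stayed compound, a failure would not tell you whether validation or persistence is broken.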


Industry Standards​

Purpose of the standard: to provide a structured approach to documenting requirements through an SRS (Software Requirements Specification).

Key Characteristics of a High-Quality SRS​

  • Completeness: All functional and non-functional requirements are accounted for
  • Unambiguity: No ambiguous or vague phrasing
  • Verifiability: Each requirement includes verification criteria
  • Consistency: No contradictions between requirements
  • Traceability: Clear linkage to sources and lifecycle phases

Typical SRS structure:

1. Introduction
- Purpose of the document
- Scope
- Definitions, acronyms, and abbreviations
- References to related documents

2. Overall Description
- System context
- User characteristics
- Implementation constraints
- Assumptions and dependencies

3. Specific Requirements
- Functional requirements (FR-001, FR-002...)
- Non-functional requirements (NFR-001...)
- Interfaces
- Data requirements

4. Appendices
- Traceability matrix
- Diagrams
- Sample scenarios

Requirements Specification Format​

Each requirement must include:

  • Unique identifier (e.g., FR-001)
  • Short title
  • Detailed description
  • Source (customer/document)
  • Priority (Must/Should/Could)
  • Acceptance criteria
  • Dependencies
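As a sketch, the record described above can be captured in a structured type so that tooling can enforce the mandatory fields. The `Requirement` and `Priority` names are illustrative, not part of any prescribed schema; priority values mirror the Must/Should/Could scale.

```python
from dataclasses import dataclass, field
from enum import Enum


class Priority(Enum):
    MUST = "Must"
    SHOULD = "Should"
    COULD = "Could"


@dataclass
class Requirement:
    identifier: str            # unique identifier, e.g. "FR-001"
    title: str                 # short title
    description: str           # detailed description
    source: str                # customer / source document
    priority: Priority
    acceptance_criteria: list[str] = field(default_factory=list)
    dependencies: list[str] = field(default_factory=list)  # other identifiers


req = Requirement(
    identifier="FR-001",
    title="Order filtering",
    description="Filter orders by date range.",
    source="Sales department interview",
    priority=Priority.MUST,
    acceptance_criteria=[
        "WHEN a date range is entered THEN the system SHALL display orders within that period",
    ],
    dependencies=["FR-000"],
)
```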

Architectural Principles and Methodologies​

SOLID Principles​

| Principle | Description | Anti-Pattern |
| --- | --- | --- |
| Single Responsibility | Each component should serve only one actor/user role | God Object |
| Open/Closed | Open for extension, closed for modification | Frequent changes to base code |
| Liskov Substitution | Subclasses must be substitutable for their base classes | Violation of inheritance contracts |
| Interface Segregation | Prefer specific interfaces over general-purpose ones | "Fat" interfaces |
| Dependency Inversion | Depend on abstractions, not concrete implementations | High-level modules tightly coupled to low-level implementations |
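To make the Dependency Inversion row concrete, here is a minimal Python sketch in which the high-level service depends only on an abstraction; all class names (`Storage`, `ReportService`) are hypothetical.

```python
from typing import Protocol


class Storage(Protocol):
    """Abstraction the high-level module depends on."""
    def load(self, key: str) -> str: ...


class InMemoryStorage:
    """One concrete implementation; a database-backed one would also fit."""
    def __init__(self, data: dict[str, str]):
        self._data = data

    def load(self, key: str) -> str:
        return self._data[key]


class ReportService:
    # Depends on the Storage abstraction, so any implementation
    # can be injected without changing this class.
    def __init__(self, storage: Storage):
        self._storage = storage

    def report(self, key: str) -> str:
        return f"report: {self._storage.load(key)}"


service = ReportService(InMemoryStorage({"q1": "42 orders"}))
```

Swapping the storage backend now requires no change to `ReportService`, which is exactly what the anti-pattern column warns against losing.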

Domain-Driven Design (DDD)​

Key Concepts:

  • Ubiquitous Language: Shared terminology between business analysts and developers
  • Bounded Contexts: Clear separation of domain areas
  • Aggregates: Grouping of objects within a transactional boundary
  • Domain Events: Recording significant business occurrences

Implementation Recommendations:

  1. Create a domain glossary
  2. Identify the Core Domain
  3. Apply design patterns (Entities, Value Objects, Repositories)
  4. Implement an event bus for inter-context communication
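Two of the tactical patterns from step 3 can be sketched in a few lines. `Money` and `OrderPlaced` are illustrative names, not part of any prescribed model.

```python
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass(frozen=True)   # Value Object: immutable, compared by value
class Money:
    amount: int           # minor units, to avoid float rounding
    currency: str


@dataclass(frozen=True)   # Domain Event: records a significant business occurrence
class OrderPlaced:
    order_id: str
    total: Money
    occurred_at: datetime


event = OrderPlaced("ORD-17", Money(9900, "USD"), datetime.now(timezone.utc))
```

Freezing the dataclasses gives value semantics: two `Money(9900, "USD")` instances compare equal, and neither a value object nor a recorded event can be mutated after creation.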

Requirements Elicitation Methodologies​

User Stories​

Format:
As a [role], I want [functionality] so that [business value]

Quality Criteria (INVEST):

  • Independent: Does not depend on other stories
  • Negotiable: Open to refinement and discussion
  • Valuable: Delivers tangible value
  • Estimable: Can be sized or estimated
  • Small: Fits within a single iteration
  • Testable: Has clear acceptance criteria

Example:
As a sales manager, I want to filter orders by date so that I can analyze weekly revenue.
Acceptance Criteria:

  • WHEN a date range is entered THEN the system SHALL display orders within that period
  • WHEN an invalid date is selected THEN the system SHALL show a helpful hint
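The two acceptance criteria above translate almost mechanically into executable checks. This sketch assumes a toy in-memory order list and a hypothetical `filter_orders` function; the "helpful hint" is modeled here as a raised error message.

```python
from datetime import date

# Illustrative data; a real system would query its order store.
ORDERS = [
    {"id": 1, "placed": date(2024, 5, 6)},
    {"id": 2, "placed": date(2024, 5, 13)},
]


def filter_orders(start: date, end: date) -> list[dict]:
    # WHEN an invalid date is selected THEN the system SHALL show a helpful hint
    if start > end:
        raise ValueError("Start date must not be after end date")
    # WHEN a date range is entered THEN the system SHALL display orders within that period
    return [o for o in ORDERS if start <= o["placed"] <= end]
```

Each EARS criterion appears as a comment next to the behavior that satisfies it, preserving traceability from story to code.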

Use Cases​

Standard Structure:

1. Name
2. Actors
3. Preconditions
4. Main Success Scenario (step-by-step sequence)
5. Alternative Flows
6. Postconditions
7. Exceptions

Recommendations:

  • Limit the main flow to a maximum of 9 steps
  • Number alternative flows (e.g., 3a, 3b)
  • Each step must include an actor action and the system’s response

Documentation Standards​

Technical Documentation Requirements​

| Element | Recommendations | Anti-Patterns |
| --- | --- | --- |
| Structure | Logical sequence; consistent heading hierarchy | Mixing levels of detail |
| Style | Active voice; precise phrasing | Passive constructions ("must be done") |
| Terminology | Glossary at the beginning; consistent terminology | Synonymy within the same document |
| Visualization | Diagrams for complex processes; data schemas | Excessive graphics without explanations |

Documentation Types and Their Standards​

1. API Documentation​

Required Elements:

  • Description of all endpoints with HTTP methods
  • Request/response examples in JSON/YAML
  • Error codes with explanations
  • Authentication parameters
  • Rate limits

Recommendation: Auto-generate using Swagger/OpenAPI
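For illustration, the required elements can be captured in a minimal hand-written OpenAPI 3 skeleton (in practice this would be auto-generated, as recommended). The `/orders` endpoint, its parameters, and its response codes are hypothetical.

```python
# Minimal OpenAPI 3 document as a plain dict: one endpoint with its
# HTTP method, query parameters, and error codes with explanations.
openapi_doc = {
    "openapi": "3.0.3",
    "info": {"title": "Orders API", "version": "1.0.0"},
    "paths": {
        "/orders": {
            "get": {
                "summary": "List orders in a date range",
                "parameters": [
                    {"name": "from", "in": "query",
                     "schema": {"type": "string", "format": "date"}},
                    {"name": "to", "in": "query",
                     "schema": {"type": "string", "format": "date"}},
                ],
                "responses": {
                    "200": {"description": "Matching orders"},
                    "400": {"description": "Invalid date range"},        # error code with explanation
                    "401": {"description": "Missing or expired token"},  # authentication
                    "429": {"description": "Rate limit exceeded"},       # rate limits
                },
            }
        }
    },
}
```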

2. User Documentation​

Structure:

- Quick Start: 5-minute guide
- Core Scenarios: Step-by-step instructions
- Advanced Features: In-depth exploration
- FAQ: Solutions to common issues
- Community links

3. Architectural Documentation​

Required Sections:

  • Context diagram (C4 Model Level 1)
  • Container diagram (C4 Model Level 2)
  • Key architectural decisions (ADR)
  • Technology matrix
  • Architecture evolution roadmap

These standards and methodologies should be adapted to the specific needs of each project, maintaining a balance between formality and practical applicability. Regular documentation reviews involving all stakeholders ensure its relevance and quality throughout the project lifecycle.

πŸš€ Specification-Driven Development Methodology

Specification-driven development is a systematic approach to developing software features that emphasizes thorough planning, clear documentation, and structured implementation. This methodology transforms vague feature ideas into clearly defined, implementable solutions through a three-phase process that ensures quality, maintainability, and successful delivery.

Core Philosophy​

Clarity Before Code​

The fundamental principle of specification-driven development is that clarity of thought and purpose must precede implementation. This approach requires investing time upfront to thoroughly understand requirements, design solutions, and plan implementation before writing a single line of code.

When a team dedicates time to upfront planning:

  • Uncertainty is reduced – developers clearly understand exactly what needs to be built and why
  • Rework is minimized – the likelihood of discovering critical issues late in the process decreases significantly
  • Implementation accuracy improves – the probability of delivering precisely the solution the business needs increases

This principle is especially critical in today’s environment, where software requirements grow increasingly complex while delivery timelines become tighter. The investment in initial planning pays dividends many times over during implementation and subsequent maintenance phases.

Iterative Refinement​

Each phase of the specification process is designed for iterative refinement rather than a one-time pass. Unlike a linear progression from idea to implementation, the methodology encourages continuous revisiting of earlier stages to adjust and clarify.

This approach offers several key advantages:

  • Early problem detection – technical complexities, requirement mismatches, or unforeseen dependencies are identified during the design phase when they are cheapest to fix
  • Gradual confidence building – each iteration deepens understanding of the problem and increases confidence in the chosen direction
  • Flexibility amid uncertainty – allows course correction as new information emerges without sacrificing a structured approach

Iterativity manifests at all process levelsβ€”from refining requirements through exploring technical alternatives to detailing the implementation plan. This creates a solid foundation for subsequent work.

Documentation as Communication​

In this methodology, specifications serve not merely as formal planning documents but as key communication instruments that fulfill several critically important functions:

  • Stakeholder alignment – documented requirements and designs become a common language for developers, managers, customers, and other project participants
  • Decision rationale preservation – captures not only what was decided but why, which is crucial when making future changes
  • Context provision for future maintenance – new team members can quickly understand the system through documented decisions
  • Creation of long-term assets – well-written specifications retain value even after the initial implementation is complete

This philosophy treats documentation not as a necessary evil but as an investment in the project’s future, yielding returns through improved understanding, reduced risk, and simplified maintenance.


Benefits of Specification-Driven Development​

Reduced Risk and Uncertainty​

Through thorough planning before implementation, specification-driven development significantly reduces the risk of building incorrect functionality or encountering unexpected technical issues. The systematic approach helps identify and resolve problems early in the process when fixes require minimal effort.

Concrete manifestations of this benefit include:

  • Preventing the delivery of features that don’t match what was requested
  • Early identification of contradictory or unrealistic requirements
  • Discovery of technical constraints before coding begins
  • Elimination of stakeholder misunderstandings at an early stage

Improved Quality and Maintainability​

Features developed through the specification process typically demonstrate higher quality and are easier to maintain. This stems from several factors:

  • Clear requirements establish a foundation for more comprehensive testing and validation
  • Thoughtful design leads to better architecture and separation of responsibilities
  • Proactive error-handling planning reduces the number of bugs in production
  • Documented decision rationale facilitates future modifications and enhancements

Collectively, these aspects result in more reliable, testable, and modifiable codeβ€”particularly valuable for long-term projects.

Enhanced Collaboration​

Specifications provide a common language and shared understanding among all project participants:

  • Developers gain a clear picture of what needs to be implemented
  • Testers can prepare test cases in parallel with development
  • Project managers see the complete picture of requirements and complexity
  • Customers can confirm their needs have been correctly understood

This improved communication reduces misunderstandings, minimizes rework, and enables more effective collaboration among all stakeholders throughout the project lifecycle.

Better Estimation and Planning​

The detailed planning inherent in specification-driven development enables more accurate estimation of time and resource requirements. When teams invest time in analyzing requirements and designing before implementation begins, project managers and developers can make better-informed decisions:

  • More accurate effort estimates – understanding full complexity enables realistic timelines
  • Efficient resource allocation – knowledge of dependencies helps optimally assign tasks
  • Transparent expectation management – customers gain a clear understanding of what will be delivered and when
  • Flexible scope management – enables prioritization based on clearly defined requirements

This is especially valuable under resource constraints and tight deadlines, where every minute of planning saves hours of implementation.

Knowledge Preservation​

One of the most undervalued benefits is the ability of specifications to serve as living documentation that preserves critically important project knowledge:

  • Design rationale – why specific architectural decisions were made
  • Requirement context – how business needs translated into technical solutions
  • Change history – how and why requirements evolved over time
  • Warnings and pitfalls – known issues and recommendations for avoiding them

This knowledge remains accessible long after the original developers have moved on to other projects, significantly reducing "knowledge debt" and simplifying project handover to new team members.


Comparison with Other Development Methodologies​

Traditional Waterfall Development​

Similarities:

  • Both approaches emphasize the importance of upfront planning and documentation
  • Both follow a sequential, phase-based development approach

Key Differences:

  • Iterativity within phases: Specification-driven development encourages refinement and validation at each stage, whereas the waterfall model assumes strictly sequential progression without returns
  • Living documents: Specifications are designed to evolve as working documents, while waterfall documentation is often frozen after approval
  • Scale of application: The methodology is optimized for feature-level development rather than entire projects, making it more flexible
  • Integration with modern practices: Specification-driven development accounts for working with AI tools and contemporary agile practices

Agile Development​

Similarities:

  • Both approaches value working software and customer collaboration
  • Both embrace iterative refinement and feedback as integral parts of the process

Key Differences:

  • Depth of upfront design: Specification-driven development places greater emphasis on thorough design before implementation, whereas classic Agile often defers design until implementation time
  • Documentation structure: The methodology prescribes more structured documentation requirements, while Agile traditionally focuses on "working software over comprehensive documentation"
  • Compatibility: Specification-driven development is designed to work within agile frameworks rather than replace them, making it complementary to Agile rather than an alternative
  • Scale of application: Can be applied to individual features within agile sprints, providing structure where needed

Test-Driven Development (TDD)​

Similarities:

  • Both approaches emphasize defining success criteria before implementation
  • Both use iterative cycles (red-green-refactor in TDD corresponds to requirements-design-implementation in specification-driven development)

Key Differences:

  • Level of abstraction: Specification-driven development operates at a higher level, covering not just individual modules but also system interactions
  • Scope of coverage: Includes business requirements and system design, not just test cases
  • Practice integration: Can incorporate TDD practices within its implementation phase as one of many tools
  • Context: Provides broader context encompassing not only technical aspects but also business goals and user needs

Design-First Development​

Similarities:

  • Both approaches prioritize design and planning before actual coding
  • Both create detailed technical specifications before implementation

Key Differences:

  • Requirements gathering: Specification-driven development includes an explicit, structured requirements-gathering phase using techniques like EARS, whereas Design-First often assumes requirements are already defined
  • Task planning: Provides a more structured approach to task decomposition and implementation planning
  • AI optimization: Specifically designed with AI-assisted development workflows in mind
  • Requirements standardization: Incorporates specific methodologies like EARS (Easy Approach to Requirements Syntax) for creating clear, testable requirements

The Three-Phase Approach​

Phase 1: Requirements Gathering​

Objective: Transform vague feature ideas into clear, testable requirements that can be unambiguously understood by all stakeholders.

Key Activities:

  • Capturing user stories that express not only what needs to be done but also why, focusing on value to the user or business
  • Defining acceptance criteria using the EARS (Easy Approach to Requirements Syntax) methodology, which helps create clear, unambiguous, and testable requirements
  • Identifying edge cases and constraints, including non-functional requirements such as performance, security, and scalability
  • Validating completeness and feasibility through checking for contradictions, gaps, and technical realism

Benefits:

  • Ensures shared understanding among all stakeholders about what will be built
  • Provides clear success criteria for subsequent implementation and testing phases
  • Reduces the risk of scope creep and functionality drift during development
  • Creates a foundation for test development and result validation even before coding begins

Phase 2: Design Documentation​

Objective: Create a comprehensive technical implementation plan that defines architectural decisions, system structure, and key interactions.

Key Activities:

  • Exploring technical approaches and constraints, including analysis of possible solution options and their comparison against criteria
  • Defining system architecture and component interactions, with emphasis on interfaces and responsibility boundaries
  • Specifying data models and interfaces, including formal API definitions, data schemas, and communication protocols
  • Planning error-handling, testing, and monitoring strategies to ensure system reliability and maintainability

Benefits:

  • Identifies potential technical problems and complexities before coding begins, when fixes are cheaper
  • Enables more accurate effort and resource estimation through deep problem understanding
  • Provides a clear roadmap for implementation, reducing cognitive load on developers
  • Documents design decisions and their rationale, which is critical for future maintenance

Phase 3: Task Planning​

Objective: Break down the design into executable, sequential implementation steps that can be distributed among developers and tracked throughout the cycle.

Key Activities:

  • Transforming design elements into concrete coding tasks with clear inputs and outputs
  • Sequencing tasks to ensure incremental progress and enable early validation
  • Defining clear objectives and completion criteria for each task to enable objective progress assessment
  • Linking to original requirements to ensure traceability and confirmation that all functionality aspects are covered

Benefits:

  • Makes large features manageable through decomposition into logical, independent parts
  • Enables parallel work by multiple developers with minimal conflicts
  • Simplifies progress tracking and early bottleneck identification
  • Reduces cognitive load on developers by allowing focus on one task at a time
  • Facilitates code review and quality assurance through clear responsibility separation

Lightweight Specifications​

Principles of Lightweight Specifications​

Lightweight specifications represent an adapted approach to the methodology that preserves its key benefits when working with small features, bug fixes, and rapid iterations. The primary goal is to ensure sufficient planning without excessive bureaucracy, maintaining balance between thorough preparation and rapid implementation.

Key principles:

  • Proportional effort – documentation volume corresponds to task complexity
  • Minimalism – documentation is limited to what’s necessary for understanding and verification
  • Flexibility – ability to expand specifications when unexpected complexity emerges
  • Practicality – focus on what genuinely helps implementation rather than formal requirements

Specification Complexity Decision Tree​

```mermaid
flowchart TD
    A[New work item] --> B{"Effort > 1 day?"}
    B -->|No| C[Micro-specification]
    B -->|Yes| D{"Multiple components?"}

    D -->|No| E{"New patterns/technologies?"}
    E -->|No| F[Rapid specification]
    E -->|Yes| G[Standard specification]

    D -->|Yes| G

    C --> K[Minimal documentation]
    F --> L[Streamlined process]
    G --> M[Full three-phase process]
```
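The decision tree can also be expressed as a small function for use in tooling or checklists. The thresholds follow the flowchart; the parameter names and returned labels are assumptions for illustration.

```python
def specification_level(effort_days: float,
                        multiple_components: bool,
                        new_patterns: bool) -> str:
    """Pick a specification level per the decision tree above."""
    if effort_days <= 1:
        return "micro"          # minimal documentation
    if multiple_components or new_patterns:
        return "standard"       # full three-phase process
    return "rapid"              # streamlined process
```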

Types of Lightweight Specifications​

Micro-Specification​

Applied to: Bug fixes, text changes, configuration updates, minor UI changes (less than 1 day effort).

Characteristics:

  • Minimal documentation focusing on the essence of changes, often as comments in the task tracking system
  • Brief rationale and clear acceptance criteria sufficient for verification
  • No formal design or detailed task planning – decisions are made directly during implementation
  • Documentation limited to the minimum necessary for understanding and verification, often including only what and why without detailed how

Example: Fix typo in welcome message text

Rapid Specification​

Applied to: Small features, API endpoint additions, database schema changes, component modifications (1-3 days effort).

Characteristics:

  • Simplified requirements gathering focusing on key user stories and acceptance criteria
  • Direct transformation of requirements into implementation tasks without a separate formal design phase – design is integrated into task planning
  • Clear acceptance criteria and definition of "done" for each task
  • Maintained traceability between requirements and tasks through explicit links

Example: As a user, I want to see my last login time

Dynamic Specification Level Adaptation​

Indicators for Elevating Specification Level​

For micro-specifications:

  • Implementation takes significantly longer than initially estimated
  • Non-obvious dependencies between components are discovered
  • Complex edge cases emerge that weren’t considered in initial criteria
  • Coordination with other teams or systems becomes necessary

For rapid specifications:

  • Complex design questions arise during implementation requiring serious analysis
  • Hidden dependencies on other systems or components are discovered
  • Significant implications for performance, security, or scalability emerge
  • Additional stakeholder alignment is needed due to scope expansion

Adaptation Process​

  1. Current state assessment: Analyze reasons for increased complexity and identify specific areas requiring additional elaboration

  2. Identify missing elements: Determine which aspects need additional specification – requirements, design, or task details

  3. Specification enhancement: Add necessary elements focused on solving identified problems without completely rewriting existing documentation

  4. Change alignment: Discuss the expanded specification with stakeholders and obtain confirmation

  5. Implementation continuation: Proceed with coding using the enhanced specification as guidance

This flexible approach ensures balance between necessary structure and operational speed, enabling specification efforts to scale according to actual project needs while preserving the core principles and benefits of specification-driven development.


Steering Documents​

Steering documents are project working guidelines.

They contain project-specific standards and conventions that help teams work more cohesively. These documents aim to "guide" developers when working with the project and address two problems:

  • Documenting information not related to specific specifications (e.g., GitFlow decisions)
  • Documenting information that repeats across specifications (e.g., global testing strategy, technology stack, or code style)

Core principles:

  • The list of steering documents is individual for each stack, team, and solution
  • The list of steering documents may change during solution evolution
  • Different steering documents are relevant at different stages of working with specifications
  • Steering documents constitute the project’s shared context
  • It’s recommended to create separate tasks in specifications for maintaining steering document currency
  • It’s recommended to reference specific steering documents in specification designs
  • It’s recommended to create multiple atomic steering documents rather than a few large ones
  • You can have steering documents that apply to the entire solution as well as those specific to particular components or modules

Conclusion​

Specification-driven development represents a balanced approach that combines the benefits of thorough planning with the flexibility required for modern software development. This methodology doesn’t require rigid adherence to formal processes but provides a structure that can be adapted to specific project needs.

By following the three-phase methodology and applying lightweight specifications where appropriate, development teams can achieve an optimal balance between preparation and implementation. This enables them to:

  • Create higher-quality software with fewer bugs
  • Reduce project risks through early problem detection
  • Improve communication among all process participants
  • Preserve project knowledge for long-term maintenance

The methodology is especially effective when combined with modern AI-assisted development tools, as the structured approach to requirements, design, and task planning provides the clear context that AI systems need for maximum effectiveness. AI assistants can better understand tasks and propose more accurate solutions when requirements and designs are clearly defined.

The adaptive nature of lightweight specifications makes the methodology universalβ€”it can be applied in various contexts, from minor bug fixes to large projects, ensuring an optimal balance between preparation and implementation. This makes specification-driven development a powerful tool in the modern developer’s arsenal, helping create better software more efficiently and with lower risk.

πŸ“ Requirements Phase Documentation

The requirements phase forms the foundation of specification-driven development, where vague feature ideas are transformed into clear, testable requirements using the EARS (Easy Approach to Requirements Syntax) format. This phase ensures shared understanding among all stakeholders about what needs to be built before proceeding to design and implementation stages.

Purpose and Objectives​

The requirements phase serves to:

  • Transform vague ideas into specific, measurable, and testable requirements
  • Establish clear acceptance criteria for evaluating feature success
  • Create shared understanding among all project participants
  • Provide a basis for decision-making during design and implementation phases
  • Enable effective testing and validation strategies

Step-by-Step Process​

Step 1: Initial Requirements Generation​

Objective: Create an initial draft of requirements based on a feature idea

Process:

  1. Analyze the feature idea: Break down the core concept into user scenarios
  2. Identify user roles: Determine all participants interacting with the feature
  3. Formulate user stories: Describe in the format "As a [role], I want [feature] so that [benefit]"
  4. Translate into EARS requirements: Convert user stories into testable acceptance criteria

Key Principles:

  • Start with user experience, not technical implementation
  • Focus on observable and measurable system behavior
  • Systematically consider edge cases and error scenarios
  • Think about the complete user journey, not isolated steps

Step 2: Structuring Requirements in EARS Format​

Objective: Formalize requirements in a standardized, testable format

Document Structure:

```markdown
### Requirement 1
**User Story:** As a [role], I want [feature] so that [benefit]

#### Acceptance Criteria
1. WHEN [event] THEN the system SHALL [response]
2. IF [precondition] THEN the system SHALL [response]
3. WHEN [event] AND [condition] THEN the system SHALL [response]

[Additional requirements...]
```

Core EARS Templates​

1. Simple Event-Response (most common template)
Used for direct system responses to user actions

Example:
WHEN the user clicks the "Submit" button THEN the system SHALL validate form data

2. Conditional Behavior
Applied when an action depends on the current system state

Example:
IF the user is authenticated THEN the system SHALL display the user dashboard

3. Complex Conditions
Combines multiple conditions using logical operators

Example:
WHEN the user submits the form AND all required fields are filled THEN the system SHALL process the submission

4. Error Handling
Describes system behavior in exceptional situations

Example:
WHEN the user submits invalid data THEN the system SHALL display specific error messages

EARS Application Guidelines​

  • WHEN: Always start with a triggering event (user action or system event)
  • IF: Use to describe preconditions that must be true
  • THEN: Clearly define expected system behavior (always use "SHALL")
  • AND/OR: Use to combine conditions, but avoid excessive complexity
  • SHALL: Use consistently for mandatory requirements (do not mix with "MAY" or "SHOULD")

Formulation Tips​

  • Avoid technical implementation details ("the system uses Redis")
  • Do not use vague terms ("fast," "convenient")
  • Each requirement must be independent and testable
  • Verify requirement completeness: happy path, edge cases, errors
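These formulation rules can be checked mechanically. Below is a minimal sketch of a requirement linter; the vague-term list and messages are illustrative, and it enforces only the surface rules above, not semantic quality:

```python
import re

# Minimal EARS requirement linter. The vague-term list is illustrative,
# not exhaustive, and matching is a plain substring check.
VAGUE_TERMS = ("fast", "convenient", "intuitive", "user-friendly", "quickly")

def lint_requirement(text: str) -> list[str]:
    """Return a list of problems found in one acceptance criterion."""
    problems = []
    if not re.match(r"^(WHEN|IF)\b", text):
        problems.append("should start with a WHEN or IF trigger")
    if "THEN" not in text:
        problems.append("missing THEN clause")
    if "SHALL" not in text:
        problems.append('mandatory requirements use "SHALL"')
    if re.search(r"\b(MAY|SHOULD)\b", text):
        problems.append('do not mix "SHALL" with "MAY"/"SHOULD"')
    lowered = text.lower()
    problems += [f'vague term: "{t}"' for t in VAGUE_TERMS if t in lowered]
    return problems

good = 'WHEN the user clicks the "Submit" button THEN the system SHALL validate form data'
print(lint_requirement(good))  # []
print(lint_requirement("The system must be fast and convenient"))
```

Such a check can run in CI against the requirements document so that format drift is caught before review.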

Step 3: Requirements Validation​

Validation Criteria:

  • Each requirement is testable and measurable
  • Requirements cover normal, edge, and error scenarios
  • User stories provide clear business value
  • Acceptance criteria are specific and unambiguous
  • Requirements are independent and non-conflicting
  • All user roles and their interactions are accounted for

Verification Questions:

  • How will we verify this requirement is fulfilled?
  • Is the expected behavior clearly defined?
  • What assumptions are embedded in this requirement?
  • What happens upon failure or in exceptional situations?
  • Are all user scenarios covered?

Step 4: Iterative Refinement​

Refinement Process:

  1. Stakeholder review: Gather feedback on completeness and accuracy
  2. Gap identification: Find missing scenarios or ambiguous wording
  3. Ambiguity resolution: Eliminate vague or conflicting requirements
  4. Add missing details: Include edge cases and error handling
  5. Business value verification: Confirm each requirement serves a specific purpose

Recommendations:

  • Implement one change per iteration to track modifications
  • Record approval from all stakeholders after changes
  • Document rationale for key decisions
  • Maintain an appropriate level of detail: specific enough for clarity, but not at implementation level

Final Requirements Checklist​

Completeness​

  • All user roles and scenarios are accounted for
  • Normal, edge, and error cases are covered
  • All interactions have defined system responses
  • Business rules and constraints are explicitly documented

Clarity​

  • Requirements use precise, unambiguous language
  • Technical jargon is either absent or clearly defined
  • Wording maintains a user-centric perspective
  • Expected behavior is concrete and measurable

Consistency​

  • EARS format is applied consistently throughout the document
  • Terminology is uniform across the entire document
  • Requirements do not contradict each other
  • Similar scenarios follow a unified template

Testability​

  • Each requirement can be verified through testing
  • Success criteria are observable and quantitatively measurable
  • Both input conditions and expected results are specified
  • Wording is specific enough to develop test cases

Examples of Well-Formulated Requirements​

Example 1: User Registration Feature​

User Story: As a new user, I want to create an account so that I can access personalized features.

Acceptance Criteria:

  1. WHEN the user provides a valid email and password THEN the system SHALL create a new account
  2. WHEN the user provides an existing email THEN the system SHALL display the error "Email already registered"
  3. WHEN the user provides an email in an invalid format THEN the system SHALL display the error "Invalid email format"
  4. WHEN the user provides a password shorter than 8 characters THEN the system SHALL display the error "Password must be at least 8 characters long"
  5. WHEN account creation is successful THEN the system SHALL send a confirmation email within 30 seconds
  6. WHEN account creation is successful THEN the system SHALL redirect to the welcome page

Example 2: Data Validation Feature​

User Story: As a user, I want my input validated in real time so that I can avoid submitting incorrect information.

Acceptance Criteria:

  1. WHEN the user enters data into a required field THEN the system SHALL remove the error highlight for that field
  2. WHEN the user submits a form with empty required fields THEN the system SHALL highlight missing fields in red
  3. WHEN the user enters data that does not match the required format THEN the system SHALL display format requirements below the input field
  4. WHEN all fields pass validation THEN the system SHALL enable the submit button
  5. IF validation fails THEN the system SHALL keep the submit button disabled
  6. WHEN the user hovers over the tooltip icon THEN the system SHALL display an example of correct format

Common Mistakes and How to Avoid Them​

Mistake 1: Vague Wording​

Problem:
"The system must be fast and convenient"

Consequences:
Impossible to verify fulfillment; multiple interpretations

How to Fix:
"WHEN the user requests data THEN the system SHALL display the result within 2 seconds for 95% of requests"
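A requirement phrased this way is directly measurable. A sketch using the nearest-rank percentile, with made-up latency samples:

```python
import math

# Verifying "result within 2 seconds for 95% of requests" against
# measured response times. The latency samples are made up.
def percentile(samples: list[float], pct: float) -> float:
    """Nearest-rank percentile: smallest value covering pct% of the sample."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(pct / 100 * len(ordered)))
    return ordered[rank - 1]

latencies = [0.4, 0.6, 0.7, 0.9, 1.1, 1.2, 1.3, 1.5, 1.8, 2.6]  # seconds
p95 = percentile(latencies, 95)
print(f"p95 = {p95:.1f}s, requirement met: {p95 <= 2.0}")
# p95 = 2.6s, requirement met: False  (one slow outlier breaks the target)
```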

Mistake 2: Including Implementation Details​

Problem:
"The system must use Redis for data caching"

Consequences:
Limits implementation options; focuses on technology rather than outcome

How to Fix:
"WHEN the user repeatedly requests frequently used data THEN the system SHALL return the result within 500 ms"

Mistake 3: Missing Error Handling​

Problem:
Only describing the "happy path" without edge cases

Consequences:
Functionality gaps; unexpected failures during operation

How to Fix:
For each main scenario, add 2–3 error-handling and boundary-condition scenarios

Mistake 4: Untestable Requirements​

Problem:
"The interface must be intuitive"

Consequences:
Impossible to confirm requirement fulfillment

How to Fix:
"WHEN a new user first accesses the system THEN the system SHALL provide an onboarding tour enabling completion of core actions in no more than 3 clicks"

Document Template​

# Requirements for "[Brief Feature Name]"

**Business Objective:** [Description of the feature's business goal and its value to the product/customer]
**Scope:** [Boundaries of functionality: what is included/excluded]
**Related Documents:** [Links to technical specifications, user research, etc.]

---

## [Requirement/Feature Name]

**User Story:**
As a [user role], I want [feature description] so that [business value/benefit]

### Acceptance Criteria

*(Select the appropriate EARS template and fill in according to instructions)*

1. **[Simple Event-Response]**
WHEN [specific event/user action] THEN the system SHALL [observable result]
*[Example: WHEN the user clicks the "Save" button THEN the system SHALL save changes to the database]*

2. **[Conditional Behavior]**
IF [precondition/system state] THEN the system SHALL [observable result]
*[Example: IF the cart contains items THEN the system SHALL display the total amount]*

3. **[Complex Condition]**
WHEN [event] AND [additional condition] THEN the system SHALL [observable result]
*[Example: WHEN the user enters a password AND password length < 8 characters THEN the system SHALL display an error]*

4. **[Error Handling]**
WHEN [exceptional situation] THEN the system SHALL [error-handling action]
*[Example: WHEN the server connection is lost THEN the system SHALL display the notification "Check your internet connection"]*

*[Repeat this structure for each independent requirement]*

---

## Requirements Quality Checklist

*(Completed after finalizing the document)*

| Criterion | Completed | Comment |
| ---------------------------------------------------------------------- | --------- | ------- |
| All requirements are testable and measurable | ☐ | |
| Normal, edge, and error scenarios are covered | ☐ | |
| No technical implementation details included | ☐ | |
| No vague wording ("fast," "convenient") | ☐ | |
| All requirements are independent and non-conflicting | ☐ | |
| Input conditions and expected results specified for each requirement | ☐ | |

🎨 Design Phase Documentation

The design phase is a critically important stage in software development, transforming approved requirements into a detailed technical implementation plan. This document serves as the official guide for the entire team, providing a single point of reference for architects, developers, testers, and stakeholders.

Key Value of the Design Phase:

  • Creates a "bridge" between business requirements and technical implementation
  • Identifies and resolves potential issues before coding begins
  • Ensures transparency and alignment in decision-making
  • Serves as the foundation for effort estimation and work planning
  • Reduces risks of misunderstandings and rework during implementation

Purpose and Objectives​

Create a technically sound, implementable, and verifiable design that fully aligns with approved requirements and is ready to be broken down into implementation tasks.

Requirements Transformation

  • Translate functional requirements into architectural components
  • Convert non-functional requirements (performance, security) into technical solutions

Conduct Targeted Research

  • Analyze technological options for critical components
  • Study best practices and design patterns
  • Assess risks and constraints of chosen solutions

Define System Structure

  • Identify key modules and their interactions
  • Design interfaces between components
  • Develop data models and their transformations

Plan for Reliability

  • Design error handling and recovery mechanisms
  • Define testing strategies across all levels
  • Ensure compliance with security requirements

Document Decisions

  • Record all architectural decisions with justification
  • Establish traceability between requirements and design elements
  • Prepare materials for handover to developers

Principles of Research Integration​

Defining the Research Scope:

  • Focus on critical decisions. Investigate only those aspects that directly impact architecture or involve high uncertainty. For example, choosing between synchronous and asynchronous payment processing requires in-depth analysis, whereas selecting a UI color scheme can rely on existing guidelines.
  • Time-boxed efforts. Set clear time limits for research activities.
  • Actionable insights. Collect and retain only information that directly influences decision-making.

Documenting Research:

  • Contextual linkage. Each finding must explicitly reference a specific requirement or problem.
  • Source references. Include direct links to documentation, articles, and code examples.
  • Integration into design. Do not store research in isolation. Embed key findings directly into relevant sections of the architectural document.
  • Decision rationale. For every decision made, specify:
    • Alternatives considered
    • Evaluation criteria
    • Reasons for selecting the current option
    • Potential trade-offs

Step-by-Step Process​

Step 1: Requirements Analysis and Research Planning​

Objective: Gain deep understanding of requirements, identify areas needing research, and clearly define scope and boundaries.

This step may be skipped if no new technologies or architectural patterns are planned for adoption.

Process:

  1. Thorough requirements review
  2. Identify research areas
  3. Plan research activities. For each area, define the research objective and success criteria
  4. Establish research boundaries

What to Document:

  • Project context and alignment with business goals
  • List of research topics with justification for prioritization
  • Expected outcomes and their architectural impact
  • Research boundaries and completion criteria

Step 2: Conducting Research and Building Context​

Objective: Gather sufficient information to make informed architectural decisions while avoiding excessive analysis.

This step may be skipped if no new technologies or architectural patterns are planned for adoption.

Process:

  1. Information gathering

    • Review official documentation, technical blogs, and case studies
    • Conduct experimental tests (proof-of-concept) for critical features
    • Consult internal or external experts
  2. Option evaluation

    • For each option, assess:
      • Technical characteristics
      • Required effort
      • Potential risks
      • Alignment with non-functional requirements
  3. Document research outcomes

    • Create a concise summary focused on decisions
    • Provide source references for verification
    • Note uncertainties and need for further research
  4. Make preliminary decisions

    • Formulate recommendations based on research
    • Specify rationale and potential trade-offs

What to Document:

  • Key findings linked to specific requirements
  • Comparison of alternatives with criteria-based evaluation
  • Justification for selected technologies and patterns
  • Source references and verification materials
  • Uncertainties and recommendations for resolution

Step 3: Creating System Architecture​

Objective: Define a high-level solution structure that fully satisfies requirements and is ready for detailed elaboration.

Architecture Components:

  1. System Overview

    • Component diagram (C4 model recommended)
    • Brief description of primary data flows
    • Integration points with existing infrastructure
  2. Component Architecture

    • List of core modules with purpose descriptions
    • Responsibility boundaries for each component
    • Component interactions (synchronous/asynchronous)
  3. Data Flow

    • Description of key entity lifecycles
    • Data storage locations at each stage
    • Data transformations between components
  4. Integration Points

    • External systems and APIs with version specifications
    • Communication protocols and data formats
    • Strategies for handling external system unavailability
  5. Technology Stack

    • Explicit justification for each technology choice
    • Versions of tools used
    • Migration plan for legacy components

What to Document:

  • Architecture diagram with explanations
  • Justification for chosen architectural style (microservices, monolith, etc.)
  • How the architecture satisfies functional and non-functional requirements
  • Risks of architectural decisions and mitigation strategies

Important: Describe only components necessary to fulfill current requirements. Avoid designing "for the future" without explicit requirements.


Step 4: Defining Components and Interfaces​

Objective: Detail the internal system structure and component interaction mechanisms to ensure readiness for implementation.

Component Design Elements:

  1. Component responsibilities. Clear description of what each component does
  2. Interface definitions
  3. Dependency relationships
  4. Configuration and setup

What to Document:

  • Separate subsection for each component with complete description
  • Example requests and responses for all interfaces
  • Sequence diagrams for key scenarios
  • Component-level error handling rules

Step 5: Data Model Design​

Objective: Define data structures and processing rules that ensure integrity and compliance with business rules.

Data Model Elements:

  1. Entity definitions: data and responsibilities
  2. Entity relationships
  3. Validation rules and business logic for entities
  4. Storage strategies

What to Document:

  • ERD diagram with explanations
  • Complete field descriptions for each entity
  • Example data for key scenarios
  • Data migration strategies when schema changes occur

Step 6: Planning Error Handling and Edge Cases​

Objective: Ensure system reliability during failures by defining clear strategies for handling all possible scenarios.

Error Handling Design:

  1. Categorize errors. System errors, data validation errors, etc.
  2. Error response strategies
  3. User experience in error scenarios
  4. Recovery mechanisms
  5. Monitoring mechanisms

What to Document:

  • Error matrix for each key operation with handling strategies
  • Error handling examples for critical scenarios
  • Log and metric formats for error tracking
  • Incident response procedures for critical failures
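One lightweight way to keep the error matrix verifiable is to encode it as data. A sketch with illustrative operations, error categories, and strategies:

```python
# Sketch: an error matrix as data, mapping (operation, error category)
# to a handling strategy. All names and strategies are illustrative.
ERROR_MATRIX: dict[tuple[str, str], str] = {
    ("create_order", "validation_error"): "reject with field-level messages",
    ("create_order", "payment_timeout"):  "retry up to 3 times, then queue for manual review",
    ("create_order", "db_unavailable"):   "return 503, alert on-call via monitoring",
    ("fetch_catalog", "cache_miss"):      "fall back to primary store",
    ("fetch_catalog", "db_unavailable"):  "serve stale cache, log degraded mode",
}

def strategy_for(operation: str, error: str) -> str:
    # A loud default makes unhandled combinations visible during review.
    return ERROR_MATRIX.get((operation, error), "UNSPECIFIED: add to error matrix")

print(strategy_for("create_order", "payment_timeout"))
print(strategy_for("create_order", "network_partition"))
```

A table like this doubles as review input: any operation/error pair that resolves to the default is a gap in the design.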

Step 7: Defining Testing Strategy​

Objective: Ensure implementation quality through a well-thought-out testing strategy covering all system aspects.

Testing Strategy Elements:

  1. Define testing levels
  2. Test coverage: coverage criteria and priorities
  3. Test scenarios
  4. Testing tools
  5. Quality checkpoints

What to Document:

  • Required test level and type for each component
  • Quality metrics and target values
  • Example test scenarios for key features
  • Integration of testing strategy with development process

Step 8: Final Design Quality Review​

Objective: Ensure the design is complete, understandable, implementation-ready, and meets all quality criteria.

Quality Checklist:

| Category     | Verification Criterion                                   | Verification Method              |
| ------------ | -------------------------------------------------------- | -------------------------------- |
| Completeness | All requirements addressed in design                      | Requirements-to-design mapping   |
|              | Core system components defined                            | Diagram and description review   |
|              | Data models cover all required entities                   | ERD and description analysis     |
|              | Error handling covers expected failure modes              | Error matrix verification        |
| Clarity      | Design decisions clearly explained                        | Document review by new developer |
|              | Component responsibilities well-defined                   | Component description review     |
|              | Component interfaces specified                            | API contract analysis            |
|              | Technical choices include justification                   | Research section verification    |
| Feasibility  | Design technically achievable with chosen technologies    | Expert consultation              |
|              | Performance requirements can be met                       | Optimization strategy analysis   |
|              | Security requirements addressed                           | Security measure verification    |
|              | Implementation complexity aligns with project estimates   | Developer assessment             |
| Traceability | Design elements linked to specific requirements           | Traceability matrix              |
|              | All requirements covered by design components             | Completeness verification        |
|              | Design decisions support requirement fulfillment          | Compliance analysis              |
|              | Testing strategy validates requirement satisfaction       | Test scenario verification       |

Common Design Mistakes​

Mistake 1: Over-Engineering​

Problem: Designing for requirements that don't exist or adding functionality "for the future."

Symptoms:

  • Design includes components with no direct link to current requirements
  • Complex abstractions for scenarios that may never materialize
  • Extended design timeline without clear benefit

Solution:

  • Focus strictly on current requirements
  • Apply the YAGNI principle (You Aren't Gonna Need It)
  • Design should be extensible but not implement unused features
  • Regularly ask: "Which specific requirement justifies this component?"

Mistake 2: Poorly Specified Interfaces​

Problem: Vague component boundaries and interactions leading to implementation misunderstandings.

Symptoms:

  • Unclear component responsibilities
  • Missing clear API contracts
  • Numerous clarification questions during implementation

Solution:

  • Clearly define inputs, outputs, and errors for each interface
  • Use formal specifications (OpenAPI, Protobuf)
  • Include example requests and responses
  • Conduct interface reviews with developers before implementation

Mistake 3: Ignoring Non-Functional Requirements​

Problem: Focusing only on functional behavior while neglecting performance, security, and other non-functional aspects.

Symptoms:

  • No mention of response time, load capacity, or security
  • Missing scaling or failover strategies
  • Undefined quality metrics

Solution:

  • Explicitly document all non-functional requirements in a dedicated section
  • Specify measurable metrics for each (e.g., "Response time < 500ms at 1000 RPS")
  • Include design elements that ensure NFR compliance
  • Verify NFR compliance during final review

Mistake 4: Technology-Driven Design​

Problem: Selecting technologies before fully understanding requirements, leading to suboptimal solutions.

Symptoms:

  • Technologies mentioned before requirements are defined
  • Technology comparisons without specific task alignment
  • Unnecessary complexity from using "trendy" technologies

Solution:

  • Let requirements drive technology choices, not vice versa
  • For each technology, specify the exact requirement it satisfies
  • Consider simple solutions before complex ones
  • Use a technology evaluation matrix with requirement-based criteria

Mistake 5: Inadequate Error Handling Design​

Problem: Designing only for the "happy path" while ignoring failure scenarios.

Symptoms:

  • Missing error handling section
  • No recovery strategies for failures
  • Undefined user error messages

Solution:

  • Explicitly design error handling alongside main workflows
  • Define possible failure scenarios for each operation
  • Include retry mechanisms, fallback strategies, and monitoring in design
  • Ensure user experience is considered for all scenarios

Resolving Design Issues​

Issue: Design Becomes Overly Complex​

Symptoms:

  • Design document exceeds 2500 lines without clear necessity
  • Too many components and interactions
  • Difficulty explaining architecture to new team members

Solution:

  • Return to requirements and remove elements without direct linkage
  • Consider phased implementation (MVP + subsequent iterations)
  • Refactor architecture by consolidating related components

Issue: Requirements Don't Map to Design​

Symptoms:

  • Difficulty tracing requirements to design elements
  • Some requirements missing from design
  • No clear connection between business goals and technical decisions

Solution:

  • Create a requirements-to-design traceability matrix
  • Conduct step-by-step verification of each requirement
  • Add requirement references to relevant design sections
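The traceability matrix itself can be a small script artifact rather than a hand-maintained table. A sketch with illustrative requirement and component IDs:

```python
# Sketch: a requirements-to-design traceability check.
# Requirement IDs and component names are illustrative.
design_coverage = {
    "AuthService":  ["REQ-1", "REQ-2"],
    "SessionStore": ["REQ-2"],
    "AuditLogger":  ["REQ-4"],
}
all_requirements = ["REQ-1", "REQ-2", "REQ-3", "REQ-4"]

covered = {req for reqs in design_coverage.values() for req in reqs}
uncovered = [r for r in all_requirements if r not in covered]
print("Uncovered requirements:", uncovered)  # ['REQ-3']

# Reverse view: which design components realize each requirement?
matrix = {r: [c for c, reqs in design_coverage.items() if r in reqs]
          for r in all_requirements}
```

Running such a check on every design revision turns "all requirements covered" from a review opinion into a reproducible result.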

Issue: Technology Choices Are Unclear​

Symptoms:

  • Multiple viable options without clear selection criteria
  • Missing justification for chosen technologies

Solution:

  • Define decision criteria based on requirements and constraints
  • Create a comparison matrix with key criteria evaluation
  • Conduct proof-of-concept for critical decisions
  • Document selection rationale including trade-offs
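The comparison matrix can likewise be computed rather than argued informally. A sketch with illustrative options, requirement-based criteria, weights, and scores:

```python
# Sketch: a weighted technology comparison matrix.
# Options, criteria, weights, and scores are all illustrative.
criteria_weights = {"fits_requirements": 0.4, "team_familiarity": 0.3,
                    "operational_cost": 0.2, "ecosystem": 0.1}

scores = {  # 1 (poor) .. 5 (excellent) per criterion
    "PostgreSQL": {"fits_requirements": 5, "team_familiarity": 4,
                   "operational_cost": 4, "ecosystem": 5},
    "MongoDB":    {"fits_requirements": 3, "team_familiarity": 2,
                   "operational_cost": 3, "ecosystem": 4},
}

def weighted_score(option: str) -> float:
    return sum(scores[option][c] * w for c, w in criteria_weights.items())

ranked = sorted(scores, key=weighted_score, reverse=True)
for option in ranked:
    print(f"{option}: {weighted_score(option):.2f}")
```

The weights encode the decision criteria derived from requirements, so the rationale and trade-offs are documented in the same place as the result.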

Issue: Design Lacks Implementation Details​

Symptoms:

  • Many questions during implementation
  • Ambiguity in interfaces and contracts

Solution:

  • Add specific example requests and responses
  • Clarify data formats and error codes
  • Include sequence diagrams for key scenarios

Conclusion​

A design document is not merely a formal artifact but a living tool that ensures successful project implementation. High-quality design:

  • Reduces errors and rework
  • Accelerates development through clear guidance
  • Simplifies knowledge transfer among team members
  • Serves as the foundation for implementation quality assessment

Key Principles of Effective Design:

  1. Minimal sufficiency β€” design exactly what's needed for implementation
  2. Practical orientation β€” focus on solutions ready for implementation
  3. Decision transparency β€” every decision must have clear justification
  4. Living document β€” regularly update design as new information emerges

Document Template​

# Design "[Brief Feature Name]"

[Brief description of business objectives, alignment with corporate strategy, key stakeholders]

[Clear system boundaries definition: what's included/excluded in the solution]

---

## System Architecture

[General description of feature workflow, component listing, their relationships, and data flows between them]

---

## Components and Interfaces

**For each key component, create a subsection:**

### [Component Name]

[What the component does]

[Relationships with other components]

[Link to source requirements]

**Non-Functional Requirements**

[Non-functional requirements for the component]

- Performance: [Metrics + strategies]
- Security: [Protection mechanisms]
- Reliability: [Failover strategies]

**Error Handling Strategy**

[Error handling strategy]

**Testing Strategy**

* [Test case]
  * Test case description
* [Another test case]
  * Test case description

---

## System Entities

### [Entity Name]

[Entity description]

[Link to source requirements]

**Entity Methods**

* [Entity method]
  * Method description and behavior
* [Another entity method]
  * Method description and behavior

**Entity Data**

* [Entity field]
  * Field description and behavior
* [Another entity field]
  * Field description and behavior

**Testing Strategy**

* [Test case]
  * Test case description
* [Another test case]
  * Test case description

---

## Requirements Quality Checklist

*(Completed after document finalization)*

| Criterion | Completed | Comment |
| -------------------------------------------------------------------- | --------- | ------- |
| All requirements have unambiguous representation in design | ☐ | |
| Non-functional requirements translated into measurable technical solutions | ☐ | |
| Error handling designed for all key scenarios | ☐ | |
| Data model covers all business entities and rules | ☐ | |
| Testing strategy defined with levels and quality metrics | ☐ | |
| Design follows minimal sufficiency principle (YAGNI) | ☐ | |
| Traceability system exists from requirements β†’ design elements | ☐ | |

βœ… Task Phase Documentation

The task phase is the final phase of the specification-driven development process, transforming an approved design into a structured implementation plan composed of discrete, executable development tasks. This phase serves as a bridge between planning and execution, breaking down complex system designs into manageable steps that can be incrementally carried out by development teams or AI agents.

As the third phase in the Requirements β†’ Design β†’ Tasks workflow, the task phase ensures that all meticulous planning and design efforts translate into systematic, trackable implementation progress.

Purpose and Objectives​

The task phase serves to:

  • Transform design components into concrete development activities
  • Sequence tasks for optimal development flow and early validation
  • Create clear, actionable prompts for implementation
  • Establish dependencies and build order among tasks
  • Ensure incremental progress with testable milestones
  • Provide a roadmap for systematic feature development

Step-by-Step Process​

Step 1: Design Analysis and Task Identification​

Objective: Break down the design into implementable components

Task List Formation Principles:

  1. Review Design Components: Identify all system components that need to be built or modified
  2. Map to Code Artifacts: Determine which files, classes, and functions must be created or altered
  3. Account for Testing Requirements: Plan test creation alongside implementation
  4. Sequence for Early Validation: Order tasks to enable rapid validation of core functionality
  5. Link to Requirements: Reference specific requirements being implemented, ensuring traceability from task to user value

Task Creation Principles:

  • Focus on concrete activities (writing, modifying, testing code)
  • Each task must produce working, testable code
  • Tasks must build incrementally upon previous work

Step 2: Task Structuring and Hierarchy​

Objective: Decompose tasks into subtasks

Task Organization Principles:

  1. Maximum Two Levels: Use only top-level tasks and subtasks (avoid deeper nesting)
  2. Logical Grouping: Group related tasks under meaningful categories
  3. Sequential Dependencies: Order tasks so each builds upon prior work
  4. Testable Increments: Each task must result in testable functionality

Task Execution Sequencing Principles:

  • Core First: Build core functionality before optional features
  • Risk First: Address uncertain or complex tasks early
  • Value First: Implement high-value features that can be quickly tested
  • Dependency-Driven: Respect technical dependencies between components
  • Foundation First: Implement core interfaces and data models before dependent components
  • Bottom-Up Approach: Develop low-level utilities before high-level functions
  • Test-Driven Sequencing: Write tests alongside or before implementation
  • Integration Points: Plan component integration as components are built

Task Hierarchy Template:

## Task

[Task details, links to requirements and design]

- [ ] 1.1 [Implementation subtask]
  - [Subtask details, links to requirements and design]
- [ ] 1.2 [Next specific task]
  - [Subtask details, links to requirements and design]

## Next Task

[Task details, links to requirements and design]

- [ ] 2.1 [Implementation subtask]
  - [Subtask details, links to requirements and design]
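Progress against a task list in this checkbox format can be computed mechanically. A sketch with illustrative task entries:

```python
import re

# Sketch: computing implementation progress from a markdown task list
# in the checkbox format shown above. The tasks are illustrative.
TASKS_MD = """\
## Task
- [x] 1.1 Set up data model
- [x] 1.2 Implement repository layer
- [ ] 1.3 Wire up API endpoints

## Next Task
- [ ] 2.1 Add integration tests
"""

CHECKBOX = re.compile(r"^- \[(x| )\] (\d+\.\d+) (.+)$", re.MULTILINE)

items = CHECKBOX.findall(TASKS_MD)
done = [task_id for mark, task_id, _ in items if mark == "x"]
print(f"{len(done)}/{len(items)} subtasks done: {done}")
# 2/4 subtasks done: ['1.1', '1.2']
```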

Step 3: Task Definition and Specification​

Objective: Enrich subtask details with the following information:

  1. Clear Objective: Specify exactly which code needs to be written or modified
  2. Implementation Details: Identify specific files, components, or functions to create
  3. Requirements Traceability: Reference specific requirements being implemented
  4. Design Traceability: Reference the design being implemented
  5. Acceptance Criteria: Define how to verify task completion
  6. Testing Expectations: Specify which tests must be written or updated

Step 4: Validation and Refinement​

Task Quality Criteria:

  1. Actionable: Can be executed without requiring further clarification
  2. Specific: Clearly state which files, functions, or components to create
  3. Testable: Produce code that can be tested and validated
  4. Incremental: Build upon previous tasks without large complexity jumps
  5. Complete: Cover all aspects of the design requiring implementation

Validation Questions:

  • Can a developer begin implementation based solely on this task description?
  • Does this task produce working, testable code?
  • Are the requirements being implemented clearly identified?
  • Does this task logically build upon previous tasks?
  • Is the task scope appropriate (not too large, not too small)?

Quality Checklist​

Before finalizing the task list, verify:

Completeness:

  • All design components are covered by implementation tasks
  • All requirements are addressed by one or more tasks
  • Testing tasks are included for all core functionality
  • Integration tasks connect all components

Clarity:

  • Each task has a clear, specific objective
  • Task descriptions specify which files/components to create
  • Requirement references are included for every task
  • Completion criteria are explicit or clearly implied

Sequencing:

  • Tasks are ordered to respect dependencies
  • Early tasks establish foundations for subsequent work
  • Core functionality is implemented before optional features
  • Integration tasks follow component implementation

Implementability:

  • Each task is appropriately sized for implementation
  • No tasks require external dependencies or manual processes
  • Task complexity increases gradually

Addressing Task Planning Issues​

Issue: Tasks Are Too Vague​

Symptoms: Developers cannot start implementation from task descriptions
Solution: Add more specific implementation details, including file/component names

Issue: Task Dependencies Are Unclear​

Symptoms: Tasks cannot be completed due to missing prerequisites
Solution: Review task sequence and add missing foundational tasks

Issue: Task-to-Requirement Linkage Is Unclear​

Symptoms: Difficulty tracing tasks back to user value
Solution: Add requirement references and validate coverage

Document Template​

# Tasks for "[Brief Feature Name]"

## [Task Name]

[Task details, links to requirements and design]

- [ ] 1.1 [Implementation subtask]
  - [Subtask details, links to requirements and design]
- [ ] 1.2 [Next specific task]
  - [Subtask details, links to requirements and design]

## [Next Task Name]

[Task details, links to requirements and design]

- [ ] 2.1 [Implementation subtask]
  - [Subtask details, links to requirements and design]

## Task List Quality Checklist

*(Completed before finalizing the task list)*

| Criterion | Completed | Comment |
| ------------------------------------------------------------------------- | --------- | ------- |
| All design components are covered by implementation tasks | ☐ | |
| All requirements are addressed by one or more tasks | ☐ | |
| Testing tasks are included for all core functionality | ☐ | |
| Integration tasks connect all components | ☐ | |
| Each task has a clear, specific objective | ☐ | |
| Task descriptions specify which files/components to create | ☐ | |
| Requirement references are included for every task | ☐ | |
| Completion criteria are defined for each task | ☐ | |
| Tasks are sequenced to respect dependencies | ☐ | |
| Early tasks establish foundations for subsequent work | ☐ | |
| Core functionality is implemented before optional features | ☐ | |
| Integration tasks follow component implementation | ☐ | |
| Each task has an appropriate size for implementation | ☐ | |
| No tasks require external dependencies or manual processes | ☐ | |
| Task complexity increases gradually | ☐ | |

🎯 Steering Documents

Objective: Ensure consistency, quality, and efficiency in development by creating and maintaining a set of living, atomic documents that serve as the project's shared context and guide the team in implementing any work item, from micro-specifications to large features.

Core Principles​

Steering documents are not static artifacts but dynamic knowledge management and quality assurance tools. Their creation and maintenance are governed by the following key principles:

  1. Atomicity and Focus: Each document must focus on a single, specific topic (e.g., git_workflow, react_component_structure, postgres_naming_conventions). Avoid creating monolithic, all-encompassing manuals.
  2. Living Documentation: Documents must be regularly updated as the project, technologies, and best practices evolve. Outdated documentation is worse than no documentation at all.
  3. Practical Orientation: Content must be directly applicable by developers in their day-to-day work. Focus on the β€œhow” and β€œwhy,” not abstract theories.
  4. Contextuality: Documents may be global (applying to the entire solution) or local (specific to a particular module, microservice, or component). Clearly indicate the scope of applicability.
  5. Integration into Workflow: Steering documents are an integral part of the specification-driven development process. They must be explicitly referenced in design specifications and considered during task planning.
  6. Ownership of Currency: It is recommended to create dedicated tasks within specifications for maintaining the currency of steering documents, especially after significant changes to the codebase or infrastructure.

Primary Categories of Steering Documents​

To ensure comprehensive coverage of the development lifecycle, steering documents are grouped into the following categories:

1. Development Environment and Infrastructure Standards​

  • Objective: Ensure consistency and reproducibility of local and CI/CD environments.
  • Example Documents:
    • development_environment_setup: Local setup procedures, dependencies, IDE configuration.
    • environment_variables_management: Rules for naming, storing, and managing environment variables.
    • build_and_deployment_processes: Build scripts, CI/CD pipeline configurations, deployment procedures across environments.
    • infrastructure_as_code_standards: Standards for Terraform, CloudFormation, etc.

2. Code Quality Standards​

  • Objective: Maintain high code quality, readability, and maintainability.
  • Example Documents:
    • language_style_guide_[language]: Style guides (e.g., python_style_guide, typescript_style_guide).
    • naming_conventions: Conventions for naming variables, functions, classes, files, and database objects.
    • code_organization_patterns: Patterns for structuring projects, modules, and components.
    • code_documentation_requirements: Requirements for comments, docstrings, and documentation generation.

3. Git Workflow Standards​

  • Objective: Ensure a predictable and efficient collaborative source code management process.
  • Example Documents:
    • git_branching_strategy: Branch naming conventions (e.g., feature/, hotfix/, release/).
    • commit_message_format: Commit message format (e.g., Conventional Commits).
    • pull_request_process: Procedures for creating, reviewing, and merging PRs (description requirements, checklists).
    • code_review_guidelines: Guidelines for reviewers and authors (what to check, how to provide feedback).

4. Technology- and Architecture-Specific Standards​

  • Objective: Establish unified design and implementation rules for the project’s key technologies.
  • Example Documents:
    • frontend_architecture_patterns: UI development patterns (e.g., React/Vue component structure, state management).
    • backend_api_design: Standards for designing REST/gRPC APIs (versioning, response structure, error codes, OpenAPI/Swagger documentation).
    • database_design_and_migration: Rules for database schema design, migration conventions, and indexing strategies.
    • testing_strategy_[level]: Global testing strategies (e.g., unit_testing_strategy, e2e_testing_strategy, framework selection, coverage requirements).

5. Security, Performance, and Observability​

  • Objective: Lay the foundation for building reliable, secure, and easily diagnosable systems.
  • Example Documents:
    • security_practices: Applied security practices (input validation, session management, secret handling, dependency scanning).
    • performance_optimization_guidelines: Optimization guidelines (caching, asynchronous processing, profiling).
    • monitoring_and_alerting_standards: Standards for logging, metrics, tracing, and alert configuration.

6. Business Context and Architecture​

  • Objective: Ensure shared understanding of the domain and the system’s high-level structure.
  • Example Documents (core documents, always create):
    • tech_stack: Explicit justification for each technology in the stack, including versions.
    • domain_glossary: Glossary of key business terms and concepts.
    • project_dataflow: High-level description of data flows within the system.
    • context_diagram_c4: C4 context diagram (Level 1; likec4 syntax recommended).
    • container_diagram_c4: C4 container diagram (Level 2; likec4 syntax recommended).

Step-by-Step Creation and Maintenance Process​

Step 1: Needs Assessment and Planning​

  • Objective: Determine which steering document is needed and plan its creation.
  • Process:
    1. Project Analysis: Review the current codebase to identify inconsistencies or areas where the absence of standards leads to errors.
    2. Gap Identification: Determine which standard category is missing or requires updating.
    3. Prioritization: Assess the potential impact of the document. Priority order: security > code quality > workflow > performance.
    4. Template and Scope Selection: Decide whether the document will be global or component-specific. Choose an appropriate level of detail (lightweight for small teams, comprehensive for enterprise solutions).
    5. Task Creation: Record the need to create or update the document as a distinct work item in the backlog.

Step 2: Content Development​

  • Objective: Produce a practical, clear, and useful document.
  • Process:
    1. Research Best Practices: Use industry standards (e.g., Google Style Guides, 12-factor app) as a foundation.
    2. Project Contextualization: Adapt general practices to the project’s specific technologies, constraints, and goals.
    3. Include Examples: Always provide concrete, working code samples, configuration files, or diagrams. β€œShow, don’t tell.”
    4. Link Integration: Connect the new document to other steering documents and relevant sections of design specifications.
    5. Formalization: Use clear, unambiguous language. Avoid terms like β€œshould” or β€œrecommended” where β€œmust” or β€œmust not” can be used instead.

Step 3: Validation and Approval​

  • Objective: Ensure the document is useful, actionable, and error-free.
  • Quality Criteria:
    • Actionability: Can a developer immediately apply these rules in practice?
    • Clarity: Is the document understandable to a new team member?
    • Consistency: Does it contradict other steering documents or approved design specifications?
    • Completeness: Does it cover all key aspects of the stated topic?
    • Relevance: Is it based on the current state of the project and technologies?
  • Process: Conduct a document review with key developers and architects. Obtain formal approval from the technical lead.

Step 4: Maintenance and Evolution​

  • Objective: Keep the document current throughout the project’s lifecycle.
  • Process:
    1. Regular Review: Periodically verify the currency of all steering documents.
    2. Update upon Changes: Any significant change to architecture, technology stack, or processes must be accompanied by updates to relevant steering documents. This may be handled as a separate task within the corresponding specification.
    3. Remove Obsolete Content: Do not hesitate to delete documents or sections that are no longer relevant. Maintain β€œminimal sufficiency.”

Steering Documents Quality Checklist​

*(Completed after document creation or update)*

| Criterion | Done | Comment |
| --------------------------------------------------------------- | ---- | ------- |
| Document focuses on a single, specific topic (atomic) | ☐ | |
| Content is practical and directly applicable by developers | ☐ | |
| Includes concrete, working examples | ☐ | |
| Provides justification for key rules and decisions | ☐ | |
| Contains no confidential data or secrets | ☐ | |
| No conflicts with other steering documents | ☐ | |
| Language is clear, unambiguous, and uses β€œmust”/β€œmust not” | ☐ | |
| Includes links to related steering documents and specifications | ☐ | |
| Specifies scope (global/component) and owners | ☐ | |
| Plans for regular review and updates | ☐ | |