Steering
Context for Steering
Specialized prompt for leadership and management. Includes:
- Steering Documents - principles of effective leadership, management standards, and procedures
- Management Methodology - direction and management processes
- Requirements Phase - guidance through requirements gathering
- Design Phase - management of architectural decisions
- Task Phase - task and resource management
Use when: You need to manage a project, guide a team, or make leadership decisions.
Source Documents
- Standards and Methodological References - industry standards, methodologies, and best practices for SDD
- Methodology - step-by-step processes for creating specifications
- Requirements Phase - detailed stages of collecting and analyzing requirements
- Design Phase - processes of architectural planning
- Tasks Phase - management of tasks and their prioritization
- Steering Documents - standards and guiding principles
Standards and Methodological References
This section systematically organizes key industry standards, methodologies, and best practices that form the foundation of specification-driven development (SDD). Applying these approaches ensures:
- Clarity and unambiguity of requirements
- High quality of architecture and documentation
- Minimization of risks and ambiguities in development
- Compliance with international quality standards
EARS (Easy Approach to Requirements Syntax)
A structured approach to formulating requirements that ensures clarity, verifiability, and unambiguity through standardized templates.
Key EARS Templates
1. WHEN (Event-Driven Requirements)
Purpose: Describing the system's response to specific events or triggers
Format: WHEN [event/trigger] THEN [system] SHALL [action]
Examples:
- WHEN the user clicks the "Save" button THEN the system SHALL validate all form fields
- WHEN a file upload exceeds 10 MB THEN the system SHALL display an error message
- WHEN the user's session expires THEN the system SHALL redirect to the login page
2. IF (State-Driven Requirements)
Purpose: Describing system behavior under specific conditions
Format: IF [condition] THEN [system] SHALL [action]
Examples:
- IF the user is not authenticated THEN the system SHALL deny access to protected resources
- IF the database connection fails THEN the system SHALL display a maintenance message
- IF the user has administrator privileges THEN the system SHALL display the admin panel
3. WHILE (Continuous Requirements)
Purpose: Describing persistent system behavior while in a specified state
Format: WHILE [state], the [system] SHALL [continuous behavior]
Examples:
- WHILE a file is uploading, the system SHALL display a progress indicator
- WHILE the user is typing, the system SHALL provide real-time validation feedback
- WHILE the system is processing a request, the system SHALL prevent duplicate submissions
4. WHERE (Context-Dependent Requirements)
Purpose: Constraining a requirement to a specific context or environment
Format: WHERE [context] THEN [system] SHALL [behavior]
Examples:
- WHERE the user is on a mobile device THEN the system SHALL use a responsive layout
- WHERE the application runs in production mode THEN the system SHALL log errors to an external service
- WHERE multiple users edit simultaneously THEN the system SHALL handle conflicts gracefully
EARS Application Guidelines
Category | Recommendations | Examples |
---|---|---|
Syntax | • Use active voice • Consistently use the term "system" instead of synonyms | "Fields must be validated" → "The system SHALL validate fields" |
Specificity | • Avoid vague terms • Specify quantitative parameters | "Fast response" → "Response time under 300 ms" |
Structure | • One requirement = one statement • Clear verification criteria | "The system SHALL validate and save" → two separate requirements |
EARS Anti-Patterns
🚫 Compound Requirements
Example: "WHEN the user enters data THEN the system SHALL validate and save"
Solution: Split into two distinct requirements with separate triggers
🚫 Ambiguous Conditions
Example: "WHEN data is entered THEN the system SHALL process"
Solution: Specify exact conditions ("WHEN all mandatory fields are filled")
🚫 Implementation Details
Example: "WHEN the form is submitted THEN the system SHALL use a REST API"
Solution: Focus on the outcome ("...the system SHALL save the data")
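The template shapes and vague-term anti-patterns above can be checked mechanically. Below is a minimal sketch of an EARS "linter" in Python; the regular expressions only approximate the four templates described in this section, and the vague-term list is illustrative, not exhaustive.

```python
import re

# Approximations of the four EARS template shapes from the section above.
EARS_PATTERNS = {
    "event-driven (WHEN)": re.compile(r"^WHEN .+ THEN .+ SHALL .+$"),
    "state-driven (IF)":   re.compile(r"^IF .+ THEN .+ SHALL .+$"),
    "continuous (WHILE)":  re.compile(r"^WHILE .+ SHALL .+$"),
    "context (WHERE)":     re.compile(r"^WHERE .+ THEN .+ SHALL .+$"),
}

# Illustrative list of vague terms the guidelines warn against.
VAGUE_TERMS = ("fast", "quickly", "easy", "user-friendly", "appropriate")

def check_requirement(text: str) -> list[str]:
    """Return a list of problems found in one requirement statement."""
    problems = []
    if not any(p.match(text) for p in EARS_PATTERNS.values()):
        problems.append("does not match any EARS template")
    lowered = text.lower()
    problems += [f"vague term: {t!r}" for t in VAGUE_TERMS if t in lowered]
    return problems
```

A well-formed requirement such as `WHEN the user clicks the "Save" button THEN the system SHALL validate all form fields` passes cleanly, while `The system should be fast` is flagged both for missing a template and for the vague term.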
Industry Standards
IEEE 830-1998: IEEE Recommended Practice for Software Requirements Specifications
Standard Purpose: Providing a structured approach to documenting requirements through an SRS (Software Requirements Specification).
Key Characteristics of a High-Quality SRS
- Completeness: All functional and non-functional requirements are accounted for
- Unambiguity: No ambiguous or vague phrasing
- Verifiability: Each requirement includes verification criteria
- Consistency: No contradictions between requirements
- Traceability: Clear linkage to sources and lifecycle phases
Recommended SRS Structure
1. Introduction
- Purpose of the document
- Scope
- Definitions, acronyms, and abbreviations
- References to related documents
2. Overall Description
- System context
- User characteristics
- Implementation constraints
- Assumptions and dependencies
3. Specific Requirements
- Functional requirements (FR-001, FR-002...)
- Non-functional requirements (NFR-001...)
- Interfaces
- Data requirements
4. Appendices
- Traceability matrix
- Diagrams
- Sample scenarios
Requirements Specification Format
Each requirement must include:
- Unique identifier (e.g., FR-001)
- Short title
- Detailed description
- Source (customer/document)
- Priority (Must/Should/Could)
- Acceptance criteria
- Dependencies
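The fields listed above can be captured in a small record type. The sketch below uses a Python dataclass; the field names and the `FR-`/`NFR-` identifier convention follow this section, but the class itself is an illustration, not something mandated by IEEE 830.

```python
from dataclasses import dataclass, field

@dataclass
class Requirement:
    """One requirement record, mirroring the fields listed above."""
    req_id: str        # unique identifier, e.g. "FR-001"
    title: str         # short title
    description: str   # detailed description
    source: str        # customer/document the requirement came from
    priority: str      # "Must" | "Should" | "Could"
    acceptance_criteria: list[str] = field(default_factory=list)
    depends_on: list[str] = field(default_factory=list)  # other req_ids

    def is_functional(self) -> bool:
        # Follows the FR-/NFR- prefix convention from the SRS structure above.
        return self.req_id.startswith("FR-")

fr1 = Requirement(
    req_id="FR-001",
    title="Form validation",
    description="Validate all mandatory form fields before saving.",
    source="Customer workshop notes",
    priority="Must",
    acceptance_criteria=[
        'WHEN the user clicks "Save" THEN the system SHALL validate all form fields',
    ],
)
```

Keeping requirements in a structured form like this makes it straightforward to generate a traceability matrix from the `depends_on` links.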
Architectural Principles and Methodologies
SOLID Principles
Principle | Description | Anti-Pattern |
---|---|---|
Single Responsibility | Each component should serve only one actor/user role | God Object |
Open/Closed | Open for extension, closed for modification | Frequent changes to base code |
Liskov Substitution | Subclasses must be substitutable for their base classes | Violation of inheritance contracts |
Interface Segregation | Prefer specific interfaces over general-purpose ones | "Fat" interfaces |
Dependency Inversion | Depend on abstractions, not concrete implementations | High-level modules tightly coupled to low-level implementations |
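As a quick illustration of the Dependency Inversion row, the sketch below has a high-level service depend on a storage abstraction rather than on a concrete implementation. All class names are hypothetical.

```python
from abc import ABC, abstractmethod

class Storage(ABC):
    """Abstraction the high-level code depends on."""
    @abstractmethod
    def save(self, key: str, data: str) -> None: ...

class InMemoryStorage(Storage):
    """One concrete implementation; swappable without touching ReportService."""
    def __init__(self) -> None:
        self._items: dict[str, str] = {}

    def save(self, key: str, data: str) -> None:
        self._items[key] = data

class ReportService:
    """High-level module: receives the abstraction via constructor injection."""
    def __init__(self, storage: Storage) -> None:
        self._storage = storage

    def publish(self, name: str, body: str) -> None:
        self._storage.save(name, body)

service = ReportService(InMemoryStorage())
service.publish("q1-report", "revenue up")
```

Replacing `InMemoryStorage` with, say, a database-backed implementation requires no change to `ReportService`, which is the point of the principle.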
Domain-Driven Design (DDD)
Key Concepts:
- Ubiquitous Language: Shared terminology between business analysts and developers
- Bounded Contexts: Clear separation of domain areas
- Aggregates: Grouping of objects within a transactional boundary
- Domain Events: Recording significant business occurrences
Implementation Recommendations:
- Create a domain glossary
- Identify the Core Domain
- Apply design patterns (Entities, Value Objects, Repositories)
- Implement an event bus for inter-context communication
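A minimal sketch of these building blocks, assuming a hypothetical order-processing domain: `Money` is a Value Object (immutable, compared by value), `Order` is an aggregate root that records a Domain Event when paid. Names and structure are illustrative.

```python
from dataclasses import dataclass
from decimal import Decimal

@dataclass(frozen=True)
class Money:            # Value Object: immutable, equality by value
    amount: Decimal
    currency: str

@dataclass(frozen=True)
class OrderPaid:        # Domain Event: records a significant business occurrence
    order_id: str
    total: Money

class Order:            # Aggregate root (Entity, identity = order_id)
    def __init__(self, order_id: str, total: Money) -> None:
        self.order_id = order_id
        self.total = total
        self.events: list[OrderPaid] = []  # events to publish on an event bus

    def pay(self) -> None:
        self.events.append(OrderPaid(self.order_id, self.total))

order = Order("ORD-42", Money(Decimal("99.90"), "EUR"))
order.pay()
```

In a fuller design, the recorded events would be handed to an event bus for inter-context communication, as recommended above.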
Requirements Elicitation Methodologies
User Stories
Format:
As a [role], I want [functionality] so that [business value]
Quality Criteria (INVEST):
- Independent: Independent of other stories
- Negotiable: Open to refinement and discussion
- Valuable: Delivers tangible value
- Estimable: Can be sized or estimated
- Small: Fits within a single iteration
- Testable: Has clear acceptance criteria
Example:
As a sales manager, I want to filter orders by date so that I can analyze weekly revenue.
Acceptance Criteria:
- WHEN a date range is entered THEN the system SHALL display orders within that period
- WHEN an invalid date is selected THEN the system SHALL show a helpful hint
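Acceptance criteria written in EARS form translate almost mechanically into tests, one per criterion. The sketch below assumes a hypothetical `filter_orders` function and data shape for the story above; nothing here is prescribed by the methodology itself.

```python
from datetime import date

# Hypothetical implementation of the story: filter orders by a date range.
def filter_orders(orders, start, end):
    if start > end:
        raise ValueError("Invalid date range: start date is after end date")
    return [o for o in orders if start <= o["date"] <= end]

# Criterion 1: WHEN a date range is entered THEN the system SHALL display
# orders within that period.
def test_when_range_entered_then_orders_within_period_shown():
    orders = [{"id": 1, "date": date(2024, 1, 5)},
              {"id": 2, "date": date(2024, 2, 5)}]
    result = filter_orders(orders, date(2024, 1, 1), date(2024, 1, 31))
    assert [o["id"] for o in result] == [1]

# Criterion 2: WHEN an invalid date is selected THEN the system SHALL show
# a helpful hint.
def test_when_invalid_date_then_helpful_hint_shown():
    try:
        filter_orders([], date(2024, 2, 1), date(2024, 1, 1))
        assert False, "expected an error for an invalid range"
    except ValueError as e:
        assert "Invalid date range" in str(e)
```

This one-criterion-one-test mapping is what makes EARS criteria "Testable" in the INVEST sense.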
Use Cases
Standard Structure:
1. Name
2. Actors
3. Preconditions
4. Main Success Scenario (step-by-step sequence)
5. Alternative Flows
6. Postconditions
7. Exceptions
Recommendations:
- Limit the main flow to a maximum of 9 steps
- Number alternative flows (e.g., 3a, 3b)
- Each step must include an actor action and the system's response
Documentation Standards
Technical Documentation Requirements
Element | Recommendations | Anti-Patterns |
---|---|---|
Structure | • Logical sequence • Consistent heading hierarchy | Mixing levels of detail |
Style | • Active voice • Precise phrasing | Passive constructions ("must be done") |
Terminology | • Glossary at the beginning • Consistent terminology | Synonymy within the same document |
Visualization | • Diagrams for complex processes • Data schemas | Excessive graphics without explanations |
Documentation Types and Their Standards
1. API Documentation
Required Elements:
- Description of all endpoints with HTTP methods
- Request/response examples in JSON/YAML
- Error codes with explanations
- Authentication parameters
- Rate limits
Recommendation: Auto-generate using Swagger/OpenAPI
2. User Documentation
Structure:
- Quick Start: 5-minute guide
- Core Scenarios: Step-by-step instructions
- Advanced Features: In-depth exploration
- FAQ: Solutions to common issues
- Community links
3. Architectural Documentation
Required Sections:
- Context diagram (C4 Model Level 1)
- Container diagram (C4 Model Level 2)
- Key architectural decisions (ADR)
- Technology matrix
- Architecture evolution roadmap
These standards and methodologies should be adapted to the specific needs of each project, maintaining a balance between formality and practical applicability. Regular documentation reviews involving all stakeholders ensure its relevance and quality throughout the project lifecycle.
Specification-Driven Development Methodology
Specification-driven development is a systematic approach to developing software features that emphasizes thorough planning, clear documentation, and structured implementation. This methodology transforms vague feature ideas into clearly defined, implementable solutions through a three-phase process that ensures quality, maintainability, and successful delivery.
Core Philosophy
Clarity Before Code
The fundamental principle of specification-driven development is that clarity of thought and purpose must precede implementation. This approach requires investing time upfront to thoroughly understand requirements, design solutions, and plan implementation before writing a single line of code.
When a team dedicates time to upfront planning:
- Uncertainty is reduced: developers clearly understand exactly what needs to be built and why
- Rework is minimized: the likelihood of discovering critical issues late in the process decreases significantly
- Implementation accuracy improves: the probability of delivering precisely the solution the business needs increases
This principle is especially critical in today's environment, where software requirements grow increasingly complex while delivery timelines become tighter. The investment in initial planning pays dividends many times over during implementation and subsequent maintenance phases.
Iterative Refinement
Each phase of the specification process is designed for iterative refinement rather than a one-time pass. Unlike a linear progression from idea to implementation, the methodology encourages continuous revisiting of earlier stages to adjust and clarify.
This approach offers several key advantages:
- Early problem detection: technical complexities, requirement mismatches, or unforeseen dependencies are identified during the design phase, when they are cheapest to fix
- Gradual confidence building: each iteration deepens understanding of the problem and increases confidence in the chosen direction
- Flexibility amid uncertainty: allows course correction as new information emerges without sacrificing a structured approach
Iterativity manifests at all levels of the process, from refining requirements through exploring technical alternatives to detailing the implementation plan. This creates a solid foundation for subsequent work.
Documentation as Communication
In this methodology, specifications serve not merely as formal planning documents but as key communication instruments that fulfill several critically important functions:
- Stakeholder alignment: documented requirements and designs become a common language for developers, managers, customers, and other project participants
- Decision rationale preservation: captures not only what was decided but why, which is crucial when making future changes
- Context provision for future maintenance: new team members can quickly understand the system through documented decisions
- Creation of long-term assets: well-written specifications retain value even after the initial implementation is complete
This philosophy treats documentation not as a necessary evil but as an investment in the project's future, yielding returns through improved understanding, reduced risk, and simplified maintenance.
Benefits of Specification-Driven Development
Reduced Risk and Uncertainty
Through thorough planning before implementation, specification-driven development significantly reduces the risk of building incorrect functionality or encountering unexpected technical issues. The systematic approach helps identify and resolve problems early in the process when fixes require minimal effort.
Concrete manifestations of this benefit include:
- Preventing the delivery of features that don't match what was requested
- Early identification of contradictory or unrealistic requirements
- Discovery of technical constraints before coding begins
- Elimination of stakeholder misunderstandings at an early stage
Improved Quality and Maintainability
Features developed through the specification process typically demonstrate higher quality and are easier to maintain. This stems from several factors:
- Clear requirements establish a foundation for more comprehensive testing and validation
- Thoughtful design leads to better architecture and separation of responsibilities
- Proactive error-handling planning reduces the number of bugs in production
- Documented decision rationale facilitates future modifications and enhancements
Collectively, these aspects result in more reliable, testable, and modifiable code, which is particularly valuable for long-term projects.
Enhanced Collaboration
Specifications provide a common language and shared understanding among all project participants:
- Developers gain a clear picture of what needs to be implemented
- Testers can prepare test cases in parallel with development
- Project managers see the complete picture of requirements and complexity
- Customers can confirm their needs have been correctly understood
This improved communication reduces misunderstandings, minimizes rework, and enables more effective collaboration among all stakeholders throughout the project lifecycle.
Better Estimation and Planning
The detailed planning inherent in specification-driven development enables more accurate estimation of time and resource requirements. When teams invest time in analyzing requirements and designing before implementation begins, project managers and developers can make better-informed decisions:
- More accurate effort estimates: understanding the full complexity enables realistic timelines
- Efficient resource allocation: knowledge of dependencies helps optimally assign tasks
- Transparent expectation management: customers gain a clear understanding of what will be delivered and when
- Flexible scope management: enables prioritization based on clearly defined requirements
This is especially valuable under resource constraints and tight deadlines, where every minute of planning saves hours of implementation.
Knowledge Preservation
One of the most undervalued benefits is the ability of specifications to serve as living documentation that preserves critically important project knowledge:
- Design rationale: why specific architectural decisions were made
- Requirement context: how business needs translated into technical solutions
- Change history: how and why requirements evolved over time
- Warnings and pitfalls: known issues and recommendations for avoiding them
This knowledge remains accessible long after the original developers have moved on to other projects, significantly reducing "knowledge debt" and simplifying project handover to new team members.
Comparison with Other Development Methodologies
Traditional Waterfall Development
Similarities:
- Both approaches emphasize the importance of upfront planning and documentation
- Both follow a sequential, phase-based development approach
Key Differences:
- Iterativity within phases: Specification-driven development encourages refinement and validation at each stage, whereas the waterfall model assumes strictly sequential progression without returns
- Living documents: Specifications are designed to evolve as working documents, while waterfall documentation is often frozen after approval
- Scale of application: The methodology is optimized for feature-level development rather than entire projects, making it more flexible
- Integration with modern practices: Specification-driven development accounts for working with AI tools and contemporary agile practices
Agile Development
Similarities:
- Both approaches value working software and customer collaboration
- Both embrace iterative refinement and feedback as integral parts of the process
Key Differences:
- Depth of upfront design: Specification-driven development places greater emphasis on thorough design before implementation, whereas classic Agile often defers design until implementation time
- Documentation structure: The methodology prescribes more structured documentation requirements, while Agile traditionally focuses on "working software over comprehensive documentation"
- Compatibility: Specification-driven development is designed to work within agile frameworks rather than replace them, making it complementary to Agile rather than an alternative
- Scale of application: Can be applied to individual features within agile sprints, providing structure where needed
Test-Driven Development (TDD)
Similarities:
- Both approaches emphasize defining success criteria before implementation
- Both use iterative cycles (red-green-refactor in TDD corresponds to requirements-design-implementation in specification-driven development)
Key Differences:
- Level of abstraction: Specification-driven development operates at a higher level, covering not just individual modules but also system interactions
- Scope of coverage: Includes business requirements and system design, not just test cases
- Practice integration: Can incorporate TDD practices within its implementation phase as one of many tools
- Context: Provides broader context encompassing not only technical aspects but also business goals and user needs
Design-First Development
Similarities:
- Both approaches prioritize design and planning before actual coding
- Both create detailed technical specifications before implementation
Key Differences:
- Requirements gathering: Specification-driven development includes an explicit, structured requirements-gathering phase using techniques like EARS, whereas Design-First often assumes requirements are already defined
- Task planning: Provides a more structured approach to task decomposition and implementation planning
- AI optimization: Specifically designed with AI-assisted development workflows in mind
- Requirements standardization: Incorporates specific methodologies like EARS (Easy Approach to Requirements Syntax) for creating clear, testable requirements
The Three-Phase Approach
Phase 1: Requirements Gathering
Objective: Transform vague feature ideas into clear, testable requirements that can be unambiguously understood by all stakeholders.
Key Activities:
- Capturing user stories that express not only what needs to be done but also why, focusing on value to the user or business
- Defining acceptance criteria using the EARS (Easy Approach to Requirements Syntax) methodology, which helps create clear, unambiguous, and testable requirements
- Identifying edge cases and constraints, including non-functional requirements such as performance, security, and scalability
- Validating completeness and feasibility through checking for contradictions, gaps, and technical realism
Benefits:
- Ensures shared understanding among all stakeholders about what will be built
- Provides clear success criteria for subsequent implementation and testing phases
- Reduces the risk of scope creep and functionality drift during development
- Creates a foundation for test development and result validation even before coding begins
Phase 2: Design Documentation
Objective: Create a comprehensive technical implementation plan that defines architectural decisions, system structure, and key interactions.
Key Activities:
- Exploring technical approaches and constraints, including analysis of possible solution options and their comparison against criteria
- Defining system architecture and component interactions, with emphasis on interfaces and responsibility boundaries
- Specifying data models and interfaces, including formal API definitions, data schemas, and communication protocols
- Planning error-handling, testing, and monitoring strategies to ensure system reliability and maintainability
Benefits:
- Identifies potential technical problems and complexities before coding begins, when fixes are cheaper
- Enables more accurate effort and resource estimation through deep problem understanding
- Provides a clear roadmap for implementation, reducing cognitive load on developers
- Documents design decisions and their rationale, which is critical for future maintenance
Phase 3: Task Planning
Objective: Break down the design into executable, sequential implementation steps that can be distributed among developers and tracked throughout the cycle.
Key Activities:
- Transforming design elements into concrete coding tasks with clear inputs and outputs
- Sequencing tasks to ensure incremental progress and enable early validation
- Defining clear objectives and completion criteria for each task to enable objective progress assessment
- Linking to original requirements to ensure traceability and confirmation that all functionality aspects are covered
Benefits:
- Makes large features manageable through decomposition into logical, independent parts
- Enables parallel work by multiple developers with minimal conflicts
- Simplifies progress tracking and early bottleneck identification
- Reduces cognitive load on developers by allowing focus on one task at a time
- Facilitates code review and quality assurance through clear responsibility separation
Lightweight Specifications
Principles of Lightweight Specifications
Lightweight specifications represent an adapted approach to the methodology that preserves its key benefits when working with small features, bug fixes, and rapid iterations. The primary goal is to ensure sufficient planning without excessive bureaucracy, maintaining balance between thorough preparation and rapid implementation.
Key principles:
- Proportional effort: documentation volume corresponds to task complexity
- Minimalism: documentation is limited to what's necessary for understanding and verification
- Flexibility: ability to expand the specification when unexpected complexity emerges
- Practicality: focus on what genuinely helps implementation rather than formal requirements
Specification Complexity Decision Tree
```mermaid
flowchart TD
    A[New work item] --> B{Effort > 1 day?}
    B -->|No| C[Micro-specification]
    B -->|Yes| D{Multiple components?}
    D -->|No| E{New patterns/technologies?}
    D -->|Yes| G[Standard specification]
    E -->|No| F[Rapid specification]
    E -->|Yes| G
    C --> K[Minimal documentation]
    F --> L[Streamlined process]
    G --> M[Full three-phase process]
```
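The decision tree above can also be expressed as a small function, which keeps the policy explicit and easy to apply consistently. The three parameters mirror the tree's three questions; the return labels are illustrative.

```python
def specification_level(effort_days: float,
                        multiple_components: bool,
                        new_patterns: bool) -> str:
    """Mirror of the specification complexity decision tree above."""
    if effort_days <= 1:                         # Effort > 1 day? No
        return "micro"
    if multiple_components or new_patterns:      # either branch leads to G
        return "standard"
    return "rapid"                               # single component, known patterns
```

For example, a two-day, single-component change using familiar patterns gets a rapid specification, while the same change touching multiple components gets the full three-phase process.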
Types of Lightweight Specifications
Micro-Specification
Applied to: Bug fixes, text changes, configuration updates, minor UI changes (less than 1 day effort).
Characteristics:
- Minimal documentation focusing on the essence of changes, often as comments in the task tracking system
- Brief rationale and clear acceptance criteria sufficient for verification
- No formal design or detailed task planning: decisions are made directly during implementation
- Documentation limited to the minimum necessary for understanding and verification, often including only what and why without detailed how
Example: Fix typo in welcome message text
Rapid Specification
Applied to: Small features, API endpoint additions, database schema changes, component modifications (1-3 days effort).
Characteristics:
- Simplified requirements gathering focusing on key user stories and acceptance criteria
- Direct transformation of requirements into implementation tasks without a separate formal design phase; design is integrated into task planning
- Clear acceptance criteria and definition of "done" for each task
- Maintained traceability between requirements and tasks through explicit links
Example: As a user, I want to see my last login time
Dynamic Specification Level Adaptation
Indicators for Elevating the Specification Level
For micro-specifications:
- Implementation takes significantly longer than initially estimated
- Non-obvious dependencies between components are discovered
- Complex edge cases emerge that weren't considered in the initial criteria
- Coordination with other teams or systems becomes necessary
For rapid specifications:
- Complex design questions arise during implementation requiring serious analysis
- Hidden dependencies on other systems or components are discovered
- Significant implications for performance, security, or scalability emerge
- Additional stakeholder alignment is needed due to scope expansion
Adaptation Process
1. Current state assessment: Analyze the reasons for the increased complexity and identify specific areas requiring additional elaboration
2. Identify missing elements: Determine which aspects need additional specification (requirements, design, or task details)
3. Specification enhancement: Add the necessary elements, focusing on the identified problems without completely rewriting existing documentation
4. Change alignment: Discuss the expanded specification with stakeholders and obtain confirmation
5. Implementation continuation: Proceed with coding using the enhanced specification as guidance
This flexible approach ensures balance between necessary structure and operational speed, enabling specification efforts to scale according to actual project needs while preserving the core principles and benefits of specification-driven development.
Steering Documents
Steering documents are project working guidelines.
They contain project-specific standards and conventions that help teams work more cohesively. These documents aim to "guide" developers when working with the project and address two problems:
- Documenting information not related to specific specifications (e.g., GitFlow decisions)
- Documenting information that repeats across specifications (e.g., global testing strategy, technology stack, or code style)
Core principles:
- The list of steering documents is individual for each stack, team, and solution
- The list of steering documents may change during solution evolution
- Different steering documents are relevant at different stages of working with specifications
- Steering documents constitute the project's shared context
- It's recommended to create separate tasks in specifications for keeping steering documents up to date
- It's recommended to reference specific steering documents in specification designs
- It's recommended to create multiple atomic steering documents rather than a few large ones
- You can have steering documents that apply to the entire solution as well as those specific to particular components or modules
Conclusion
Specification-driven development represents a balanced approach that combines the benefits of thorough planning with the flexibility required for modern software development. The methodology doesn't require rigid adherence to formal processes but provides a structure that can be adapted to specific project needs.
By following the three-phase methodology and applying lightweight specifications where appropriate, development teams can achieve an optimal balance between preparation and implementation. This enables them to:
- Create higher-quality software with fewer bugs
- Reduce project risks through early problem detection
- Improve communication among all process participants
- Preserve project knowledge for long-term maintenance
The methodology is especially effective when combined with modern AI-assisted development tools, as the structured approach to requirements, design, and task planning provides the clear context that AI systems need for maximum effectiveness. AI assistants can better understand tasks and propose more accurate solutions when requirements and designs are clearly defined.
The adaptive nature of lightweight specifications makes the methodology universal: it can be applied in various contexts, from minor bug fixes to large projects, ensuring an optimal balance between preparation and implementation. This makes specification-driven development a powerful tool in the modern developer's arsenal, helping teams create better software more efficiently and with lower risk.
Requirements Phase Documentation
The requirements phase forms the foundation of specification-driven development, where vague feature ideas are transformed into clear, testable requirements using the EARS (Easy Approach to Requirements Syntax) format. This phase ensures shared understanding among all stakeholders about what needs to be built before proceeding to design and implementation stages.
Purpose and Objectives
The requirements phase serves to:
- Transform vague ideas into specific, measurable, and testable requirements
- Establish clear acceptance criteria for evaluating feature success
- Create shared understanding among all project participants
- Provide a basis for decision-making during design and implementation phases
- Enable effective testing and validation strategies
Step-by-Step Process
Step 1: Initial Requirements Generation
Objective: Create an initial draft of requirements based on a feature idea
Process:
- Analyze the feature idea: Break down the core concept into user scenarios
- Identify user roles: Determine all participants interacting with the feature
- Formulate user stories: Describe in the format "As a [role], I want [feature] so that [benefit]"
- Translate into EARS requirements: Convert user stories into testable acceptance criteria
Key Principles:
- Start with user experience, not technical implementation
- Focus on observable and measurable system behavior
- Systematically consider edge cases and error scenarios
- Think about the complete user journey, not isolated steps
Step 2: Structuring Requirements in EARS Format
Objective: Formalize requirements in a standardized, testable format
Document Structure:
```markdown
### Requirement 1
**User Story:** As a [role], I want [feature] so that [benefit]
#### Acceptance Criteria
1. WHEN [event] THEN the system SHALL [response]
2. IF [precondition] THEN the system SHALL [response]
3. WHEN [event] AND [condition] THEN the system SHALL [response]

[Additional requirements...]
```
Core EARS Templates
1. Simple Event-Response (most common template)
Used for direct system responses to user actions
Example:
WHEN the user clicks the "Submit" button THEN the system SHALL validate form data
2. Conditional Behavior
Applied when an action depends on the current system state
Example:
IF the user is authenticated THEN the system SHALL display the user dashboard
3. Complex Conditions
Combines multiple conditions using logical operators
Example:
WHEN the user submits the form AND all required fields are filled THEN the system SHALL process the submission
4. Error Handling
Describes system behavior in exceptional situations
Example:
WHEN the user submits invalid data THEN the system SHALL display specific error messages
EARS Application Guidelines
- WHEN: Always start with a triggering event (user action or system event)
- IF: Use to describe preconditions that must be true
- THEN: Clearly define expected system behavior (always use "SHALL")
- AND/OR: Use to combine conditions, but avoid excessive complexity
- SHALL: Use consistently for mandatory requirements (do not mix with "MAY" or "SHOULD")
Formulation Tips
- Avoid technical implementation details ("the system uses Redis")
- Do not use vague terms ("fast," "convenient")
- Each requirement must be independent and testable
- Verify requirement completeness: happy path, edge cases, errors
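The "no vague terms" tip can even be enforced mechanically during requirements review. A minimal sketch, assuming an illustrative (and deliberately non-exhaustive) list of banned words:

```python
# Illustrative lint for vague wording in requirements; the banned-term
# list is an assumption, not a standard vocabulary.
VAGUE_TERMS = {"fast", "convenient", "intuitive", "user-friendly", "quickly"}

def find_vague_terms(requirement: str) -> set[str]:
    """Return any banned vague words found in a requirement sentence."""
    words = {w.strip('.,"').lower() for w in requirement.split()}
    return words & VAGUE_TERMS

assert find_vague_terms("The system must be fast and convenient") == {"fast", "convenient"}
assert find_vague_terms(
    "WHEN the user requests data THEN the system SHALL respond within 2 seconds"
) == set()
```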
Step 3: Requirements Validation
Validation Criteria:
- Each requirement is testable and measurable
- Requirements cover normal, edge, and error scenarios
- User stories provide clear business value
- Acceptance criteria are specific and unambiguous
- Requirements are independent and non-conflicting
- All user roles and their interactions are accounted for
Verification Questions:
- How will we verify this requirement is fulfilled?
- Is the expected behavior clearly defined?
- What assumptions are embedded in this requirement?
- What happens upon failure or in exceptional situations?
- Are all user scenarios covered?
Step 4: Iterative Refinement
Refinement Process:
- Stakeholder review: Gather feedback on completeness and accuracy
- Gap identification: Find missing scenarios or ambiguous wording
- Ambiguity resolution: Eliminate vague or conflicting requirements
- Add missing details: Include edge cases and error handling
- Business value verification: Confirm each requirement serves a specific purpose
Recommendations:
- Implement one change per iteration to track modifications
- Record approval from all stakeholders after changes
- Document rationale for key decisions
- Maintain an appropriate level of detail: specific enough for clarity, but not at implementation level
Final Requirements Checklist
Completeness
- All user roles and scenarios are accounted for
- Normal, edge, and error cases are covered
- All interactions have defined system responses
- Business rules and constraints are explicitly documented
Clarity
- Requirements use precise, unambiguous language
- Technical jargon is either absent or clearly defined
- Wording maintains a user-centric perspective
- Expected behavior is concrete and measurable
Consistency
- EARS format is applied consistently throughout the document
- Terminology is uniform across the entire document
- Requirements do not contradict each other
- Similar scenarios follow a unified template
Testability
- Each requirement can be verified through testing
- Success criteria are observable and quantitatively measurable
- Both input conditions and expected results are specified
- Wording is specific enough to develop test cases
Examples of Well-Formulated Requirements
Example 1: User Registration Feature
User Story: As a new user, I want to create an account so that I can access personalized features.
Acceptance Criteria:
- WHEN the user provides a valid email and password THEN the system SHALL create a new account
- WHEN the user provides an existing email THEN the system SHALL display the error "Email already registered"
- WHEN the user provides an email in an invalid format THEN the system SHALL display the error "Invalid email format"
- WHEN the user provides a password shorter than 8 characters THEN the system SHALL display the error "Password must be at least 8 characters long"
- WHEN account creation is successful THEN the system SHALL send a confirmation email within 30 seconds
- WHEN account creation is successful THEN the system SHALL redirect to the welcome page
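The registration criteria above convert nearly verbatim into executable checks. A sketch assuming a hypothetical `register` function and return shape; the real interface would come out of the design phase:

```python
# Hypothetical registration stub; the return shape and the in-memory
# EXISTING_EMAILS set are illustrative assumptions.
EXISTING_EMAILS = {"taken@example.com"}

def register(email: str, password: str) -> dict:
    if "@" not in email or "." not in email.split("@")[-1]:
        return {"ok": False, "error": "Invalid email format"}
    if email in EXISTING_EMAILS:
        return {"ok": False, "error": "Email already registered"}
    if len(password) < 8:
        return {"ok": False, "error": "Password must be at least 8 characters long"}
    return {"ok": True, "redirect": "/welcome"}

# One assertion per acceptance criterion:
assert register("new@example.com", "secret123")["ok"] is True
assert register("taken@example.com", "secret123")["error"] == "Email already registered"
assert register("bad-email", "secret123")["error"] == "Invalid email format"
assert register("new@example.com", "short")["error"] == "Password must be at least 8 characters long"
```

(The 30-second confirmation email criterion would need an integration test with a clock, which is why EARS encourages measurable timing bounds.)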
Example 2: Data Validation Feature
User Story: As a user, I want my input validated in real time so that I can avoid submitting incorrect information.
Acceptance Criteria:
- WHEN the user enters data into a required field THEN the system SHALL remove the error highlight for that field
- WHEN the user submits a form with empty required fields THEN the system SHALL highlight missing fields in red
- WHEN the user enters data that does not match the required format THEN the system SHALL display format requirements below the input field
- WHEN all fields pass validation THEN the system SHALL enable the submit button
- IF validation fails THEN the system SHALL keep the submit button disabled
- WHEN the user hovers over the tooltip icon THEN the system SHALL display an example of correct format
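The submit-button criteria can likewise be modeled as a small state function, which makes the enable/disable behavior testable without a UI. `form_state`, its field names, and its rules are hypothetical:

```python
# Illustrative form-state model for the submit-button criteria.
REQUIRED = ["email", "age"]  # assumed required fields

def form_state(values: dict) -> dict:
    """Return which required fields are missing and whether submit is enabled."""
    missing = [f for f in REQUIRED if not values.get(f)]
    return {"missing": missing, "submit_enabled": not missing}

state = form_state({"email": "a@b.co", "age": ""})
assert state["missing"] == ["age"]       # field to highlight in red
assert state["submit_enabled"] is False  # IF validation fails, submit stays disabled
```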
Common Mistakes and How to Avoid Them
Mistake 1: Vague Wording
Problem:
"The system must be fast and convenient"
Consequences:
Impossible to verify fulfillment; multiple interpretations
How to Fix:
"WHEN the user requests data THEN the system SHALL display the result within 2 seconds for 95% of requests"
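The rewritten requirement is verifiable precisely because the percentile is computable. A sketch of a p95 check over sampled latencies (the sample values and the simple index-based percentile are illustrative):

```python
# Illustrative p95 check for "within 2 seconds for 95% of requests".
def p95(latencies_s: list[float]) -> float:
    """Return the value below which 95% of samples fall (simple rank method)."""
    ordered = sorted(latencies_s)
    index = max(0, int(len(ordered) * 0.95) - 1)
    return ordered[index]

samples = [0.4, 0.6, 0.8, 1.1, 1.3, 1.5, 1.6, 1.7, 1.9, 4.2]
assert p95(samples) <= 2.0  # requirement met: the slowest 5% may exceed it
```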
Mistake 2: Including Implementation Details
Problem:
"The system must use Redis for data caching"
Consequences:
Limits implementation options; focuses on technology rather than outcome
How to Fix:
"WHEN the user repeatedly requests frequently used data THEN the system SHALL return the result within 500 ms"
Mistake 3: Missing Error Handling
Problem:
Only describing the "happy path" without edge cases
Consequences:
Functionality gaps; unexpected failures during operation
How to Fix:
For each main scenario, add 2β3 error-handling and boundary-condition scenarios
Mistake 4: Untestable Requirements
Problem:
"The interface must be intuitive"
Consequences:
Impossible to confirm requirement fulfillment
How to Fix:
"WHEN a new user first accesses the system THEN the system SHALL provide an onboarding tour enabling completion of core actions in no more than 3 clicks"
Document Template
# Requirements for "[Brief Feature Name]"
**Business Objective:** [Description of the feature's business goal and its value to the product/customer]
**Scope:** [Boundaries of functionality (what is included/excluded)]
**Related Documents:** [Links to technical specifications, user research, etc.]
---
## [Requirement/Feature Name]
**User Story:**
As a [user role], I want [feature description] so that [business value/benefit]
### Acceptance Criteria
*(Select the appropriate EARS template and fill in according to instructions)*
1. **[Simple Event-Response]**
WHEN [specific event/user action] THEN the system SHALL [observable result]
*[Example: WHEN the user clicks the "Save" button THEN the system SHALL save changes to the database]*
2. **[Conditional Behavior]**
IF [precondition/system state] THEN the system SHALL [observable result]
*[Example: IF the cart contains items THEN the system SHALL display the total amount]*
3. **[Complex Condition]**
WHEN [event] AND [additional condition] THEN the system SHALL [observable result]
*[Example: WHEN the user enters a password AND password length < 8 characters THEN the system SHALL display an error]*
4. **[Error Handling]**
WHEN [exceptional situation] THEN the system SHALL [error-handling action]
*[Example: WHEN the server connection is lost THEN the system SHALL display the notification "Check your internet connection"]*
*[Repeat this structure for each independent requirement]*
---
## Requirements Quality Checklist
*(Completed after finalizing the document)*
| Criterion | Completed | Comment |
| ---------------------------------------------------------------------- | --------- | ------- |
| All requirements are testable and measurable | ☐ | |
| Normal, edge, and error scenarios are covered | ☐ | |
| No technical implementation details included | ☐ | |
| No vague wording ("fast," "convenient") | ☐ | |
| All requirements are independent and non-conflicting | ☐ | |
| Input conditions and expected results specified for each requirement | ☐ | |
Design Phase Documentation
The design phase is a critically important stage in software development, transforming approved requirements into a detailed technical implementation plan. This document serves as the official guide for the entire team, providing a single point of reference for architects, developers, testers, and stakeholders.
Key Value of the Design Phase:
- Creates a "bridge" between business requirements and technical implementation
- Identifies and resolves potential issues before coding begins
- Ensures transparency and alignment in decision-making
- Serves as the foundation for effort estimation and work planning
- Reduces risks of misunderstandings and rework during implementation
Purpose and Objectives
Create a technically sound, implementable, and verifiable design that fully aligns with approved requirements and is ready to be broken down into implementation tasks.
Requirements Transformation
- Translate functional requirements into architectural components
- Convert non-functional requirements (performance, security) into technical solutions
Conduct Targeted Research
- Analyze technological options for critical components
- Study best practices and design patterns
- Assess risks and constraints of chosen solutions
Define System Structure
- Identify key modules and their interactions
- Design interfaces between components
- Develop data models and their transformations
Plan for Reliability
- Design error handling and recovery mechanisms
- Define testing strategies across all levels
- Ensure compliance with security requirements
Document Decisions
- Record all architectural decisions with justification
- Establish traceability between requirements and design elements
- Prepare materials for handover to developers
Principles of Research Integration
Defining the Research Scope:
- Focus on critical decisions. Investigate only those aspects that directly impact architecture or involve high uncertainty. For example, choosing between synchronous and asynchronous payment processing requires in-depth analysis, whereas selecting a UI color scheme can rely on existing guidelines.
- Time-boxed efforts. Set clear time limits for research activities.
- Actionable insights. Collect and retain only information that directly influences decision-making.
Documenting Research:
- Contextual linkage. Each finding must explicitly reference a specific requirement or problem.
- Source references. Include direct links to documentation, articles, and code examples.
- Integration into design. Do not store research in isolation. Embed key findings directly into relevant sections of the architectural document.
- Decision rationale. For every decision made, specify:
- Alternatives considered
- Evaluation criteria
- Reasons for selecting the current option
- Potential trade-offs
Step-by-Step Process
Step 1: Requirements Analysis and Research Planning
Objective: Gain deep understanding of requirements, identify areas needing research, and clearly define scope and boundaries.
This step may be skipped if no new technologies or architectural patterns are planned for adoption.
Process:
- Thorough requirements review
- Identify research areas
- Plan research activities. For each area, define the research objective and success criteria
- Establish research boundaries
What to Document:
- Project context and alignment with business goals
- List of research topics with justification for prioritization
- Expected outcomes and their architectural impact
- Research boundaries and completion criteria
Step 2: Conducting Research and Building Context
Objective: Gather sufficient information to make informed architectural decisions while avoiding excessive analysis.
This step may be skipped if no new technologies or architectural patterns are planned for adoption.
Process:
- **Information gathering**
  - Review official documentation, technical blogs, and case studies
  - Conduct experimental tests (proof-of-concept) for critical features
  - Consult internal or external experts
- **Option evaluation**: assess each option for:
  - Technical characteristics
  - Required effort
  - Potential risks
  - Alignment with non-functional requirements
- **Document research outcomes**
  - Create a concise summary focused on decisions
  - Provide source references for verification
  - Note uncertainties and need for further research
- **Make preliminary decisions**
  - Formulate recommendations based on research
  - Specify rationale and potential trade-offs
What to Document:
- Key findings linked to specific requirements
- Comparison of alternatives with criteria-based evaluation
- Justification for selected technologies and patterns
- Source references and verification materials
- Uncertainties and recommendations for resolution
Step 3: Creating System Architecture
Objective: Define a high-level solution structure that fully satisfies requirements and is ready for detailed elaboration.
Architecture Components:
- **System Overview**
  - Component diagram (C4 model recommended)
  - Brief description of primary data flows
  - Integration points with existing infrastructure
- **Component Architecture**
  - List of core modules with purpose descriptions
  - Responsibility boundaries for each component
  - Component interactions (synchronous/asynchronous)
- **Data Flow**
  - Description of key entity lifecycles
  - Data storage locations at each stage
  - Data transformations between components
- **Integration Points**
  - External systems and APIs with version specifications
  - Communication protocols and data formats
  - Strategies for handling external system unavailability
- **Technology Stack**
  - Explicit justification for each technology choice
  - Versions of tools used
  - Migration plan for legacy components
What to Document:
- Architecture diagram with explanations
- Justification for chosen architectural style (microservices, monolith, etc.)
- How the architecture satisfies functional and non-functional requirements
- Risks of architectural decisions and mitigation strategies
Important: Describe only components necessary to fulfill current requirements. Avoid designing "for the future" without explicit requirements.
Step 4: Defining Components and Interfaces
Objective: Detail the internal system structure and component interaction mechanisms to ensure readiness for implementation.
Component Design Elements:
- Component responsibilities. Clear description of what each component does
- Interface definitions
- Dependency relationships
- Configuration and setup
What to Document:
- Separate subsection for each component with complete description
- Example requests and responses for all interfaces
- Sequence diagrams for key scenarios
- Component-level error handling rules
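An interface definition with an example request/response pair can be captured as a typed contract plus a stub, which makes the design reviewable and even executable before implementation starts. All names here (`OrderService`, `create_order`, the dataclasses) are hypothetical placeholders, not a prescribed API:

```python
# Illustrative component interface: a typed contract plus a stub that
# makes example requests/responses executable in design reviews.
from dataclasses import dataclass
from typing import Protocol

@dataclass
class CreateOrderRequest:
    customer_id: str
    item_ids: list[str]

@dataclass
class CreateOrderResponse:
    order_id: str
    status: str  # "accepted" | "rejected"

class OrderService(Protocol):
    def create_order(self, request: CreateOrderRequest) -> CreateOrderResponse: ...

class InMemoryOrderService:
    """Stub implementation used only to exercise the contract."""
    def create_order(self, request: CreateOrderRequest) -> CreateOrderResponse:
        if not request.item_ids:
            return CreateOrderResponse(order_id="", status="rejected")
        return CreateOrderResponse(order_id="ord-1", status="accepted")

svc = InMemoryOrderService()
assert svc.create_order(CreateOrderRequest("c1", ["i1"])).status == "accepted"
```

A stub like this doubles as the "example requests and responses" artifact the step calls for.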
Step 5: Data Model Design
Objective: Define data structures and processing rules that ensure integrity and compliance with business rules.
Data Model Elements:
- Entity definitions: data and responsibilities
- Entity relationships
- Validation rules and business logic for entities
- Storage strategies
What to Document:
- ERD diagram with explanations
- Complete field descriptions for each entity
- Example data for key scenarios
- Data migration strategies when schema changes occur
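An entity definition with its validation rules and business logic attached can be sketched directly in code; the `Account` entity and its rules below are illustrative only:

```python
# Illustrative entity with validation rules enforced at construction time.
from dataclasses import dataclass

@dataclass
class Account:
    email: str
    balance_cents: int = 0

    def __post_init__(self):
        # Business rules live with the entity, not scattered across callers.
        if "@" not in self.email:
            raise ValueError("email must contain '@'")
        if self.balance_cents < 0:
            raise ValueError("balance cannot be negative")

acc = Account(email="user@example.com", balance_cents=100)
assert acc.balance_cents == 100
```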
Step 6: Planning Error Handling and Edge Cases
Objective: Ensure system reliability during failures by defining clear strategies for handling all possible scenarios.
Error Handling Design:
- Categorize errors. System errors, data validation errors, etc.
- Error response strategies
- User experience in error scenarios
- Recovery mechanisms
- Monitoring mechanisms
What to Document:
- Error matrix for each key operation with handling strategies
- Error handling examples for critical scenarios
- Log and metric formats for error tracking
- Incident response procedures for critical failures
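An error matrix for a key operation can be kept as plain data next to the design, making the handling strategy per category explicit and checkable. The categories, strategies, and messages here are illustrative assumptions:

```python
# Illustrative error matrix: category -> handling strategy.
ERROR_MATRIX = {
    "validation_error": {"strategy": "reject", "retryable": False,
                         "user_message": "Please correct the highlighted fields"},
    "timeout":          {"strategy": "retry", "retryable": True, "max_retries": 3},
    "dependency_down":  {"strategy": "fallback", "retryable": False,
                         "user_message": "Service temporarily unavailable"},
}

def handle(error_category: str) -> str:
    """Return the designed strategy; unknown categories go to incident response."""
    entry = ERROR_MATRIX.get(error_category)
    if entry is None:
        return "escalate"
    return entry["strategy"]

assert handle("timeout") == "retry"
assert handle("disk_on_fire") == "escalate"
```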
Step 7: Defining Testing Strategy
Objective: Ensure implementation quality through a well-thought-out testing strategy covering all system aspects.
Testing Strategy Elements:
- Define testing levels
- Test coverage: coverage criteria and priorities
- Test scenarios
- Testing tools
- Quality checkpoints
What to Document:
- Required test level and type for each component
- Quality metrics and target values
- Example test scenarios for key features
- Integration of testing strategy with development process
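The required test level per component can likewise be recorded as data, so gaps in the strategy are detectable mechanically (for example, in CI). Component names, levels, and thresholds below are illustrative:

```python
# Illustrative per-component test plan as data.
TEST_PLAN = {
    "validation":    {"levels": ["unit"], "min_coverage": 0.9},
    "order_api":     {"levels": ["unit", "integration"], "min_coverage": 0.8},
    "checkout_flow": {"levels": ["e2e"], "min_coverage": None},
}

def components_missing_level(level: str) -> list[str]:
    """List components whose plan does not include the given test level."""
    return [name for name, plan in TEST_PLAN.items() if level not in plan["levels"]]

assert components_missing_level("integration") == ["validation", "checkout_flow"]
```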
Step 8: Final Design Quality Review
Objective: Ensure the design is complete, understandable, implementation-ready, and meets all quality criteria.
Quality Checklist:
| Category | Verification Criterion | Verification Method |
| --- | --- | --- |
| Completeness | All requirements addressed in design | Requirements-to-design mapping |
| | Core system components defined | Diagram and description review |
| | Data models cover all required entities | ERD and description analysis |
| | Error handling covers expected failure modes | Error matrix verification |
| Clarity | Design decisions clearly explained | Document review by new developer |
| | Component responsibilities well-defined | Component description review |
| | Component interfaces specified | API contract analysis |
| | Technical choices include justification | Research section verification |
| Feasibility | Design technically achievable with chosen technologies | Expert consultation |
| | Performance requirements can be met | Optimization strategy analysis |
| | Security requirements addressed | Security measure verification |
| | Implementation complexity aligns with project estimates | Developer assessment |
| Traceability | Design elements linked to specific requirements | Traceability matrix |
| | All requirements covered by design components | Completeness verification |
| | Design decisions support requirement fulfillment | Compliance analysis |
| | Testing strategy validates requirement satisfaction | Test scenario verification |
Common Design Mistakes
Mistake 1: Over-Engineering
Problem: Designing for requirements that don't exist or adding functionality "for the future."
Symptoms:
- Design includes components with no direct link to current requirements
- Complex abstractions for scenarios that may never materialize
- Extended design timeline without clear benefit
Solution:
- Focus strictly on current requirements
- Apply the YAGNI principle (You Aren't Gonna Need It)
- Design should be extensible but not implement unused features
- Regularly ask: "Which specific requirement justifies this component?"
Mistake 2: Poorly Specified Interfaces
Problem: Vague component boundaries and interactions leading to implementation misunderstandings.
Symptoms:
- Unclear component responsibilities
- Missing clear API contracts
- Numerous clarification questions during implementation
Solution:
- Clearly define inputs, outputs, and errors for each interface
- Use formal specifications (OpenAPI, Protobuf)
- Include example requests and responses
- Conduct interface reviews with developers before implementation
Mistake 3: Ignoring Non-Functional Requirements
Problem: Focusing only on functional behavior while neglecting performance, security, and other non-functional aspects.
Symptoms:
- No mention of response time, load capacity, or security
- Missing scaling or failover strategies
- Undefined quality metrics
Solution:
- Explicitly document all non-functional requirements in a dedicated section
- Specify measurable metrics for each (e.g., "Response time < 500ms at 1000 RPS")
- Include design elements that ensure NFR compliance
- Verify NFR compliance during final review
Mistake 4: Technology-Driven Design
Problem: Selecting technologies before fully understanding requirements, leading to suboptimal solutions.
Symptoms:
- Technologies mentioned before requirements are defined
- Technology comparisons without specific task alignment
- Unnecessary complexity from using "trendy" technologies
Solution:
- Let requirements drive technology choices, not vice versa
- For each technology, specify the exact requirement it satisfies
- Consider simple solutions before complex ones
- Use a technology evaluation matrix with requirement-based criteria
Mistake 5: Inadequate Error Handling Design
Problem: Designing only for the "happy path" while ignoring failure scenarios.
Symptoms:
- Missing error handling section
- No recovery strategies for failures
- Undefined user error messages
Solution:
- Explicitly design error handling alongside main workflows
- Define possible failure scenarios for each operation
- Include retry mechanisms, fallback strategies, and monitoring in design
- Ensure user experience is considered for all scenarios
Resolving Design Issues
Issue: Design Becomes Overly Complex
Symptoms:
- Design document exceeds 2500 lines without clear necessity
- Too many components and interactions
- Difficulty explaining architecture to new team members
Solution:
- Return to requirements and remove elements without direct linkage
- Consider phased implementation (MVP + subsequent iterations)
- Refactor architecture by consolidating related components
Issue: Requirements Don't Map to Design
Symptoms:
- Difficulty tracing requirements to design elements
- Some requirements missing from design
- No clear connection between business goals and technical decisions
Solution:
- Create a requirements-to-design traceability matrix
- Conduct step-by-step verification of each requirement
- Add requirement references to relevant design sections
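A requirements-to-design traceability matrix is straightforward to keep as data and check mechanically. A sketch with illustrative requirement and component IDs:

```python
# Illustrative traceability check: which requirements lack a design element?
requirements = ["REQ-1", "REQ-2", "REQ-3"]
design_links = {
    "AuthComponent": ["REQ-1"],
    "SessionStore": ["REQ-1", "REQ-2"],
}

covered = {req for reqs in design_links.values() for req in reqs}
uncovered = [r for r in requirements if r not in covered]
assert uncovered == ["REQ-3"]  # REQ-3 has no design element yet
```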
Issue: Technology Choices Are Unclear
Symptoms:
- Multiple viable options without clear selection criteria
- Missing justification for chosen technologies
Solution:
- Define decision criteria based on requirements and constraints
- Create a comparison matrix with key criteria evaluation
- Conduct proof-of-concept for critical decisions
- Document selection rationale including trade-offs
Issue: Design Lacks Implementation Details
Symptoms:
- Many questions during implementation
- Ambiguity in interfaces and contracts
Solution:
- Add specific example requests and responses
- Clarify data formats and error codes
- Include sequence diagrams for key scenarios
Conclusion
A design document is not merely a formal artifact but a living tool that ensures successful project implementation. High-quality design:
- Reduces errors and rework
- Accelerates development through clear guidance
- Simplifies knowledge transfer among team members
- Serves as the foundation for implementation quality assessment
Key Principles of Effective Design:
- Minimal sufficiency: design exactly what's needed for implementation
- Practical orientation: focus on solutions ready for implementation
- Decision transparency: every decision must have clear justification
- Living document: regularly update design as new information emerges
Document Template
# Design "[Brief Feature Name]"
[Brief description of business objectives, alignment with corporate strategy, key stakeholders]
[Clear system boundaries definition: what's included/excluded in the solution]
---
## System Architecture
[General description of feature workflow, component listing, their relationships, and data flows between them]
---
## Components and Interfaces
**For each key component, create a subsection:**
### [Component Name]
[What the component does]
[Relationships with other components]
[Link to source requirements]
**Non-Functional Requirements**
[Non-functional requirements for the component]
- Performance: [Metrics + strategies]
- Security: [Protection mechanisms]
- Reliability: [Failover strategies]
**Error Handling Strategy**
[Error handling strategy]
**Testing Strategy**
* [Test case]
* Test case description
* [Another test case]
* Test case description
---
## System Entities
### [Entity Name]
[Entity description]
[Link to source requirements]
**Entity Methods**
* [Entity method]
* Method description and behavior
* [Another entity method]
* Method description and behavior
**Entity Data**
* [Entity field]
* Field description and behavior
* [Another entity field]
* Field description and behavior
**Testing Strategy**
* [Test case]
* Test case description
* [Another test case]
* Test case description
---
## Requirements Quality Checklist
*(Completed after document finalization)*
| Criterion | Completed | Comment |
| -------------------------------------------------------------------- | --------- | ------- |
| All requirements have unambiguous representation in design | ☐ | |
| Non-functional requirements translated into measurable technical solutions | ☐ | |
| Error handling designed for all key scenarios | ☐ | |
| Data model covers all business entities and rules | ☐ | |
| Testing strategy defined with levels and quality metrics | ☐ | |
| Design follows minimal sufficiency principle (YAGNI) | ☐ | |
| Traceability system exists from requirements → design elements | ☐ | |
Task Phase Documentation
The task phase is the final phase of the specification-driven development process, transforming an approved design into a structured implementation plan composed of discrete, executable development tasks. This phase serves as a bridge between planning and execution, breaking down complex system designs into manageable steps that can be incrementally carried out by development teams or AI agents.
As the third phase in the Requirements → Design → Tasks workflow, the task phase ensures that all meticulous planning and design efforts translate into systematic, trackable implementation progress.
Purpose and Objectives
The task phase serves to:
- Transform design components into concrete development activities
- Sequence tasks for optimal development flow and early validation
- Create clear, actionable prompts for implementation
- Establish dependencies and build order among tasks
- Ensure incremental progress with testable milestones
- Provide a roadmap for systematic feature development
Step-by-Step Process
Step 1: Design Analysis and Task Identification
Objective: Break down the design into implementable components
Task List Formation Principles:
- Review Design Components: Identify all system components that need to be built or modified
- Map to Code Artifacts: Determine which files, classes, and functions must be created or altered
- Account for Testing Requirements: Plan test creation alongside implementation
- Sequence for Early Validation: Order tasks to enable rapid validation of core functionality
- Link to Requirements: Reference specific requirements being implemented, ensuring traceability from task to user value
Task Creation Principles:
- Focus on concrete activities (writing, modifying, testing code)
- Each task must produce working, testable code
- Tasks must build incrementally upon previous work
Step 2: Task Structuring and Hierarchy
Objective: Decompose tasks into subtasks
Task Organization Principles:
- Maximum Two Levels: Use only top-level tasks and subtasks (avoid deeper nesting)
- Logical Grouping: Group related tasks under meaningful categories
- Sequential Dependencies: Order tasks so each builds upon prior work
- Testable Increments: Each task must result in testable functionality
Task Execution Sequencing Principles:
- Core First: Build core functionality before optional features
- Risk First: Address uncertain or complex tasks early
- Value First: Implement high-value features that can be quickly tested
- Dependency-Driven: Respect technical dependencies between components
- Foundation First: Implement core interfaces and data models before dependent components
- Bottom-Up Approach: Develop low-level utilities before high-level functions
- Test-Driven Sequencing: Write tests alongside or before implementation
- Integration Points: Plan component integration as components are built
Task Hierarchy Template:
## Task
[Task details, links to requirements and design]
- [ ] 1.1 [Implementation subtask]
  - [Subtask details, links to requirements and design]
- [ ] 1.2 [Next specific task]
  - [Subtask details, links to requirements and design]
## Next Task
[Task details, links to requirements and design]
- [ ] 2.1 [Implementation subtask]
  - [Subtask details, links to requirements and design]
Step 3: Task Definition and Specification
Objective: Enrich subtask details with the following information:
- Clear Objective: Specify exactly which code needs to be written or modified
- Implementation Details: Identify specific files, components, or functions to create
- Requirements Traceability: Reference specific requirements being implemented
- Design Traceability: Reference the design being implemented
- Acceptance Criteria: Define how to verify task completion
- Testing Expectations: Specify which tests must be written or updated
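The fields above can be captured in a simple task record so that traceability is machine-checkable. The structure and example values are illustrative, not a prescribed format:

```python
# Illustrative task record mirroring the subtask fields listed above.
from dataclasses import dataclass, field

@dataclass
class Task:
    task_id: str
    objective: str
    files: list[str]
    requirement_refs: list[str]
    design_refs: list[str]
    acceptance: str
    tests: list[str] = field(default_factory=list)

    def is_traceable(self) -> bool:
        """A task is traceable if it links to both requirements and design."""
        return bool(self.requirement_refs) and bool(self.design_refs)

t = Task(
    task_id="1.1",
    objective="Implement email format validation",
    files=["src/validation.py"],
    requirement_refs=["REQ-2"],
    design_refs=["design.md#validation"],
    acceptance="Invalid emails are rejected with a specific message",
    tests=["tests/test_validation.py"],
)
assert t.is_traceable()
```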
Step 4: Validation and Refinement
Task Quality Criteria:
- Actionable: Can be executed without requiring further clarification
- Specific: Clearly state which files, functions, or components to create
- Testable: Produce code that can be tested and validated
- Incremental: Build upon previous tasks without large complexity jumps
- Complete: Cover all aspects of the design requiring implementation
Validation Questions:
- Can a developer begin implementation based solely on this task description?
- Does this task produce working, testable code?
- Are the requirements being implemented clearly identified?
- Does this task logically build upon previous tasks?
- Is the task scope appropriate (not too large, not too small)?
Quality Checklist
Before finalizing the task list, verify:
Completeness:
- All design components are covered by implementation tasks
- All requirements are addressed by one or more tasks
- Testing tasks are included for all core functionality
- Integration tasks connect all components
Clarity:
- Each task has a clear, specific objective
- Task descriptions specify which files/components to create
- Requirement references are included for every task
- Completion criteria are explicit or clearly implied
Sequencing:
- Tasks are ordered to respect dependencies
- Early tasks establish foundations for subsequent work
- Core functionality is implemented before optional features
- Integration tasks follow component implementation
Implementability:
- Each task is appropriately sized for implementation
- No tasks require external dependencies or manual processes
- Task complexity increases gradually
Addressing Task Planning Issues
Issue: Tasks Are Too Vague
Symptoms: Developers cannot start implementation from task descriptions
Solution: Add more specific implementation details, including file/component names
Issue: Task Dependencies Are Unclear
Symptoms: Tasks cannot be completed due to missing prerequisites
Solution: Review task sequence and add missing foundational tasks
Issue: Task-to-Requirement Linkage Is Unclear
Symptoms: Difficulty tracing tasks back to user value
Solution: Add requirement references and validate coverage
Document Template
# Tasks for "[Brief Feature Name]"
## [Task Name]
[Task details, links to requirements and design]
- [ ] 1.1 [Implementation subtask]
  - [Subtask details, links to requirements and design]
- [ ] 1.2 [Next specific task]
  - [Subtask details, links to requirements and design]
## [Next Task Name]
[Task details, links to requirements and design]
- [ ] 2.1 [Implementation subtask]
  - [Subtask details, links to requirements and design]
## Task List Quality Checklist
*(Completed before finalizing the task list)*
| Criterion | Completed | Comment |
| ------------------------------------------------------------------------- | --------- | ------- |
| All design components are covered by implementation tasks | ☐ | |
| All requirements are addressed by one or more tasks | ☐ | |
| Testing tasks are included for all core functionality | ☐ | |
| Integration tasks connect all components | ☐ | |
| Each task has a clear, specific objective | ☐ | |
| Task descriptions specify which files/components to create | ☐ | |
| Requirement references are included for every task | ☐ | |
| Completion criteria are defined for each task | ☐ | |
| Tasks are sequenced to respect dependencies | ☐ | |
| Early tasks establish foundations for subsequent work | ☐ | |
| Core functionality is implemented before optional features | ☐ | |
| Integration tasks follow component implementation | ☐ | |
| Each task has an appropriate size for implementation | ☐ | |
| No tasks require external dependencies or manual processes | ☐ | |
| Task complexity increases gradually | ☐ | |
🎯 Steering Documents
Objective: Ensure consistency, quality, and efficiency in development by creating and maintaining a set of living, atomic documents. These documents serve as the project's shared context and guide the team in implementing any work item, from micro-specifications to large features.
Core Principles
Steering documents are not static artifacts but dynamic knowledge management and quality assurance tools. Their creation and maintenance are governed by the following key principles:
- Atomicity and Focus: Each document must focus on a single, specific topic (e.g., `git_workflow`, `react_component_structure`, `postgres_naming_conventions`). Avoid creating monolithic, all-encompassing manuals.
- Living Documentation: Documents must be regularly updated as the project, technologies, and best practices evolve. Outdated documentation is worse than no documentation at all.
- Practical Orientation: Content must be directly applicable by developers in their day-to-day work. Focus on the "how" and "why," not abstract theories.
- Contextuality: Documents may be global (applying to the entire solution) or local (specific to a particular module, microservice, or component). Clearly indicate the scope of applicability.
- Integration into Workflow: Steering documents are an integral part of the specification-driven development process. They must be explicitly referenced in design specifications and considered during task planning.
- Ownership of Currency: It is recommended to create dedicated tasks within specifications for maintaining the currency of steering documents, especially after significant changes to the codebase or infrastructure.
Primary Categories of Steering Documents
To ensure comprehensive coverage of the development lifecycle, steering documents are grouped into the following categories:
1. Development Environment and Infrastructure Standards
- Objective: Ensure consistency and reproducibility of local and CI/CD environments.
- Example Documents:
  - `development_environment_setup`: Local setup procedures, dependencies, IDE configuration.
  - `environment_variables_management`: Rules for naming, storing, and managing environment variables.
  - `build_and_deployment_processes`: Build scripts, CI/CD pipeline configurations, deployment procedures across environments.
  - `infrastructure_as_code_standards`: Standards for Terraform, CloudFormation, etc.
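An `environment_variables_management` document becomes far more useful when its naming rule is executable. A minimal Python sketch of such a check; the `APP_` prefix and UPPER_SNAKE_CASE rule are illustrative assumptions, not requirements of this methodology:

```python
import re

# Assumed rule from a hypothetical environment_variables_management
# document: variable names are UPPER_SNAKE_CASE with an APP_ prefix.
VALID_NAME = re.compile(r"^APP_[A-Z][A-Z0-9]*(_[A-Z0-9]+)*$")

def invalid_env_names(names: list[str]) -> list[str]:
    """Return the variable names that violate the naming convention."""
    return [n for n in names if not VALID_NAME.match(n)]

print(invalid_env_names(["APP_DB_URL", "app_debug", "APP_Cache_TTL"]))
# -> ['app_debug', 'APP_Cache_TTL']
```

Run against the keys of a `.env.example` file in CI, a check like this keeps the documented convention and the actual environment from drifting apart.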
2. Code Quality Standards
- Objective: Maintain high code quality, readability, and maintainability.
- Example Documents:
  - `language_style_guide_[language]`: Style guides (e.g., `python_style_guide`, `typescript_style_guide`).
  - `naming_conventions`: Conventions for naming variables, functions, classes, files, and database objects.
  - `code_organization_patterns`: Patterns for structuring projects, modules, and components.
  - `code_documentation_requirements`: Requirements for comments, docstrings, and documentation generation.
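One rule a `code_documentation_requirements` document might state is "every public function must have a docstring." The rule itself is an illustrative assumption, but here is a minimal Python sketch of how it can be verified with the standard library's `ast` module:

```python
import ast

def undocumented_functions(source: str) -> list[str]:
    """Return names of public functions defined without a docstring."""
    tree = ast.parse(source)
    return [
        node.name
        for node in ast.walk(tree)
        # Leading-underscore names are treated as private by convention.
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef))
        and not node.name.startswith("_")
        and ast.get_docstring(node) is None
    ]

sample = '''
def documented():
    """Has a docstring."""

def undocumented():
    pass

def _private():
    pass
'''
print(undocumented_functions(sample))  # -> ['undocumented']
```

Because it works on the syntax tree rather than on text, the check is immune to formatting differences and can be dropped into a pre-commit hook.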
3. Git Workflow Standards
- Objective: Ensure a predictable and efficient collaborative source code management process.
- Example Documents:
  - `git_branching_strategy`: Branch naming conventions (e.g., `feature/`, `hotfix/`, `release/`).
  - `commit_message_format`: Commit message format (e.g., Conventional Commits).
  - `pull_request_process`: Procedures for creating, reviewing, and merging PRs (description requirements, checklists).
  - `code_review_guidelines`: Guidelines for reviewers and authors (what to check, how to provide feedback).
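A `commit_message_format` document based on Conventional Commits can likewise be enforced mechanically. A minimal Python sketch of a header check; the general `type(scope)!: subject` shape follows the Conventional Commits specification, but the allowed type list here is a project-specific assumption:

```python
import re

# Assumed project-specific set of allowed commit types.
TYPES = "feat|fix|docs|style|refactor|test|chore"
# Header shape: type, optional (scope), optional ! for breaking changes,
# then ": " and a non-empty subject.
HEADER = re.compile(rf"^({TYPES})(\([a-z0-9-]+\))?(!)?: .+")

def is_valid_commit_header(header: str) -> bool:
    """Check the first line of a commit message against the convention."""
    return bool(HEADER.match(header))

print(is_valid_commit_header("feat(auth): add session refresh"))  # True
print(is_valid_commit_header("Fixed the login bug"))              # False
```

Wired into a `commit-msg` Git hook or a CI step, this turns the written standard into an automatic gate rather than a review-time argument.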
4. Technology- and Architecture-Specific Standards
- Objective: Establish unified design and implementation rules for the project's key technologies.
- Example Documents:
  - `frontend_architecture_patterns`: UI development patterns (e.g., React/Vue component structure, state management).
  - `backend_api_design`: Standards for designing REST/gRPC APIs (versioning, response structure, error codes, OpenAPI/Swagger documentation).
  - `database_design_and_migration`: Rules for database schema design, migration conventions, and indexing strategies.
  - `testing_strategy_[level]`: Global testing strategies (e.g., `unit_testing_strategy`, `e2e_testing_strategy`; framework selection, coverage requirements).
5. Security, Performance, and Observability
- Objective: Lay the foundation for building reliable, secure, and easily diagnosable systems.
- Example Documents:
  - `security_practices`: Applied security practices (input validation, session management, secret handling, dependency scanning).
  - `performance_optimization_guidelines`: Optimization guidelines (caching, asynchronous processing, profiling).
  - `monitoring_and_alerting_standards`: Standards for logging, metrics, tracing, and alert configuration.
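To make the logging portion of a `monitoring_and_alerting_standards` document concrete, a common rule is "emit logs as single-line JSON with a fixed field set." A minimal Python sketch using the standard `logging` module; the particular field names are an assumption for illustration:

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Render each log record as one line of JSON with fixed fields."""

    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        })

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("payments")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("charge accepted")
# Emits: {"level": "INFO", "logger": "payments", "message": "charge accepted"}
```

Standardizing the formatter in one shared module, rather than per service, is what keeps log aggregation queries stable across the system.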
6. Business Context and Architecture
- Objective: Ensure shared understanding of the domain and the system's high-level structure.
- Example Documents (core documents, always create):
  - `tech_stack`: Explicit justification for each technology in the stack, including versions.
  - `domain_glossary`: Glossary of key business terms and concepts.
  - `project_dataflow`: High-level description of data flows within the system.
  - `context_diagram_c4`: C4 context diagram (Level 1; likec4 syntax recommended).
  - `container_diagram_c4`: C4 container diagram (Level 2; likec4 syntax recommended).
Step-by-Step Creation and Maintenance Process
Step 1: Needs Assessment and Planning
- Objective: Determine which steering document is needed and plan its creation.
- Process:
  - Project Analysis: Review the current codebase to identify inconsistencies or areas where the absence of standards leads to errors.
  - Gap Identification: Determine which standard category is missing or requires updating.
  - Prioritization: Assess the potential impact of the document. Priority order: security > code quality > workflow > performance.
  - Template and Scope Selection: Decide whether the document will be global or component-specific. Choose an appropriate level of detail (lightweight for small teams, comprehensive for enterprise solutions).
  - Task Creation: Record the need to create or update the document as a distinct work item in the backlog.
Step 2: Content Development
- Objective: Produce a practical, clear, and useful document.
- Process:
  - Research Best Practices: Use industry standards (e.g., Google Style Guides, 12-factor app) as a foundation.
  - Project Contextualization: Adapt general practices to the project's specific technologies, constraints, and goals.
  - Include Examples: Always provide concrete, working code samples, configuration files, or diagrams. "Show, don't tell."
  - Link Integration: Connect the new document to other steering documents and relevant sections of design specifications.
  - Formalization: Use clear, unambiguous language. Avoid terms like "should" or "recommended" where "must" or "must not" can be used instead.
Step 3: Validation and Approval
- Objective: Ensure the document is useful, actionable, and error-free.
- Quality Criteria:
  - Actionability: Can a developer immediately apply these rules in practice?
  - Clarity: Is the document understandable to a new team member?
  - Consistency: Is it free of contradictions with other steering documents and approved design specifications?
  - Completeness: Does it cover all key aspects of the stated topic?
  - Relevance: Is it based on the current state of the project and technologies?
- Process: Conduct a document review with key developers and architects. Obtain formal approval from the technical lead.
Step 4: Maintenance and Evolution
- Objective: Keep the document current throughout the project's lifecycle.
- Process:
  - Regular Review: Periodically verify the currency of all steering documents.
  - Update upon Changes: Any significant change to architecture, technology stack, or processes must be accompanied by updates to the relevant steering documents. This may be handled as a separate task within the corresponding specification.
  - Remove Obsolete Content: Do not hesitate to delete documents or sections that are no longer relevant. Maintain "minimal sufficiency."
Steering Documents Quality Checklist
*(Completed after document creation or update)*
| Criterion | Done | Comment |
| --- | --- | --- |
| Document focuses on a single, specific topic (atomic) | ✅ | |
| Content is practical and directly applicable by developers | ✅ | |
| Includes concrete, working examples | ✅ | |
| Provides justification for key rules and decisions | ✅ | |
| Contains no confidential data or secrets | ✅ | |
| No conflicts with other steering documents | ✅ | |
| Language is clear, unambiguous, and uses "must"/"must not" | ✅ | |
| Includes links to related steering documents and specifications | ✅ | |
| Specifies scope (global/component) and owners | ✅ | |
| Plans for regular review and updates | ✅ | |