Documentation Index
Fetch the complete documentation index at: https://docs.dacard.ai/llms.txt
Use this file to discover all available pages before exploring further.
Development Lifecycle reads how you ship: 6 stages, 36 tasks, 8 operations-stack categories, and 3 cross-cutting reads (token economics, role fluidity, cognitive debt). Where Team Operations measures capability, Lifecycle measures the work itself.
Development Lifecycle framework
This framework maps the modern product development lifecycle. The build process for capability-rich products has its own stages, concerns, and operational requirements that don’t exist in conventional software development.

Six lifecycle stages
Specify and constrain
Define the problem with the right constraints. What can the system actually solve here? What are the guardrails? What does “good enough” look like when outputs are probabilistic?
Build system of context
Assemble the knowledge layer the system needs to perform. Data pipelines, embeddings, retrieval, prompt engineering, context window discipline.
Orchestrate and generate
Wire up models, agents, and pipelines. The system takes shape here: model selection, chaining, tool use, orchestration architecture.
Validate, eval, and craft
Test quality at the speed of inference. Traditional QA doesn’t work for probabilistic outputs. This stage runs on eval frameworks, human review loops, and quality benchmarks.
Ship and manage economics
Deploy and manage inference cost. Variable cost structures (tokens, GPU time) need active management and optimization.
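Because cost scales with tokens rather than approaching zero per request, it helps to estimate it explicitly. A minimal sketch, using hypothetical per-token prices (the constants below are illustrative placeholders, not real model pricing):

```python
# Estimate per-request inference cost from token counts.
# Prices are illustrative placeholders, not actual model pricing.
PRICE_PER_1K_INPUT = 0.0005   # USD per 1K input tokens (hypothetical)
PRICE_PER_1K_OUTPUT = 0.0015  # USD per 1K output tokens (hypothetical)

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of a single request."""
    return (
        (input_tokens / 1000) * PRICE_PER_1K_INPUT
        + (output_tokens / 1000) * PRICE_PER_1K_OUTPUT
    )

# A 2,000-token prompt with a 500-token completion:
cost = request_cost(2000, 500)

# The same request served one million times per month:
monthly = cost * 1_000_000
```

Even a fraction of a cent per request compounds quickly at volume, which is why the Economics stage treats cost as something to manage actively rather than observe after the fact.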
36 lifecycle tasks
Each stage carries 6 specific tasks that represent the work. The lifecycle report shows which tasks your team has done, partially addressed, or not yet started.

Operations stack (8 categories)
We also read your team’s operational tooling across eight categories.

| Category | What it covers |
|---|---|
| Context and knowledge | RAG, embeddings, knowledge graphs, context management |
| Model and inference | Model selection, fine-tuning, inference optimization, model registry |
| Orchestration | Agent frameworks, workflow engines, tool use, chaining |
| Eval and quality | Evaluation frameworks, benchmarks, human review, regression testing |
| Deployment | CI/CD for models, feature flags, rollback, A/B testing |
| Economics | Cost tracking, token budgets, usage metering, margin analysis |
| Observability | Logging, tracing, drift detection, performance monitoring |
| Feedback | User feedback collection, RLHF pipelines, data labeling, model retraining |
Three cross-cutting reads
These forces span all six stages and shape every decision.

Token economics
The cost of every interaction. Where traditional software has marginal cost approaching zero, capability-rich products have real per-request cost. Managing token economics is a continuous discipline, not a one-time optimization.

Role fluidity
The shift from specialist to generalist. Modern teams blur traditional role boundaries. Engineers write prompts. Designers evaluate model outputs. PMs manage token budgets. The lifecycle read tracks how well your team has adapted to that reality.

Cognitive debt
The capability equivalent of technical debt. When you ship without proper evaluation, monitoring, or feedback loops, you accumulate cognitive debt: models that drift, prompts that break, and outputs that degrade quietly.

Using the lifecycle assessment
The lifecycle report shows your team’s progress through each stage.
- Done stages are where your team has mature practice.
- Partial stages have some practice but gaps remain.
- Not started stages are areas your team hasn’t yet addressed.
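The three statuses can be treated as a simple stage-to-status mapping when working with a report programmatically. A minimal sketch; the stage names come from this page, but the status labels and structure are assumptions for illustration, not the actual report schema:

```python
from collections import Counter

# Hypothetical report: stage -> status ("done" | "partial" | "not_started").
# The structure and status labels are illustrative, not the real schema.
report = {
    "Specify and constrain": "done",
    "Build system of context": "done",
    "Orchestrate and generate": "partial",
    "Validate, eval, and craft": "partial",
    "Ship and manage economics": "not_started",
}

def summarize(report: dict) -> Counter:
    """Count stages by status to show where practice is mature vs. missing."""
    return Counter(report.values())

summary = summarize(report)

# Stages that still need attention (anything short of mature practice):
gaps = [stage for stage, status in report.items() if status != "done"]
```

A summary like this makes imbalance visible at a glance: a team with every stage "done" except Eval is exactly the pattern the framework is designed to surface.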
How is the lifecycle different from sprints or cycles?
Sprints and cycles are time-bounded execution windows. Lifecycle stages describe the work itself. You’ll touch multiple lifecycle stages every cycle. The framework helps you see whether the work is balanced (don’t ship a feature that skipped Eval) and whether your team has the muscle to do all six stages well.
When should I take the assessment?
Run it once at onboarding to set a baseline. Re-run it after a major ship or quarterly. Lifecycle scores move faster than capability scores because they reflect process, not investment.
API access
See Get lifecycle stages and Submit lifecycle assessment for the endpoints.
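As a rough orientation, calls to those endpoints might be prepared like this. Everything below — base URL, paths, and auth scheme — is a hypothetical sketch; consult the linked endpoint pages for the real contract:

```python
import json
import urllib.request

# Illustrative only: the base URL, paths, and auth header below are
# assumptions, not the documented API contract.
BASE_URL = "https://api.dacard.ai"  # hypothetical
API_KEY = "YOUR_API_KEY"

def build_get_stages() -> urllib.request.Request:
    """Prepare a GET request for the lifecycle stages endpoint (path assumed)."""
    return urllib.request.Request(
        f"{BASE_URL}/v1/lifecycle/stages",  # hypothetical path
        headers={"Authorization": f"Bearer {API_KEY}"},
    )

def build_submit_assessment(answers: dict) -> urllib.request.Request:
    """Prepare a POST request submitting assessment answers (path assumed)."""
    return urllib.request.Request(
        f"{BASE_URL}/v1/lifecycle/assessment",  # hypothetical path
        data=json.dumps({"answers": answers}).encode(),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# urllib.request.urlopen(...) would send either request once the real
# endpoint details are filled in from the documentation.
req = build_submit_assessment({"Ship and manage economics": "partial"})
```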