Team Operations framework
Team Operations is the primary scoring framework. It reads a team’s operational capability across 27 dimensions in 6 functions, each scored 1-5, for a composite of 27-135. It answers the question: how capable is your team at building and shipping?
Five maturity stages
Foundation (27-48)
Basics missing across most dimensions. Team hasn’t yet established consistent practice. Most work is manual, ad hoc, or driven by individual contributors rather than shared systems.
Building (49-70)
Practice exists but lands unevenly. Some functions are stronger than others. Team is experimenting with new workflows but lacks repeatability and measurement.
Scaling (71-91)
Systematic process with measurable outcomes. Team ships features repeatedly and reliably. Data loops are starting to compound.
Leading (92-113)
Deeply integrated practice across all functions. Capability is embedded in the team’s ways of working, not just the product. Distance from Building-stage peers is meaningful and widening.
Compounding (114-135)
Capability compounds across functions. Data flywheels, delivery systems, and positioning reinforce one another, and the gap to peers widens on its own.
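The stage bands above can be sketched as a simple lookup. This is a minimal illustration: the four listed ranges come from this page, and the fifth band, Compounding at 114-135, is inferred from the five-stage ladder and the 27-135 composite range.

```python
# Maturity stage bands, inclusive on both ends.
# Compounding's 114-135 range is inferred, not stated on this page.
STAGES = [
    ("Foundation", 27, 48),
    ("Building", 49, 70),
    ("Scaling", 71, 91),
    ("Leading", 92, 113),
    ("Compounding", 114, 135),
]

def stage_for(composite: int) -> str:
    """Map a 27-135 composite score to its maturity stage name."""
    for name, low, high in STAGES:
        if low <= composite <= high:
            return name
    raise ValueError(f"composite {composite} is outside the 27-135 range")
```

For example, a composite of 85 lands in Scaling, while 92 crosses into Leading.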
Six functions, 27 dimensions
Strategy
How well the team reads its market and makes evidence-based calls.
| Dimension | What we measure | Score 1 | Score 5 |
|---|---|---|---|
| Market intelligence | Quality of market signal collection and synthesis | No systematic market read | Continuous competitive intelligence with real-time synthesis |
| Decision quality | Evidence behind product and strategic decisions | Gut-feel decisions, undocumented | Structured decision frameworks with measured outcome tracking |
| Roadmap discipline | How well the roadmap reflects strategic priorities | Roadmap driven by stakeholder requests | Outcome-based roadmap with clear OKR linkage and regular pruning |
| Competitive positioning | Clarity and defensibility of position | No differentiated position | Compounding position that deepens with scale and is hard to replicate |
Design
How effectively the team turns user insight into shipped experience.
| Dimension | What we measure | Score 1 | Score 5 |
|---|---|---|---|
| Research and discovery | Depth and consistency of user research | No research practice | Continuous discovery with rigorous synthesis and opportunity trees |
| Prototyping speed | How fast the team goes from idea to testable artifact | Weeks to a prototype | Same-day prototypes with real user feedback loops |
| Experience design | Quality of interaction patterns and UX craft | No coherent interaction model | Interactions feel native, adaptive, and refined |
| Design-dev handoff | Efficiency and fidelity of the design-to-build handoff | Manual specs with high loss-in-translation | Automated handoff with design system coverage and zero spec debt |
Development
How efficiently and consistently the team builds and ships.
| Dimension | What we measure | Score 1 | Score 5 |
|---|---|---|---|
| Architecture and systems | Depth of capability integration in the technical stack | No capability in the stack | Models, pipelines, and inference are core to the architecture |
| Spec and context quality | Quality of PRDs, tickets, and context handed to builders | Vague specs, high ambiguity | Rich context, acceptance criteria, examples |
| Build vs buy | Decision rigor on model and infrastructure choices | No framework for build vs buy | Principled model with clear criteria, regular review, measured outcomes |
| Delivery velocity | Speed and consistency of shipping improvements | Quarterly releases | Continuous deployment with assisted review, testing, and rollout |
Intelligence
How well the team captures, organizes, and uses customer and product signal.
| Dimension | What we measure | Score 1 | Score 5 |
|---|---|---|---|
| Customer signal synthesis | Quality of feedback collection and synthesis | No systematic feedback collection | Synthesis of all customer signal into actionable intelligence |
| Product analytics | Depth and use of usage data | No analytics instrumentation | Real-time anomaly detection with surfaced insight |
| Data flywheel | Whether data creates a defensible compounding advantage | No data strategy | Proprietary flywheel: usage generates data that improves the product |
| Feedback loop quality | Whether usage data flows back to improve the product | No feedback mechanism | Real-time signal loop from user to model to product improvement |
| Knowledge management | How well institutional knowledge is captured, organized, surfaced | No systematic capture | Knowledge captured, organized, and distributed automatically |
GTM
How effectively the team takes products to market and drives adoption.
| Dimension | What we measure | Score 1 | Score 5 |
|---|---|---|---|
| Positioning and messaging | Clarity and resonance of market messaging | Generic or feature-based positioning | Outcome-focused positioning with clear differentiation |
| Launch execution | Consistency and quality of launches | Ad hoc launches with no playbook | Repeatable launch system with pre/post analytics and clear success metrics |
| Adoption and expansion | How effectively the product drives growth and expansion | No adoption strategy | Onboarding, expansion loops, and retention flywheel |
| Pricing and packaging | Whether pricing reflects value and supports growth | Traditional seat-based pricing | Usage or outcome-based pricing that scales with value delivered |
Operations
How well the team learns, adapts, and optimizes its own ways of working.
| Dimension | What we measure | Score 1 | Score 5 |
|---|---|---|---|
| Quality and experimentation | Rigor of testing, evaluation, and quality processes | Manual QA with no eval pipeline | Automated eval pipelines with continuous quality monitoring |
| Team orchestration | How the team coordinates work across people and systems | Manual planning with poor visibility | Real-time capacity and dependency tracking |
| Process iteration | How fast the team improves its own ways of working | Processes are fixed until broken | Continuous process improvement with data-driven retrospectives |
| Cost and token economics | How the team manages inference and infrastructure costs | No awareness of cost | Active token budget management with cost-per-feature analysis |
| Security and compliance | How the team manages security and compliance risk | No security practice | Threats detected, vulnerabilities patched, compliance maintained continuously |
| Reliability and resilience | How features handle failure and maintain availability | No specific monitoring | Failures predicted before they occur and infrastructure adjusted preemptively |
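The composite is the sum of all 27 dimension scores across the six functions. A minimal sketch of that roll-up, using the dimension counts listed on this page (Strategy 4, Design 4, Development 4, Intelligence 5, GTM 4, Operations 6):

```python
# Dimensions per function, as listed on this page (4+4+4+5+4+6 = 27).
FUNCTIONS = {
    "Strategy": 4,
    "Design": 4,
    "Development": 4,
    "Intelligence": 5,
    "GTM": 4,
    "Operations": 6,
}

def composite(scores: dict[str, list[int]]) -> int:
    """Sum 27 dimension scores (each 1-5) into the 27-135 composite."""
    total = 0
    for function, dims in scores.items():
        expected = FUNCTIONS[function]
        if len(dims) != expected:
            raise ValueError(f"{function}: expected {expected} scores, got {len(dims)}")
        if any(not 1 <= s <= 5 for s in dims):
            raise ValueError(f"{function}: every dimension scores 1-5")
        total += sum(dims)
    return total
```

Scoring 1 on every dimension yields the floor of 27; scoring 5 everywhere yields the ceiling of 135.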
How dimensions interact
Dimensions compound within and across functions.
- Data flywheel + feedback loop quality = the compounding engine. Strong data feeds strong models, which generate more useful data.
- Architecture and systems + delivery velocity = the delivery engine. Deep integration enables fast iteration.
- Market intelligence + competitive positioning = the positioning engine. Clear market read produces defensible differentiation.
- Customer signal synthesis + product analytics = the intelligence layer. Strong collection plus strong analysis produces actionable read.
Scoring criteria
We score each dimension 1-5 based on observable signal:
- Public evidence - what the product shows, says, and does externally
- Technical signals - architecture patterns, API design, infrastructure choices
- Business model signals - pricing structure, packaging, monetization
- Team signals - job postings, engineering blog, conference talks
- Adapter signals - operational ground truth from connected tools (GitHub, Linear, PostHog)
Using your scores
For individual contributors
Pick the function most relevant to your role and focus on its dimensions. A PM leans on Strategy and Operations. An engineer leans on Development and Intelligence. Your function score is your primary growth lever.
For product leaders
Use the dimension heatmap across your portfolio to surface systemic weakness. A dimension that scores low across multiple products is an org-level capability gap, not a team problem. Invest in org-wide training and tooling, not individual coaching.
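The heatmap check above can be sketched as a small aggregation: flag any dimension that scores low in several products at once. The `threshold` and `min_products` cutoffs here are illustrative assumptions, not values defined by the framework.

```python
def org_level_gaps(
    portfolio: dict[str, dict[str, int]],
    threshold: int = 2,      # assumption: a score of 1-2 counts as "low"
    min_products: int = 2,   # assumption: low in 2+ products = org-level gap
) -> list[str]:
    """Return dimensions scoring at or below `threshold` in `min_products`+ products."""
    low_counts: dict[str, int] = {}
    for product, dims in portfolio.items():
        for dim, score in dims.items():
            if score <= threshold:
                low_counts[dim] = low_counts.get(dim, 0) + 1
    return sorted(d for d, n in low_counts.items() if n >= min_products)
```

A dimension returned here is a candidate for org-wide training and tooling rather than per-team coaching.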
For founders and executives
Compare your Team Operations score against your Lifecycle and Product Assessment scores to surface the biggest tension. A high product score with a low team score means your product has more capability than your team can sustain. That’s a scaling risk, not a strength.
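One way to surface that tension is to normalize each framework score to a common 0-1 scale and compare the extremes. This sketch assumes all three assessments share the 27-135 range, which this page does not state; adjust the bounds per framework if they differ.

```python
def biggest_tension(
    scores: dict[str, float],
    low: float = 27.0,   # assumption: all frameworks share the 27-135 range
    high: float = 135.0,
) -> tuple[str, str]:
    """Normalize each framework score to 0-1; return (strongest, weakest)."""
    norm = {name: (s - low) / (high - low) for name, s in scores.items()}
    strongest = max(norm, key=norm.get)
    weakest = min(norm, key=norm.get)
    return strongest, weakest
```

A result like ("Product Assessment", "Team Operations") is the scaling-risk pattern described above: the product is ahead of the team that has to sustain it.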
Related
Three frameworks
How the three frameworks fit together.
Score your product
Run an assessment and read your 27-dimension breakdown.
Read your report
Walk through the maturity report.