50 rubric signals · 5 pillars · 1 verdict
Typical turn · 3 days from NDA to sign-off
Assessed against Series-A / Series-B / Growth benchmarks
Post-funding · on-prem scanner · quarterly attestation
Tranche assessment on request · remediation-gated
Every claim cited. Every verdict one sentence.
▚ Technical Due Diligence · 5-pillar rubric · Human-verified

Read the code
before you sign the term sheet.

Subsystems gives investors a full technical picture — operational maturity, product, engineering, technology, security — scored against a stage benchmark. Every claim is cited. Post-close, an on-prem scanner tracks remediation and gates tranches.

Pillars scored
5
Rubric signals
50
Every claim cited
100%
FIG. 01 · Service topology
Live scan
auth-svc · api-gw · payments* · queue · legacy-db* · cdn · cache · worker · metrics
10 nodes · 12 edges · 2 risk paths detected
§ 01 — The scan

We read every line, so you don't have to.

Before a human writes a word of the report, our scanners traverse the full repo graph — static analysis, dependency audit, authorship heatmap, test inspection. What you're about to see runs in real time on every engagement.
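One of those passes — the authorship heatmap — can be sketched in a few lines. This is an illustrative stand-in, not our scanner: it assumes commit data has already been extracted (e.g. from `git log --numstat`) into `(author, file, lines_changed)` tuples, and flags files where one author dominates the change history.

```python
from collections import Counter, defaultdict

def authorship_heatmap(commits):
    """Per-file authorship concentration from (author, file, lines_changed)
    tuples. Returns {file: (top_author, share_of_changes)} — a high share
    on a critical file is a bus-factor signal worth a human look."""
    per_file = defaultdict(Counter)
    for author, path, lines in commits:
        per_file[path][author] += lines

    heatmap = {}
    for path, counts in per_file.items():
        author, top = counts.most_common(1)[0]
        heatmap[path] = (author, top / sum(counts.values()))
    return heatmap

# Toy commit history — invented for illustration.
commits = [
    ("ana", "payments/core.py", 420),
    ("ana", "payments/core.py", 130),
    ("ben", "payments/core.py", 50),
    ("ben", "api/routes.py", 200),
    ("ana", "api/routes.py", 180),
]
print(authorship_heatmap(commits))
```

In this toy history, one author owns over 90% of the payments module — exactly the kind of concentration the real pass surfaces for human review.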

subsystems ▸ scan ▸ portfolio-co · ● running · 00:02:41
§ 02 — How it works

Three days from NDA to final report.

You hand us a repo URL. We hand back a signed PDF and a 60-minute read-out with your deal team. No portal logins, no “AI insights” dashboards.

day 0–1
01

Scope & access

NDA, repo access, target-contact intro. Scope is fixed at the start — no surprise line items.

day 1–2
02

Scan & verify

AI agents map the code graph, surface findings, and draft evidence. A senior engineer verifies every critical before it lands in the report.

day 3
03

Read-out & deliverable

Live 60-minute session with your deal team, followed by a signed PDF, raw data export, and a Q&A window through close.

§ 03 — Outside-in

Meet subsurface.
Our OSINT scanner for what's already leaking.

Before we touch the repo, subsurface maps what the internet already knows about the target. Leaked credentials, exposed endpoints, forgotten subdomains, SBOM matches against public CVE feeds, employees pasting proprietary code into public gists. What an attacker would find in an afternoon — we find in ten minutes.

▚ subsurface · v2.4 · target: portfolio-co.io · scan · 09:42:18 UTC · ● live · internal / edge / public
leaked key · AWS
s3 bucket · public read
staging.target.io
sbom · clean
gist · prod config
subdomain · forgotten
Live · 14 of 14 layers · 2m 41s
Exposure, before the NDA ink dries.

subsurface runs in parallel with the code review. It probes 14 public layers — DNS, cert transparency, code hosting, paste sites, container registries, leaked credential dumps — and weights each hit against its blast radius inside the target's stack.
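The weighting step can be pictured with a toy scoring function. The severity weights and blast-radius figures below are invented for illustration — they are not subsurface's production scale:

```python
# Hypothetical severity weights — illustrative, not subsurface's real scale.
SEVERITY = {"crit": 10.0, "high": 5.0, "med": 2.0, "pass": 0.0}

def exposure_score(hits):
    """Weight each OSINT hit by severity × blast radius (0..1 — roughly,
    how much of the target's stack the exposed asset can reach) and
    return hits ranked worst-first."""
    return sorted(
        ((h["id"], SEVERITY[h["sev"]] * h["blast_radius"]) for h in hits),
        key=lambda t: t[1],
        reverse=True,
    )

# Hits modelled loosely on the sample findings above.
hits = [
    {"id": "SSF-0114", "sev": "crit", "blast_radius": 0.9},  # valid AWS key
    {"id": "SSF-0071", "sev": "high", "blast_radius": 0.6},  # forgotten admin
    {"id": "SSF-0042", "sev": "pass", "blast_radius": 0.0},  # clean SBOM
]
print(exposure_score(hits))
```

The point of the blast-radius multiplier: a critical credential that reaches production outranks a critical finding on an isolated asset.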

SSF-0114 · AWS access key in public gist · 4 mo old · still valid · crit
SSF-0098 · S3 bucket with prod backups · world-readable · crit
SSF-0071 · Forgotten staging-2021.target.io · exposes unauth admin · high
SSF-0066 · 12 employee emails in HaveIBeenPwned · same-password risk · high
SSF-0042 · SBOM scan · 180 deps · 0 critical CVEs · pass
Layers probed
14
Hits
37
Critical
3
Scan time
2:41
§ 04 — The deliverable

A report structured around five pillars.
Fifty signals. One verdict.

Every engagement scores the company against a stage-benchmarked rubric. Pillar I through V cover the surfaces investors actually care about — from founding team and goal-setting through production incident response. Every claim is cited; every subsection closes with a verdict.

Pillar I
Operational Maturity
CTO · Key people · Team · Hiring · Goals · Culture
Does this org scale past the doubling point?
Pillar II
Product Excellence
Roadmap · Analytics · Customer feedback
Is product direction evidence-driven or founder-gut?
Pillar III
Engineering Practices
Source · CI/CD · Testing · AI tooling · Agile
Can they still ship confidently after headcount doubles?
Pillar IV
Technology
Architecture · Infra · Scale · Obs · DR · FinOps
Will the platform hold under 10× load and spend?
Pillar V
Security & Compliance
DevSecOps · Endpoint · GDPR · Governance
Earned certifications, or claimed posture?
subsystems · technical due diligence · sample · redacted
Pillar III — Engineering Practices
3.57 / 5 vs Series A benchmark 3.40
3.1 · Source control · 4.1
3.2 · CI / CD · 4.0
3.3 · Testing & QA · 3.2
3.4 · AI tooling · 4.4
3.5 · Agile rituals · 2.2
Evidence (sample)
commit · CI pipeline: 94% green over last 90 days · GitHub Actions
doc · No written on-call rotation · no retro cadence in calendar
interview · CTO confirms: no OKRs at team level · ad hoc check-ins
Pillar verdict · Above Series A baseline on tooling. Below baseline on ritual. Hire a VP Eng in Q3 to close the gap before doubling.
Pillar III · subsection 3.5 · Agile

Retros and OKRs are the missing rituals

Three interviews confirm no team-level goal-setting; retros occur ad hoc. The subsection drags the pillar score down despite its tooling strength. Remediation: standardise on six-week cycles plus a quarterly OKR sync.
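The pillar arithmetic in the sample can be approximated in a few lines. The rubric's actual subsection weights aren't published, so equal weights are an assumption here — they yield 3.58 rather than the sample's 3.57, which suggests the real rubric weights subsections slightly unequally:

```python
def pillar_score(subsections, weights=None):
    """Weighted mean of subsection scores on the 0–5 rubric scale.
    Equal weights are an assumption — the published rubric may differ."""
    if weights is None:
        weights = [1.0] * len(subsections)
    return sum(w * s for w, s in zip(weights, subsections)) / sum(weights)

pillar_iii = [4.1, 4.0, 3.2, 4.4, 2.2]  # subsections 3.1–3.5 from the sample
benchmark = 3.40                         # Series A benchmark from the sample

score = pillar_score(pillar_iii)
print(round(score, 2), round(score - benchmark, 2))  # equal-weight mean, delta
```

The delta against the stage benchmark, not the raw score, is what feeds the pillar verdict.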

What's inside the document

50 signals. 5 pillars. 1 verdict.

Each pillar gets an overview page, 3–6 subsection pages with evidence and analysis, and a synthesis page with scored recommendations. EVOFIT sits above the rubric — the trajectory read on the system itself.

  • 00 · Cover · verdict mode · composite score · engagement metadata
  • 01 · Executive summary · the one-slide version for the IC
  • 02 · Spider graph · 5 pillars scored vs stage benchmark
  • 03 · Pillars I–V · 5–9 pages each · overview → subsections → synthesis
  • 04 · EVOFIT fitness read · archetype · selection mode · signature
  • 05 · Recommendations · priced-in SOW items for post-close
  • 06 · Evidence archive · every citation, linkable, timestamped
§ 05 — The framework · EVOFIT v0.3.1 · ALPHA

Fitness is a trajectory, not a snapshot.

EVOFIT scores a company as an evolving system. Five dimensions derived from Wong et al.'s three modes of selection (PNAS 2023). Not a checklist — an assessment of whether the producing mechanism is strengthening or decaying.

▚ Evolutionary Fitness · Acme Robotics · Series B · 74 / 100
Archetype
Coral
maturing from Vapor → drifting toward Organism
Selection signature · trajectory-weighted
C 25 · R 30 · A 45 ★
ΔC +2 · ΔR +3 · ΔA +10
Prognosis · 12 mo
▲ ACCELERATING
Five dimensions · derived from Wong et al.
Capability · 1st-order · static persistence
Validated configurations that resist decay. What still works if everyone takes two weeks off.
Resilience · 2nd-order · dynamic persistence
Dissipation · autocatalysis · homeostasis · information processing.
Adaptability · 3rd-order · novelty generation
New functions that did not previously exist. Structural optionality preserved for whatever comes next.
Engine · leading indicator
The generative mechanism. Team, tooling, practices, leadership, AI maturity.
Extinction · environmental gate
CLEAR / ELEVATED / CRITICAL. The asteroid does not care about your functional information.
Wong et al., PNAS 120(43), 2023 · “On the roles of function and selection in evolving systems”
Scored against the investor fitness function. ALPHA — pending validation against ≥3 engagements.
§ 06 — Beyond tech DD

Other providers read the code.
We read the system that wrote it.

Most technical due diligence stops at linter output, dep graphs, and a bus-factor spreadsheet. Ours starts there. The 5-pillar rubric covers surfaces that matter to a deal — hiring plans, goal-setting, roadmap evidence, incident response, compliance posture — scored against a stage benchmark. EVOFIT sits above the rubric: is this system strengthening or decaying?

Surface
Typical tech DD
subsystems
Code & architecture
Static analysis · dep graph · LOC
All of that, plus fan-in/out risk paths and topology verdict
People & team
Bus factor · tenure table
Hiring plan credibility · attrition pattern · 2 structured CTO interviews
Goals & rituals
Out of scope
OKRs · retro cadence · performance process · decision-making
Product function
Not assessed
Roadmap evidence · analytics maturity · feedback loops
Culture signal
Subjective anecdote
Explicit rubric · cross-referenced against interviews & docs
Compliance
Checklist: yes/no per cert
Earned vs claimed · audit evidence · continuous-compliance tooling
Fitness trajectory
Snapshot only
EVOFIT · capability · resilience · adaptability · engine · extinction
Post-funding
Report and disappear
On-prem scanner tracks remediation · tranche assessment on request
01

Stage-benchmarked, not absolute

A 3.2 on testing discipline means something different at seed vs growth. Every signal is scored against the cohort the company actually lives in.
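That cohort comparison can be made concrete with a toy percentile lookup. The cohort distributions below are invented for illustration — the real benchmarks are proprietary:

```python
from bisect import bisect_left

def cohort_percentile(score, cohort_scores):
    """Where a raw signal score sits inside its stage cohort (0–100)."""
    ranked = sorted(cohort_scores)
    return 100.0 * bisect_left(ranked, score) / len(ranked)

testing = 3.2  # the same raw testing-discipline score, two cohorts

# Hypothetical cohort score distributions — not real benchmark data.
seed_cohort   = [1.8, 2.0, 2.4, 2.6, 3.0, 3.1, 3.5, 4.0]
growth_cohort = [2.9, 3.2, 3.4, 3.6, 3.8, 4.0, 4.2, 4.5]

print(cohort_percentile(testing, seed_cohort))    # strong for a seed company
print(cohort_percentile(testing, growth_cohort))  # weak for a growth company
```

The same 3.2 lands near the top of the seed cohort and near the bottom of the growth cohort — which is the whole argument for stage benchmarking.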

02

Earned > claimed

Certifications, SLAs, and “we have OKRs” are starting points. Each gets verified against source artefacts — audit logs, commit history, calendar invites, retros.

03

A verdict, not a dashboard

Each subsection closes with a one-sentence verdict the IC can act on. No 400-metric spreadsheet. No “it depends.”

§ 07 — After the cheque clears

Remediation tracked.
Tranches unlocked on evidence.

The report isn't the end of the engagement. Our on-prem scanner stays inside the portfolio company and automatically tracks every remediation item we flagged — so when the next tranche is on the table, you have live evidence instead of a status email.

How it works

A scanner that lives on their infra. A dashboard that lives on yours.

Deployed on the portfolio company's own infrastructure. Reads their code, CI, ticketing, and docs with read-only credentials. Pushes only signed, hashed attestations to your investor dashboard — no source code, no secrets, no PII leaves their perimeter.

  • 01
    Baseline captured at close: The full DD report becomes the scanner's ground truth. Every remediation item lands in the queue with a named owner and target date.
  • 02
    Continuous attestation, quarterly digest: The scanner re-runs the 50-signal rubric on a cadence. Drift is flagged; regressions trigger an analyst review. You get a one-page digest every quarter.
  • 03
    Tranche assessment on request: When a milestone tranche is due, call us in. We verify against the scanner's history, interview the CTO against the prior rubric, and deliver a short gating memo — normally within 5 business days.
  • 04
    Evolving benchmark: As the company moves Series A → B → growth, the rubric re-benchmarks automatically. EVOFIT shows you trajectory, not just state.
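The attestation mechanism can be sketched with standard-library primitives. This is a minimal illustration: an HMAC over a canonical JSON payload stands in for the real signature scheme (production would use asymmetric keys), and the field names are hypothetical.

```python
import hashlib
import hmac
import json
import time

SIGNING_KEY = b"demo-key"  # stand-in — a real deployment would use a key pair

def attest(signal_id, status):
    """Emit a signed, hashed attestation of one rubric signal. Only the
    hash, signature, and metadata leave the perimeter — never the
    underlying code, logs, or evidence."""
    payload = {"signal": signal_id, "status": status, "ts": int(time.time())}
    body = json.dumps(payload, sort_keys=True).encode()
    return {
        "payload": payload,
        "sha256": hashlib.sha256(body).hexdigest(),
        "sig": hmac.new(SIGNING_KEY, body, hashlib.sha256).hexdigest(),
    }

def verify(att):
    """Independently re-derive hash and signature from the payload."""
    body = json.dumps(att["payload"], sort_keys=True).encode()
    ok_hash = hashlib.sha256(body).hexdigest() == att["sha256"]
    ok_sig = hmac.compare_digest(
        hmac.new(SIGNING_KEY, body, hashlib.sha256).hexdigest(), att["sig"]
    )
    return ok_hash and ok_sig

a = attest("P-III.3", "closed")
print(verify(a))  # True
```

Because verification needs only the payload and the key, an investor can check an attestation without ever seeing the portfolio company's source.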
▚ subsystems · on-prem scanner · ● live
Remediation queue · Q2 2026

Sample · 11 items baselined at close · 6 closed · 3 in progress · 2 open

P-I.5 · OKR cadence instituted · quarterly · closed
P-III.3 · Unit test coverage raised 42% → 71% · closed
P-IV.5 · DR runbook timed + restore-tested · closed
P-I.1 · VP Engineering · offer out · start 2026-06-01 · in progress
P-III.5 · Retro & on-call rotation · 4 of 6 teams · in progress
P-II.1 · Head of Product · scoped · not yet opened · open
P-IV.4 · Private-deploy telemetry gap · scoped · open
▚ Tranche B gate · due 2026-09 · Readiness: 72% · trajectory +0.4 / quarter
Forecast: gate met by 2026-08 at current pace · analyst memo 5 BD after request
They gave us a report we could put in front of the IC — five pillars, one-sentence verdicts per subsection, every claim traced to a file. The engineering practices pillar alone repriced the deal by 4%. We now run every Series B scan through them.
Rachel Ortiz
Partner · Northfield Capital
Engagement · NFC-0087
Pillars scored · 5
Signals verified · 50
Remediation items · 11
Price adjustment · −4%
§ 08 — Signatory

Every verdict is signed by a named human.

subsystems isn't a platform. It's a practice — the codification of 200+ technical due-diligence engagements, signed over the last five years by one auditor using AI agents as a force multiplier.

▚ Principal · Subsystems Inc. · auditor #001
JR · Signed
Johann Romefort
Founder · Principal auditor
DD engagements · 200+
Years in practice · 5
Stages covered · Seed → Growth · PE
Verdicts signed by · J. Romefort
▚ Named signature · every report · Confidential

A super-auditor, not a team.

AI does the traversal. Agents map the import graph, surface the dependency risk, cross-reference the CVE feed, heatmap the authorship, and draft finding evidence. The leverage is real — 50 signals across 5 pillars in a timeframe no human team could match.

One human writes the verdict. Every pillar closes with a single sentence the investment committee can act on. Every critical finding is signed off by a named human — after manual verification, interview cross-checks, and a re-read of the raw evidence. The signature is not decorative.

The rubric exists because one auditor cannot pattern-match across 200 companies without structure. The AI exists because 200 reports would otherwise take five lifetimes. The signature exists because an investor deserves a name on the verdict, not a logo.

01 · Leverage
AI as the engine
Agents traverse code, docs, tickets, OSINT. 50 signals drafted in hours, not weeks.
02 · Judgement
Human as the verdict
Edge cases interrogated. Citations verified. One sentence per pillar that the IC can act on.
03 · Accountability
Signature as the stamp
No committee. No platform alias. Every critical is signed off by a named auditor.
§ 09 — FAQ

Questions investors ask.

If yours isn't here, ask it during the intro call — we'll tell you honestly whether we're the right fit.

01 · Do you need the target's cooperation?

Ideally yes — read-only repo access, a 30-min call with an engineering lead, and docs access. Where the seller is cautious, we've run productive engagements on repo access alone. We won't proceed on screenshots.

02 · How is this different from a traditional technical DD firm?

A stage-benchmarked rubric across five pillars, a fitness trajectory read (EVOFIT) that no other provider offers, and a post-funding scanner that turns the report into an audit trail. We don't hand you a dashboard — we hand you a verdict. Every pillar closes with one sentence the IC can act on.

03 · What languages and stacks do you cover?

TypeScript, JavaScript, Python, Go, Rust, Ruby, Java, Kotlin, Swift, C#. If your target is 90% COBOL, we'll tell you at intake.

04 · Is the AI actually writing the verdict?

No. AI agents do the traversal, evidence gathering, and first-pass findings. A senior engineer reviews, verifies, and writes the verdict. Every critical finding is signed off by a named human.

05 · What does the on-prem scanner actually see?

Code, CI logs, ticketing metadata, doc index, and authorship — all read-only, all on the portfolio company's infrastructure. Only signed, hashed attestations of rubric signals leave the perimeter. No source code, no customer data, no secrets are transmitted. Each attestation is timestamped and independently verifiable.

06 · How does tranche assessment work?

When a milestone tranche is due, you request a gating memo. We verify the scanner's remediation history against the baseline report, interview the CTO against the prior rubric, and deliver a short pass / partial / no-go memo — typically within 5 business days. Pricing is a fixed retainer plus a per-assessment fee.

07 · What does it cost?

Fixed fee per engagement, scaled to company stage and scope. Typical DD engagement sits between $25k and $80k. Post-funding scanner + attestation is a separate quarterly retainer. Scope and price are published before you sign.

▚ Ready when you are

Sign the term sheet with eyes open.

Intro call is free. 72-hour turnaround if you're under contract pressure. We'll tell you on the first call whether we can help.

Book a diligence engagement See sample report