subsystems gives investors a full technical picture — operational maturity, product, engineering, technology, security — scored against a stage benchmark. Every claim is cited. Post-close, an on-prem scanner tracks remediation and gates tranches.
Before a human writes a word of the report, our scanners traverse the full repo graph — static analysis, dependency audit, authorship heatmap, test inspection. What you're about to see runs in real time on every engagement.
You hand us a repo URL. We hand back a signed PDF and a 60-minute read-out with your deal team. No portal logins, no “AI insights” dashboards.
NDA, repo access, target-contact intro. Scope is fixed at the start — no surprise line items.
AI agents map the code graph, surface findings, and draft evidence. A senior engineer verifies every critical before it lands in the report.
Live 60-minute session with your deal team, followed by a signed PDF, raw data export, and a Q&A window through close.
Before we touch the repo, subsurface maps what the internet already knows about the target. Leaked credentials, exposed endpoints, forgotten subdomains, SBOM matches against public CVE feeds, employees pasting proprietary code into public gists. What an attacker would find in an afternoon — we find in ten minutes.
subsurface runs in parallel with the code review. It probes 14 public layers — DNS, cert transparency, code hosting, paste sites, container registries, leaked credential dumps — and weights each hit against its blast radius inside the target's stack.
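A minimal sketch of the weighting idea. The layer names, weights, and exposure multiplier below are illustrative assumptions, not the model subsurface actually runs:

```python
# Hypothetical blast-radius weighting for one OSINT hit.
# Layer weights and the prod-exposure multiplier are invented for illustration.
LAYER_WEIGHT = {
    "leaked_credentials": 1.0,   # direct account takeover
    "exposed_endpoint":   0.8,
    "container_registry": 0.7,
    "paste_site":         0.6,
    "cert_transparency":  0.3,
    "dns":                0.2,
}

def weight_hit(layer: str, severity: float, reaches_prod: bool) -> float:
    """Score one hit: base severity x layer weight x exposure multiplier."""
    exposure = 1.5 if reaches_prod else 1.0
    return round(severity * LAYER_WEIGHT.get(layer, 0.1) * exposure, 2)

# A leaked credential that reaches production dwarfs a stray DNS record.
prod_cred = weight_hit("leaked_credentials", 0.9, True)   # 1.35
stale_dns = weight_hit("dns", 0.4, False)                 # 0.08
```

The point is the shape, not the numbers: the same raw finding scores very differently depending on what it can reach inside the target's stack.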
Every engagement scores the company against a stage-benchmarked rubric. Pillars I through V cover the surfaces investors actually care about — from founding team and goal-setting through production incident response. Every claim is cited; every subsection closes with a verdict.
Three interviews confirm no team-level goal-setting. Retros occur ad hoc. Score pulled down despite strong tooling. Remediation: standardise on 6-week cycles + quarterly OKR sync.
Each pillar gets an overview page, 3–6 subsection pages with evidence and analysis, and a synthesis page with scored recommendations. EVOFIT sits above the rubric — the trajectory read on the system itself.
EVOFIT scores a company as an evolving system. Five dimensions derived from Wong et al.'s three modes of selection (PNAS 2023). Not a checklist — an assessment of whether the producing mechanism is strengthening or decaying.
Most technical due diligence stops at linter output, dep graphs, and a bus-factor spreadsheet. Ours starts there. The 5-pillar rubric covers surfaces that matter to a deal — hiring plans, goal-setting, roadmap evidence, incident response, compliance posture — scored against a stage benchmark. EVOFIT sits above the rubric: is this system strengthening or decaying?
A 3.2 on testing discipline means something different at seed vs growth. Every signal is scored against the cohort the company actually lives in.
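One way to picture stage benchmarking. The cohort means and deviations below are invented for illustration; the real baselines come from our engagement history:

```python
# Hypothetical cohort baselines for one rubric signal ("testing discipline").
# The per-stage means and standard deviations are made up for this sketch.
COHORT = {
    "seed":   {"mean": 2.4, "stdev": 0.8},
    "growth": {"mean": 3.8, "stdev": 0.6},
}

def benchmark(raw_score: float, stage: str) -> float:
    """Express a raw rubric score as standard deviations from its stage cohort."""
    c = COHORT[stage]
    return round((raw_score - c["mean"]) / c["stdev"], 2)

# The same raw 3.2 reads as a strength at seed and a weakness at growth.
seed_read = benchmark(3.2, "seed")      # +1.0
growth_read = benchmark(3.2, "growth")  # -1.0
```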
Certifications, SLAs, and “we have OKRs” are starting points. Each gets verified against source artefacts — audit logs, commit history, calendar invites, retros.
Each subsection closes with a one-sentence verdict the IC can act on. No 400-metric spreadsheet. No “it depends.”
The report isn't the end of the engagement. Our on-prem scanner stays inside the portfolio company and automatically tracks every remediation item we flagged — so when the next tranche is on the table, you have live evidence instead of a status email.
Deployed on the portfolio company's own infrastructure. Reads their code, CI, ticketing, and docs with read-only credentials. Pushes only signed, hashed attestations to your investor dashboard — no source code, no secrets, no PII leaves their perimeter.
Sample · 11 items baselined at close · 6 closed · 3 in progress · 2 open
They gave us a report we could put in front of the IC — five pillars, one-sentence verdicts per subsection, every claim traced to a file. The engineering practices pillar alone repriced the deal by 4%. We now run every Series B scan through them.
subsystems isn't a platform. It's a practice — the codification of 200+ technical due-diligence engagements, signed over the last five years by one auditor using AI agents as a force multiplier.
AI does the traversal. Agents map the import graph, surface the dependency risk, cross-reference the CVE feed, heatmap the authorship, and draft finding evidence. The leverage is real — 50 signals across 5 pillars in a timeframe no human team could match.
One human writes the verdict. Every pillar closes with a single sentence the investment committee can act on. Every critical finding is signed off by a named human — after manual verification, interview cross-checks, and a re-read of the raw evidence. The signature is not decorative.
The rubric exists because one auditor cannot pattern-match across 200 companies without structure. The AI exists because 200 reports would otherwise take five lifetimes. The signature exists because an investor deserves a name on the verdict, not a logo.
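The traversal step is the easiest to picture. A toy version of import-graph mapping for a Python repo, assuming only the standard library (the production agents cover many languages and many more signals):

```python
import ast
import pathlib
from collections import defaultdict

def import_graph(root: str) -> dict[str, set[str]]:
    """Map each Python file under `root` to the top-level modules it imports."""
    graph: dict[str, set[str]] = defaultdict(set)
    for path in pathlib.Path(root).rglob("*.py"):
        try:
            tree = ast.parse(path.read_text(encoding="utf-8"), filename=str(path))
        except SyntaxError:
            continue  # skip files that don't parse; they become findings elsewhere
        for node in ast.walk(tree):
            if isinstance(node, ast.Import):
                for alias in node.names:
                    graph[str(path)].add(alias.name.split(".")[0])
            elif isinstance(node, ast.ImportFrom) and node.module:
                graph[str(path)].add(node.module.split(".")[0])
    return dict(graph)
```

From a graph like this, downstream agents can cross-reference dependencies against CVE feeds and overlay authorship data; the graph itself is mechanical, which is exactly why it belongs to the machines.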
If yours isn't here, ask it during the intro call — we'll tell you honestly whether we're the right fit.
Ideally yes — read-only repo access, a 30-min call with an engineering lead, and docs access. Where the seller is cautious, we've run productive engagements on repo access alone. We won't proceed on screenshots.
A stage-benchmarked rubric across five pillars, a fitness trajectory read (EVOFIT) that no other provider offers, and a post-funding scanner that turns the report into an audit trail. We don't hand you a dashboard — we hand you a verdict. Every pillar closes with one sentence the IC can act on.
TypeScript, JavaScript, Python, Go, Rust, Ruby, Java, Kotlin, Swift, C#. If your target is 90% COBOL, we'll tell you at intake.
No. AI agents do the traversal, evidence gathering, and first-pass findings. A senior engineer reviews, verifies, and writes the verdict. Every critical finding is signed off by a named human.
Code, CI logs, ticketing metadata, doc index, and authorship — all read-only, all on the portfolio company's infrastructure. Only signed, hashed attestations of rubric signals leave the perimeter. No source code, no customer data, no secrets are transmitted. Each attestation is timestamped and independently verifiable.
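Mechanically, "signed, hashed, independently verifiable" can be pictured like this. A toy sketch only — the real wire format and key scheme are not shown here, and a real deployment would use asymmetric signatures rather than the shared-key HMAC used below for brevity:

```python
import hashlib
import hmac
import json
import time

SIGNING_KEY = b"demo-key"  # illustrative; not how production keys are managed

def attest(signal_id: str, status: str) -> dict:
    """Build a timestamped attestation of one rubric signal: hash + signature, no raw data."""
    payload = {"signal": signal_id, "status": status, "ts": int(time.time())}
    body = json.dumps(payload, sort_keys=True).encode()
    return {
        **payload,
        "sha256": hashlib.sha256(body).hexdigest(),
        "sig": hmac.new(SIGNING_KEY, body, hashlib.sha256).hexdigest(),
    }

def verify(att: dict) -> bool:
    """Recompute hash and signature from the claimed fields; any tampering fails."""
    payload = {"signal": att["signal"], "status": att["status"], "ts": att["ts"]}
    body = json.dumps(payload, sort_keys=True).encode()
    expected_sig = hmac.new(SIGNING_KEY, body, hashlib.sha256).hexdigest()
    return (att["sha256"] == hashlib.sha256(body).hexdigest()
            and hmac.compare_digest(att["sig"], expected_sig))

record = attest("remediation-item-4", "closed")  # hypothetical signal id
assert verify(record)
```

Only records of this shape cross the perimeter: the verifier can confirm what was attested and when, without ever seeing the source material behind it.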
When a milestone tranche is due, you request a gating memo. We verify the scanner's remediation history against the baseline report, interview the CTO against the prior rubric, and deliver a short pass / partial / no-go memo — typically within 5 business days. Pricing is a fixed retainer plus a per-assessment fee.
Fixed fee per engagement, scaled to company stage and scope. Typical DD engagement sits between $25k and $80k. Post-funding scanner + attestation is a separate quarterly retainer. Scope and price are published before you sign.
Intro call is free. 72-hour turnaround if you're under contract pressure. We'll tell you on the first call whether we can help.