Release: STEM BIO-AI v1.5.8
I have open-sourced an internal BioAI governance scanner under Apache-2.0.
STEM BIO-AI is a deterministic evidence-surface scanner for bio/medical AI repositories.
Built for the checkpoint most teams skip.
Bio/medical AI repositories are hard to evaluate quickly and consistently.
Some are rigorous academic tools.
Some are early research prototypes.
Some look mature but adopt clinical language before their provenance, reproducibility, or clinical-use boundaries are clear.
That creates a governance gap.
Before a repository is trusted, adopted, or integrated into a clinical-adjacent workflow, teams need an answer to a basic question:
What evidence is actually visible here?
STEM BIO-AI makes that repository evidence surface visible.
Core features:
▪️ No LLM
▪️ No API key
▪️ No model runtime
▪️ No secrets sent anywhere
▪️ Fast local or public GitHub repo triage
▪️ Deterministic scoring
▪️ T0–T4 evidence-tier output
▪️ JSON, Markdown, PDF, and explain-trace artifacts
▪️ Findings linked to file, line, pattern, or missing signal
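To make the last two points concrete, here is a minimal sketch of what a single finding in the JSON artifact could look like. The field names and values are illustrative assumptions for this post, not STEM BIO-AI's actual schema:

```python
import json

# Hypothetical shape of one finding in the JSON artifact.
# Field names are illustrative assumptions, not the tool's real schema.
finding = {
    "lane": "Claim Surface",
    "tier": "T2",                 # one of the T0-T4 evidence tiers
    "file": "README.md",
    "line": 12,
    "pattern": "clinical-claim",  # id of the matched pattern
    "missing_signal": None,       # populated when a required artifact is absent
}

print(json.dumps(finding, indent=2))
```

The point is that every finding is traceable: a reviewer can jump straight from the artifact to the file, line, and pattern that produced it.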
How it works:
Run it on a local clone or public GitHub repository.
It checks four evidence lanes:
Claim Surface
Clinical language, hype claims, limitations, disclaimers, regulatory framing.
Repository Consistency
README, docs, metadata, tests, CI, workflow claims, version signals.
Engineering Accountability
CI/CD, domain tests, changelog hygiene, data provenance, bias/limitation evidence.
Replication Evidence
Containers, dependency locks, reproducibility targets, dataset or model references, CLI and citation signals.
Additional checks include hardcoded credentials, weak dependency pinning, deprecated patient-adjacent paths, and fail-open exception handlers.
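For two of those checks, hardcoded credentials and fail-open exception handlers, a deterministic detector can be sketched like this. The exact patterns STEM BIO-AI uses are not published in this post, so these regexes are assumptions:

```python
import re

# Illustrative detectors for two of the additional checks; the regexes
# are assumptions, not STEM BIO-AI's published patterns.
CHECKS = {
    "hardcoded-credential": re.compile(
        r"(?i)(password|api[_-]?key|secret)\s*=\s*['\"][^'\"]+['\"]"),
    "fail-open-handler": re.compile(
        r"(?m)^\s*except(\s+\w+)?\s*:\s*\n\s*pass\b"),
}

def check_source(path: str, source: str) -> list:
    """Flag matching lines; each finding records file, line, and check id."""
    findings = []
    for check_id, pattern in CHECKS.items():
        for m in pattern.finditer(source):
            line = source.count("\n", 0, m.start()) + 1
            findings.append({"file": path, "line": line, "check": check_id})
    return findings

sample = 'api_key = "sk-demo"\ntry:\n    load()\nexcept Exception:\n    pass\n'
for f in check_source("model.py", sample):
    print(f["check"], "at line", f["line"])
```

Nothing here needs a model or a network call, which is why the scan stays fast, local, and repeatable.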
Why this matters:
Most AI governance discussions start at the model layer.
But in BioAI, risk often appears earlier — in repository claims, missing provenance, weak reproducibility, stale tests, undocumented assumptions, and demos that look more mature than their evidence supports.
STEM BIO-AI is designed for that earlier review point:
before procurement,
before pilot review,
before technical due diligence,
before an AI demo becomes an AI system.
STEM BIO-AI is not a generic linter.
Not a benchmark leaderboard.
Not an LLM-based reviewer.
It is a repository-level evidence scanner for bio/medical AI governance.
Built for:
▪️ vendor screening
▪️ repository evidence review
▪️ technical due diligence
▪️ pre-procurement / pre-pilot checks
▪️ reviewer-ready audit artifacts
The goal is simple:
not more trust language,
but more inspectable evidence surfaces.
STEM BIO-AI resources: