Once upon a time, I had a mentor who told me, “you should never be looking for startup ideas; instead, you should be looking for problems in the world, and backchaining startup ideas from there. If you truly understand the problems, the solutions naturally follow.”

I’ve spent my time at ERA researching the problems. Here’s a preliminary gap analysis of two highly tractable areas in AI Biosecurity, along with startup ideas that naturally follow:

1. AI-enabled DNA synthesis screening

Background: Currently, there is no universal legal requirement for gene synthesis providers to conduct background checks on clients or to screen DNA sequences to ensure they don’t encode dangerous pathogens. While some labs screen orders on a voluntary basis through organizations like the International Gene Synthesis Consortium, compliance remains optional and inconsistent. In 2006, an investigative journalist with the Guardian was able to mail-order a “modified sequence of smallpox DNA”; the provider never screened the order because it was fewer than 100 letters long.

Even in the United States, no binding legal requirements exist for DNA synthesis screening. Federal regulations were proposed in 2024 through the Framework for Nucleic Acid Synthesis Screening, with an April 2025 effective date, but an Executive Order in May 2025 paused implementation, and no replacement framework has been issued. Despite several congressional bills attempting to mandate screening — including the 2023 “Securing Gene Synthesis Act” and the 2024 “Nucleic Acid Standards for Biosecurity Act” — none has passed. This regulatory gap means anyone, including those with malicious intent, can order potentially dangerous genetic sequences with minimal oversight.

The Intervention: Existing screening approaches rely on sequence homology, which means matching orders against databases of known dangerous pathogens. An AI-enabled bioterrorist could circumvent this by designing functionally equivalent pathogens using synonymous codons, chimeric sequences, or entirely novel genetic constructs that retain lethality while evading database matches. Advanced AI-powered screening would analyze structural features, predicted protein function, and evolutionary markers to flag potentially dangerous sequences regardless of exact database matches.

Implementation requires two components: First, we must develop reliable AI screening systems capable of detecting pathogenic sequences the world has never seen before. Second, we must require all commercial DNA synthesis providers globally to implement this screening as a condition of legal operation, with penalties for providers who ship flagged sequences without proper end-user verification.

Startup Idea: synthescreen

A screening-as-a-service API that DNA synthesis providers integrate into their order pipelines. Instead of relying on sequence homology, synthescreen uses structural analysis, predicted protein function, and evolutionary markers to flag novel sequences that are functionally dangerous but designed to evade traditional database lookups. Synthesis providers submit orders through the API before fulfillment; the system returns a risk assessment and audit log in real time, with flagged orders routed to human review and end-user verification workflows.
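The pre-fulfillment flow described above could be sketched roughly as follows. This is a hypothetical shape, not a real API: synthescreen is an idea, and the names (`screen_order`, `Verdict`, the risk thresholds) are invented for illustration; the `scorer` argument stands in for the structure/function risk model.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Callable, List

class Verdict(Enum):
    CLEAR = "clear"    # ship the order
    REVIEW = "review"  # hold for human review and end-user verification
    BLOCK = "block"    # do not fulfill

@dataclass
class ScreeningResult:
    order_id: str
    verdict: Verdict
    risk_score: float              # model-estimated, 0.0 (benign) to 1.0
    signals: List[str] = field(default_factory=list)  # e.g. model rationale tags

def screen_order(order_id: str, sequence: str,
                 scorer: Callable[[str], float]) -> ScreeningResult:
    """Pre-fulfillment gate: score a sequence, then route the order.

    `scorer` stands in for the structural/functional risk model;
    the thresholds here are illustrative, not calibrated.
    """
    risk = scorer(sequence)
    if risk >= 0.9:
        verdict = Verdict.BLOCK
    elif risk >= 0.5:
        verdict = Verdict.REVIEW
    else:
        verdict = Verdict.CLEAR
    return ScreeningResult(order_id, verdict, risk)
```

The key design choice is that the gate sits before fulfillment and returns a routing decision plus an auditable record, so flagged orders go to humans rather than being silently dropped or silently shipped.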

The core technical challenge is the detection problem: an AI-capable adversary can generate synonymous codon substitutions, chimeric constructs, and entirely novel sequences that retain pathogenic function while looking nothing like known threats; a screening system needs to reason about what a sequence does, not just what it looks like. This is an ML problem with a defensible moat: the model improves with every order screened, and the training data (proprietary sequence-risk mappings) compounds over time.
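The synonymous-codon problem is easy to demonstrate concretely. Below, two short DNA sequences (invented for illustration, using the standard genetic code) encode the identical peptide while agreeing at only 2 of 15 nucleotide positions — exact-match or homology screening at the DNA level sees two unrelated orders:

```python
# Subset of the standard genetic code covering the codons used below.
CODON_TABLE = {
    "TCT": "S", "TCA": "S", "TCC": "S", "AGC": "S", "AGT": "S",  # serine
    "TTA": "L", "CTG": "L",                                       # leucine
    "CGT": "R", "AGA": "R",                                       # arginine
}

def translate(dna: str) -> str:
    """Translate a DNA coding sequence to protein, codon by codon."""
    return "".join(CODON_TABLE[dna[i:i + 3]] for i in range(0, len(dna), 3))

def identity(a: str, b: str) -> float:
    """Fraction of positions where two equal-length sequences agree."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

seq_a = "TCTTTATCACGTTCC"  # encodes the peptide S-L-S-R-S
seq_b = "AGCCTGAGTAGAAGT"  # same peptide, synonymous codons throughout

assert translate(seq_a) == translate(seq_b) == "SLSRS"
print(identity(seq_a, seq_b))  # ≈0.13 nucleotide identity, identical protein
```

This is why screening must operate at the level of predicted protein product and function: the protein sequence is invariant under synonymous substitution even when the DNA diverges almost completely.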

The market timing is strong. The U.S. regulatory framework was proposed, paused, and left in limbo, meaning providers face likely compliance requirements with no certainty on timing and no off-the-shelf solution. synthescreen lets providers get ahead of regulation. The entry point is U.S. commercial synthesis providers; the expansion path is global providers and, eventually, integration with the cloud lab screening infrastructure proposed in Intervention #2.

2. Know-your-customer requirements for cloud laboratories

Background: The standard objection to AI-driven bioweapons risk is that knowledge alone is not enough — you still need hands-on laboratory skills, the “tacit knowledge” that can only be acquired through years of physical practice (see Panoplia’s uplift study). Knowing how to culture a pathogen is different from being able to do it reliably when cells behave unpredictably, reagents expire, and equipment malfunctions. This barrier has historically been one of the strongest defenses against non-state bioweapons development.

Cloud laboratories dissolve this barrier. Services like Emerald Cloud Lab allow anyone to design experiments in software and have them executed by robotic systems in a physical facility, remotely, without ever entering a lab. ECL operates 24/7 with over 200 remotely controlled instrument models, supports workflows from PCR to flow cytometry, and requires no coding experience — its CEO estimates a learning curve of roughly ten experiments before novice users are comfortable. An AI system that can design a bioweapons protocol and a cloud lab that can execute it are, individually, manageable risks. Combined, they eliminate both the knowledge bottleneck and the tacit knowledge bottleneck in a single pipeline.

Despite this, cloud labs currently operate with no standardized customer screening. A 2024 RAND analysis found that there are no public documents detailing cloud lab locations or capabilities worldwide, no standardized KYC approaches shared between cloud lab organizations, and no equivalent of the International Gene Synthesis Consortium’s voluntary screening norms. The same RAND report noted that the lack of data on cloud lab operations, customer types, and workflows makes the current oversight gap essentially unmeasured.

The Intervention: Require all cloud laboratory providers to implement know-your-customer screening as a condition of legal operation — verifying the identity, institutional affiliation, and stated research purpose of every user before granting access to experiment execution. Providers should be required to log all experimental workflows and flag protocols involving select agents or sequences of concern, with automated screening that mirrors (and integrates with) the DNA synthesis screening proposed in #1. RAND has proposed a Cloud Lab Security Consortium modeled on the IGSC; this intervention would make participation in such a consortium mandatory rather than voluntary.

Startup Idea: biotrust

A KYC and compliance platform for cloud laboratories. biotrust sits between the user and experiment execution, verifying identity, institutional affiliation, and stated research purpose before any protocol is allowed to run. The platform automates screening of submitted workflows against select agent lists and sequences of concern, and checks user identities against sanctions lists as part of verification. Cloud lab providers plug biotrust into their existing software interfaces as a pre-execution gate — no change to their lab hardware, no disruption to legitimate users.
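The pre-execution gate could be sketched as below. Everything here is hypothetical — biotrust is an idea, the keyword watchlist is a toy stand-in (a real system would screen structured protocols and sequences, not free text), and the three-way routing is just one plausible policy:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class User:
    identity_verified: bool
    institution: Optional[str]   # verified institutional affiliation, if any
    on_sanctions_list: bool

# Toy watchlist for illustration only; real screening would analyze
# structured protocols and ordered sequences, not keywords.
SELECT_AGENT_TERMS = {"variola", "botulinum", "anthrax"}

def gate(user: User, protocol_text: str) -> str:
    """Decide 'run', 'review', or 'deny' before any experiment executes."""
    if user.on_sanctions_list or not user.identity_verified:
        return "deny"
    if any(term in protocol_text.lower() for term in SELECT_AGENT_TERMS):
        return "review"   # flagged protocol: human review, not auto-execution
    if user.institution is None:
        return "review"   # unaffiliated users get extra scrutiny
    return "run"
```

The point of the structure is that denial, review, and execution are distinct outcomes: flagged protocols from verified users go to humans rather than being auto-blocked, which keeps friction low for legitimate research while still closing the unscreened path from protocol to robot.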

The market entry is cloud lab providers who currently have no screening tooling and face growing regulatory and reputational exposure. The RAND analysis makes the gap explicit: no standardized KYC, no shared data on customer types or workflows, no screening norms equivalent to what exists in DNA synthesis. When mandatory screening arrives, every provider will need a solution immediately. biotrust is that solution.

The longer-term play is broader biosecurity compliance infrastructure: expanding from cloud labs into contract research organizations, DNA synthesis providers, and equipment suppliers to become the trust layer across the entire distributed biology stack.