Neutron Python
Starlette speed, Pydantic sanity, Nucleus for state. Async-first from the handler to the database, with MCP server support so your LLM apps ship with tools defined, not glued on.
The Python server built for AI apps.
Most Python frameworks are either too bare (raw ASGI) or too opinionated about being an ORM-first monolith. Neutron Python lands in the middle — Starlette for routing, Pydantic v2 for validation, asyncpg for Nucleus, and a built-in MCP server so your tools are first-class citizens. Plus 30-second graceful shutdown, CSRF, per-IP rate limiting, and the same middleware order as every other Neutron SDK.
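The per-IP rate limiting mentioned above is the classic token-bucket mechanism. Here is a minimal, self-contained sketch of that mechanism (this is an illustration of the general technique, not Neutron's actual implementation; all names are made up):

```python
import time
from collections import defaultdict

class TokenBucket:
    """Per-key token bucket: `rate` tokens/second, bursts up to `capacity`."""

    def __init__(self, rate: float, capacity: int):
        self.rate, self.capacity = rate, capacity
        # Each key (e.g. client IP) starts with a full bucket.
        self.state = defaultdict(lambda: (float(capacity), time.monotonic()))

    def allow(self, ip: str) -> bool:
        tokens, last = self.state[ip]
        now = time.monotonic()
        # Refill tokens accrued since the last call, capped at capacity.
        tokens = min(self.capacity, tokens + (now - last) * self.rate)
        if tokens >= 1.0:
            self.state[ip] = (tokens - 1.0, now)  # spend one token
            return True
        self.state[ip] = (tokens, now)  # out of tokens: reject the request
        return False

limiter = TokenBucket(rate=5.0, capacity=10)
# A burst of 11 immediate requests from one IP: the first 10 pass.
results = [limiter.allow("203.0.113.7") for _ in range(11)]
```

A rejection here is where a server would return `429` with a `Retry-After` header; buckets are independent per key, so one noisy client never starves another.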
from neutron import route, LoaderArgs, stream
from neutron.mcp import tool
from pydantic import BaseModel
import anthropic

class ChatReq(BaseModel):
    message: str
    thread_id: str | None = None

@tool("search_docs")
async def search_docs(ctx, query: str, k: int = 5) -> list[dict]:
    # ctx is the request context injected by Neutron
    hits = await ctx.db.vector("docs").search(query).k(k).execute()
    return [{"id": h.id, "snippet": h.text[:200]} for h in hits]

@route.post("/chat")
async def chat(args: LoaderArgs[ChatReq]):
    async def token_stream():
        async with anthropic.AsyncAnthropic().messages.stream(
            model="claude-opus-4-6",
            messages=[{"role": "user", "content": args.body.message}],
            tools=[search_docs.schema],
        ) as s:
            async for tok in s.text_stream:
                yield tok
    return stream(token_stream())

Decorate a function with @tool and it's a Model Context Protocol tool — discoverable by Claude, ChatGPT, or any MCP client. Graceful shutdown drains in-flight requests and answers new ones with Retry-After. Zero dropped traffic on deploys.

Data pipelines that aren't a separate process
You don't need to run Airflow next to your API server anymore. Typed Nucleus queries, async everywhere, streaming responses, and MCP tools live in the same app — so your LLM can call your pipeline and your pipeline can write to the same database that serves the UI.
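The one-process idea can be sketched with plain asyncio: a background task plays the pipeline, and the request handler reads the same store the pipeline writes to. The in-memory dict below is a stand-in for the shared database; all names are illustrative:

```python
import asyncio

store: dict[str, int] = {}  # stands in for the shared database

async def pipeline():
    """Background ingest task running in the same process as the API."""
    for n in range(5):
        store[f"article-{n}"] = n * 100  # write a computed metric
        await asyncio.sleep(0)  # yield to the event loop between writes

async def handler() -> dict[str, int]:
    """API handler: reads whatever the pipeline has written so far."""
    return dict(store)

async def main() -> dict[str, int]:
    task = asyncio.create_task(pipeline())
    await task  # in a real app this task would run for the server's lifetime
    return await handler()

result = asyncio.run(main())
```

Because both coroutines share one event loop and one process, there is no queue, no cron job, and no second deployment between the pipeline and the API.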
from neutron import loader, page, render

@loader
async def load(args):
    db = args.ctx.db
    trending = await db.sql().query(
        "SELECT id, title FROM articles "
        "ORDER BY views_24h DESC LIMIT 10"
    ).all()
    similar = await db.vector("articles").search(
        embed("python frameworks")  # embed(): your embedding helper
    ).k(5).execute()
    return {"trending": trending, "similar": similar}

@page("/")
async def home(data):
    return render("home.html", data)

What it's for
LLM applications where your tools, prompts, and memory live in one app. Data pipelines that need a typed schema and a real database. Async APIs that don't want Django's assumptions. Anywhere you'd reach for FastAPI but also want a real batteries-included framework around it.
Why Python?
Because the AI ecosystem lives here. Because asyncpg hits C-like throughput against Nucleus. Because Pydantic v2 compiled in Rust is as fast as validation gets. You don't pick Python for speed; you pick it for the libraries, and then you pick a framework that doesn't waste them.
Part of a bigger system
Train a model in Neutron Mojo, expose it through Neutron Python's MCP server, consume it from Neutron TypeScript on the edge. All three backed by the same Nucleus database. One contract, many runtimes.