
Neutron Elixir

A BEAM-native backend framework for systems that must stay up. OTP supervision, Plug + Bandit, Phoenix-style channels, tiered ETS cache, and a typed Nucleus client for all 14 data models.

Let it crash. The supervisor handles the rest.

Available
OTP Supervision Trees
Bandit HTTP/1 + HTTP/2
ETS Microsecond Cache
Hot Code Reload

Fault tolerance isn't a library — it's the runtime.

Every other language treats a crashed request as an outage. On the BEAM it's just a restart. Neutron Elixir uses OTP supervisors, isolated processes, and message passing so a broken handler can't take the server down. Pair that with a Plug-based router, Bandit (pure-Elixir HTTP/2), a Phoenix-style channel behaviour, and a full Nucleus client — and you get a backend that stays up while you ship.

lib/my_app/router.ex
defmodule MyApp.Router do
  use Neutron.Router

  plug Neutron.Middleware.RequestID
  plug Neutron.Middleware.Logger
  plug Neutron.Middleware.Recovery
  plug Neutron.Middleware.CORS
  plug Neutron.Middleware.RateLimit, per_ip: 100
  plug Neutron.Middleware.Auth

  get "/health", do: send_resp(conn, 200, ~s({"status":"ok"}))

  post "/messages" do
    %{"text" => text} = conn.body_params
    case MyApp.Messages.create(text) do
      {:ok, msg}     -> json(conn, 201, msg)
      {:error, errs} -> problem(conn, 422, errs)
    end
  end

  channel "room:*", MyApp.RoomChannel
end
Plug router + middleware + channel. Mounted under a supervisor.
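The router itself is just one child in an OTP supervision tree. A minimal sketch of how it might be mounted, using Bandit's standard child spec (the `MyApp.Application` and `MyApp.Cache` names are illustrative, not part of the Neutron contract):

```elixir
defmodule MyApp.Application do
  use Application

  @impl true
  def start(_type, _args) do
    children = [
      # Bandit serves the Plug router; if it crashes, the supervisor restarts it
      {Bandit, plug: MyApp.Router, port: 4000},
      # other children (cache, job queue, presence) sit alongside it
      MyApp.Cache
    ]

    # :one_for_one — a crashing child is restarted alone; siblings keep running
    Supervisor.start_link(children, strategy: :one_for_one, name: MyApp.Supervisor)
  end
end
```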
OTP supervisors
Process trees with automatic restart strategies. A crashing handler takes down one process, not the server. Children restart in microseconds.
Plug + Bandit
Composable middleware on Bandit, pure-Elixir HTTP/1 and HTTP/2. Ten-layer middleware stack matching the Neutron contract.
Channels + presence
Phoenix-style channel behaviour with CRDT-based presence tracking. Fan out WebSocket messages to thousands of clients per node.
Full Nucleus client
All 14 data models through Postgrex. SQL, KV, Vector, Graph, TimeSeries, Document, FTS, Geo, Blob, Streams, Columnar, Datalog, CDC, PubSub.
Jobs + tiered cache
GenServer-backed job queue with retries. ETS L1 (microseconds) + Nucleus KV L2 tiered cache. Session storage in either tier.
Cluster & hot reload
Distributed Erlang for multi-node clustering. Hot code reload for zero-downtime deploys. libcluster discovery included.

The BEAM advantage

Processes
Millions of lightweight processes per node
Supervisors
Automatic restart, no downtime
Channels
Thousands of WebSockets per node
Cluster
Distributed Erlang + Phoenix.PubSub
Hot reload
Ship code without restarts
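Those numbers aren't abstract. Spawning a hundred thousand processes is routine on the BEAM, and coordinating them takes nothing but message passing:

```elixir
parent = self()

# Each spawn is a fully isolated process — own heap, own mailbox, a few KB each
for i <- 1..100_000 do
  spawn(fn -> send(parent, {:done, i}) end)
end

# Collect every reply via the parent's mailbox (no shared memory anywhere)
count =
  Enum.reduce(1..100_000, 0, fn _, acc ->
    receive do
      {:done, _i} -> acc + 1
    end
  end)

IO.puts("#{count} processes spawned and reported back")
```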

What it's for

High-concurrency APIs that absolutely cannot go down. Real-time systems with tens of thousands of WebSocket connections. Multi-node clusters where nodes fail and traffic keeps flowing. Chat, notifications, presence, telemetry collection — anywhere the cost of a restart is higher than the cost of a crash.

Why the BEAM?

Because it was designed for concurrency, distribution, and fault tolerance at telecom scale. Isolated processes with preemptive scheduling, message passing instead of shared memory, supervisor hierarchies that restart the broken parts automatically. It's the only runtime where "nine nines" is a real engineering target instead of marketing.
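That restart behaviour is observable in a few lines of plain OTP, no framework required (`Flaky` is an illustrative name):

```elixir
defmodule Flaky do
  use GenServer

  def start_link(_opts),
    do: GenServer.start_link(__MODULE__, nil, name: __MODULE__)

  def crash, do: GenServer.cast(__MODULE__, :boom)

  @impl true
  def init(_), do: {:ok, nil}

  @impl true
  def handle_cast(:boom, _state), do: raise("handler blew up")
end

# One supervised child, one_for_one restart strategy
{:ok, _sup} = Supervisor.start_link([Flaky], strategy: :one_for_one)

pid_before = Process.whereis(Flaky)
Flaky.crash()
Process.sleep(100)
pid_after = Process.whereis(Flaky)

# The process died and came back under a fresh pid — callers never noticed
IO.inspect({pid_before != pid_after, is_pid(pid_after)})
```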

Part of a bigger system

Run Neutron Elixir for the services that must stay up. Pair it with Neutron TypeScript on the edge, Rust for performance-critical paths, Go for microservices — all reading the same Nucleus database. Each piece at its peak, one source of truth.