Gleam on BEAM: Building Type-Safe, Fault-Tolerant Distributed Systems
Gleam is a statically typed functional language that compiles to Erlang source code and runs on the BEAM virtual machine. It combines Hindley-Milner type inference with Erlang's actor-based concurrency model, enabling compile-time safety guarantees while preserving the "let it crash" fault-tolerance philosophy.
Why Gleam on BEAM
The BEAM provides preemptive scheduling, per-process garbage collection, and hot code reloading. Gleam adds a strong static type system that catches entire categories of errors at compile time: non-exhaustive pattern matches, type mismatches, and missing-value errors (Gleam has no null). The result is systems that are both type-safe and fault-tolerant.
Key advantages:
- No runtime type errors - the type system proves correctness before deployment
- Seamless OTP integration - supervisors, gen_servers, and processes work with type-safe APIs
- Cross-platform compilation - targets both BEAM and JavaScript from a single codebase
The Type System
Gleam uses algebraic data types and pattern matching as core constructs. Case expressions must be exhaustive: every possible value of the matched type must be handled.
Custom Types and Pattern Matching
pub type ConnectionError {
  Timeout
  Refused
  InvalidResponse(String)
}

pub fn handle_result(result: Result(String, ConnectionError)) -> String {
  case result {
    Ok(data) -> "Received: " <> data
    Error(Timeout) -> "Connection timed out"
    Error(Refused) -> "Connection refused"
    Error(InvalidResponse(msg)) -> "Invalid: " <> msg
  }
}
The compiler enforces exhaustive pattern matching. Missing a branch results in a compile-time error.
The Result Type for Error Handling
Gleam has no exceptions. Errors are values, handled explicitly via the built-in Result type:
import gleam/result

pub fn divide(a: Int, b: Int) -> Result(Int, String) {
  case b == 0 {
    True -> Error("Division by zero")
    False -> Ok(a / b)
  }
}

pub fn safe_calculation() -> Result(Int, String) {
  use result <- result.try(divide(10, 2))
  use final <- result.try(divide(result, 5))
  Ok(final)
}
The result.try function chains operations, short-circuiting on the first Error. The use expression provides clean syntax for monadic chaining.
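At the program boundary, a chained Result is consumed with an ordinary case expression. A minimal sketch using the divide function above (int.to_string comes from the standard library's gleam/int module; describe is a hypothetical name):

```gleam
import gleam/int

pub fn describe(a: Int, b: Int) -> String {
  // divide is the function defined above.
  case divide(a, b) {
    Ok(value) -> "Quotient: " <> int.to_string(value)
    Error(reason) -> "Failed: " <> reason
  }
}
```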
Concurrency and OTP
Gleam provides type-safe wrappers around OTP primitives through the gleam_otp package. Add it to your project with gleam add gleam_otp. Processes communicate via typed messages.
Actor Pattern
import gleam/otp/actor
import gleam/erlang/process.{type Subject, send}

pub type CounterMessage {
  Increment
  Decrement
  Get(reply_with: Subject(Int))
}

pub fn handle_message(
  msg: CounterMessage,
  state: Int,
) -> actor.Next(Int, CounterMessage) {
  case msg {
    Increment -> actor.continue(state + 1)
    Decrement -> actor.continue(state - 1)
    Get(reply_with) -> {
      send(reply_with, state)
      actor.continue(state)
    }
  }
}

pub fn start_counter() -> Result(Subject(CounterMessage), actor.StartError) {
  actor.new(0)
  |> actor.on_message(handle_message)
  |> actor.start
}
The Subject(message) type ensures messages sent to a process match its expected message type. The actor builder pattern chains configuration: actor.new(state) creates the builder, actor.on_message(handler) sets the callback, and actor.start spawns the process.
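With the counter running, a client sends fire-and-forget messages with process.send and makes a synchronous request with process.call. A sketch (the argument order of process.call has shifted between gleam_erlang releases; this follows the subject, request-constructor, timeout form):

```gleam
import gleam/erlang/process

pub fn demo() -> Int {
  let assert Ok(counter) = start_counter()
  process.send(counter, Increment)
  process.send(counter, Increment)
  // Get carries a reply Subject; call waits up to 100 ms for the answer.
  process.call(counter, Get, 100)
}
```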
Supervision Trees
Gleam's gleam_otp package provides the supervisor module for building supervision trees. Children are defined using worker specs and added via an init function:
import gleam/otp/supervisor
import gleam/otp/actor

pub fn start_supervision_tree() -> Result(
  Subject(supervisor.Message),
  supervisor.StartError,
) {
  supervisor.start(fn(children) {
    children
    // start_counter is defined above; start_server is a second
    // worker-starting function defined elsewhere.
    |> supervisor.add(supervisor.worker(fn(_) { start_counter() }))
    |> supervisor.add(supervisor.worker(fn(_) { start_server() }))
  })
}
For custom restart strategies, use supervisor.start_spec with a Spec record:
pub fn start_supervision_with_strategy() -> Result(
  Subject(supervisor.Message),
  supervisor.StartError,
) {
  let spec =
    supervisor.Spec(
      argument: Nil,
      max_frequency: 3,
      frequency_period: 5,
      init: fn(children) {
        children
        |> supervisor.add(supervisor.worker(fn(_) { start_counter() }))
        |> supervisor.add(supervisor.worker(fn(_) { start_server() }))
      },
    )
  supervisor.start_spec(spec)
}
Restart strategies follow the standard OTP semantics:
- OneForOne - restart only the crashed child (default)
- OneForAll - restart all children when any crashes
- RestForOne - restart the crashed child and all started after it
For dynamic child creation, combine supervisors with a registry or use Erlang's pg module for process groups, spawning workers on demand and linking them to a supervisor.
Interoperability with Erlang and Elixir
Gleam can call Erlang functions directly via external function declarations:
@external(erlang, "logger", "info")
pub fn log_info(message: String) -> Nil
import gleam/dict.{type Dict}

@external(erlang, "map_ffi", "get")
pub fn map_get(key: a, map: Dict(a, b)) -> Result(b, Nil)

External types are trusted, not checked: the compiler assumes the declared type matches what the Erlang function actually returns. Erlang's maps:get/2 throws an exception for missing keys, so binding it directly as a Result would be unsound. Instead, point the external at a small Erlang wrapper (map_ffi is a hypothetical module) that calls maps:find/2 and converts its result to {ok, Value} or {error, nil}, the terms Gleam's Result compiles to.
For more complex interop, use the decode package to safely decode untyped Erlang terms:
import decode/zero as decode
import gleam/dynamic

pub type UserData {
  UserData(id: Int, name: String)
}

pub fn decode_user(
  data: dynamic.Dynamic,
) -> Result(UserData, List(decode.DecodeError)) {
  let decoder = {
    use id <- decode.field("id", decode.int)
    use name <- decode.field("name", decode.string)
    decode.success(UserData(id: id, name: name))
  }
  decode.run(data, decoder)
}
The decoder composes field extractors using use syntax, then finalizes with decode.success. Invalid data produces structured errors indicating which fields failed and why.
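To exercise the decoder without a real Erlang caller, a value can be erased to Dynamic with dynamic.from (present in the gleam_stdlib versions this decode package targets); a Dict compiles to the same Erlang map an interop call would hand back. A sketch:

```gleam
import gleam/dict
import gleam/dynamic

pub fn demo() -> Result(UserData, List(decode.DecodeError)) {
  let raw =
    dynamic.from(
      dict.from_list([
        #("id", dynamic.from(1)),
        #("name", dynamic.from("alice")),
      ]),
    )
  // Succeeds only when both fields are present with the right types.
  decode_user(raw)
}
```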
Performance Considerations
Gleam's immutable data structures align with BEAM's design. Processes share nothing and communicate by copying messages, which enables per-process garbage collection without stop-the-world pauses.
Key performance characteristics:
- Pattern matching compiles to efficient BEAM jump tables; exhaustive matches have no runtime overhead
- Binary matching uses BEAM's native bit syntax for zero-copy parsing of protocols and messages
- Tail call optimization is guaranteed; recursive functions use constant stack space
- Process isolation means a crash in one process cannot corrupt another's memory
For hot paths, prefer pattern matching over nested function calls. The compiler optimizes case expressions into direct jumps.
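The binary-matching point above can be sketched with Gleam's bit-array patterns. The frame layout here is hypothetical: a 16-bit big-endian length followed by that many bytes of payload (bit_array.slice is from the standard library and returns an Error if the payload is shorter than the declared length):

```gleam
import gleam/bit_array

pub fn parse_frame(data: BitArray) -> Result(BitArray, Nil) {
  case data {
    // length matches a 16-bit integer; rest binds the remaining bytes.
    <<length:16, rest:bytes>> -> bit_array.slice(rest, 0, length)
    _ -> Error(Nil)
  }
}
```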
Testing Strategies
Gleam projects use gleeunit for unit testing. Add it with gleam add gleeunit --dev.
import gleeunit
import gleeunit/should

pub fn main() {
  gleeunit.main()
}

// Assumes divide and safe_calculation are imported from the module above.
pub fn divide_by_zero_test() {
  divide(10, 0)
  |> should.equal(Error("Division by zero"))
}

pub fn safe_calculation_test() {
  safe_calculation()
  |> should.equal(Ok(1))
}
For OTP testing, use gleam/otp/actor in tests to spawn supervised processes and verify restart behavior. Mock external dependencies by passing functions as arguments rather than using external function calls directly.
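One way to realise the function-passing approach: the unit under test receives its dependency as a parameter, and the test supplies a stub. A sketch of the pattern (fetch_greeting is a hypothetical example, not a library function):

```gleam
import gleeunit/should

// Takes the HTTP getter as an argument instead of calling an external directly.
pub fn fetch_greeting(http_get: fn(String) -> Result(String, Nil)) -> String {
  case http_get("/greeting") {
    Ok(body) -> body
    Error(Nil) -> "fallback"
  }
}

pub fn fetch_greeting_test() {
  fetch_greeting(fn(_path) { Ok("hello") })
  |> should.equal("hello")
}
```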
Deployment Patterns
Gleam applications deploy as standard BEAM releases, built with gleam export erlang-shipment or through rebar3. The typical production setup:
- Build a release with rebar3 release after compiling Gleam to Erlang
- Package in Docker using multi-stage builds for minimal image size
- Configure via environment variables using gleam_erlang for env access
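Environment-based configuration can be sketched with gleam_erlang's os module (the module layout has moved between releases; get_env returns Result(String, Nil)):

```gleam
import gleam/erlang/os
import gleam/int
import gleam/result

pub fn port() -> Int {
  // Fall back to 8080 when PORT is unset or not a number.
  os.get_env("PORT")
  |> result.try(int.parse)
  |> result.unwrap(8080)
}
```

int.parse also fails with Nil, matching get_env's error type, so result.try composes the two directly.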
Example Dockerfile structure:
FROM gleam/gleam:latest AS builder
WORKDIR /app
COPY . .
RUN gleam export erlang-shipment
FROM erlang:26-alpine
COPY --from=builder /app/build/erlang-shipment /app
ENTRYPOINT ["/app/entrypoint.sh", "run"]
Hot code reloading works with Gleam releases. Deploy new .beam files to the running system and call l(Module). in the Erlang shell, or use appup files for automated upgrades.
Getting Started
1. Install Gleam (requires Erlang/OTP 26+):
# macOS
brew install gleam
# Linux
curl -fsSL https://gleam.run/install.sh | sh
2. Create a new project:
gleam new my_app
cd my_app
gleam add gleam_otp
3. Build and run:
gleam run
The project structure:
- src/ - application source files
- test/ - test files
- gleam.toml - project configuration and dependencies