The actor model, first proposed by Carl Hewitt in 1973, has proven to be one of the most elegant paradigms for building concurrent and distributed systems. At its core, the model treats "actors" as the fundamental unit of computation—isolated entities that communicate exclusively through message passing, eliminating the complexity and danger of shared mutable state. For an excellent introduction, see Hewitt's talk on the Actor Model.
This blog post explores a practical, multi-language implementation of the actor model spanning C++, Rust, and Python. These implementations share a common wire protocol enabling seamless cross-language communication while leveraging each language's unique strengths.
Carl Hewitt's 1973 actor model defined actors with three fundamental capabilities. When an actor receives a message, it can:

- send a finite number of messages to other actors,
- create a finite number of new actors, and
- designate the behavior to be used for the next message it receives.
This framework implements a pragmatic subset optimized for high-performance systems. Here's how we differ:
| Aspect | Hewitt's Model | This Implementation |
|---|---|---|
| Actor Creation | Dynamic - actors spawn new actors at runtime | Static - topology defined at startup |
| Message Delivery | Unordered, at-most-once | Ordered per sender-receiver pair, reliable (local) |
| Location | Fully transparent | Transparent via ActorRef abstraction |
| Supervision | Not specified | Not implemented (fail-fast philosophy) |
| Synchronous Calls | Pure async only | fast_send for request-response patterns |
| Thread Model | Abstract (no threads in original) | One thread per actor, or Groups for pooling |
| Fairness | Guaranteed eventual delivery | Best-effort, priority-based |
Static topology enables predictable startup and a fixed resource footprint: every actor, queue, and thread exists before the first message flows. Synchronous fast_send exists because real-world request-response patterns would otherwise pay full queuing latency for what is logically a function call. There are no supervision trees because the framework takes a fail-fast philosophy: a failed actor brings the process down rather than being restarted in place.
The core benefits of the actor model remain intact: isolated state, message-passing concurrency, and location transparency.
This is an engineering implementation of actor concepts, not a pure theoretical model. We trade some flexibility for predictability and performance.
In actor systems, each actor:

- owns its state privately (no other actor can read or write it),
- processes messages from its mailbox one at a time, and
- communicates with other actors exclusively by sending messages.
This model eliminates data races by design—there's no shared memory to corrupt. Each actor is an island of sequential execution in a sea of concurrency.
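As a minimal sketch of this idea (hypothetical names, not part of the framework), a Python actor can be modeled as a queue plus one consumer thread; the state is safe precisely because only that thread ever touches it:

```python
import queue
import threading

class MiniActor:
    """Toy actor: private state plus strictly sequential message processing."""
    def __init__(self):
        self._mailbox = queue.Queue()
        self._count = 0  # private state, touched only by the actor's own thread
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def send(self, msg):
        # The only way to interact with the actor: enqueue a message.
        self._mailbox.put(msg)

    def _run(self):
        while True:
            msg = self._mailbox.get()
            if msg is None:      # poison pill shuts the actor down
                break
            self._count += 1     # no locks needed: single consumer thread

    def stop(self):
        self._mailbox.put(None)
        self._thread.join()

actor = MiniActor()
for i in range(100):
    actor.send(i)
actor.stop()
print(actor._count)  # 100: every message processed exactly once, in order
```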
A key innovation in this framework is the unified ActorRef concept. An ActorRef is an opaque handle to an actor that abstracts away whether the actor is:

- local: running in the same process, reached through an in-memory queue, or
- remote: running in another process or on another machine, reached over the wire protocol.
```cpp
// C++ - Same API whether local or remote
actor_ref.send(msg, sender);

// The actor doesn't know or care if pong_ref is local or remote
pong_ref.send(std::make_unique<Ping>(count), self_ref());
```

```rust
// Rust - Identical abstraction
pong_ref.send(Box::new(Ping { count: 1 }), ctx.self_ref());
```

```python
# Python - Same pattern
self.pong_ref.send(Ping(count=1), self.self_ref)
```
This transparency is powerful: you can develop and test with all actors in a single process, then deploy to distributed systems with minimal code changes.
The C++ implementation (actors-cpp) is designed for scenarios where every microsecond matters—high-frequency trading, real-time systems, and performance-critical infrastructure.
Key Features:

Per-actor thread configuration, including CPU pinning and real-time scheduling:

```cpp
ThreadConfig config;
config.affinity = 0;                   // Pin to CPU 0
config.rt_priority = 50;               // SCHED_FIFO priority
config.scheduling_policy = SCHED_FIFO;
mgr.manage<PricingActor>("pricer", {}, config);
```
The fast_send method bypasses the message queue for request-response patterns, eliminating queuing latency entirely:

```cpp
// Synchronous call - response returned directly
auto response = actor_ref.fast_send(new PriceRequest(symbol), this);
```
PingActor initiates the exchange on startup, sends Ping messages to PongActor, and receives Pong replies. PongActor simply echoes each ping back with a pong. The MESSAGE_HANDLER macro registers handlers in the constructor:

```cpp
// Define messages
struct Ping : public Message_N<100> {
    int count;
    Ping(int c) : count(c) {}
};

struct Pong : public Message_N<101> {
    int count;
    Pong(int c) : count(c) {}
};

// PongActor receives Ping and sends Pong
class PongActor : public Actor {
public:
    PongActor() {
        MESSAGE_HANDLER(Ping, on_ping);
    }
    void on_ping(const Ping* m) {
        cout << "Received ping " << m->count << endl;
        reply(new Pong(m->count));
    }
};

// PingActor sends Ping and receives Pong
class PingActor : public Actor {
    Actor* pong_actor;
public:
    PingActor(Actor* pong) : pong_actor(pong) {
        MESSAGE_HANDLER(msg::Start, on_start);
        MESSAGE_HANDLER(Pong, on_pong);
    }
    void on_start(const msg::Start*) {
        pong_actor->send(new Ping(1), this);
    }
    void on_pong(const Pong* m) {
        cout << "Received pong " << m->count << endl;
        pong_actor->send(new Ping(m->count + 1), this);
    }
};
```
Timers are set in microseconds:

```cpp
ctx.timer().set(1000, timer_id); // fires after 1000 microseconds
```
When to Use C++: C++ lends itself to aggressive optimization, making it the best fit for applications that demand the lowest latencies: message memory pooling, lock-contention reduction, and cache-friendly data layouts are all straightforward to implement. Use C++ for financial trading systems, game engines, embedded real-time systems, or any application where you need deterministic latency and maximum throughput.
The Rust implementation (actors-rust) brings memory safety guarantees while maintaining excellent performance. It's ideal for systems where reliability is paramount but you can't sacrifice too much speed.
Key Features:
The handle_messages! macro generates type-safe dispatch code:

```rust
handle_messages!(PingActor,
    Start => on_start,
    Pong => on_pong
);

impl PingActor {
    fn on_start(&mut self, _msg: &Start, ctx: &mut ActorContext) {
        self.pong_ref.send(Box::new(Ping { count: 1 }), ctx.self_ref());
    }
    fn on_pong(&mut self, msg: &Pong, ctx: &mut ActorContext) {
        println!("Received pong: {}", msg.count);
    }
}
```
Memory Safety Without GC: Rust's ownership system ensures no dangling pointers or data races, without garbage collection pauses.
Async ZMQ Layer: The remote communication layer uses async I/O with a dedicated sender thread, ensuring sends never block the actor.
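That pattern (a dedicated sender thread draining an outbox so callers never block on I/O) can be sketched in Python; the names here are hypothetical and the real layer uses ZMQ sockets:

```python
import queue
import threading

class AsyncSender:
    """Toy non-blocking sender: callers enqueue; a dedicated thread drains."""
    def __init__(self, transmit):
        self._outbox = queue.Queue()
        self._transmit = transmit   # e.g. a socket send, injected here
        self._thread = threading.Thread(target=self._drain, daemon=True)
        self._thread.start()

    def send(self, frame):
        self._outbox.put(frame)     # returns immediately; never blocks the actor

    def _drain(self):
        while True:
            frame = self._outbox.get()
            if frame is None:       # shutdown signal
                break
            self._transmit(frame)   # blocking I/O confined to this thread

    def close(self):
        self._outbox.put(None)
        self._thread.join()

sent = []
sender = AsyncSender(sent.append)
for i in range(5):
    sender.send(f"msg-{i}")
sender.close()
print(len(sent))  # 5
```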
Manager Lifecycle: Clean startup/shutdown with the Manager pattern.

```rust
let mut mgr = Manager::new();
let ping_ref = mgr.manage("ping", Box::new(PingActor::new(pong_ref)), ThreadConfig::default());
mgr.init(); // Sends Start to all actors
mgr.run();  // Blocks until terminated
mgr.end();  // Sends Shutdown to all actors
```
When to Use Rust: Network services, systems programming, anywhere you need C++-like performance with memory safety guarantees, or when building infrastructure that must never crash.
The Python implementation (actors-py) prioritizes developer productivity and integration with Python's rich ecosystem of numerical and AI libraries.
Key Features:
Convention-based dispatch: handlers are discovered by naming convention, on_<MessageType>.

```python
class PingActor(Actor):
    def on_Start(self, msg: Start, sender: ActorRef):
        self.pong_ref.send(Ping(count=1), self.self_ref)

    def on_Pong(self, msg: Pong, sender: ActorRef):
        print(f"Received pong: {msg.count}")
```
Messages are registered for remote communication with the @register_message decorator:

```python
@register_message
class Ping:
    def __init__(self, count: int):
        self.count = count
```
NumPy/Pandas Integration: Direct access to Python's numerical computing stack.
Quick Prototyping: Rapidly test actor topologies before implementing in a compiled language.
When to Use Python: Data science pipelines, AI/ML systems, prototyping actor designs, or any system where you need to integrate with Python libraries (TensorFlow, PyTorch, pandas, etc.).
One of the most powerful features of this framework is seamless cross-language communication. A Rust actor can send messages to a C++ actor, which can reply to a Python actor.
All three implementations use the same JSON-over-ZMQ wire protocol:
```json
{
  "type": "Ping",
  "target": "pong_actor",
  "sender": {
    "name": "ping_actor",
    "endpoint": "tcp://localhost:5001"
  },
  "payload": {
    "count": 42
  }
}
```
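As a sketch of working with this envelope format (the helper names are mine, not the framework's), building and parsing it is plain JSON handling:

```python
import json

def make_envelope(msg_type, target, sender_name, sender_endpoint, payload):
    # Mirrors the wire format shown above: type, target, sender, payload.
    return json.dumps({
        "type": msg_type,
        "target": target,
        "sender": {"name": sender_name, "endpoint": sender_endpoint},
        "payload": payload,
    })

def parse_envelope(raw):
    env = json.loads(raw)
    return env["type"], env["target"], env["sender"], env["payload"]

wire = make_envelope("Ping", "pong_actor", "ping_actor",
                     "tcp://localhost:5001", {"count": 42})
msg_type, target, sender, payload = parse_envelope(wire)
print(msg_type, target, payload["count"])  # Ping pong_actor 42
```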
Protocol Components: type names the message class, target names the receiving actor, sender carries the replying actor's name and endpoint, and payload holds the serialized message fields.
Here's a complete cross-language ping-pong where Rust sends to Python and Python replies back:
Python Pong Actor (receives Ping, sends Pong):
```python
# python_pong.py
@register_message
class Ping:
    def __init__(self, count: int):
        self.count = count

@register_message
class Pong:
    def __init__(self, count: int):
        self.count = count

class PongActor(Actor):
    def on_ping(self, env: Envelope):
        print(f"Python received ping {env.msg.count}")
        self.reply(env, Pong(env.msg.count))

# Setup
mgr = Manager(endpoint="tcp://*:5001")
zmq_sender = ZmqSender(local_endpoint="tcp://localhost:5001")
zmq_receiver = ZmqReceiver("tcp://*:5001", mgr, zmq_sender)
mgr.manage("pong", PongActor())
mgr.init()
mgr.run()
```
Rust Ping Actor (sends Ping, receives Pong):
```rust
// rust_ping.rs
#[derive(Serialize, Deserialize, Default)]
struct Ping { count: i32 }
define_message!(Ping);

#[derive(Serialize, Deserialize, Default)]
struct Pong { count: i32 }
define_message!(Pong);

struct PingActor {
    pong_ref: ActorRef,
    manager_handle: ManagerHandle,
}

handle_messages!(PingActor,
    Start => on_start,
    Pong => on_pong
);

impl PingActor {
    fn on_start(&mut self, _msg: &Start, ctx: &mut ActorContext) {
        println!("Sending ping to Python");
        self.pong_ref.send(Box::new(Ping { count: 1 }), ctx.self_ref());
    }
    fn on_pong(&mut self, msg: &Pong, ctx: &mut ActorContext) {
        println!("Rust received pong {} from Python", msg.count);
        if msg.count < 5 {
            self.pong_ref.send(Box::new(Ping { count: msg.count + 1 }), ctx.self_ref());
        } else {
            self.manager_handle.terminate();
        }
    }
}
```
The message types just need matching names and field layouts—the framework handles serialization automatically.
While the actor model is inherently asynchronous, real-world systems often need request-response patterns. The fast_send mechanism provides this without breaking actor isolation.
```cpp
// Request-response in one call - blocks until reply
auto response = pricer_ref.fast_send(new PriceRequest(symbol), this);
if (response) {
    auto price = dynamic_cast<const PriceResponse*>(response.get());
    // Use price->value
}
```
This is semantically equivalent to a synchronous function call but maintains actor isolation—the actor still processes messages sequentially and maintains its own state. The key performance benefit is that the message is processed in the sender's thread, eliminating the context switch that would occur with async messaging.
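A toy Python model of the two delivery paths (all names hypothetical) makes the difference concrete: fast_send runs the handler in the caller's thread under the actor's lock, while send merely enqueues:

```python
import queue
import threading

class Pricer:
    """Toy actor supporting both delivery paths."""
    def __init__(self):
        self._mailbox = queue.Queue()  # async path: normal queued delivery
        self._lock = threading.Lock()  # preserves one-message-at-a-time

    def handle(self, symbol):
        # Handler logic; runs on whichever thread invokes it.
        return {"symbol": symbol, "price": 101.5}

    def send(self, symbol):
        # Async path: enqueue; the actor's own thread would process it later.
        self._mailbox.put(symbol)

    def fast_send(self, symbol):
        # fast_send path: run the handler directly in the CALLER's thread,
        # skipping the queue and the context switch, while the lock keeps
        # the sequential-processing guarantee intact.
        with self._lock:
            return self.handle(symbol)

pricer = Pricer()
response = pricer.fast_send("AAPL")  # blocks until handled, like a function call
print(response["price"])  # 101.5
```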
Trade-offs: the caller blocks until the reply is ready, and the handler runs on the caller's thread, so a slow handler stalls the sender. Reserve fast_send for short, bounded request-response exchanges.
Not every actor needs a dedicated thread. For systems with many small actors that don't require strict isolation, Groups provide a thread pool model.
In a trading system, you might have hundreds of per-instrument handlers alongside a handful of latency-critical actors such as pricers and order routers.
Creating hundreds of threads causes excessive context switching, scheduler pressure, and wasted stack memory for actors that are mostly idle.
A Group runs multiple actors on a shared thread pool:
```rust
use actors::Group;

// Create a group with 4 worker threads
let mut group = Group::new("workers", 4);

// Add lightweight actors - they share the thread pool
let ref1 = group.add("handler_AAPL", Box::new(InstrumentHandler::new("AAPL")));
let ref2 = group.add("handler_GOOGL", Box::new(InstrumentHandler::new("GOOGL")));
let ref3 = group.add("handler_MSFT", Box::new(InstrumentHandler::new("MSFT")));
// ... add hundreds more

group.start();

// Send messages normally - ActorRef API is identical
ref1.send(Box::new(PriceUpdate { price: 150.25 }), None);

group.end();
```
```cpp
// C++ Group usage
Group group("workers", 4);
group.add<InstrumentHandler>("handler_AAPL", "AAPL");
group.add<InstrumentHandler>("handler_GOOGL", "GOOGL");
group.start();
```
```
            ┌──────────────┐
            │ Shared Queue │
            └──────┬───────┘
                   │
     ┌─────────────┼─────────────┐
     ▼             ▼             ▼
┌─────────┐   ┌─────────┐   ┌─────────┐
│Worker 1 │   │Worker 2 │   │Worker 3 │
└────┬────┘   └────┬────┘   └────┬────┘
     │             │             │
     ▼             ▼             ▼
┌─────────┐   ┌─────────┐   ┌─────────┐
│Actor A  │   │Actor B  │   │Actor C  │
│(Mutex)  │   │(Mutex)  │   │(Mutex)  │
└─────────┘   └─────────┘   └─────────┘
```
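The layout above can be sketched in Python (hypothetical names): workers pull from one shared queue, and a per-actor lock ensures at most one worker runs inside any given actor at a time:

```python
import queue
import threading

class GroupActor:
    """Toy group-hosted actor with its own mutex."""
    def __init__(self, name):
        self.name = name
        self.processed = 0
        self.lock = threading.Lock()  # per-actor mutex, as in the diagram

    def handle(self, msg):
        with self.lock:               # at most one worker inside this actor
            self.processed += 1

def worker(shared_queue):
    while True:
        item = shared_queue.get()
        if item is None:              # shutdown signal for this worker
            break
        actor, msg = item
        actor.handle(msg)

shared = queue.Queue()
actors = [GroupActor(f"handler_{s}") for s in ("AAPL", "GOOGL", "MSFT")]
workers = [threading.Thread(target=worker, args=(shared,)) for _ in range(4)]
for w in workers:
    w.start()

for i in range(300):
    shared.put((actors[i % 3], {"seq": i}))  # messages fan out over the pool
for _ in workers:
    shared.put(None)                         # one shutdown signal per worker
for w in workers:
    w.join()
print(sum(a.processed for a in actors))  # 300
```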
Use Groups when you have many lightweight actors whose work is short and bursty, and none of them needs its own core or scheduling priority.
Use dedicated threads when an actor is latency-critical and benefits from CPU affinity, real-time scheduling, or an uncontended mailbox.
Real systems often combine both:
```rust
let mut mgr = Manager::new();

// Latency-critical actors get dedicated threads with affinity
let pricer = mgr.manage("pricer", Box::new(Pricer::new()),
    ThreadConfig::with_affinity(vec![0]));
let router = mgr.manage("router", Box::new(OrderRouter::new()),
    ThreadConfig::with_affinity(vec![1]));

// Lightweight actors share a thread pool
let mut handlers = Group::new("handlers", 4);
for symbol in symbols {
    handlers.add(&format!("handler_{}", symbol),
        Box::new(InstrumentHandler::new(symbol)));
}

mgr.init();
handlers.start();
```
This framework follows a static design philosophy: the actor topology is defined at startup and remains fixed during execution.
Dynamic actor creation (spawning new actors at runtime) is not currently supported. For systems that need dynamic scaling, you would typically provision capacity up front, for example by creating the maximum expected number of actors at startup and activating them on demand.
This design is intentional for the target use cases (trading systems, real-time processing) where predictability trumps flexibility.
The actor model is exceptionally well-suited for AI-assisted development with tools like Claude. Here's why:
Each actor is a self-contained unit with private state, an explicit message interface, and no hidden dependencies on other components.
This makes it easy for an AI to understand, generate, and modify actors in isolation.
The framework uses consistent patterns that an AI can learn and replicate:
```rust
// Every actor follows this pattern
handle_messages!(MyActor,
    MessageA => on_message_a,
    MessageB => on_message_b
);

impl MyActor {
    fn on_message_a(&mut self, msg: &MessageA, ctx: &mut ActorContext) {
        // Handle message
    }
}
```
Once Claude understands this pattern, it can generate new actors correctly.
Because all three implementations follow the same conceptual model, Claude can port an actor from one language to another, generate matching message definitions on both sides of a cross-language link, and carry a fix made in one implementation over to the other two.
Actors are easily testable in isolation: instantiate one, feed it messages directly, and assert on its state and outgoing messages, with no network or threading required.
This makes AI-generated code easier to verify and debug.
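For example, a hypothetical actor's handler can be unit-tested as an ordinary method, with no threads, sockets, or framework machinery involved:

```python
class CounterActor:
    """Toy actor under test: counts pings, remembers the last count seen."""
    def __init__(self):
        self.pings = 0
        self.last = None

    def on_ping(self, msg):
        self.pings += 1
        self.last = msg["count"]

# Unit test: drive the handler directly, then assert on the actor's state.
actor = CounterActor()
for c in (1, 2, 3):
    actor.on_ping({"count": c})
assert actor.pings == 3
assert actor.last == 3
print("ok")
```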
AI can build complex systems by composing simple, single-purpose actors into larger pipelines.
The actor model doesn't support traditional inheritance. Here's why, and what to do instead.
Actors are behavioral entities, not data structures. Traditional OOP inheritance creates tight coupling, fragile base classes, and implicit shared behavior that is hard to reason about in a concurrent setting.
Actors should be simple, focused, and independent.
Instead of inheritance, use delegation:
```rust
struct BaseProcessor {
    // Shared processing logic
}

impl BaseProcessor {
    fn process(&self, data: &Data) -> Result {
        // Common processing
    }
}

struct SpecializedActor {
    processor: BaseProcessor, // Delegation, not inheritance
    special_state: i32,
}

impl SpecializedActor {
    fn on_data(&mut self, msg: &DataMessage, ctx: &mut ActorContext) {
        let result = self.processor.process(&msg.data); // Delegate
        // Add specialized behavior
    }
}
```
The actor model provides a clean abstraction for building concurrent and distributed systems. This multi-language implementation demonstrates that you can:
Choose the right language for each component: C++ for latency-critical paths, Rust for safe systems programming, Python for data processing and prototyping.
Maintain interoperability: All actors can communicate regardless of implementation language.
Scale from single-process to distributed: The ActorRef abstraction makes location transparent.
Leverage AI assistance: The pattern-based design is ideal for AI code generation.
Whether you're building a high-frequency trading system, a distributed microservices architecture, or an AI agent system, the actor model provides a solid foundation for reasoning about concurrency without the pitfalls of shared mutable state.
GitHub Repositories:
All frameworks are open source under the MIT License.
Copyright 2025 Vincent Maciejewski & M2 Tech