Applying Functional Core and Imperative Shell in Practice
A Simple Rust Example for Testable Logic and Clear Boundaries

In my previous post, I introduced the concept of shifting from layered architectures to a functional core and imperative shell. My main goal was to reduce complexity, clarify boundaries, and improve testability. The idea itself is straightforward: the functional core handles all domain logic without side effects, accepting inputs, returning outputs, and never directly interacting with databases or external systems. Meanwhile, the imperative shell deals with real-world concerns like HTTP endpoints, persistence, and third-party integrations, orchestrating calls to the core without letting infrastructure concerns pollute the domain logic.
That post was mostly theory, and I received plenty of requests for real code samples. So this time, I’ll walk through building an API for asset tracking and geofencing, structured as vertical slices. Specifically, I’ve taken the “Track Asset” feature from my Geofencing API and streamlined it for this example.
I also want to restate the motivation behind these articles. After over 30 years in software development, I’ve seen too many teams drown in complexity, often due to applying patterns that look elegant on paper but fail in practice. My aim is always to solve real-world problems in a sustainable, maintainable way, not to show off with complicated solutions. A functional core and an imperative shell, in my experience, keeps the architecture lean, clear, and ready to handle future changes.
The Use Case: Asset Tracking
To illustrate, imagine a logistics scenario with multiple trucks or devices that periodically report their location. The service then checks whether these devices are within a defined geofence. A geofence is a geographical boundary within which you can track events such as entry, exit, or presence.
From a business standpoint, a project like this needs to:
- Track an asset’s movement over time, storing the last known geofence status (inside or outside).
- Respond to new location updates from an asset and determine if it has entered or exited the geofence.
- Persist data so the system can handle real-time or historical queries.
The domain logic is fairly straightforward: given a latitude and longitude, decide whether it falls within the geofence boundaries. Then compare that new status to the asset’s previous status and figure out whether it changed from outside to inside (an entry), from inside to outside (an exit), or remained the same.
However, many code bases place this functionality deep inside an all-in-one “service” layer or scattered across multiple “manager” classes. That approach becomes messy fast, especially when test coverage is critical. By contrast, a functional core and feature-based slices keep these rules in pure logic modules, tested with no dependencies on web frameworks or databases.
Why I Use Feature Slices
Vertical slicing moves away from the traditional technical code structure of “controllers in one folder,” “services in another,” “models in a third,” and so on. Instead, each feature is an end-to-end module that contains everything it needs to do its job, from domain model and logic to external adapters:
- A domain/model file for input/output types and domain data structures.
- A logic file for purely functional rules (e.g., checking geofence membership).
- A handler file to accept requests and produce responses.
Organizing it this way ensures that everything related to the “track asset” feature sits in one place. Anyone coming into the code base can see precisely how an asset-tracking call flows from HTTP all the way into the domain rules. There is no more hunting through layers.
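Before diving into the structure, here is a minimal sketch of what the model.rs for this slice could look like, reconstructed from how its types are used in the logic and handler code below; the exact derives are an assumption on my part:
// model.rs — domain and I/O types for the track_asset slice (a sketch).
use serde::{Deserialize, Serialize};

pub struct Location {
    pub lat: f64,
    pub lon: f64,
}

#[derive(Debug, Clone, PartialEq)]
pub enum GeofenceStatus {
    Inside,
    Outside,
}

#[derive(Debug, PartialEq)]
pub enum Movement {
    Entered,
    Exited,
    StayedInside,
    StayedOutside,
    Unknown,
}

// Request body deserialized by the shell.
#[derive(Deserialize)]
pub struct TrackInput {
    pub asset_id: String,
    pub lat: f64,
    pub lon: f64,
}

// Response body serialized by the shell.
#[derive(Serialize)]
pub struct TrackOutput {
    pub asset_id: String,
    pub movement: String,
}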
Project Structure Overview
I have chosen Rust for this example. The functional core and imperative shell approach works just as well in C#, F#, Java, or any other language that can cleanly separate logic and infrastructure. I use Rust here because its strict compiler and memory safety rules inherently promote immutability and explicit state management, making it a perfect complement to the principles of the functional core. Also, Rust's performance and concurrency advantages scale well when you need to process tracking data in real time.
Below is a simplified look at how I organize my code:
src/
├── main.rs             // Thin entrypoint
├── lib.rs              // Wiring, DB setup, route config
├── shared/             // Shared infra: DB, errors, ...
│   ├── db.rs
│   ├── error.rs
│   └── mod.rs
└── features/           // Vertical slice directory
    ├── mod.rs          // Summarizes feature slices
    └── track_asset/
        ├── mod.rs
        ├── handler.rs
        ├── model.rs
        ├── logic.rs
        └── tests.rs    // Domain logic (core) tests
The shell is represented by main.rs, where the application starts, reads environment variables, and creates the HTTP server. The logic for hooking up routes (like /track) and establishing the database connection is done in lib.rs. All domain logic remains in small Rust files dedicated to the “track asset” feature.
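To show how thin that entrypoint really is, here is a minimal sketch of what main.rs could look like, assuming Actix Web and SQLx; the crate name geofencing_api is a placeholder, and only features::init_routes comes from the actual structure:
// main.rs — thin entrypoint: read config, create the pool, start the server (a sketch).
use actix_web::{web, App, HttpServer};
use sqlx::postgres::PgPoolOptions;

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    let db_url = std::env::var("DATABASE_URL").expect("DATABASE_URL must be set");
    let pool = PgPoolOptions::new()
        .connect(&db_url)
        .await
        .expect("failed to connect to Postgres");

    HttpServer::new(move || {
        App::new()
            .app_data(web::Data::new(pool.clone()))
            // Route configuration lives with the feature slices, not here.
            .configure(geofencing_api::features::init_routes)
    })
    .bind(("127.0.0.1", 8080))?
    .run()
    .await
}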
The Functional Core: Pure Domain Logic
Within track_asset/logic.rs the geofence check is a pure function:
/// Pure bounding-box check: is the location inside the example geofence?
pub fn check_geofence(loc: &Location) -> GeofenceStatus {
    if loc.lat >= 40.0 && loc.lat <= 42.0 &&
       loc.lon >= -74.0 && loc.lon <= -72.0 {
        GeofenceStatus::Inside
    } else {
        GeofenceStatus::Outside
    }
}

/// Derive the movement event by comparing the previous and new status.
pub fn compare_status(old: Option<GeofenceStatus>, new: GeofenceStatus) -> Movement {
    match (old, new) {
        (Some(GeofenceStatus::Outside), GeofenceStatus::Inside) => Movement::Entered,
        (Some(GeofenceStatus::Inside), GeofenceStatus::Outside) => Movement::Exited,
        (Some(GeofenceStatus::Inside), GeofenceStatus::Inside) => Movement::StayedInside,
        (Some(GeofenceStatus::Outside), GeofenceStatus::Outside) => Movement::StayedOutside,
        _ => Movement::Unknown,
    }
}
There’s no database logic, no HTTP calls, no thread locks. It's just raw computation. This is the functional core. It’s deliberately cut off from any side effects so it can be tested in isolation.
Domain Testing
Because the domain doesn’t depend on Actix Web or SQLx, it’s trivial to write unit tests directly. For instance:
#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_geofence_inside() {
        let loc = Location { lat: 41.0, lon: -73.5 };
        let status = check_geofence(&loc);
        assert_eq!(status, GeofenceStatus::Inside);
    }

    #[test]
    fn test_movement_entered() {
        let old = Some(GeofenceStatus::Outside);
        let new = GeofenceStatus::Inside;
        assert_eq!(compare_status(old, new), Movement::Entered);
    }
}
This style of testing is fast, runs without spinning up servers, and clarifies that domain correctness stands independent of infrastructure. For me, that’s the biggest advantage of the functional core approach: no mocking frameworks, no boilerplate, just pure logic under test.
The Imperative Shell: Handler & Database Integration
Meanwhile, the shell code is where side effects happen. In handler.rs there is a function that Actix calls when a request comes in; it retrieves the previous status from the database, compares it with the new geofence result, and then persists the updated status. A snippet looks like this:
pub async fn track_endpoint(
    input: web::Json<TrackInput>,
    db: web::Data<PgPool>,
) -> Result<HttpResponse, AppError> {
    let asset_id = &input.asset_id;
    let location = Location { lat: input.lat, lon: input.lon };
    let new_status = check_geofence(&location);

    let old_status: Option<String> = sqlx::query_scalar(
        "SELECT last_status FROM asset_status WHERE asset_id = $1"
    )
    .bind(asset_id)
    .fetch_optional(db.get_ref())
    .await
    .map_err(AppError::from)?;

    let parsed_old = old_status.and_then(|s| match s.as_str() {
        "Inside" => Some(GeofenceStatus::Inside),
        "Outside" => Some(GeofenceStatus::Outside),
        _ => None,
    });

    let movement = compare_status(parsed_old, new_status.clone());

    // Upsert the new status
    sqlx::query(
        "INSERT INTO asset_status (asset_id, last_status, updated_at)
         VALUES ($1, $2, now())
         ON CONFLICT (asset_id) DO UPDATE
         SET last_status = $2, updated_at = now()"
    )
    .bind(asset_id)
    .bind(format!("{:?}", new_status))
    .execute(db.get_ref())
    .await
    .map_err(AppError::from)?;

    Ok(HttpResponse::Ok().json(TrackOutput {
        asset_id: asset_id.clone(),
        movement: format!("{:?}", movement),
    }))
}
Here the code is more verbose because it has to deal with real-world concerns: pulling data from PostgreSQL, handling potential errors, and sending an HTTP response. Note how the domain logic (check_geofence, compare_status) remains the same pure functions from earlier.
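The AppError conversions used above live in shared/error.rs. That file isn't shown in this post, so the following is only a plausible sketch of how such an error type could bridge SQLx and Actix:
// shared/error.rs — a sketch; the real type may carry more variants.
use actix_web::{HttpResponse, ResponseError};

#[derive(Debug)]
pub enum AppError {
    Database(sqlx::Error),
}

impl std::fmt::Display for AppError {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        match self {
            AppError::Database(e) => write!(f, "database error: {e}"),
        }
    }
}

// Lets the handler use map_err(AppError::from) and the ? operator.
impl From<sqlx::Error> for AppError {
    fn from(e: sqlx::Error) -> Self {
        AppError::Database(e)
    }
}

// Letting Actix turn the error into an HTTP response keeps the handler terse.
impl ResponseError for AppError {
    fn error_response(&self) -> HttpResponse {
        HttpResponse::InternalServerError().finish()
    }
}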
Integration Test
Testing the "whole system" with the real HTTP endpoint can be done using reqwest in a separate tests/ directory:
#[tokio::test]
async fn test_track_endpoint_end_to_end() {
    let client = reqwest::Client::new();
    let base_url = "http://localhost:8080/track";

    let payload = serde_json::json!({
        "asset_id": "test-asset-1",
        "lat": 40.5,
        "lon": -73.9
    });

    let resp = client
        .post(base_url)
        .json(&payload)
        .send()
        .await
        .expect("Request failed");

    assert!(resp.status().is_success());

    let json: serde_json::Value = resp.json().await.expect("Invalid JSON");
    assert_eq!(json["asset_id"], "test-asset-1");
    // movement could be Entered, StayedInside, Exited, etc.
}
By spinning up the Actix server on a known port, I can verify the entire flow: from request parsing, through the imperative shell, into the database, and back out to the client response. This ensures confidence that all external pieces work together.
Putting It All Together
The functional core and imperative shell approach solves some age-old problems in software architecture:
- Cohesion: Domain logic is in one place, side effects are in another. It's obvious where to add new business rules and where to integrate new data sources.
- Easy testing: The domain logic can be tested by itself, without external services, while the imperative shell can be tested end-to-end with real HTTP calls.
- Minimal coupling: Each vertical slice stands on its own, so an evolution in one feature rarely breaks another.
In an asset tracking scenario, this clarity is invaluable, especially when working to deadlines or with large teams. There's no need to wade through 'service' classes that mix business rules with repository logic. Instead, the slices remain comprehensible: the Track Asset slice receives a location, updates the geofence status, and returns the resulting movement.
Next Steps
Anyone wanting to adapt this approach to another feature, whether order processing, user registration, or IoT sensor data, should find it straightforward:
- Add a new feature slice folder (e.g. order_processing).
- Create model.rs, logic.rs, handler.rs, etc. within it.
- Keep the domain pure, put side effects in the shell.
- Register the route in features::init_routes (sketched below).
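For that last step, here is a sketch of what features/mod.rs might look like once a second slice exists; the order_processing module and its create_order handler are hypothetical:
// features/mod.rs — each slice plugs its routes into the shared ServiceConfig (a sketch).
pub mod track_asset;
pub mod order_processing; // hypothetical second slice

use actix_web::web;

pub fn init_routes(cfg: &mut web::ServiceConfig) {
    cfg.route("/track", web::post().to(track_asset::handler::track_endpoint));
    cfg.route("/orders", web::post().to(order_processing::handler::create_order));
}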
It’s that simple. The approach scales well, and new developers generally appreciate that each folder is a small, self-contained part of the application. If the code grows beyond one domain service, multiple crates can share domain logic while each service has its own shell.
For a fully working reference, check out the GitHub repository.
The code inside demonstrates how to stand up a vertical slice with a functional core (no side effects) and an imperative shell (all the external calls, framework integrations, database I/O). This example was deliberately small, but the pattern works for complex applications, too. It leads to clearer boundaries, simpler tests, and fewer headaches when you refactor or scale the application.
That’s all for this follow-up to Simplify & Succeed: Replacing Layered Architectures with an Imperative Shell and Functional Core.
The next time a design challenge arises, consider whether your business logic can be made pure, and whether you can isolate side effects in feature slices. Rust’s powerful compiler helps reinforce immutability, and the result is often more robust, maintainable software.
Cheers!