
Building a Warehouse Management System with Rust and Axum

A few months ago I finished rewriting a warehouse management system, Narra Warehouse Management, from PHP/Laravel into Rust. The system is now live at api-warehouse.msdqn.dev and handles inventory, procurement, sales, finance, and reporting for day-to-day warehouse operations. This post covers why I made that choice, what the architecture looks like, and what I’d do differently.

The Problem

Warehouse management sounds simple until you look at what it actually covers. Products have variations (size, color, batch). Stock lives across multiple warehouses and has to be tracked at every movement: purchases, sales, transfers, adjustments. Purchase orders go through approval states. Financial accounts need to reconcile. Reports need to aggregate across all of it.

The Laravel system worked. But as the data model grew to 35+ entities and business rules got more complex, I kept running into the same friction: runtime bugs that should have been caught at compile time, inconsistent handling of nullable fields, and uncertainty around whether concurrent requests were safe. PHP is fine for a lot of things, but type-erased arrays carrying important domain state aren’t something you want in a system where a stock count error has real consequences.

I wanted a rewrite that gave me the compiler as a business logic validator.

Tech Stack

Backend: Rust + Axum 0.8, SeaORM + PostgreSQL, JWT auth via jsonwebtoken + argon2, OpenAPI docs via utoipa.

Frontend: React 19 + TanStack Router/Query/Table/Form + Tailwind + shadcn/ui.

Infrastructure: Nix flake for reproducible builds, NixOS module with systemd, nginx, and Let’s Encrypt managed declaratively.

Workspace: 10 Rust crates in a Cargo workspace:

apps/
  wm-auth        # authentication and JWT
  wm-database    # shared DB utilities and activity logging
  wm-errors      # AppError type
  wm-migration   # SeaORM migrations (83 and counting)
  wm-server      # binary: router + state wiring
  wm-types       # shared DTOs and response types
  wm-users       # user and role management
  wm-validation  # Validated<T> extractor
  wm-warehouse   # the bulk: inventory, orders, stock, finance
  wm-web         # static file serving

Why Rust?

Three things drove this choice:

Type safety at domain boundaries. With 35+ entities and complex state transitions (a purchase order going from draft → confirmed → received, stock movements that debit one warehouse and credit another), the compiler catching invalid state changes at build time is genuinely valuable. In PHP, representing an order state as a string means every handler has to defensively check values that might not be there. In Rust, if the type doesn’t allow it, the code doesn’t compile.
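
To make that concrete, here is a minimal, hypothetical sketch (not the production code) of an order status modeled as an enum with a single transition method, so illegal states and illegal moves are unrepresentable:

```rust
// Hypothetical sketch: order status as an enum. There is no way to
// hold a typo'd state like "drafted", and the only legal transitions
// are the ones next() returns.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum OrderStatus {
    Draft,
    Confirmed,
    Received,
}

impl OrderStatus {
    // The only legal moves are Draft -> Confirmed -> Received.
    fn next(self) -> Option<OrderStatus> {
        match self {
            OrderStatus::Draft => Some(OrderStatus::Confirmed),
            OrderStatus::Confirmed => Some(OrderStatus::Received),
            OrderStatus::Received => None, // terminal state
        }
    }
}

fn main() {
    let status = OrderStatus::Draft;
    let confirmed = status.next().expect("draft can be confirmed");
    assert_eq!(confirmed, OrderStatus::Confirmed);
    assert_eq!(confirmed.next(), Some(OrderStatus::Received));
}
```

A handler that receives an OrderStatus never has to defensively check for unexpected values; the match in next() is exhaustive, so adding a new state forces every transition site to be updated before the code compiles.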

Predictable performance. No GC pauses, zero-cost abstractions, and async I/O on Tokio. Inventory operations often touch many rows: listing orders with their items, calculating stock levels across warehouses, generating reports. The consistent latency matters when users are waiting on these queries during busy warehouse operations.

Fearless concurrency. Multiple warehouse staff hitting the API at the same time is the normal case, not the edge case. Rust’s ownership model makes data races a compile-time error rather than a production incident.

The ecosystem is mature enough now. SeaORM handles migrations and queries, Axum covers the HTTP layer, and utoipa generates OpenAPI docs from annotations. I wasn’t pioneering anything.

Why Axum Specifically?

I evaluated Actix-web and Axum. Actix is fast, but Axum’s model felt more composable. The key reasons I stuck with Axum:

Tower middleware. The whole stack (CORS, tracing, custom activity logging) is just Tower layers. This means middleware is testable in isolation and composable without framework-specific APIs.

Extractors. Type-safe request decomposition is one of Axum’s best features. When a handler signature says auth: AuthUser, Axum calls the FromRequestParts impl, which validates the JWT and constructs the user. If the token is missing or invalid, the extractor returns an error before the handler runs. Nothing leaks through.

Clean error handling. You define one error type implementing IntoResponse, and every handler can return Result<T, AppError>. The framework handles the rest.

It doesn’t fight you. Axum is thin and explicit. There’s no magic, no hidden lifecycle hooks, no framework-owned request context. You can read the source and understand exactly what happens.

Architecture: Clean Architecture in Rust

Each domain is a separate crate with three layers:

wm-warehouse/src/
  domain/
    models.rs            # plain structs, no ORM annotations
    order_repository.rs  # trait: CreateOrder, ListOrders, etc.
  application/
    create_order.rs      # use case: takes repo trait, runs business logic
    list_orders.rs
  infrastructure/
    http/
      order_handlers.rs  # Axum handlers: extract, call application, return response
      dto.rs             # request/response shapes
    persistence/
      sea_orm_order_repository.rs  # SeaORM impl of the domain trait

The domain defines a trait OrderRepository with no knowledge of HTTP, SeaORM, or Axum. The application layer takes that trait and executes business logic. The infrastructure layer wires it together.

This means every use case is testable with a mock repository, the HTTP layer is a thin translation layer, and swapping the database implementation requires touching exactly one file.
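
The pattern can be sketched in a few lines. This is a deliberately simplified, synchronous stand-in for the real (async, SeaORM-backed) trait and use case; the struct fields and names here are illustrative, not the actual API:

```rust
use std::collections::HashMap;

#[derive(Debug, Clone, PartialEq)]
struct Order {
    id: u64,
    warehouse_id: u64,
}

// Domain layer: a trait with no knowledge of HTTP or the ORM.
trait OrderRepository {
    fn insert(&mut self, order: Order);
    fn find(&self, id: u64) -> Option<Order>;
}

// Application layer: the use case depends only on the trait.
fn create_order(repo: &mut dyn OrderRepository, id: u64, warehouse_id: u64) -> Order {
    let order = Order { id, warehouse_id };
    repo.insert(order.clone());
    order
}

// Test double: an in-memory repository, no database required.
#[derive(Default)]
struct InMemoryOrderRepository {
    orders: HashMap<u64, Order>,
}

impl OrderRepository for InMemoryOrderRepository {
    fn insert(&mut self, order: Order) {
        self.orders.insert(order.id, order);
    }
    fn find(&self, id: u64) -> Option<Order> {
        self.orders.get(&id).cloned()
    }
}

fn main() {
    let mut repo = InMemoryOrderRepository::default();
    let created = create_order(&mut repo, 1, 42);
    assert_eq!(repo.find(1), Some(created));
}
```

The SeaORM implementation lives behind the same trait, so the use-case test above never opens a database connection.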

A typical handler looks like this:

#[utoipa::path(post, path = "/api/v1/orders",
    request_body = CreateOrderRequest,
    responses((status = 200, body = SingleResponse<Order>)),
    security(("bearer_auth" = [])), tag = "Orders"
)]
pub async fn create_order(
    State(state): State<WarehouseState>,
    auth: AuthUser,
    Json(body): Json<CreateOrderRequest>,
) -> Result<Json<SingleResponse<Order>>, AppError> {
    require("orders:write", &auth)?;
    let order = application::create_order::execute(
        &state.order_repo,
        body.order_type,
        body.warehouse_id,
        body.counterparty,
        body.notes,
        body.items,
        auth.id,
        body.expected_delivery_date,
        body.payment_terms_days,
    )
    .await?;
    Ok(Json(SingleResponse { data: order }))
}

State, auth, and body are all extracted before the handler body runs. If any extractor fails, the handler never executes. The ? operator propagates AppError values up, and Axum calls into_response() automatically.

Error Handling

All errors flow through one type:

#[derive(Debug, thiserror::Error)]
pub enum AppError {
    #[error("Not found: {0}")]
    NotFound(String),
    #[error("Bad request: {0}")]
    BadRequest(String),
    #[error("Unauthorized: {0}")]
    Unauthorized(String),
    #[error("Forbidden: {0}")]
    Forbidden(String),
    #[error("Conflict: {0}")]
    Conflict(String),
    #[error("Internal server error: {0}")]
    Internal(String),
    #[error("Validation error: {0}")]
    Validation(String),
    #[error(transparent)]
    Database(#[from] sea_orm::DbErr),
    #[error(transparent)]
    Jwt(#[from] jsonwebtoken::errors::Error),
    #[error("Pagination error: {0}")]
    Pagination(#[from] paginator_rs::PaginatorError),
}

impl IntoResponse for AppError {
    fn into_response(self) -> Response {
        let (status, message) = match &self {
            AppError::NotFound(msg) => (StatusCode::NOT_FOUND, msg.clone()),
            AppError::Unauthorized(msg) => (StatusCode::UNAUTHORIZED, msg.clone()),
            AppError::Forbidden(msg) => (StatusCode::FORBIDDEN, msg.clone()),
            AppError::Validation(msg) => (StatusCode::UNPROCESSABLE_ENTITY, msg.clone()),
            AppError::Database(err) => {
                tracing::error!("Database error: {err}");
                (StatusCode::INTERNAL_SERVER_ERROR, "Internal server error".to_string())
            }
            // ...
        };
        (status, Json(ErrorResponse { error: ErrorBody { code: status.as_u16(), message } }))
            .into_response()
    }
}

SeaORM errors, JWT errors, and pagination errors all have From implementations, so they convert automatically on ?. The logging for internal errors happens in into_response, so no handler needs to log its own errors.
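
The mechanism behind that automatic conversion is just From plus the ? operator. Here is a dependency-free sketch using a manual From impl in place of thiserror's #[from] attribute (the error variant and function are illustrative, not from the real codebase):

```rust
use std::num::ParseIntError;

// Simplified stand-in for the real AppError.
#[derive(Debug)]
enum AppError {
    BadRequest(String),
}

// A manual From impl: this is what #[from] generates for you.
impl From<ParseIntError> for AppError {
    fn from(err: ParseIntError) -> Self {
        AppError::BadRequest(format!("invalid number: {err}"))
    }
}

// The ? operator calls From::from on the ParseIntError, so the
// function body never mentions the conversion.
fn parse_quantity(raw: &str) -> Result<i64, AppError> {
    let qty: i64 = raw.parse()?; // ParseIntError -> AppError automatically
    Ok(qty)
}

fn main() {
    assert_eq!(parse_quantity("12").unwrap(), 12);
    assert!(parse_quantity("twelve").is_err());
}
```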

Authentication and the AuthUser Extractor

JWT claims are decoded by an Axum extractor that implements FromRequestParts:

impl<S> FromRequestParts<S> for AuthUser
where
    S: Send + Sync,
{
    type Rejection = AppError;

    async fn from_request_parts(parts: &mut Parts, _state: &S) -> Result<Self, Self::Rejection> {
        let auth_header = parts
            .headers
            .get("Authorization")
            .and_then(|v| v.to_str().ok())
            .ok_or_else(|| AppError::Unauthorized("Missing authorization header".to_string()))?;

        let token = auth_header
            .strip_prefix("Bearer ")
            .ok_or_else(|| AppError::Unauthorized("Invalid authorization format".to_string()))?;

        let jwt_secret = parts
            .extensions
            .get::<JwtSecret>()
            .ok_or_else(|| AppError::Internal("JWT secret not configured".to_string()))?;

        let key = DecodingKey::from_secret(jwt_secret.0.as_bytes());
        let token_data = decode::<TokenClaims>(token, &key, &Validation::default())?;

        Ok(AuthUser {
            id: token_data.claims.sub,
            email: token_data.claims.email,
            permissions: token_data.claims.permissions,
        })
    }
}

The JwtSecret is injected as an Axum extension layer at startup, so extractors can access it without going through State. The AuthUser struct carries the decoded permissions, which brings me to RBAC.

RBAC

Permissions follow a resource:action format: orders:read, inventory:write, users:manage. They’re baked into the JWT claims at login time, so no extra DB lookup happens per request.

The guard is a single function:

pub fn require(permission: &str, auth: &AuthUser) -> Result<(), AppError> {
    if auth.permissions.contains(&permission.to_string()) {
        Ok(())
    } else {
        Err(AppError::Forbidden(format!(
            "Missing required permission: {permission}"
        )))
    }
}

Called at the top of handlers that need it:

require("orders:write", &auth)?;

Seeded roles (admin, warehouse_manager, inventory_clerk, viewer) cover the common cases, and permissions can be assigned directly to users for one-offs.

Dependency Injection via State

The root AppState wires all repository implementations at startup:

pub struct AppState {
    pub auth_service: DynAuthService,
    pub user_state: UserState,
    pub warehouse_state: WarehouseState,
    pub jwt_secret: JwtSecret,
    pub db: Arc<DatabaseConnection>,
}

WarehouseState holds Arc<dyn XRepository> for every domain: brands, categories, products, inventory, orders, stock movements, payments, and more. When a handler receives State(state): State<WarehouseState>, it gets the concrete implementations without knowing anything about SeaORM.

This is dependency injection the Rust way: no DI container, no runtime registration, just trait objects and Arc. Verbose to wire up, but completely transparent.
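
Stripped to its essentials, the wiring pattern looks like this. A minimal sketch under assumed names (ProductRepository and SeaOrmProductRepository are illustrative stand-ins, and the real traits are async):

```rust
use std::sync::Arc;

// Domain trait. Send + Sync are required to share it across
// Tokio worker threads behind an Arc.
trait ProductRepository: Send + Sync {
    fn count(&self) -> usize;
}

// Stand-in for the SeaORM-backed implementation.
struct SeaOrmProductRepository;

impl ProductRepository for SeaOrmProductRepository {
    fn count(&self) -> usize {
        0 // a real impl would query the database here
    }
}

// State holds trait objects; handlers never see concrete types.
#[derive(Clone)]
struct WarehouseState {
    product_repo: Arc<dyn ProductRepository>,
}

fn main() {
    // All wiring happens once at startup. No DI container, no
    // runtime registration: swap the impl by changing this one line.
    let state = WarehouseState {
        product_repo: Arc::new(SeaOrmProductRepository),
    };
    assert_eq!(state.product_repo.count(), 0);
}
```

Cloning the state is cheap (it only bumps Arc reference counts), which is exactly what Axum's State extractor does per request.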

Router Composition

The top-level router is flat and readable:

pub fn build(state: AppState) -> Router {
    let v1 = v1_router(&state);
    Router::new()
        .merge(SwaggerUi::new("/swagger-ui").url("/api-docs/openapi.json", openapi))
        .nest("/api/v1", v1)
        .nest_service("/uploads", ServeDir::new(upload_dir))
        .layer(middleware::from_fn_with_state(activity_state, activity_log_middleware))
        .layer(TraceLayer::new_for_http())
        .layer(CorsLayer::permissive())
        .layer(axum::Extension(state.jwt_secret))
}

fn v1_router(state: &AppState) -> Router {
    Router::new()
        .nest("/auth", wm_auth::router(state.auth_service.clone()))
        .merge(wm_users::router(state.user_state.clone()))
        .merge(wm_warehouse::router(state.warehouse_state.clone()))
}

Each crate exposes a router() function that takes its own state slice and returns a Router. The server crate composes them. Adding a new domain means implementing the crate, exposing router(), and adding one .merge() call here.

What I Learned

Rust compile times are real. A full rebuild of the workspace takes a while. Incremental builds are fine for day-to-day work, but cold builds (CI, first checkout) are noticeably slower than PHP or Node. The workspace setup helps by avoiding redundant compilation of shared crates, but you’ll feel this. Worth it, but set expectations.

83 migrations is a lot. SeaORM migrations are Rust files managed like any other migration tool’s history. They work, but as the schema evolves you accumulate history fast. I haven’t felt pain here yet, but I can see it coming if the schema continues to grow and rollbacks become necessary.

The type safety payoff is real. The instances where the compiler caught something that would have been a production bug a stock movement that forgot to update inventory, a payment that could be applied to an already-closed order made the initial investment in the type system worth it. I’m more confident shipping changes to this codebase than I was with the PHP version.

Nix for deployment is powerful but steep. Writing a NixOS module to declare the systemd service, nginx config, and Let’s Encrypt certificates is great once you have it. Getting there requires understanding Nix’s module system, the NixOS option hierarchy, and how flakes work. I’d do it again, but I wouldn’t recommend it as a starting point.

Where It Is Now

The system is in production. Users are actively managing inventory, processing purchase orders, recording sales, and running financial reports against it. The API serves the React frontend at warehouse.msdqn.dev.

Next steps I’m thinking about: tighter integration tests against a real test database (right now coverage is mostly at the handler level), and revisiting the reporting layer: the current SQL queries for aggregated reports work but aren’t as maintainable as I’d like.

If you have questions about any specific part of the architecture, feel free to reach out.