Type-Driven Correctness in Rust

Speaker Intro

  • Principal Firmware Architect in Microsoft SCHIE (Silicon and Cloud Hardware Infrastructure Engineering) team
  • Industry veteran with expertise in security, systems programming (firmware, operating systems, hypervisors), CPU and platform architecture, and C++ systems
  • Started programming in Rust in 2017 (@AWS EC2), and have been in love with the language ever since

A practical guide to using Rust’s type system to make entire classes of bugs impossible to compile. While the companion Rust Patterns book covers the mechanics (traits, associated types, type-state), this guide shows how to apply those mechanics to real-world domains — hardware diagnostics, cryptography, protocol validation, and embedded systems.

Every pattern here follows one principle: push invariants from runtime checks into the type system so the compiler enforces them.

How to Use This Book

Difficulty Legend

Symbol   Level          Audience
──────   ─────          ────────
🟢       Introductory   Comfortable with ownership + traits
🟡       Intermediate   Familiar with generics + associated types
🔴       Advanced       Ready for type-state, phantom types, and session types

Pacing Guide

Goal                             Path                                        Time
────                             ────                                        ────
Quick overview                   ch01, ch13 (reference card)                 30 min
IPMI / BMC developer             ch02, ch05, ch07, ch10, ch17                2.5 hrs
GPU / PCIe developer             ch02, ch06, ch09, ch10, ch15                2.5 hrs
Redfish implementer              ch02, ch05, ch07, ch08, ch17, ch18          3 hrs
Framework / infrastructure       ch04, ch08, ch11, ch14, ch18                2.5 hrs
New to correct-by-construction   ch01 → ch10 in order, then ch12 exercises   4 hrs
Full deep dive                   All chapters sequentially                   7 hrs

Annotated Table of Contents

Ch   Title                                   Difficulty   Key Idea
──   ─────                                   ──────────   ────────
1    The Philosophy — Why Types Beat Tests   🟢           Three levels of correctness; types as compiler-checked guarantees
2    Typed Command Interfaces                🟡           Associated types bind request → response
3    Single-Use Types                        🟡           Move semantics as linear types for crypto
4    Capability Tokens                       🟡           Zero-sized proof-of-authority tokens
5    Protocol State Machines                 🔴           Type-state for IPMI sessions + PCIe LTSSM
6    Dimensional Analysis                    🟢           Newtype wrappers prevent unit mix-ups
7    Validated Boundaries                    🟡           Parse once at the edge, carry proof in types
8    Capability Mixins                       🟡           Ingredient traits + blanket impls
9    Phantom Types                           🟡           PhantomData for register width, DMA direction
10   Putting It All Together                 🟡           All 7 patterns in one diagnostic platform
11   Fourteen Tricks from the Trenches       🟡           Sentinel→Option, sealed traits, builders, etc.
12   Exercises                               🟡           Six capstone problems with solutions
13   Reference Card                          —            Pattern catalogue + decision flowchart
14   Testing Type-Level Guarantees           🟡           trybuild, proptest, cargo-show-asm
15   Const Fn                                🟠           Compile-time proofs for memory maps, registers, bitfields
16   Send & Sync                             🟠           Compile-time concurrency proofs
17   Redfish Client Walkthrough              🟡           Eight patterns composed into a type-safe Redfish client
18   Redfish Server Walkthrough              🟡           Builder type-state, source tokens, health rollup, mixins

Prerequisites

Concept                       Where to learn it
───────                       ─────────────────
Ownership and borrowing       Rust Patterns, ch01
Traits and associated types   Rust Patterns, ch02
Newtypes and type-state       Rust Patterns, ch03
PhantomData                   Rust Patterns, ch04
Generics and trait bounds     Rust Patterns, ch01

The Correct-by-Construction Spectrum

← Less Safe                                                    More Safe →

Runtime checks      Unit tests        Property tests      Correct by Construction
─────────────       ──────────        ──────────────      ──────────────────────

if temp > 100 {     #[test]           proptest! {         struct Celsius(f64);
  panic!("too       fn test_temp() {    |t in 0..200| {   // Can't confuse with Rpm
  hot");              assert!(          assert!(...)       // at the type level
}                     check(42));     }
                    }                 }
                                                          Invalid program?
Invalid program?    Invalid program?  Invalid program?    Won't compile.
Crashes in prod.    Fails in CI.      Fails in CI         Never exists.
                                      (probabilistic).

This guide operates at the rightmost position — where bugs don’t exist because the type system cannot express them.


The Philosophy — Why Types Beat Tests 🟢

What you’ll learn: The three levels of compile-time correctness (value, state, protocol), how generic function signatures act as compiler-checked guarantees, and when correct-by-construction patterns are — and aren’t — worth the investment.

Cross-references: ch02 (typed commands), ch05 (type-state), ch13 (reference card)

The Cost of Runtime Checking

Consider a typical runtime guard in a diagnostics codebase:

fn read_sensor(sensor_type: &str, raw: &[u8]) -> f64 {
    match sensor_type {
        "temperature" => raw[0] as i8 as f64,          // signed byte
        "fan_speed"   => u16::from_le_bytes([raw[0], raw[1]]) as f64,
        "voltage"     => u16::from_le_bytes([raw[0], raw[1]]) as f64 / 1000.0,
        _             => panic!("unknown sensor type: {sensor_type}"),
    }
}

This function has four failure modes the compiler cannot catch:

  1. Typo: "temperture" → panic at runtime
  2. Wrong raw length: fan_speed with 1 byte → panic at runtime
  3. Caller uses the returned f64 as RPM when it’s actually °C → logic bug, silent
  4. New sensor type added but this match not updated → panic at runtime

Every failure mode is discovered after deployment. Tests help, but they only cover the cases someone thought to write. The type system covers all cases, including ones nobody imagined.
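As a first step, a plain enum for the sensor type already moves three of the four failure modes into the compiler. Here is a minimal sketch (the enum and parsing details are illustrative, not taken from the original codebase):

```rust
#[derive(Debug, Clone, Copy)]
enum SensorType {
    Temperature,
    FanSpeed,
    Voltage,
}

fn read_sensor(sensor_type: SensorType, raw: &[u8]) -> Option<f64> {
    // Exhaustive match: adding a variant without updating this arm list is a
    // compile error (failure mode 4), and a typo such as
    // SensorType::Temperture simply does not compile (failure mode 1).
    match sensor_type {
        SensorType::Temperature => raw.first().map(|&b| b as i8 as f64),
        SensorType::FanSpeed | SensorType::Voltage => {
            // Short buffers yield None instead of panicking (failure mode 2).
            let bytes = [*raw.get(0)?, *raw.get(1)?];
            let v = u16::from_le_bytes(bytes) as f64;
            Some(if matches!(sensor_type, SensorType::Voltage) { v / 1000.0 } else { v })
        }
    }
}

fn main() {
    assert_eq!(read_sensor(SensorType::Temperature, &[0x19]), Some(25.0));
    assert_eq!(read_sensor(SensorType::FanSpeed, &[0x00]), None); // too short
}
```

Failure mode 3 (confusing °C with RPM) survives this refactor; closing it takes the newtype wrappers of Level 1 below.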

Three Levels of Correctness

Level 1 — Value Correctness

Make invalid values unrepresentable.

// ❌ Any u16 can be a "port" — 0 is invalid but compiles
fn connect(port: u16) { /* ... */ }

// ✅ Only validated ports can exist
pub struct Port(u16);  // private field

impl TryFrom<u16> for Port {
    type Error = &'static str;
    fn try_from(v: u16) -> Result<Self, Self::Error> {
        if v > 0 { Ok(Port(v)) } else { Err("port must be > 0") }
    }
}

fn connect(port: Port) { /* ... */ }
// Port(0) can never be constructed — invariant holds everywhere

Hardware example: SensorId(u8) — wraps a raw sensor number with validation that it’s in the SDR range.
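That SensorId newtype can be sketched with the same TryFrom shape as Port; the specific validity rule below is an assumption for illustration (IPMI reserves sensor number 0xFF):

```rust
use std::convert::TryFrom;

/// A validated IPMI sensor number. The field is private, so the only way
/// to obtain a SensorId is through TryFrom; the invariant holds everywhere.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub struct SensorId(u8);

impl TryFrom<u8> for SensorId {
    type Error = &'static str;
    fn try_from(v: u8) -> Result<Self, Self::Error> {
        // Assumed rule: 0xFF is reserved in the sensor numbering space.
        if v == 0xFF { Err("0xFF is a reserved sensor number") } else { Ok(SensorId(v)) }
    }
}

impl SensorId {
    /// Extract the raw byte for the wire format.
    pub fn raw(self) -> u8 { self.0 }
}

fn main() {
    let id = SensorId::try_from(0x20).unwrap();
    assert_eq!(id.raw(), 0x20);
    assert!(SensorId::try_from(0xFF).is_err());
}
```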

Level 2 — State Correctness

Make invalid transitions unrepresentable.

use std::marker::PhantomData;

struct Disconnected;
struct Connected;

struct Socket<State> {
    fd: i32,
    _state: PhantomData<State>,
}

impl Socket<Disconnected> {
    fn connect(self, addr: &str) -> Socket<Connected> {
        // ... connect logic ...
        Socket { fd: self.fd, _state: PhantomData }
    }
}

impl Socket<Connected> {
    fn send(&mut self, data: &[u8]) { /* ... */ }
    fn disconnect(self) -> Socket<Disconnected> {
        Socket { fd: self.fd, _state: PhantomData }
    }
}

// Socket<Disconnected> has no send() method — compile error if you try

Hardware example: GPIO pin modes — Pin<Input> has read() but not write().
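The GPIO example follows the same PhantomData technique; this is a minimal sketch in which the pin numbering and register accesses are stubbed, hypothetical details:

```rust
use std::marker::PhantomData;

pub struct Input;
pub struct Output;

pub struct Pin<Mode> {
    number: u8,
    _mode: PhantomData<Mode>,
}

impl Pin<Input> {
    pub fn new(number: u8) -> Self {
        Pin { number, _mode: PhantomData }
    }
    /// Only input pins can be read.
    pub fn read(&self) -> bool {
        self.number % 2 == 0 // stub: a real impl reads the data register
    }
    /// Reconfiguring consumes the input pin and yields an output pin.
    pub fn into_output(self) -> Pin<Output> {
        Pin { number: self.number, _mode: PhantomData }
    }
}

impl Pin<Output> {
    /// Only output pins can be written; Pin<Input> has no write() at all.
    pub fn write(&mut self, _high: bool) { /* stub: sets the data register */ }
}

fn main() {
    let pin = Pin::<Input>::new(4);
    assert!(pin.read());
    let mut pin = pin.into_output();
    pin.write(true);
    // pin.read();  // ❌ compile error: no method `read` on Pin<Output>
}
```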

Level 3 — Protocol Correctness

Make invalid interactions unrepresentable.

use std::io;

trait IpmiCmd {
    type Response;
    fn parse_response(&self, raw: &[u8]) -> io::Result<Self::Response>;
}

// Simplified for illustration — see ch02 for the full trait with
// net_fn(), cmd_byte(), payload(), and parse_response().

struct ReadTemp { sensor_id: u8 }
impl IpmiCmd for ReadTemp {
    type Response = Celsius;
    fn parse_response(&self, raw: &[u8]) -> io::Result<Celsius> {
        Ok(Celsius(raw[0] as i8 as f64))
    }
}

#[derive(Debug)] struct Celsius(f64);

fn execute<C: IpmiCmd>(cmd: &C, raw: &[u8]) -> io::Result<C::Response> {
    cmd.parse_response(raw)
}
// ReadTemp always returns Celsius — can't accidentally get Rpm

Hardware example: IPMI, Redfish, NVMe Admin commands — the request type determines the response type.

Types as Compiler-Checked Guarantees

When you write:

fn execute<C: IpmiCmd>(cmd: &C) -> io::Result<C::Response>

You’re not just writing a function — you’re stating a guarantee: “for any command type C that implements IpmiCmd, executing it produces exactly C::Response.” The compiler verifies this guarantee every time it builds your code. If the types don’t line up, the program won’t compile.

This is why Rust’s type system is so powerful — it’s not just catching mistakes, it’s enforcing correctness at compile time.

When NOT to Use These Patterns

Correct-by-construction is not always the right choice:

Situation                                             Recommendation
─────────                                             ──────────────
Safety-critical boundary (power sequencing, crypto)   ✅ Always — a bug here melts hardware or leaks secrets
Cross-module public API                               ✅ Usually — misuse should be a compile error
State machine with 3+ states                          ✅ Usually — type-state prevents wrong transitions
Internal helper within one 50-line function           ❌ Overkill — a simple assert! suffices
Prototyping / exploring unknown hardware              ❌ Raw types first — refine after behaviour is understood
User-facing CLI parsing                               ⚠️ clap + TryFrom at the boundary, raw types inside is fine

The key question: “If this bug happens in production, how bad is it?”

  • Fan stops → GPU melts → use types
  • Wrong DER record → customer gets bad data → use types
  • Debug log message slightly wrong → use assert!

Key Takeaways

  1. Three levels of correctness — value (newtypes), state (type-state), protocol (associated types) — each eliminates a broader class of bugs.
  2. Types as guarantees — every generic function signature is a contract the compiler checks on each build.
  3. The cost question — “if this bug ships, how bad is it?” determines whether types or tests are the right tool.
  4. Types complement tests — they eliminate entire categories; tests cover specific values and edge cases.
  5. Know when to stop — internal helpers and throwaway prototypes rarely need type-level enforcement.

Typed Command Interfaces — Request Determines Response 🟡

What you’ll learn: How associated types on a command trait create a compile-time binding between request and response, eliminating mismatched parsing, unit confusion, and silent type coercion across IPMI, Redfish, and NVMe protocols.

Cross-references: ch01 (philosophy), ch06 (dimensional types), ch07 (validated boundaries), ch10 (integration)

The Untyped Swamp

Most hardware management stacks — IPMI, Redfish, NVMe Admin, PLDM — start life as raw bytes in → raw bytes out. This creates a category of bugs that tests can only partially find:

use std::io;

struct BmcRaw { /* ipmitool handle */ }

impl BmcRaw {
    fn raw_command(&self, net_fn: u8, cmd: u8, data: &[u8]) -> io::Result<Vec<u8>> {
        // ... shells out to ipmitool ...
        Ok(vec![0x00, 0x19, 0x00]) // stub
    }
}

fn diagnose_thermal(bmc: &BmcRaw) -> io::Result<()> {
    let raw = bmc.raw_command(0x04, 0x2D, &[0x20])?;
    let cpu_temp = raw[0] as f64;        // 🤞 is byte 0 the reading?

    let raw = bmc.raw_command(0x04, 0x2D, &[0x30])?;
    let fan_rpm = raw[0] as u32;         // 🐛 fan speed is 2 bytes LE

    let raw = bmc.raw_command(0x04, 0x2D, &[0x40])?;
    let voltage = raw[0] as f64;         // 🐛 need to divide by 1000

    if cpu_temp > fan_rpm as f64 {       // 🐛 comparing °C to RPM
        println!("uh oh");
    }

    log_temp(voltage);                   // 🐛 passing Volts as temperature
    Ok(())
}

fn log_temp(t: f64) { println!("Temp: {t}Β°C"); }
#   Bug                                     Discovered
─   ───                                     ──────────
1   Fan RPM parsed as 1 byte instead of 2   Production, 3 AM
2   Voltage not scaled                      Every PSU flagged as overvoltage
3   Comparing °C to RPM                     Maybe never
4   Volts passed to temp logger             6 months later, reading historical data

Root cause: Everything is Vec<u8> → f64 → pray.

The Typed Command Pattern

Step 1 — Domain newtypes

#[derive(Debug, Clone, Copy, PartialEq, PartialOrd)]
pub struct Celsius(pub f64);

#[derive(Debug, Clone, Copy, PartialEq, PartialOrd)]
pub struct Rpm(pub u32);  // u32: raw IPMI sensor value (integer RPM)

#[derive(Debug, Clone, Copy, PartialEq, PartialOrd)]
pub struct Volts(pub f64);

#[derive(Debug, Clone, Copy, PartialEq, PartialOrd)]
pub struct Watts(pub f64);

Note on Rpm(u32) vs Rpm(f64): In this chapter the inner type is u32 because IPMI sensor readings are integer values. In ch06 (Dimensional Analysis), Rpm uses f64 to support arithmetic operations (averaging, scaling). Both are valid — the newtype prevents cross-unit confusion regardless of inner type.

Step 2 — The command trait (type-indexed dispatch)

The associated type Response is the key — it binds each command struct to its return type. Each implementing struct pins Response to a specific domain type, so execute() always returns exactly the right type:

pub trait IpmiCmd {
    /// The "type index" — determines what execute() returns.
    type Response;

    fn net_fn(&self) -> u8;
    fn cmd_byte(&self) -> u8;
    fn payload(&self) -> Vec<u8>;

    /// Parsing encapsulated here — each command knows its own byte layout.
    fn parse_response(&self, raw: &[u8]) -> io::Result<Self::Response>;
}

Step 3 — One struct per command

pub struct ReadTemp { pub sensor_id: u8 }
impl IpmiCmd for ReadTemp {
    type Response = Celsius;
    fn net_fn(&self) -> u8 { 0x04 }
    fn cmd_byte(&self) -> u8 { 0x2D }
    fn payload(&self) -> Vec<u8> { vec![self.sensor_id] }
    fn parse_response(&self, raw: &[u8]) -> io::Result<Celsius> {
        if raw.is_empty() {
            return Err(io::Error::new(io::ErrorKind::InvalidData, "empty response"));
        }
        // Note: ch01's untyped example uses `raw[0] as i8 as f64` (signed)
        // because that function was demonstrating generic parsing without
        // SDR metadata. Here we use unsigned (`as f64`) because the SDR
        // linearization formula in IPMI spec §35.5 converts the unsigned
        // raw reading to a calibrated value. In production, apply the
        // full SDR formula: result = (M × raw + B) × 10^(R_exp).
        Ok(Celsius(raw[0] as f64))  // unsigned raw byte, converted per SDR formula
    }
}

pub struct ReadFanSpeed { pub fan_id: u8 }
impl IpmiCmd for ReadFanSpeed {
    type Response = Rpm;
    fn net_fn(&self) -> u8 { 0x04 }
    fn cmd_byte(&self) -> u8 { 0x2D }
    fn payload(&self) -> Vec<u8> { vec![self.fan_id] }
    fn parse_response(&self, raw: &[u8]) -> io::Result<Rpm> {
        if raw.len() < 2 {
            return Err(io::Error::new(io::ErrorKind::InvalidData,
                format!("fan speed needs 2 bytes, got {}", raw.len())));
        }
        Ok(Rpm(u16::from_le_bytes([raw[0], raw[1]]) as u32))
    }
}

pub struct ReadVoltage { pub rail: u8 }
impl IpmiCmd for ReadVoltage {
    type Response = Volts;
    fn net_fn(&self) -> u8 { 0x04 }
    fn cmd_byte(&self) -> u8 { 0x2D }
    fn payload(&self) -> Vec<u8> { vec![self.rail] }
    fn parse_response(&self, raw: &[u8]) -> io::Result<Volts> {
        if raw.len() < 2 {
            return Err(io::Error::new(io::ErrorKind::InvalidData,
                format!("voltage needs 2 bytes, got {}", raw.len())));
        }
        Ok(Volts(u16::from_le_bytes([raw[0], raw[1]]) as f64 / 1000.0))
    }
}

Step 4 — The executor (zero dyn, monomorphised)

pub struct BmcConnection { pub timeout_secs: u32 }

impl BmcConnection {
    pub fn execute<C: IpmiCmd>(&self, cmd: &C) -> io::Result<C::Response> {
        let raw = self.raw_send(cmd.net_fn(), cmd.cmd_byte(), &cmd.payload())?;
        cmd.parse_response(&raw)
    }

    fn raw_send(&self, _nf: u8, _cmd: u8, _data: &[u8]) -> io::Result<Vec<u8>> {
        Ok(vec![0x19, 0x00]) // stub
    }
}

Step 5 — All four bugs become compile errors

fn diagnose_thermal_typed(bmc: &BmcConnection) -> io::Result<()> {
    let cpu_temp: Celsius = bmc.execute(&ReadTemp { sensor_id: 0x20 })?;
    let fan_rpm:  Rpm     = bmc.execute(&ReadFanSpeed { fan_id: 0x30 })?;
    let voltage:  Volts   = bmc.execute(&ReadVoltage { rail: 0x40 })?;

    // Bug #1 — IMPOSSIBLE: parsing lives in ReadFanSpeed::parse_response
    // Bug #2 — IMPOSSIBLE: unit scaling lives in ReadVoltage::parse_response

    // Bug #3 — COMPILE ERROR:
    // if cpu_temp > fan_rpm { }
    //    ^^^^^^^^   ^^^^^^^ Celsius vs Rpm → "mismatched types" ❌

    // Bug #4 — COMPILE ERROR:
    // log_temperature(voltage);
    //                 ^^^^^^^ Volts, expected Celsius ❌

    if cpu_temp > Celsius(85.0) { println!("CPU overheating: {:?}", cpu_temp); }
    if fan_rpm < Rpm(4000)      { println!("Fan too slow: {:?}", fan_rpm); }

    Ok(())
}

fn log_temperature(t: Celsius) { println!("Temp: {:?}", t); }
fn log_voltage(v: Volts)       { println!("Voltage: {:?}", v); }

IPMI: Sensor Reads That Can’t Be Confused

Adding a new sensor is one struct + one impl — no scattered parsing:

pub struct ReadPowerDraw { pub domain: u8 }
impl IpmiCmd for ReadPowerDraw {
    type Response = Watts;
    fn net_fn(&self) -> u8 { 0x04 }
    fn cmd_byte(&self) -> u8 { 0x2D }
    fn payload(&self) -> Vec<u8> { vec![self.domain] }
    fn parse_response(&self, raw: &[u8]) -> io::Result<Watts> {
        if raw.len() < 2 {
            return Err(io::Error::new(io::ErrorKind::InvalidData,
                format!("power draw needs 2 bytes, got {}", raw.len())));
        }
        Ok(Watts(u16::from_le_bytes([raw[0], raw[1]]) as f64))
    }
}

// Every caller that uses bmc.execute(&ReadPowerDraw { domain: 0 })
// automatically gets Watts back — no parsing code elsewhere

Testing Each Command in Isolation

#[cfg(test)]
mod tests {
    use super::*;

    struct StubBmc {
        responses: std::collections::HashMap<u8, Vec<u8>>,
    }

    impl StubBmc {
        fn execute<C: IpmiCmd>(&self, cmd: &C) -> io::Result<C::Response> {
            let key = cmd.payload()[0];
            let raw = self.responses.get(&key)
                .ok_or_else(|| io::Error::new(io::ErrorKind::NotFound, "no stub"))?;
            cmd.parse_response(raw)
        }
    }

    #[test]
    fn read_temp_parses_raw_byte() {
        let bmc = StubBmc {
            responses: [(0x20, vec![0x19])].into(), // 25 decimal = 0x19
        };
        let temp = bmc.execute(&ReadTemp { sensor_id: 0x20 }).unwrap();
        assert_eq!(temp, Celsius(25.0));
    }

    #[test]
    fn read_fan_parses_two_byte_le() {
        let bmc = StubBmc {
            responses: [(0x30, vec![0x00, 0x19])].into(), // 0x1900 = 6400
        };
        let rpm = bmc.execute(&ReadFanSpeed { fan_id: 0x30 }).unwrap();
        assert_eq!(rpm, Rpm(6400));
    }

    #[test]
    fn read_voltage_scales_millivolts() {
        let bmc = StubBmc {
            responses: [(0x40, vec![0xE8, 0x2E])].into(), // 0x2EE8 = 12008 mV
        };
        let v = bmc.execute(&ReadVoltage { rail: 0x40 }).unwrap();
        assert!((v.0 - 12.008).abs() < 0.001);
    }
}
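The short-buffer error paths deserve the same isolation. As a standalone sketch, a trimmed mirror of the 2-byte little-endian logic from ReadFanSpeed::parse_response shows that a 1-byte buffer (bug #1 from the untyped version) now fails loudly instead of mis-parsing:

```rust
use std::io;

#[derive(Debug, Clone, Copy, PartialEq)]
pub struct Rpm(pub u32);

// Trimmed, standalone mirror of ReadFanSpeed::parse_response.
fn parse_fan_speed(raw: &[u8]) -> io::Result<Rpm> {
    if raw.len() < 2 {
        return Err(io::Error::new(io::ErrorKind::InvalidData,
            format!("fan speed needs 2 bytes, got {}", raw.len())));
    }
    Ok(Rpm(u16::from_le_bytes([raw[0], raw[1]]) as u32))
}

fn main() {
    // One byte is no longer silently read as a full reading.
    assert!(parse_fan_speed(&[0x19]).is_err());
    assert_eq!(parse_fan_speed(&[0x00, 0x19]).unwrap(), Rpm(6400)); // 0x1900 LE
}
```

Type-level guarantees plus explicit runtime validation at the byte boundary cover both halves of the problem; ch14 shows how to additionally pin the compile errors themselves with trybuild.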

Redfish: Schema-Typed REST Endpoints

Redfish is an even better fit — each endpoint returns a DMTF-defined JSON schema:

use serde::Deserialize;

#[derive(Debug, Deserialize)]
pub struct ThermalResponse {
    #[serde(rename = "Temperatures")]
    pub temperatures: Vec<RedfishTemp>,
    #[serde(rename = "Fans")]
    pub fans: Vec<RedfishFan>,
}

#[derive(Debug, Deserialize)]
pub struct RedfishTemp {
    #[serde(rename = "Name")]
    pub name: String,
    #[serde(rename = "ReadingCelsius")]
    pub reading: f64,
    #[serde(rename = "UpperThresholdCritical")]
    pub critical_hi: Option<f64>,
    #[serde(rename = "Status")]
    pub status: RedfishHealth,
}

#[derive(Debug, Deserialize)]
pub struct RedfishFan {
    #[serde(rename = "Name")]
    pub name: String,
    #[serde(rename = "Reading")]
    pub rpm: u32,
    #[serde(rename = "Status")]
    pub status: RedfishHealth,
}

#[derive(Debug, Deserialize)]
pub struct PowerResponse {
    #[serde(rename = "Voltages")]
    pub voltages: Vec<RedfishVoltage>,
    #[serde(rename = "PowerSupplies")]
    pub psus: Vec<RedfishPsu>,
}

#[derive(Debug, Deserialize)]
pub struct RedfishVoltage {
    #[serde(rename = "Name")]
    pub name: String,
    #[serde(rename = "ReadingVolts")]
    pub reading: f64,
    #[serde(rename = "Status")]
    pub status: RedfishHealth,
}

#[derive(Debug, Deserialize)]
pub struct RedfishPsu {
    #[serde(rename = "Name")]
    pub name: String,
    #[serde(rename = "PowerOutputWatts")]
    pub output_watts: Option<f64>,
    #[serde(rename = "Status")]
    pub status: RedfishHealth,
}

#[derive(Debug, Deserialize)]
pub struct ProcessorResponse {
    #[serde(rename = "Model")]
    pub model: String,
    #[serde(rename = "TotalCores")]
    pub cores: u32,
    #[serde(rename = "Status")]
    pub status: RedfishHealth,
}

#[derive(Debug, Deserialize)]
pub struct RedfishHealth {
    #[serde(rename = "State")]
    pub state: String,
    #[serde(rename = "Health")]
    pub health: Option<String>,
}

/// Typed Redfish endpoint — each knows its response type.
pub trait RedfishEndpoint {
    type Response: serde::de::DeserializeOwned;
    fn method(&self) -> &'static str;
    fn path(&self) -> String;
}

pub struct GetThermal { pub chassis_id: String }
impl RedfishEndpoint for GetThermal {
    type Response = ThermalResponse;
    fn method(&self) -> &'static str { "GET" }
    fn path(&self) -> String {
        format!("/redfish/v1/Chassis/{}/Thermal", self.chassis_id)
    }
}

pub struct GetPower { pub chassis_id: String }
impl RedfishEndpoint for GetPower {
    type Response = PowerResponse;
    fn method(&self) -> &'static str { "GET" }
    fn path(&self) -> String {
        format!("/redfish/v1/Chassis/{}/Power", self.chassis_id)
    }
}

pub struct GetProcessor { pub system_id: String, pub proc_id: String }
impl RedfishEndpoint for GetProcessor {
    type Response = ProcessorResponse;
    fn method(&self) -> &'static str { "GET" }
    fn path(&self) -> String {
        format!("/redfish/v1/Systems/{}/Processors/{}", self.system_id, self.proc_id)
    }
}

pub struct RedfishClient {
    pub base_url: String,
    pub auth_token: String,
}

impl RedfishClient {
    pub fn execute<E: RedfishEndpoint>(&self, endpoint: &E) -> io::Result<E::Response> {
        let url = format!("{}{}", self.base_url, endpoint.path());
        let json_bytes = self.http_request(endpoint.method(), &url)?;
        serde_json::from_slice(&json_bytes)
            .map_err(|e| io::Error::new(io::ErrorKind::InvalidData, e))
    }

    fn http_request(&self, _method: &str, _url: &str) -> io::Result<Vec<u8>> {
        Ok(vec![]) // stub — real impl uses reqwest/hyper
    }
}

// Usage — fully typed, self-documenting
fn redfish_pre_flight(client: &RedfishClient) -> io::Result<()> {
    let thermal: ThermalResponse = client.execute(&GetThermal {
        chassis_id: "1".into(),
    })?;
    let power: PowerResponse = client.execute(&GetPower {
        chassis_id: "1".into(),
    })?;

    // ❌ Compile error — can't pass PowerResponse to a thermal check:
    // check_thermals(&power);  → "expected ThermalResponse, found PowerResponse"

    for temp in &thermal.temperatures {
        if let Some(crit) = temp.critical_hi {
            if temp.reading > crit {
                println!("CRITICAL: {} at {}°C (threshold: {}°C)",
                    temp.name, temp.reading, crit);
            }
        }
    }
    Ok(())
}

NVMe Admin: Identify Doesn’t Return Log Pages

NVMe admin commands follow the same shape. The controller distinguishes command opcodes, but in C the caller must know which struct to overlay on the 4 KB completion buffer. The typed-command pattern makes this impossible to get wrong:

use std::io;

/// The NVMe Admin command trait — same shape as IpmiCmd.
pub trait NvmeAdminCmd {
    type Response;
    fn opcode(&self) -> u8;
    fn parse_completion(&self, data: &[u8]) -> io::Result<Self::Response>;
}

// ── Identify (opcode 0x06) ──

#[derive(Debug, Clone)]
pub struct IdentifyResponse {
    pub model_number: String,   // bytes 24–63
    pub serial_number: String,  // bytes 4–23
    pub firmware_rev: String,   // bytes 64–71
    pub total_capacity_gb: u64,
}

pub struct Identify {
    pub nsid: u32, // 0 = controller, >0 = namespace
}

impl NvmeAdminCmd for Identify {
    type Response = IdentifyResponse;
    fn opcode(&self) -> u8 { 0x06 }
    fn parse_completion(&self, data: &[u8]) -> io::Result<IdentifyResponse> {
        if data.len() < 4096 {
            return Err(io::Error::new(io::ErrorKind::InvalidData, "short identify"));
        }
        Ok(IdentifyResponse {
            serial_number: String::from_utf8_lossy(&data[4..24]).trim().to_string(),
            model_number: String::from_utf8_lossy(&data[24..64]).trim().to_string(),
            firmware_rev: String::from_utf8_lossy(&data[64..72]).trim().to_string(),
            total_capacity_gb: u64::from_le_bytes(
                data[280..288].try_into().unwrap()
            ) / (1024 * 1024 * 1024),
        })
    }
}

// ── Get Log Page (opcode 0x02) ──

#[derive(Debug, Clone)]
pub struct SmartLog {
    pub critical_warning: u8,
    pub temperature_kelvin: u16,
    pub available_spare_pct: u8,
    pub data_units_read: u128,
}

pub struct GetLogPage {
    pub log_id: u8, // 0x02 = SMART/Health
}

impl NvmeAdminCmd for GetLogPage {
    type Response = SmartLog;
    fn opcode(&self) -> u8 { 0x02 }
    fn parse_completion(&self, data: &[u8]) -> io::Result<SmartLog> {
        if data.len() < 512 {
            return Err(io::Error::new(io::ErrorKind::InvalidData, "short log page"));
        }
        Ok(SmartLog {
            critical_warning: data[0],
            temperature_kelvin: u16::from_le_bytes([data[1], data[2]]),
            available_spare_pct: data[3],
            data_units_read: u128::from_le_bytes(data[32..48].try_into().unwrap()),
        })
    }
}

// ── Executor ──

pub struct NvmeController { /* fd, BAR, etc. */ }

impl NvmeController {
    pub fn admin_cmd<C: NvmeAdminCmd>(&self, cmd: &C) -> io::Result<C::Response> {
        let raw = self.submit_and_wait(cmd.opcode())?;
        cmd.parse_completion(&raw)
    }

    fn submit_and_wait(&self, _opcode: u8) -> io::Result<Vec<u8>> {
        Ok(vec![0u8; 4096]) // stub — real impl issues doorbell + waits for CQ entry
    }
}

// ── Usage ──

fn nvme_health_check(ctrl: &NvmeController) -> io::Result<()> {
    let id: IdentifyResponse = ctrl.admin_cmd(&Identify { nsid: 0 })?;
    let smart: SmartLog = ctrl.admin_cmd(&GetLogPage { log_id: 0x02 })?;

    // ❌ Compile error — Identify returns IdentifyResponse, not SmartLog:
    // let smart: SmartLog = ctrl.admin_cmd(&Identify { nsid: 0 })?;

    println!("{} (FW {}): {}°C, {}% spare",
        id.model_number, id.firmware_rev,
        smart.temperature_kelvin.saturating_sub(273),
        smart.available_spare_pct);

    Ok(())
}

The three-protocol progression now follows a graduated arc (the same technique ch07 uses for validated boundaries):

Beat   Protocol   Complexity                             What it adds
────   ────────   ──────────                             ────────────
1      IPMI       Simple: sensor ID → reading            Core pattern: trait + associated type
2      Redfish    REST: endpoint → typed JSON            Serde integration, schema-typed responses
3      NVMe       Binary: opcode → 4 KB struct overlay   Raw buffer parsing, multi-struct completion data

Extension: Macro DSL for Command Scripts

/// Execute a series of typed IPMI commands, returning a tuple of results.
macro_rules! diag_script {
    ($bmc:expr; $($cmd:expr),+ $(,)?) => {{
        ( $( $bmc.execute(&$cmd)?, )+ )
    }};
}

fn full_pre_flight(bmc: &BmcConnection) -> io::Result<()> {
    let (temp, rpm, volts) = diag_script!(bmc;
        ReadTemp     { sensor_id: 0x20 },
        ReadFanSpeed { fan_id:    0x30 },
        ReadVoltage  { rail:      0x40 },
    );
    // Type: (Celsius, Rpm, Volts) — fully inferred, swap = compile error
    assert!(temp  < Celsius(95.0), "CPU too hot");
    assert!(rpm   > Rpm(3000),     "Fan too slow");
    assert!(volts > Volts(11.4),   "12V rail sagging");
    Ok(())
}

Extension: Enum Dispatch for Dynamic Scripts

When commands come from JSON config at runtime:

pub enum AnyReading {
    Temp(Celsius),
    Rpm(Rpm),
    Volt(Volts),
    Watt(Watts),
}

pub enum AnyCmd {
    Temp(ReadTemp),
    Fan(ReadFanSpeed),
    Voltage(ReadVoltage),
    Power(ReadPowerDraw),
}

impl AnyCmd {
    pub fn execute(&self, bmc: &BmcConnection) -> io::Result<AnyReading> {
        match self {
            AnyCmd::Temp(c)    => Ok(AnyReading::Temp(bmc.execute(c)?)),
            AnyCmd::Fan(c)     => Ok(AnyReading::Rpm(bmc.execute(c)?)),
            AnyCmd::Voltage(c) => Ok(AnyReading::Volt(bmc.execute(c)?)),
            AnyCmd::Power(c)   => Ok(AnyReading::Watt(bmc.execute(c)?)),
        }
    }
}

fn run_dynamic_script(bmc: &BmcConnection, script: &[AnyCmd]) -> io::Result<Vec<AnyReading>> {
    script.iter().map(|cmd| cmd.execute(bmc)).collect()
}

The Pattern Family

This pattern applies to every hardware management protocol:

Protocol              Request Type     Response Type
────────              ────────────     ─────────────
IPMI Sensor Reading   ReadTemp         Celsius
Redfish REST          GetThermal       ThermalResponse
NVMe Admin            Identify         IdentifyResponse
PLDM                  GetFwParams      FwParamsResponse
MCTP                  GetEid           EidResponse
PCIe Config Space     ReadCapability   CapabilityHeader
SMBIOS/DMI            ReadType17       MemoryDeviceInfo

The request type determines the response type — the compiler enforces it everywhere.

Typed Command Flow

flowchart LR
    subgraph "Compile Time"
        RT["ReadTemp"] -->|"type Response = Celsius"| C[Celsius]
        RF["ReadFanSpeed"] -->|"type Response = Rpm"| R[Rpm]
        RV["ReadVoltage"] -->|"type Response = Volts"| V[Volts]
    end
    subgraph "Runtime"
        E["bmc.execute(&cmd)"] -->|"monomorphised"| P["cmd.parse_response(raw)"]
    end
    style RT fill:#e1f5fe,color:#000
    style RF fill:#e1f5fe,color:#000
    style RV fill:#e1f5fe,color:#000
    style C fill:#c8e6c9,color:#000
    style R fill:#c8e6c9,color:#000
    style V fill:#c8e6c9,color:#000
    style E fill:#fff3e0,color:#000
    style P fill:#fff3e0,color:#000

Exercise: PLDM Typed Commands

Design a PldmCmd trait (same shape as IpmiCmd) for two PLDM commands:

  • GetFwParams → FwParamsResponse { active_version: String, pending_version: Option<String> }
  • QueryDeviceIds → DeviceIdResponse { descriptors: Vec<Descriptor> }

Requirements: static dispatch, parse_response returns io::Result<Self::Response>.

Solution
use std::io;

pub trait PldmCmd {
    type Response;
    fn pldm_type(&self) -> u8;
    fn command_code(&self) -> u8;
    fn parse_response(&self, raw: &[u8]) -> io::Result<Self::Response>;
}

#[derive(Debug, Clone)]
pub struct FwParamsResponse {
    pub active_version: String,
    pub pending_version: Option<String>,
}

pub struct GetFwParams;
impl PldmCmd for GetFwParams {
    type Response = FwParamsResponse;
    fn pldm_type(&self) -> u8 { 0x05 } // Firmware Update
    fn command_code(&self) -> u8 { 0x02 }
    fn parse_response(&self, raw: &[u8]) -> io::Result<FwParamsResponse> {
        // Simplified — real impl decodes PLDM FW Update spec fields
        if raw.len() < 4 {
            return Err(io::Error::new(io::ErrorKind::InvalidData, "too short"));
        }
        Ok(FwParamsResponse {
            active_version: String::from_utf8_lossy(&raw[..4]).to_string(),
            pending_version: None,
        })
    }
}

#[derive(Debug, Clone)]
pub struct Descriptor { pub descriptor_type: u16, pub data: Vec<u8> }

#[derive(Debug, Clone)]
pub struct DeviceIdResponse { pub descriptors: Vec<Descriptor> }

pub struct QueryDeviceIds;
impl PldmCmd for QueryDeviceIds {
    type Response = DeviceIdResponse;
    fn pldm_type(&self) -> u8 { 0x05 }
    fn command_code(&self) -> u8 { 0x04 }
    fn parse_response(&self, _raw: &[u8]) -> io::Result<DeviceIdResponse> {
        Ok(DeviceIdResponse { descriptors: vec![] }) // stub
    }
}

Key Takeaways

  1. Associated type = compile-time contract β€” type Response on the command trait locks each request to exactly one response type.
  2. Parsing is encapsulated β€” byte-layout knowledge lives in parse_response, not scattered across callers.
  3. Zero-cost dispatch β€” generic execute<C: IpmiCmd> monomorphises to direct calls with no vtable.
  4. One pattern, many protocols β€” IPMI, Redfish, NVMe, PLDM, MCTP all fit the same trait Cmd { type Response; } shape.
  5. Enum dispatch bridges static and dynamic β€” wrap typed commands in an enum for runtime-driven scripts without losing type safety inside each arm.
  6. Graduated complexity strengthens intuition β€” IPMI (sensor ID β†’ reading), Redfish (endpoint β†’ JSON schema), and NVMe (opcode β†’ 4 KB struct overlay) all use the same trait shape, but each beat adds a layer of parsing complexity.
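
Takeaway 5 can be sketched concretely. Assuming simplified stand-ins for the trait and commands above (the real response types are richer), a hypothetical AnyPldmCmd enum gives runtime command selection while each arm still goes through its own typed parser:

```rust
use std::io;

// Simplified stand-ins for the trait and commands above.
pub trait PldmCmd {
    type Response;
    fn parse_response(&self, raw: &[u8]) -> io::Result<Self::Response>;
}

pub struct GetFwParams;
impl PldmCmd for GetFwParams {
    type Response = String; // simplified: version string only
    fn parse_response(&self, raw: &[u8]) -> io::Result<String> {
        Ok(String::from_utf8_lossy(raw).to_string())
    }
}

pub struct QueryDeviceIds;
impl PldmCmd for QueryDeviceIds {
    type Response = usize; // simplified: descriptor count only
    fn parse_response(&self, raw: &[u8]) -> io::Result<usize> {
        Ok(raw.len())
    }
}

/// Runtime-selected command — for scripts that pick commands from config.
pub enum AnyPldmCmd {
    GetFwParams(GetFwParams),
    QueryDeviceIds(QueryDeviceIds),
}

/// Uniform response for the dynamic layer.
#[derive(Debug, PartialEq)]
pub enum AnyResponse {
    FwParams(String),
    DeviceIds(usize),
}

pub fn execute_any(cmd: &AnyPldmCmd, raw: &[u8]) -> io::Result<AnyResponse> {
    // Each arm is fully typed — parse_response returns the exact
    // Response type for that command, then we wrap it for the caller.
    match cmd {
        AnyPldmCmd::GetFwParams(c) => Ok(AnyResponse::FwParams(c.parse_response(raw)?)),
        AnyPldmCmd::QueryDeviceIds(c) => Ok(AnyResponse::DeviceIds(c.parse_response(raw)?)),
    }
}
```

The dynamic layer pays for one match; inside each arm the static guarantees are intact.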

Single-Use Types β€” Cryptographic Guarantees via Ownership 🟑

What you’ll learn: How Rust’s move semantics act as a linear type system, making nonce reuse, double key-agreement, and accidental fuse re-programming impossible at compile time.

Cross-references: ch01 (philosophy), ch04 (capability tokens), ch05 (type-state), ch14 (testing compile-fail)

The Nonce Reuse Catastrophe

In authenticated encryption (AES-GCM, ChaCha20-Poly1305), reusing a nonce with the same key is catastrophic β€” it leaks the XOR of two plaintexts and often the authentication key itself. This isn’t a theoretical concern:

  • 2016: Forbidden Attack on AES-GCM in TLS β€” nonce reuse allowed plaintext recovery
  • 2020: Multiple IoT firmware update systems found reusing nonces due to poor RNG

In C/C++, a nonce is just a uint8_t[12]. Nothing prevents you from using it twice.

// C β€” nothing stops nonce reuse
uint8_t nonce[12];
generate_nonce(nonce);
encrypt(key, nonce, msg1, out1);   // βœ… first use
encrypt(key, nonce, msg2, out2);   // πŸ› CATASTROPHIC: same nonce

Move Semantics as Linear Types

Rust’s ownership system is effectively a linear type system (strictly, an affine one — a value can be moved at most once unless it implements Copy). The ring crate exploits this:

// ring::aead::Nonce is:
// - NOT Clone
// - NOT Copy
// - Consumed by value when used
pub struct Nonce(/* private */);

impl Nonce {
    pub fn try_assume_unique_for_key(value: &[u8]) -> Result<Self, Unspecified> {
        // ...
    }
    // No Clone, no Copy β€” can only be used once
}

When you pass a Nonce to seal_in_place(), it moves:

// Pseudocode mirroring ring's API shape
fn seal_in_place(
    key: &SealingKey,
    nonce: Nonce,       // ← moved, not borrowed
    data: &mut Vec<u8>,
) -> Result<(), Error> {
    // ... encrypt data in place ...
    // nonce is consumed β€” cannot be used again
    Ok(())
}

Attempting to reuse it:

fn bad_encrypt(key: &SealingKey, data1: &mut Vec<u8>, data2: &mut Vec<u8>) {
    // .unwrap() is safe β€” a 12-byte array is always a valid nonce.
    let nonce = Nonce::try_assume_unique_for_key(&[0u8; 12]).unwrap();
    seal_in_place(key, nonce, data1).unwrap();  // βœ… nonce moved here
    // seal_in_place(key, nonce, data2).unwrap();
    //                    ^^^^^ ERROR: use of moved value ❌
}

The compiler proves that each nonce is used exactly once. No test required.

Case Study: ring’s Nonce

The ring crate goes further with NonceSequence — a trait for nonce generators whose implementors are deliberately not Clone:

/// A sequence of unique nonces.
/// Not Clone β€” once bound to a key, cannot be duplicated.
pub trait NonceSequence {
    fn advance(&mut self) -> Result<Nonce, Unspecified>;
}

/// SealingKey wraps a NonceSequence β€” each seal() auto-advances.
pub struct SealingKey<N: NonceSequence> {
    key: UnboundKey,   // consumed during construction
    nonce_seq: N,
}

impl<N: NonceSequence> SealingKey<N> {
    pub fn new(key: UnboundKey, nonce_seq: N) -> Self {
        // UnboundKey is moved β€” can't be used for both sealing AND opening
        SealingKey { key, nonce_seq }
    }

    pub fn seal_in_place_append_tag(
        &mut self,       // &mut β€” exclusive access
        aad: Aad<&[u8]>,
        in_out: &mut Vec<u8>,
    ) -> Result<(), Unspecified> {
        let _nonce = self.nonce_seq.advance()?; // auto-generate unique nonce
        // ... encrypt with nonce ...
        Ok(())
    }
}
pub struct UnboundKey;
pub struct Aad<T>(T);
pub struct Unspecified;

The ownership chain prevents:

  1. Nonce reuse β€” Nonce is not Clone, consumed on each call
  2. Key duplication β€” UnboundKey is moved into SealingKey, can’t also make an OpeningKey
  3. Sequence duplication β€” NonceSequence is not Clone, so no two keys share a counter

None of these require runtime checks. The compiler enforces all three.
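
What does an implementor of this trait look like? A minimal sketch, assuming the trait shape above (not ring's actual types): a counter-based sequence that encodes a monotonically increasing u64 into each 96-bit nonce and refuses to wrap rather than repeat:

```rust
/// A 96-bit AEAD nonce — not Clone, not Copy (mirrors the shape above).
pub struct Nonce([u8; 12]);

impl Nonce {
    pub fn as_bytes(&self) -> &[u8; 12] { &self.0 }
}

#[derive(Debug)]
pub struct Unspecified;

pub trait NonceSequence {
    fn advance(&mut self) -> Result<Nonce, Unspecified>;
}

/// Counter-based nonce sequence — not Clone, so two keys can never share it.
pub struct CounterNonceSequence {
    counter: u64,
}

impl CounterNonceSequence {
    pub fn new() -> Self {
        CounterNonceSequence { counter: 0 }
    }
}

impl NonceSequence for CounterNonceSequence {
    fn advance(&mut self) -> Result<Nonce, Unspecified> {
        let value = self.counter;
        // Refuse to wrap — wrapping around would re-issue nonce 0.
        self.counter = self.counter.checked_add(1).ok_or(Unspecified)?;
        let mut bytes = [0u8; 12];
        bytes[4..].copy_from_slice(&value.to_be_bytes());
        Ok(Nonce(bytes))
    }
}
```

Because advance() takes &mut self and Nonce is non-Copy, each generated nonce is also single-use: the counter state and the move semantics compose.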

Case Study: Ephemeral Key Agreement

Ephemeral Diffie-Hellman keys must be used exactly once (that’s what β€œephemeral” means). ring enforces this:

/// An ephemeral private key. Not Clone, not Copy.
/// Consumed by agree_ephemeral().
pub struct EphemeralPrivateKey { /* ... */ }

/// Compute shared secret β€” consumes the private key.
pub fn agree_ephemeral(
    my_private_key: EphemeralPrivateKey,  // ← moved
    peer_public_key: &UnparsedPublicKey,
    error_value: Unspecified,
    kdf: impl FnOnce(&[u8]) -> Result<SharedSecret, Unspecified>,
) -> Result<SharedSecret, Unspecified> {
    // ... DH computation ...
    // my_private_key is consumed β€” can never be reused
    kdf(&[])
}
pub struct UnparsedPublicKey;
pub struct SharedSecret;
pub struct Unspecified;

After calling agree_ephemeral(), the private key no longer exists in memory (it’s been dropped). A C++ developer would need to remember to memset(key, 0, len) and hope the compiler doesn’t optimise it away. In Rust, the key is simply gone.

Hardware Application: One-Time Fuse Programming

Server platforms have OTP (one-time programmable) fuses for security keys, board serial numbers, and feature bits. Writing a fuse is irreversible β€” doing it twice with different data bricks the board. This is a perfect fit for move semantics:

use std::io;

/// A fuse write payload. Not Clone, not Copy.
/// Consumed when the fuse is programmed.
pub struct FusePayload {
    address: u32,
    data: Vec<u8>,
    // private constructor β€” only created via validated builder
}

/// Proof that the fuse programmer is in the correct state.
pub struct FuseController {
    /* hardware handle */
}

impl FuseController {
    /// Program a fuse β€” consumes the payload, preventing double-write.
    pub fn program(
        &mut self,
        payload: FusePayload,  // ← moved β€” can't be used twice
    ) -> io::Result<()> {
        // ... write to OTP hardware ...
        // payload is consumed β€” trying to program again with the same
        // payload is a compile error
        Ok(())
    }
}

/// Builder with validation β€” only way to create a FusePayload.
pub struct FusePayloadBuilder {
    address: Option<u32>,
    data: Option<Vec<u8>>,
}

impl FusePayloadBuilder {
    pub fn new() -> Self {
        FusePayloadBuilder { address: None, data: None }
    }

    pub fn address(mut self, addr: u32) -> Self {
        self.address = Some(addr);
        self
    }

    pub fn data(mut self, data: Vec<u8>) -> Self {
        self.data = Some(data);
        self
    }

    pub fn build(self) -> Result<FusePayload, &'static str> {
        let address = self.address.ok_or("address required")?;
        let data = self.data.ok_or("data required")?;
        if data.len() > 32 { return Err("fuse data too long"); }
        Ok(FusePayload { address, data })
    }
}

// Usage:
fn program_board_serial(ctrl: &mut FuseController) -> io::Result<()> {
    let payload = FusePayloadBuilder::new()
        .address(0x100)
        .data(b"SN12345678".to_vec())
        .build()
        .map_err(|e| io::Error::new(io::ErrorKind::InvalidInput, e))?;

    ctrl.program(payload)?;      // βœ… payload consumed

    // ctrl.program(payload);    // ❌ ERROR: use of moved value
    //              ^^^^^^^ value used after move

    Ok(())
}

Hardware Application: Single-Use Calibration Token

Some sensors require a calibration step that must happen exactly once per power cycle. A calibration token enforces this:

/// Issued once at power-on. Not Clone, not Copy.
pub struct CalibrationToken {
    _private: (),
}

pub struct SensorController {
    calibrated: bool,
}

impl SensorController {
    /// Called once at power-on β€” returns a calibration token.
    pub fn power_on() -> (Self, CalibrationToken) {
        (
            SensorController { calibrated: false },
            CalibrationToken { _private: () },
        )
    }

    /// Calibrate the sensor β€” consumes the token.
    pub fn calibrate(&mut self, _token: CalibrationToken) -> io::Result<()> {
        // ... run calibration sequence ...
        self.calibrated = true;
        Ok(())
    }

    /// Read a sensor β€” only meaningful after calibration.
    ///
    /// **Limitation:** The move-semantics guarantee is *partial*. The caller
    /// can `drop(cal_token)` without calling `calibrate()` β€” the token will
    /// be destroyed but calibration won't run. The `#[must_use]` annotation
    /// (see below) generates a warning but not a hard error.
    ///
    /// The runtime `self.calibrated` check here is the **safety net** for
    /// that gap. For a fully compile-time solution, see the type-state
    /// pattern in ch05 where `send_command()` only exists on `IpmiSession<Active>`.
    pub fn read(&self) -> io::Result<f64> {
        if !self.calibrated {
            return Err(io::Error::new(io::ErrorKind::Other, "not calibrated"));
        }
        Ok(25.0) // stub
    }
}

fn sensor_workflow() -> io::Result<()> {
    let (mut ctrl, cal_token) = SensorController::power_on();

    // cal_token is not Copy, so it can be consumed at most once. Note that
    // letting it drop unused is NOT an error — #[must_use] only warns when
    // the value is ignored; the runtime check in read() covers that gap.
    ctrl.calibrate(cal_token)?;

    // Now reads work:
    let temp = ctrl.read()?;
    println!("Temperature: {temp}Β°C");

    // Can't calibrate again β€” token was consumed:
    // ctrl.calibrate(cal_token);  // ❌ use of moved value

    Ok(())
}
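
The #[must_use] annotation mentioned in the read() docs looks like this — a sketch on a trimmed-down version of the types above. It warns when a returned token is discarded outright, though it still cannot force a caller to consume a token that was bound to a variable:

```rust
/// Warns if a caller ignores the returned token, e.g. calling
/// `SensorController::power_on();` as a bare statement.
#[must_use = "calibration token must be passed to calibrate()"]
pub struct CalibrationToken {
    _private: (),
}

pub struct SensorController {
    calibrated: bool,
}

impl SensorController {
    pub fn power_on() -> (Self, CalibrationToken) {
        (
            SensorController { calibrated: false },
            CalibrationToken { _private: () },
        )
    }

    /// Consumes the token — calibration can happen at most once.
    pub fn calibrate(&mut self, _token: CalibrationToken) {
        self.calibrated = true;
    }

    pub fn is_calibrated(&self) -> bool {
        self.calibrated
    }
}
```

The annotation lives on the type, so every function returning a CalibrationToken gets the lint for free.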

When to Use Single-Use Types

| Scenario | Use single-use (move) semantics? |
|---|---|
| Cryptographic nonces | ✅ Always — nonce reuse is catastrophic |
| Ephemeral keys (DH, ECDH) | ✅ Always — reuse weakens forward secrecy |
| OTP fuse writes | ✅ Always — double-write bricks hardware |
| License activation codes | ✅ Usually — prevent double-activation |
| Calibration tokens | ✅ Usually — enforce once-per-session |
| File write handles | ⚠️ Sometimes — depends on protocol |
| Database transaction handles | ⚠️ Sometimes — commit/rollback is single-use |
| General data buffers | ❌ These need reuse — use &mut [u8] |
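
The database-transaction row deserves a sketch. Assuming a hypothetical Transaction type (not any particular database crate), commit() and rollback() both take self by value, so a finished transaction can never be touched again:

```rust
pub struct Transaction {
    statements: Vec<String>,
}

impl Transaction {
    pub fn begin() -> Self {
        Transaction { statements: Vec::new() }
    }

    /// Buffer a statement — &mut self, so the transaction stays alive.
    pub fn execute(&mut self, sql: &str) {
        self.statements.push(sql.to_string());
    }

    /// Consumes the transaction — no statements can be added after commit.
    pub fn commit(self) -> Result<usize, &'static str> {
        Ok(self.statements.len()) // stub: report how many statements were applied
    }

    /// Also consumes — commit-after-rollback is a compile error.
    pub fn rollback(self) {
        // buffered statements are simply dropped
    }
}

// let mut tx = Transaction::begin();
// tx.execute("INSERT ...");
// tx.commit().unwrap();
// tx.rollback();            // ❌ ERROR: use of moved value `tx`
```

The "sometimes" in the table reflects a real trade-off: by-value finalisers forbid retry-on-failure unless the error variant hands the transaction back.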

Single-Use Ownership Flow

flowchart LR
    N["Nonce::new()"] -->|move| E["encrypt(nonce, msg)"]
    E -->|consumed| X["❌ nonce gone"]
    N -.->|"reuse attempt"| ERR["COMPILE ERROR:\nuse of moved value"]
    style N fill:#e1f5fe,color:#000
    style E fill:#c8e6c9,color:#000
    style X fill:#ffcdd2,color:#000
    style ERR fill:#ffcdd2,color:#000

Exercise: Single-Use Firmware Signing Token

Design a SigningToken that can be used exactly once to sign a firmware image:

  • SigningToken::issue(key_id: &str) -> SigningToken (not Clone, not Copy)
  • sign(token: SigningToken, image: &[u8]) -> SignedImage (consumes the token)
  • Attempting to sign twice should be a compile error.
Solution
pub struct SigningToken {
    key_id: String,
    // NOT Clone, NOT Copy
}

pub struct SignedImage {
    pub signature: Vec<u8>,
    pub key_id: String,
}

impl SigningToken {
    pub fn issue(key_id: &str) -> Self {
        SigningToken { key_id: key_id.to_string() }
    }
}

pub fn sign(token: SigningToken, _image: &[u8]) -> SignedImage {
    // Token consumed by move β€” can't be reused
    SignedImage {
        signature: vec![0xDE, 0xAD],  // stub
        key_id: token.key_id,
    }
}

// βœ… Compiles:
// let tok = SigningToken::issue("release-key");
// let signed = sign(tok, &firmware_bytes);
//
// ❌ Compile error:
// let signed2 = sign(tok, &other_bytes);  // ERROR: use of moved value

Key Takeaways

  1. Move = linear use β€” a non-Clone, non-Copy type can be consumed exactly once; the compiler enforces this.
  2. Nonce reuse is catastrophic β€” Rust’s ownership system prevents it structurally, not by discipline.
  3. Pattern applies beyond crypto β€” OTP fuses, calibration tokens, audit entries β€” anything that must happen at most once.
  4. Ephemeral keys get forward secrecy for free β€” the key agreement value is moved into the derived secret and vanishes.
  5. When in doubt, remove Clone β€” you can always add it later; removing it from a published API is a breaking change.

Capability Tokens β€” Zero-Cost Proof of Authority 🟑

What you’ll learn: How zero-sized types (ZSTs) act as compile-time proof tokens, enforcing privilege hierarchies, power sequencing, and revocable authority β€” all at zero runtime cost.

Cross-references: ch03 (single-use types), ch05 (type-state), ch08 (mixins), ch10 (integration)

The Problem: Who Is Allowed to Do What?

In hardware diagnostics, some operations are dangerous:

  • Programming BMC firmware
  • Resetting PCIe links
  • Writing OTP fuses
  • Enabling high-voltage test modes

In C/C++, these are guarded by runtime checks:

// C β€” runtime permission check
int reset_pcie_link(bmc_handle_t bmc, int slot) {
    if (!bmc->is_admin) {        // runtime check
        return -EPERM;
    }
    if (!bmc->link_trained) {    // another runtime check
        return -EINVAL;
    }
    // ... do the dangerous thing ...
    return 0;
}

Every function that does something dangerous must repeat these checks. Forget one, and you have a privilege escalation bug.

Zero-Sized Types as Proof Tokens

A capability token is a zero-sized type (ZST) that proves the caller has the authority to perform an action. It costs zero bytes at runtime β€” it exists only in the type system:

/// Proof that the caller has admin privileges.
/// Zero-sized β€” compiles away completely.
/// Not Clone, not Copy β€” must be explicitly passed.
pub struct AdminToken {
    _private: (),   // prevents construction outside this module
}

/// Proof that the PCIe link is trained and ready.
pub struct LinkTrainedToken {
    _private: (),
}

pub struct BmcController { /* ... */ }

impl BmcController {
    /// Authenticate as admin β€” returns a capability token.
    /// This is the ONLY way to create an AdminToken.
    pub fn authenticate_admin(
        &mut self,
        credentials: &[u8],
    ) -> Result<AdminToken, &'static str> {
        // ... validate credentials ...
        let valid = true;
        if valid {
            Ok(AdminToken { _private: () })
        } else {
            Err("authentication failed")
        }
    }

    /// Train the PCIe link β€” returns proof that it's trained.
    pub fn train_link(&mut self) -> Result<LinkTrainedToken, &'static str> {
        // ... perform link training ...
        Ok(LinkTrainedToken { _private: () })
    }

    /// Reset a PCIe link β€” requires BOTH admin + link-trained proof.
    /// No runtime checks needed β€” the tokens ARE the proof.
    pub fn reset_pcie_link(
        &mut self,
        _admin: &AdminToken,         // zero-cost proof of authority
        _trained: &LinkTrainedToken,  // zero-cost proof of state
        slot: u32,
    ) -> Result<(), &'static str> {
        println!("Resetting PCIe link on slot {slot}");
        Ok(())
    }
}

Usage β€” the type system enforces the workflow:

fn maintenance_workflow(bmc: &mut BmcController) -> Result<(), &'static str> {
    // Step 1: Authenticate β€” get admin proof
    let admin = bmc.authenticate_admin(b"secret")?;

    // Step 2: Train link β€” get trained proof
    let trained = bmc.train_link()?;

    // Step 3: Reset β€” compiler requires both tokens
    bmc.reset_pcie_link(&admin, &trained, 0)?;

    Ok(())
}

// This WON'T compile:
fn unprivileged_attempt(bmc: &mut BmcController) -> Result<(), &'static str> {
    let trained = bmc.train_link()?;
    // bmc.reset_pcie_link(???, &trained, 0)?;
    //                     ^^^ no AdminToken β€” can't call this
    Ok(())
}

The AdminToken and LinkTrainedToken are zero bytes in the compiled binary. They exist only during type-checking. The function signature fn reset_pcie_link(&mut self, _admin: &AdminToken, ...) is a proof obligation β€” β€œyou may only call this if you can produce an AdminToken” β€” and the only way to produce one is through authenticate_admin().
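
The zero-cost claim is directly checkable — a sketch, assuming the token shapes above:

```rust
pub struct AdminToken {
    _private: (),
}

pub struct LinkTrainedToken {
    _private: (),
}

/// Both tokens occupy zero bytes. A function taking `&AdminToken`
/// monomorphises to the same machine code as one taking no token at all —
/// the parameter exists only for the type checker.
pub fn token_sizes() -> (usize, usize) {
    (
        std::mem::size_of::<AdminToken>(),
        std::mem::size_of::<LinkTrainedToken>(),
    )
}
```

A static assertion like this in your crate's tests guards against someone later adding a field and silently turning the proof token into a runtime cost.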

Power Sequencing Authority

Server power sequencing has strict ordering: standby β†’ auxiliary β†’ main β†’ CPU. Reversing the sequence can damage hardware. Capability tokens enforce ordering:

/// State tokens β€” each one proves the previous step completed.
pub struct StandbyOn { _p: () }
pub struct AuxiliaryOn { _p: () }
pub struct MainOn { _p: () }
pub struct CpuPowered { _p: () }

pub struct PowerController { /* ... */ }

impl PowerController {
    /// Step 1: Enable standby power. No precondition.
    pub fn enable_standby(&mut self) -> Result<StandbyOn, &'static str> {
        println!("Standby power ON");
        Ok(StandbyOn { _p: () })
    }

    /// Step 2: Enable auxiliary β€” requires standby proof.
    pub fn enable_auxiliary(
        &mut self,
        _standby: &StandbyOn,
    ) -> Result<AuxiliaryOn, &'static str> {
        println!("Auxiliary power ON");
        Ok(AuxiliaryOn { _p: () })
    }

    /// Step 3: Enable main β€” requires auxiliary proof.
    pub fn enable_main(
        &mut self,
        _aux: &AuxiliaryOn,
    ) -> Result<MainOn, &'static str> {
        println!("Main power ON");
        Ok(MainOn { _p: () })
    }

    /// Step 4: Power CPU β€” requires main proof.
    pub fn power_cpu(
        &mut self,
        _main: &MainOn,
    ) -> Result<CpuPowered, &'static str> {
        println!("CPU powered ON");
        Ok(CpuPowered { _p: () })
    }
}

fn power_on_sequence(ctrl: &mut PowerController) -> Result<CpuPowered, &'static str> {
    let standby = ctrl.enable_standby()?;
    let aux = ctrl.enable_auxiliary(&standby)?;
    let main = ctrl.enable_main(&aux)?;
    let cpu = ctrl.power_cpu(&main)?;
    Ok(cpu)
}

// Trying to skip a step:
// fn wrong_order(ctrl: &mut PowerController) {
//     ctrl.power_cpu(???);  // ❌ can't produce MainOn without enable_main()
// }

Hierarchical Capabilities

Real systems have hierarchies β€” an admin can do everything a user can do, plus more. Model this with a trait hierarchy:

/// Base capability β€” anyone who is authenticated.
pub trait Authenticated {
    fn token_id(&self) -> u64;
}

/// Operator can read sensors and run non-destructive diagnostics.
pub trait Operator: Authenticated {}

/// Admin can do everything an operator can, plus destructive operations.
pub trait Admin: Operator {}

// Concrete tokens:
pub struct UserToken { id: u64 }
pub struct OperatorToken { id: u64 }
pub struct AdminCapToken { id: u64 }

impl Authenticated for UserToken { fn token_id(&self) -> u64 { self.id } }
impl Authenticated for OperatorToken { fn token_id(&self) -> u64 { self.id } }
impl Operator for OperatorToken {}
impl Authenticated for AdminCapToken { fn token_id(&self) -> u64 { self.id } }
impl Operator for AdminCapToken {}
impl Admin for AdminCapToken {}

pub struct Bmc { /* ... */ }

impl Bmc {
    /// Anyone authenticated can read sensors.
    pub fn read_sensor(&self, _who: &impl Authenticated, id: u32) -> f64 {
        42.0 // stub
    }

    /// Only operators and above can run diagnostics.
    pub fn run_diag(&mut self, _who: &impl Operator, test: &str) -> bool {
        true // stub
    }

    /// Only admins can flash firmware.
    pub fn flash_firmware(&mut self, _who: &impl Admin, image: &[u8]) -> Result<(), &'static str> {
        Ok(()) // stub
    }
}

An AdminCapToken can be passed to any function β€” it satisfies Authenticated, Operator, and Admin. A UserToken can only call read_sensor(). The compiler enforces the entire privilege model at zero runtime cost.
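
Sometimes you want an explicit downgrade rather than relying on trait bounds — for example, to hand a narrower token to less-trusted code. A hypothetical sketch on the tokens above:

```rust
pub struct OperatorToken { id: u64 }
pub struct AdminCapToken { id: u64 }

impl AdminCapToken {
    /// Consume the admin token and mint the weaker one.
    /// There is deliberately no method going the other way,
    /// so privilege can shrink but never grow.
    pub fn downgrade(self) -> OperatorToken {
        OperatorToken { id: self.id }
    }
}

impl OperatorToken {
    pub fn id(&self) -> u64 { self.id }
}
```

Taking self by value means the admin token is gone after the downgrade — the caller cannot keep both privilege levels alive from one authentication.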

Lifetime-Bounded Capability Tokens

Sometimes a capability should be scoped β€” valid only within a certain lifetime. Rust’s borrow checker handles this naturally:

/// A scoped admin session. The token borrows the session,
/// so it cannot outlive it.
pub struct AdminSession {
    _active: bool,
}

pub struct ScopedAdminToken<'session> {
    _session: &'session AdminSession,
}

impl AdminSession {
    pub fn begin(credentials: &[u8]) -> Result<Self, &'static str> {
        // ... authenticate ...
        Ok(AdminSession { _active: true })
    }

    /// Create a scoped token β€” lives only as long as the session.
    pub fn token(&self) -> ScopedAdminToken<'_> {
        ScopedAdminToken { _session: self }
    }
}

fn scoped_example() -> Result<(), &'static str> {
    let session = AdminSession::begin(b"credentials")?;
    let token = session.token();

    // Use token within this scope...
    // When session drops, token is invalidated by the borrow checker.
    // No need for runtime expiry checks.

    // drop(session);
    // ❌ ERROR: cannot move out of `session` because it is borrowed
    //    (by `token`, which holds &session)
    //
    // Even if we skip drop() and just try to use `token` after
    // session goes out of scope β€” same error: lifetime mismatch.

    Ok(())
}

When to Use Capability Tokens

| Scenario | Pattern |
|---|---|
| Privileged hardware operations | ZST proof token (AdminToken) |
| Multi-step sequencing | Chain of state tokens (StandbyOn → AuxiliaryOn → …) |
| Role-based access control | Trait hierarchy (Authenticated → Operator → Admin) |
| Time-limited privileges | Lifetime-bounded tokens (ScopedAdminToken<'a>) |
| Cross-module authority | Public token type, private constructor |

Cost Summary

| What | Runtime cost |
|---|---|
| ZST token in memory | 0 bytes |
| Token parameter passing | Optimised away by LLVM |
| Trait hierarchy dispatch | Static dispatch (monomorphised) |
| Lifetime enforcement | Compile-time only |

Total runtime overhead: zero. The privilege model exists only in the type system.

Capability Token Hierarchy

flowchart TD
    AUTH["authenticate(user, pass)"] -->|returns| AT["AdminToken"]
    AT -->|"&AdminToken"| FW["firmware_update()"]
    AT -->|"&AdminToken"| RST["reset_pcie_link()"]
    AT -->|downgrade| OP["OperatorToken"]
    OP -->|"&OperatorToken"| RD["read_sensors()"]
    OP -.->|"attempt firmware_update"| ERR["❌ Compile Error"]
    style AUTH fill:#e1f5fe,color:#000
    style AT fill:#c8e6c9,color:#000
    style OP fill:#fff3e0,color:#000
    style FW fill:#e8f5e9,color:#000
    style RST fill:#e8f5e9,color:#000
    style RD fill:#fff3e0,color:#000
    style ERR fill:#ffcdd2,color:#000

Exercise: Tiered Diagnostic Permissions

Design a three-tier capability system: ViewerToken, TechToken, EngineerToken.

  • Viewers can call read_status()
  • Techs can also call run_quick_diag()
  • Engineers can also call flash_firmware()
  • Higher tiers can do everything lower tiers can (use trait bounds or token conversion).
Solution
// Tokens β€” zero-sized, private constructors
pub struct ViewerToken { _private: () }
pub struct TechToken { _private: () }
pub struct EngineerToken { _private: () }

// Capability traits β€” hierarchical
pub trait CanView {}
pub trait CanDiag: CanView {}
pub trait CanFlash: CanDiag {}

impl CanView for ViewerToken {}
impl CanView for TechToken {}
impl CanView for EngineerToken {}
impl CanDiag for TechToken {}
impl CanDiag for EngineerToken {}
impl CanFlash for EngineerToken {}

pub fn read_status(_tok: &impl CanView) -> String {
    "status: OK".into()
}

pub fn run_quick_diag(_tok: &impl CanDiag) -> String {
    "diag: PASS".into()
}

pub fn flash_firmware(_tok: &impl CanFlash, _image: &[u8]) {
    // Only engineers reach here
}

Key Takeaways

  1. ZST tokens cost zero bytes β€” they exist only in the type system; LLVM optimises them away completely.
  2. Private constructors = unforgeable β€” only your module’s authenticate() can mint a token.
  3. Trait hierarchies model permission levels β€” CanFlash: CanDiag: CanView mirrors real RBAC.
  4. Lifetime-bounded tokens revoke automatically β€” ScopedAdminToken<'session> can’t outlive the session.
  5. Combine with type-state (ch05) for protocols that require authentication and sequenced operations.

Protocol State Machines β€” Type-State for Real Hardware πŸ”΄

What you’ll learn: How type-state encoding makes protocol violations (wrong-order commands, use-after-close) into compile errors, applied to IPMI session lifecycles and PCIe link training.

Cross-references: ch01 (level 2 β€” state correctness), ch04 (tokens), ch09 (phantom types), ch11 (trick 4 β€” typestate builder, trick 8 β€” async type-state)

The Problem: Protocol Violations

Hardware protocols have strict state machines. An IPMI session has states: Unauthenticated β†’ Authenticated β†’ Active β†’ Closed. PCIe link training goes through Detect β†’ Polling β†’ Configuration β†’ L0. Sending a command in the wrong state corrupts the session or hangs the bus.

IPMI session state machine:

stateDiagram-v2
    [*] --> Idle
    Idle --> Authenticated : authenticate(user, pass)
    Authenticated --> Active : activate_session()
    Active --> Active : send_command(cmd)
    Active --> Closed : close()
    Closed --> [*]

    note right of Active : send_command() only exists here
    note right of Idle : send_command() β†’ compile error

PCIe Link Training State Machine (LTSSM):

stateDiagram-v2
    [*] --> Detect
    Detect --> Polling : receiver detected
    Polling --> Configuration : bit lock + symbol lock
    Configuration --> L0 : link number + lane assigned
    L0 --> L0 : send_tlp() / receive_tlp()
    L0 --> Recovery : error threshold
    Recovery --> L0 : retrained
    Recovery --> Detect : retraining failed

    note right of L0 : TLP transmit only in L0

In C/C++, state is tracked with an enum and runtime checks:

typedef enum { IDLE, AUTHENTICATED, ACTIVE, CLOSED } session_state_t;

typedef struct {
    session_state_t state;
    uint32_t session_id;
    // ...
} ipmi_session_t;

int ipmi_send_command(ipmi_session_t *s, uint8_t cmd, uint8_t *data, int len) {
    if (s->state != ACTIVE) {        // runtime check β€” easy to forget
        return -EINVAL;
    }
    // ... send command ...
    return 0;
}

Type-State Pattern

With type-state, each protocol state is a distinct type. Transitions are methods that consume one state and return another. The compiler prevents calling methods in the wrong state because those methods don’t exist on that type.

Case Study: IPMI Session Lifecycle

use std::marker::PhantomData;

// States — zero-sized marker types
pub struct Idle;
pub struct Authenticated;
pub struct Active;
pub struct Closed;

/// IPMI session parameterised by its current state.
/// The state exists ONLY in the type system (PhantomData is zero-sized).
pub struct IpmiSession<State> {
    transport: String,     // e.g., "192.168.1.100"
    session_id: Option<u32>,
    _state: PhantomData<State>,
}

// Transition: Idle β†’ Authenticated
impl IpmiSession<Idle> {
    pub fn new(host: &str) -> Self {
        IpmiSession {
            transport: host.to_string(),
            session_id: None,
            _state: PhantomData,
        }
    }

    pub fn authenticate(
        self,              // ← consumes Idle session
        user: &str,
        pass: &str,
    ) -> Result<IpmiSession<Authenticated>, String> {
        println!("Authenticating {user} on {}", self.transport);
        Ok(IpmiSession {
            transport: self.transport,
            session_id: Some(42),
            _state: PhantomData,
        })
    }
}

// Transition: Authenticated β†’ Active
impl IpmiSession<Authenticated> {
    pub fn activate(self) -> Result<IpmiSession<Active>, String> {
        // session_id is guaranteed Some by the type-state transition path.
        println!("Activating session {}", self.session_id.unwrap());
        Ok(IpmiSession {
            transport: self.transport,
            session_id: self.session_id,
            _state: PhantomData,
        })
    }
}

// Operations available ONLY in Active state
impl IpmiSession<Active> {
    pub fn send_command(&mut self, netfn: u8, cmd: u8, data: &[u8]) -> Vec<u8> {
        // session_id is guaranteed Some in Active state.
        println!(
            "Sending NetFn 0x{netfn:02X} cmd 0x{cmd:02X} ({} bytes) on session {}",
            data.len(),
            self.session_id.unwrap()
        );
        vec![0x00] // stub: completion code OK
    }

    pub fn close(self) -> IpmiSession<Closed> {
        // session_id is guaranteed Some in Active state.
        println!("Closing session {}", self.session_id.unwrap());
        IpmiSession {
            transport: self.transport,
            session_id: None,
            _state: PhantomData,
        }
    }
}

fn ipmi_workflow() -> Result<(), String> {
    let session = IpmiSession::new("192.168.1.100");

    // session.send_command(0x04, 0x2D, &[]);
    //  ^^^^^^ ERROR: no method `send_command` on IpmiSession<Idle> ❌

    let session = session.authenticate("admin", "password")?;

    // session.send_command(0x04, 0x2D, &[]);
    //  ^^^^^^ ERROR: no method `send_command` on IpmiSession<Authenticated> ❌

    let mut session = session.activate()?;

    // βœ… NOW send_command exists:
    let response = session.send_command(0x04, 0x2D, &[1]);

    let _closed = session.close();

    // _closed.send_command(0x04, 0x2D, &[]);
    //  ^^^^^^ ERROR: no method `send_command` on IpmiSession<Closed> ❌

    Ok(())
}

No runtime state checks anywhere. The compiler enforces:

  • Authentication before activation
  • Activation before sending commands
  • No commands after close
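
One design wrinkle the happy path hides: a failing transition that takes self by value drops the session along with the error. If the caller should be able to retry, hand the old state back in the Err variant — a sketch on a trimmed-down session type (assumed shapes, not the full example above):

```rust
use std::marker::PhantomData;

pub struct Authenticated;
pub struct Active;

pub struct IpmiSession<State> {
    host: String,
    _state: PhantomData<State>,
}

impl IpmiSession<Authenticated> {
    pub fn new(host: &str) -> Self {
        IpmiSession { host: host.to_string(), _state: PhantomData }
    }

    /// On Err, return the Authenticated session to the caller so a
    /// failed transition does not silently destroy the session.
    pub fn activate(
        self,
    ) -> Result<IpmiSession<Active>, (IpmiSession<Authenticated>, String)> {
        let ok = true; // stub: pretend the activation command succeeded
        if ok {
            Ok(IpmiSession { host: self.host, _state: PhantomData })
        } else {
            Err((self, "activation failed".to_string()))
        }
    }
}

impl IpmiSession<Active> {
    pub fn host(&self) -> &str { &self.host }
}
```

The error type is heavier, but the caller can loop on activate() without re-authenticating — a pattern worth the noise for flaky transports.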

Case Study: PCIe Link Training

PCIe link training is a multi-phase protocol defined in the PCIe specification. Type-state prevents sending data before the link is ready:

use std::marker::PhantomData;

// PCIe LTSSM states (simplified)
pub struct Detect;
pub struct Polling;
pub struct Configuration;
pub struct L0;         // fully operational
pub struct Recovery;

pub struct PcieLink<State> {
    slot: u32,
    width: u8,          // negotiated width (x1, x4, x8, x16)
    speed: u8,          // Gen1=1, Gen2=2, Gen3=3, Gen4=4, Gen5=5
    _state: PhantomData<State>,
}

impl PcieLink<Detect> {
    pub fn new(slot: u32) -> Self {
        PcieLink {
            slot, width: 0, speed: 0,
            _state: PhantomData,
        }
    }

    pub fn detect_receiver(self) -> Result<PcieLink<Polling>, String> {
        println!("Slot {}: receiver detected", self.slot);
        Ok(PcieLink {
            slot: self.slot, width: 0, speed: 0,
            _state: PhantomData,
        })
    }
}

impl PcieLink<Polling> {
    pub fn poll_compliance(self) -> Result<PcieLink<Configuration>, String> {
        println!("Slot {}: polling complete, entering configuration", self.slot);
        Ok(PcieLink {
            slot: self.slot, width: 0, speed: 0,
            _state: PhantomData,
        })
    }
}

impl PcieLink<Configuration> {
    pub fn negotiate(self, width: u8, speed: u8) -> Result<PcieLink<L0>, String> {
        println!("Slot {}: negotiated x{width} Gen{speed}", self.slot);
        Ok(PcieLink {
            slot: self.slot, width, speed,
            _state: PhantomData,
        })
    }
}

impl PcieLink<L0> {
    /// Send a TLP β€” only possible when the link is fully trained (L0).
    pub fn send_tlp(&mut self, tlp: &[u8]) -> Vec<u8> {
        println!("Slot {}: sending {} byte TLP", self.slot, tlp.len());
        vec![0x00] // stub
    }

    /// Enter recovery β€” returns to Recovery state.
    pub fn enter_recovery(self) -> PcieLink<Recovery> {
        PcieLink {
            slot: self.slot, width: self.width, speed: self.speed,
            _state: PhantomData,
        }
    }

    pub fn link_info(&self) -> String {
        format!("x{} Gen{}", self.width, self.speed)
    }
}

impl PcieLink<Recovery> {
    pub fn retrain(self, speed: u8) -> Result<PcieLink<L0>, String> {
        println!("Slot {}: retrained at Gen{speed}", self.slot);
        Ok(PcieLink {
            slot: self.slot, width: self.width, speed,
            _state: PhantomData,
        })
    }
}

fn pcie_workflow() -> Result<(), String> {
    let link = PcieLink::new(0);

    // link.send_tlp(&[0x01]);  // ❌ no method `send_tlp` on PcieLink<Detect>

    let link = link.detect_receiver()?;
    let link = link.poll_compliance()?;
    let mut link = link.negotiate(16, 5)?; // x16 Gen5

    // βœ… NOW we can send TLPs:
    let _resp = link.send_tlp(&[0x00, 0x01, 0x02]);
    println!("Link: {}", link.link_info());

    // Recovery and retrain:
    let recovery = link.enter_recovery();
    let mut link = recovery.retrain(4)?;  // downgrade to Gen4
    let _resp = link.send_tlp(&[0x03]);

    Ok(())
}

Combining Type-State with Capability Tokens

Type-state and capability tokens compose naturally. A diagnostic that requires an active IPMI session AND admin privileges:

use std::marker::PhantomData;
pub struct Active;
pub struct AdminToken { _p: () }
pub struct IpmiSession<S> { _s: PhantomData<S> }
impl IpmiSession<Active> {
    pub fn send_command(&mut self, _nf: u8, _cmd: u8, _d: &[u8]) -> Vec<u8> { vec![] }
}

/// Run a firmware update β€” requires:
/// 1. Active IPMI session (type-state)
/// 2. Admin privileges (capability token)
pub fn firmware_update(
    session: &mut IpmiSession<Active>,   // proves session is active
    _admin: &AdminToken,                 // proves caller is admin
    image: &[u8],
) -> Result<(), String> {
    // No runtime checks needed β€” the signature IS the check
    session.send_command(0x2C, 0x01, image);
    Ok(())
}

The caller must:

  1. Create a session (Idle)
  2. Authenticate it (Authenticated)
  3. Activate it (Active)
  4. Obtain an AdminToken
  5. Then and only then call firmware_update()

All enforced at compile time, zero runtime cost.
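The five steps collapse into a straight-line call sequence. A minimal sketch: the `authenticate` and `activate` transition names and signatures here are illustrative stand-ins for the session API, not the chapter's real definitions.

```rust
use std::marker::PhantomData;

// Session states (Idle and Authenticated are the hypothetical
// predecessors of the chapter's Active state).
pub struct Idle;
pub struct Authenticated;
pub struct Active;

pub struct AdminToken { _p: () }
pub struct IpmiSession<S> { _s: PhantomData<S> }

impl IpmiSession<Idle> {
    pub fn new() -> Self { IpmiSession { _s: PhantomData } }
    /// Illustrative transition: Idle -> Authenticated.
    pub fn authenticate(self, _password: &str) -> IpmiSession<Authenticated> {
        IpmiSession { _s: PhantomData }
    }
}

impl IpmiSession<Authenticated> {
    /// Illustrative transition: Authenticated -> Active.
    pub fn activate(self) -> IpmiSession<Active> {
        IpmiSession { _s: PhantomData }
    }
}

impl IpmiSession<Active> {
    pub fn send_command(&mut self, _nf: u8, _cmd: u8, _d: &[u8]) -> Vec<u8> { vec![] }
}

pub fn firmware_update(
    session: &mut IpmiSession<Active>,
    _admin: &AdminToken,
    image: &[u8],
) -> Result<(), String> {
    session.send_command(0x2C, 0x01, image);
    Ok(())
}

/// Steps 1-5 in order; skipping any one of them fails to compile.
pub fn run_update() -> Result<(), String> {
    let session = IpmiSession::new();              // 1. create (Idle)
    let session = session.authenticate("secret");  // 2. authenticate
    let mut session = session.activate();          // 3. activate
    let admin = AdminToken { _p: () };             // 4. obtain admin token
    firmware_update(&mut session, &admin, &[0x00]) // 5. update
}
```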

Beat 3: Firmware Update β€” Multi-Phase FSM with Composition

A firmware update lifecycle has more states than a session and composes with both capability tokens (ch04) AND single-use types (ch03). This is the most complex type-state example in the book β€” if you’re comfortable with it, you’ve mastered the pattern.

stateDiagram-v2
    [*] --> Idle
    Idle --> Uploading : begin_upload(admin, image)
    Uploading --> Verifying : finish_upload()
    Uploading --> Idle : abort()
    Verifying --> Verified : verify_ok()
    Verifying --> Idle : verify_fail()
    Verified --> Applying : apply(single-use VerifiedImage token)
    Applying --> WaitingReboot : apply_complete()
    WaitingReboot --> [*] : reboot()

    note right of Verified : VerifiedImage token consumed by apply()
    note right of Uploading : abort() returns to Idle (safe)
use std::marker::PhantomData;

// ── States ──
pub struct Idle;
pub struct Uploading;
pub struct Verifying;
pub struct Verified;
pub struct Applying;
pub struct WaitingReboot;

// ── Single-use proof that image passed verification (ch03) ──
pub struct VerifiedImage {
    _private: (),
    pub digest: [u8; 32],
}

// ── Capability token: only admins can initiate (ch04) ──
pub struct FirmwareAdminToken { _private: () }

pub struct FwUpdate<S> {
    version: String,
    _state: PhantomData<S>,
}

impl FwUpdate<Idle> {
    pub fn new() -> Self {
        FwUpdate { version: String::new(), _state: PhantomData }
    }

    /// Begin upload β€” requires admin privilege.
    pub fn begin_upload(
        self,
        _admin: &FirmwareAdminToken,
        version: &str,
    ) -> FwUpdate<Uploading> {
        println!("Uploading firmware v{version}...");
        FwUpdate { version: version.to_string(), _state: PhantomData }
    }
}

impl FwUpdate<Uploading> {
    pub fn finish_upload(self) -> FwUpdate<Verifying> {
        println!("Upload complete, verifying v{}...", self.version);
        FwUpdate { version: self.version, _state: PhantomData }
    }

    /// Abort returns to Idle β€” safe at any point during upload.
    pub fn abort(self) -> FwUpdate<Idle> {
        println!("Upload aborted.");
        FwUpdate { version: String::new(), _state: PhantomData }
    }
}

impl FwUpdate<Verifying> {
    /// On success, produces a single-use VerifiedImage token.
    pub fn verify_ok(self, digest: [u8; 32]) -> (FwUpdate<Verified>, VerifiedImage) {
        println!("Verification passed for v{}", self.version);
        (
            FwUpdate { version: self.version, _state: PhantomData },
            VerifiedImage { _private: (), digest },
        )
    }

    pub fn verify_fail(self) -> FwUpdate<Idle> {
        println!("Verification failed β€” returning to idle.");
        FwUpdate { version: String::new(), _state: PhantomData }
    }
}

impl FwUpdate<Verified> {
    /// Apply CONSUMES the VerifiedImage token β€” can't apply twice.
    pub fn apply(self, proof: VerifiedImage) -> FwUpdate<Applying> {
        println!("Applying v{} (digest: {:02x?})", self.version, &proof.digest[..4]);
        // proof is moved β€” can't be reused
        FwUpdate { version: self.version, _state: PhantomData }
    }
}

impl FwUpdate<Applying> {
    pub fn apply_complete(self) -> FwUpdate<WaitingReboot> {
        println!("Apply complete β€” waiting for reboot.");
        FwUpdate { version: self.version, _state: PhantomData }
    }
}

impl FwUpdate<WaitingReboot> {
    pub fn reboot(self) {
        println!("Rebooting into v{}...", self.version);
    }
}

// ── Usage ──

fn firmware_workflow() {
    let fw = FwUpdate::new();

    // fw.finish_upload();  // ❌ no method `finish_upload` on FwUpdate<Idle>

    let admin = FirmwareAdminToken { _private: () }; // from auth system
    let fw = fw.begin_upload(&admin, "2.10.1");
    let fw = fw.finish_upload();

    let digest = [0xAB; 32]; // computed during verification
    let (fw, token) = fw.verify_ok(digest);

    let fw = fw.apply(token);
    // fw.apply(token);  // ❌ use of moved value: `token`

    let fw = fw.apply_complete();
    fw.reboot();
}

What the three beats illustrate together:

| Beat | Protocol | States | Composition |
|------|----------|--------|-------------|
| 1 | IPMI session | 4 | Pure type-state |
| 2 | PCIe LTSSM | 5 | Type-state + recovery branch |
| 3 | Firmware update | 6 | Type-state + capability tokens (ch04) + single-use proof (ch03) |

Each beat adds a layer of complexity. By beat 3, the compiler enforces state ordering, admin privilege, AND one-time application β€” three bug classes eliminated in a single FSM.

When to Use Type-State

| Protocol | Type-state worthwhile? |
|----------|------------------------|
| IPMI session lifecycle | βœ… Yes β€” authenticate β†’ activate β†’ command β†’ close |
| PCIe link training | βœ… Yes β€” detect β†’ poll β†’ configure β†’ L0 |
| TLS handshake | βœ… Yes β€” ClientHello β†’ ServerHello β†’ Finished |
| USB enumeration | βœ… Yes β€” Attached β†’ Powered β†’ Default β†’ Addressed β†’ Configured |
| Simple request/response | ⚠️ Probably not β€” only 2 states |
| Fire-and-forget messages | ❌ No β€” no state to track |

Exercise: USB Device Enumeration Type-State

Model a USB device that must go through: Attached β†’ Powered β†’ Default β†’ Addressed β†’ Configured. Each transition should consume the previous state and produce the next. send_data() should only be available in Configured.

Solution
use std::marker::PhantomData;

pub struct Attached;
pub struct Powered;
pub struct Default;
pub struct Addressed;
pub struct Configured;

pub struct UsbDevice<State> {
    address: u8,
    _state: PhantomData<State>,
}

impl UsbDevice<Attached> {
    pub fn new() -> Self {
        UsbDevice { address: 0, _state: PhantomData }
    }
    pub fn power_on(self) -> UsbDevice<Powered> {
        UsbDevice { address: self.address, _state: PhantomData }
    }
}

impl UsbDevice<Powered> {
    pub fn reset(self) -> UsbDevice<Default> {
        UsbDevice { address: self.address, _state: PhantomData }
    }
}

impl UsbDevice<Default> {
    pub fn set_address(self, addr: u8) -> UsbDevice<Addressed> {
        UsbDevice { address: addr, _state: PhantomData }
    }
}

impl UsbDevice<Addressed> {
    pub fn configure(self) -> UsbDevice<Configured> {
        UsbDevice { address: self.address, _state: PhantomData }
    }
}

impl UsbDevice<Configured> {
    pub fn send_data(&self, _data: &[u8]) {
        // Only available in Configured state
    }
}
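A short driver for the solution. The types are restated so the snippet stands alone, and an `address()` getter (not part of the exercise) is added for inspection.

```rust
use std::marker::PhantomData;

// Restated from the solution so this snippet compiles on its own.
pub struct Attached;
pub struct Powered;
pub struct Default;
pub struct Addressed;
pub struct Configured;

pub struct UsbDevice<State> { address: u8, _state: PhantomData<State> }

impl UsbDevice<Attached> {
    pub fn new() -> Self { UsbDevice { address: 0, _state: PhantomData } }
    pub fn power_on(self) -> UsbDevice<Powered> {
        UsbDevice { address: self.address, _state: PhantomData }
    }
}
impl UsbDevice<Powered> {
    pub fn reset(self) -> UsbDevice<Default> {
        UsbDevice { address: self.address, _state: PhantomData }
    }
}
impl UsbDevice<Default> {
    pub fn set_address(self, addr: u8) -> UsbDevice<Addressed> {
        UsbDevice { address: addr, _state: PhantomData }
    }
}
impl UsbDevice<Addressed> {
    pub fn configure(self) -> UsbDevice<Configured> {
        UsbDevice { address: self.address, _state: PhantomData }
    }
}
impl UsbDevice<Configured> {
    pub fn send_data(&self, _data: &[u8]) {}
    /// Added here (not in the exercise) so the result can be inspected.
    pub fn address(&self) -> u8 { self.address }
}

pub fn enumerate_device() -> UsbDevice<Configured> {
    let dev = UsbDevice::new();   // Attached
    // dev.send_data(&[0x00]);    // ❌ no method on UsbDevice<Attached>
    let dev = dev.power_on();     // Powered
    let dev = dev.reset();        // Default
    let dev = dev.set_address(5); // Addressed
    dev.configure()               // Configured: send_data now available
}
```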

Key Takeaways

  1. Type-state makes wrong-order calls impossible β€” methods only exist on the state where they’re valid.
  2. Each transition consumes self β€” you can’t hold onto an old state after transitioning.
  3. Combine with capability tokens β€” firmware_update() requires both Session<Active> and AdminToken.
  4. Three beats, increasing complexity β€” IPMI (pure FSM), PCIe LTSSM (recovery branches), and firmware update (FSM + tokens + single-use proofs) show the pattern scales from simple to richly composed.
  5. Don’t over-apply β€” two-state request/response protocols are simpler without type-state.
  6. The pattern extends to full Redfish workflows β€” ch17 applies type-state to Redfish session lifecycles, and ch18 uses builder type-state for response construction.

Dimensional Analysis β€” Making the Compiler Check Your Units 🟒

What you’ll learn: How newtype wrappers and the uom crate turn the compiler into a unit-checking engine, preventing the class of bug that destroyed a $328M spacecraft.

Cross-references: ch02 (typed commands use these types), ch07 (validated boundaries), ch10 (integration)

The Mars Climate Orbiter

In 1999, NASA’s Mars Climate Orbiter was lost because one team sent thrust data in pound-force seconds while the navigation team expected newton-seconds. The spacecraft entered the atmosphere at 57 km instead of 226 km and disintegrated. Cost: $327.6 million.

The root cause: both values were double. The compiler couldn’t distinguish them.

This same class of bug lurks in every hardware diagnostic that deals with physical quantities:

// C β€” all doubles, no unit checking
double read_temperature(int sensor_id);   // Celsius? Fahrenheit? Kelvin?
double read_voltage(int channel);          // Volts? Millivolts?
double read_fan_speed(int fan_id);         // RPM? Radians per second?

// Bug: comparing Celsius to Fahrenheit
if (read_temperature(0) > read_temperature(1)) { ... }  // units might differ!

Newtypes for Physical Quantities

The simplest correct-by-construction approach: wrap each unit in its own type.

use std::fmt;

/// Temperature in degrees Celsius.
#[derive(Debug, Clone, Copy, PartialEq, PartialOrd)]
pub struct Celsius(pub f64);

/// Temperature in degrees Fahrenheit.
#[derive(Debug, Clone, Copy, PartialEq, PartialOrd)]
pub struct Fahrenheit(pub f64);

/// Voltage in volts.
#[derive(Debug, Clone, Copy, PartialEq, PartialOrd)]
pub struct Volts(pub f64);

/// Voltage in millivolts.
#[derive(Debug, Clone, Copy, PartialEq, PartialOrd)]
pub struct Millivolts(pub f64);

/// Fan speed in RPM.
#[derive(Debug, Clone, Copy, PartialEq, PartialOrd)]
pub struct Rpm(pub f64);

// Conversions are explicit:
impl From<Celsius> for Fahrenheit {
    fn from(c: Celsius) -> Self {
        Fahrenheit(c.0 * 9.0 / 5.0 + 32.0)
    }
}

impl From<Fahrenheit> for Celsius {
    fn from(f: Fahrenheit) -> Self {
        Celsius((f.0 - 32.0) * 5.0 / 9.0)
    }
}

impl From<Volts> for Millivolts {
    fn from(v: Volts) -> Self {
        Millivolts(v.0 * 1000.0)
    }
}

impl From<Millivolts> for Volts {
    fn from(mv: Millivolts) -> Self {
        Volts(mv.0 / 1000.0)
    }
}

impl fmt::Display for Celsius {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "{:.1}Β°C", self.0)
    }
}

impl fmt::Display for Rpm {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "{:.0} RPM", self.0)
    }
}

Now the compiler catches unit mismatches:

#[derive(Debug, Clone, Copy, PartialEq, PartialOrd)]
pub struct Celsius(pub f64);
#[derive(Debug, Clone, Copy, PartialEq, PartialOrd)]
pub struct Volts(pub f64);

fn check_thermal_limit(temp: Celsius, limit: Celsius) -> bool {
    temp > limit  // βœ… same units β€” compiles
}

// fn bad_comparison(temp: Celsius, voltage: Volts) -> bool {
//     temp > voltage  // ❌ ERROR: mismatched types β€” Celsius vs Volts
// }

Zero runtime cost β€” newtypes compile down to raw f64 values. The wrapper is purely a type-level concept.
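One way to make the zero-cost claim concrete: `std::mem::size_of` reports that the wrapper adds no bytes.

```rust
use std::mem::size_of;

#[derive(Debug, Clone, Copy, PartialEq, PartialOrd)]
pub struct Celsius(pub f64);

/// The newtype occupies exactly the bytes of the inner f64;
/// the unit distinction exists only at compile time.
pub fn newtype_is_free() -> bool {
    size_of::<Celsius>() == size_of::<f64>()
}
```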

Newtype Macro for Hardware Quantities

Writing newtypes by hand gets repetitive. A macro eliminates the boilerplate:

/// Generate a newtype for a physical quantity.
macro_rules! quantity {
    ($Name:ident, $unit:expr) => {
        #[derive(Debug, Clone, Copy, PartialEq, PartialOrd)]
        pub struct $Name(pub f64);

        impl $Name {
            pub fn new(value: f64) -> Self { $Name(value) }
            pub fn value(self) -> f64 { self.0 }
        }

        impl std::fmt::Display for $Name {
            fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
                write!(f, "{:.2} {}", self.0, $unit)
            }
        }

        impl std::ops::Add for $Name {
            type Output = Self;
            fn add(self, rhs: Self) -> Self { $Name(self.0 + rhs.0) }
        }

        impl std::ops::Sub for $Name {
            type Output = Self;
            fn sub(self, rhs: Self) -> Self { $Name(self.0 - rhs.0) }
        }
    };
}

// Usage:
quantity!(Celsius, "Β°C");
quantity!(Fahrenheit, "Β°F");
quantity!(Volts, "V");
quantity!(Millivolts, "mV");
quantity!(Rpm, "RPM");
quantity!(Watts, "W");
quantity!(Amperes, "A");
quantity!(Pascals, "Pa");
quantity!(Hertz, "Hz");
quantity!(Bytes, "B");

Each line generates a complete type with Display, Add, Sub, and comparison operators. All at zero runtime cost.

Physics caveat: The macro generates Add for all quantities, including Celsius. Adding absolute temperatures (25Β°C + 30Β°C = 55Β°C) is not physically meaningful β€” you’d need a separate TemperatureDelta type for differences. The uom crate (shown later) handles this correctly. For simple sensor diagnostics where you only compare and display, you can omit Add/Sub from temperature types and keep them for quantities where addition makes sense (Watts, Volts, Bytes). If you need delta arithmetic, define a CelsiusDelta(f64) newtype with impl Add<CelsiusDelta> for Celsius.
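The `CelsiusDelta` split described in the caveat fits in a few lines. A sketch:

```rust
use std::ops::{Add, Sub};

#[derive(Debug, Clone, Copy, PartialEq, PartialOrd)]
pub struct Celsius(pub f64);

/// A temperature *difference*: the quantity for which addition is meaningful.
#[derive(Debug, Clone, Copy, PartialEq, PartialOrd)]
pub struct CelsiusDelta(pub f64);

// Absolute + delta = absolute (physically meaningful).
impl Add<CelsiusDelta> for Celsius {
    type Output = Celsius;
    fn add(self, rhs: CelsiusDelta) -> Celsius { Celsius(self.0 + rhs.0) }
}

// Absolute - absolute = delta (also meaningful).
impl Sub for Celsius {
    type Output = CelsiusDelta;
    fn sub(self, rhs: Celsius) -> CelsiusDelta { CelsiusDelta(self.0 - rhs.0) }
}

// Celsius + Celsius has no Add impl, so `Celsius(25.0) + Celsius(30.0)`
// is exactly the compile error the caveat calls for.
```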

Applied Example: Sensor Pipeline

A typical diagnostic reads raw ADC values, converts them to physical units, and compares against thresholds. With dimensional types, each step is type-checked:

macro_rules! quantity {
    ($Name:ident, $unit:expr) => {
        #[derive(Debug, Clone, Copy, PartialEq, PartialOrd)]
        pub struct $Name(pub f64);
        impl $Name {
            pub fn new(value: f64) -> Self { $Name(value) }
            pub fn value(self) -> f64 { self.0 }
        }
        impl std::fmt::Display for $Name {
            fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
                write!(f, "{:.2} {}", self.0, $unit)
            }
        }
    };
}
quantity!(Celsius, "Β°C");
quantity!(Volts, "V");
quantity!(Rpm, "RPM");

/// Raw ADC reading β€” not yet a physical quantity.
#[derive(Debug, Clone, Copy)]
pub struct AdcReading {
    pub channel: u8,
    pub raw: u16,   // 12-bit ADC value (0–4095)
}

/// Calibration coefficients for converting ADC β†’ physical unit.
pub struct TemperatureCalibration {
    pub offset: f64,
    pub scale: f64,   // Β°C per ADC count
}

pub struct VoltageCalibration {
    pub reference_mv: f64,
    pub divider_ratio: f64,
}

impl TemperatureCalibration {
    /// Convert raw ADC β†’ Celsius. The return type guarantees the output is Celsius.
    pub fn convert(&self, adc: AdcReading) -> Celsius {
        Celsius::new(adc.raw as f64 * self.scale + self.offset)
    }
}

impl VoltageCalibration {
    /// Convert raw ADC β†’ Volts. The return type guarantees the output is Volts.
    pub fn convert(&self, adc: AdcReading) -> Volts {
        Volts::new(adc.raw as f64 * self.reference_mv / 4096.0 / self.divider_ratio / 1000.0)
    }
}

/// Threshold check β€” only compiles if units match.
pub struct Threshold<T: PartialOrd> {
    pub warning: T,
    pub critical: T,
}

#[derive(Debug, PartialEq)]
pub enum ThresholdResult {
    Normal,
    Warning,
    Critical,
}

impl<T: PartialOrd> Threshold<T> {
    pub fn check(&self, value: &T) -> ThresholdResult {
        if *value >= self.critical {
            ThresholdResult::Critical
        } else if *value >= self.warning {
            ThresholdResult::Warning
        } else {
            ThresholdResult::Normal
        }
    }
}

fn sensor_pipeline_example() {
    let temp_cal = TemperatureCalibration { offset: -50.0, scale: 0.0625 };
    let temp_threshold = Threshold {
        warning: Celsius::new(85.0),
        critical: Celsius::new(100.0),
    };

    let adc = AdcReading { channel: 0, raw: 2048 };
    let temp: Celsius = temp_cal.convert(adc);

    let result = temp_threshold.check(&temp);
    println!("Temperature: {temp}, Status: {result:?}");

    // This won't compile β€” can't check a Celsius reading against a Volts threshold:
    // let volt_threshold = Threshold {
    //     warning: Volts::new(11.4),
    //     critical: Volts::new(10.8),
    // };
    // volt_threshold.check(&temp);  // ❌ ERROR: expected &Volts, found &Celsius
}

The entire pipeline is statically type-checked:

  • ADC readings are raw counts (not units)
  • Calibration produces typed quantities (Celsius, Volts)
  • Thresholds are generic over the quantity type
  • Comparing Celsius against Volts is a compile error

The uom Crate

For production use, the uom crate provides a comprehensive dimensional analysis system with hundreds of units, automatic conversion, and zero runtime overhead:

// Cargo.toml: uom = { version = "0.36", features = ["f64"] }
//
// use uom::si::f64::*;
// use uom::si::thermodynamic_temperature::degree_celsius;
// use uom::si::electric_potential::volt;
// use uom::si::power::watt;
//
// let temp = ThermodynamicTemperature::new::<degree_celsius>(85.0);
// let voltage = ElectricPotential::new::<volt>(12.0);
// let power = Power::new::<watt>(250.0);
//
// // temp + voltage;  // ❌ compile error β€” can't add temperature to voltage
// // power > temp;    // ❌ compile error β€” can't compare power to temperature

Use uom when you need automatic derived-unit support (e.g., Watts = Volts Γ— Amperes). Use hand-rolled newtypes when you need only simple quantities without derived-unit arithmetic.
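There is also a middle ground between the two: hand-write only the derived-unit rules you need as `Mul` impls. A sketch of the V Γ— A = W rule:

```rust
use std::ops::Mul;

#[derive(Debug, Clone, Copy, PartialEq, PartialOrd)]
pub struct Volts(pub f64);
#[derive(Debug, Clone, Copy, PartialEq, PartialOrd)]
pub struct Amperes(pub f64);
#[derive(Debug, Clone, Copy, PartialEq, PartialOrd)]
pub struct Watts(pub f64);

// One derived-unit rule, written by hand: V * A = W.
// uom derives every such rule from the SI dimension system;
// hand-rolling means writing only the impls you actually use.
impl Mul<Amperes> for Volts {
    type Output = Watts;
    fn mul(self, rhs: Amperes) -> Watts { Watts(self.0 * rhs.0) }
}

pub fn psu_power() -> Watts {
    Volts(12.0) * Amperes(20.0)
}
```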

When to Use Dimensional Types

| Scenario | Recommendation |
|----------|----------------|
| Sensor readings (temp, voltage, fan) | βœ… Always β€” prevents unit confusion |
| Threshold comparisons | βœ… Always β€” generic Threshold<T> |
| Cross-subsystem data exchange | βœ… Always β€” enforce contracts at API boundaries |
| Internal calculations (same unit throughout) | ⚠️ Optional β€” less bug-prone |
| String/display formatting | ❌ Use Display impl on the quantity type |

Sensor Pipeline Type Flow

flowchart LR
    RAW["raw: &[u8]"] -->|parse| C["Celsius(f64)"]
    RAW -->|parse| R["Rpm(f64)"]
    RAW -->|parse| V["Volts(f64)"]
    C -->|threshold check| TC["Threshold<Celsius>"]
    R -->|threshold check| TR["Threshold<Rpm>"]
    C -.->|"C + R"| ERR["❌ mismatched types"]
    style RAW fill:#e1f5fe,color:#000
    style C fill:#c8e6c9,color:#000
    style R fill:#fff3e0,color:#000
    style V fill:#e8eaf6,color:#000
    style TC fill:#c8e6c9,color:#000
    style TR fill:#fff3e0,color:#000
    style ERR fill:#ffcdd2,color:#000

Exercise: Power Budget Calculator

Create Watts(f64) and Amperes(f64) newtypes. Implement:

  • Watts::from_vi(volts: Volts, amps: Amperes) -> Watts (P = V Γ— I)
  • A PowerBudget that tracks total watts and rejects additions that exceed a configured limit.
  • Attempting Watts + Celsius should be a compile error.
Solution
#[derive(Debug, Clone, Copy, PartialEq, PartialOrd)]
pub struct Watts(pub f64);

#[derive(Debug, Clone, Copy, PartialEq, PartialOrd)]
pub struct Amperes(pub f64);

#[derive(Debug, Clone, Copy, PartialEq, PartialOrd)]
pub struct Volts(pub f64);

#[derive(Debug, Clone, Copy, PartialEq, PartialOrd)]
pub struct Celsius(pub f64);

impl Watts {
    pub fn from_vi(volts: Volts, amps: Amperes) -> Self {
        Watts(volts.0 * amps.0)
    }
}

impl std::ops::Add for Watts {
    type Output = Watts;
    fn add(self, rhs: Watts) -> Watts {
        Watts(self.0 + rhs.0)
    }
}

pub struct PowerBudget {
    total: Watts,
    limit: Watts,
}

impl PowerBudget {
    pub fn new(limit: Watts) -> Self {
        PowerBudget { total: Watts(0.0), limit }
    }
    pub fn add(&mut self, w: Watts) -> Result<(), String> {
        let new_total = Watts(self.total.0 + w.0);
        if new_total > self.limit {
            return Err(format!("budget exceeded: {:?} > {:?}", new_total, self.limit));
        }
        self.total = new_total;
        Ok(())
    }
}

// ❌ Compile error: Watts + Celsius β†’ "mismatched types"
// let bad = Watts(100.0) + Celsius(50.0);

Key Takeaways

  1. Newtypes prevent unit confusion at zero cost β€” Celsius and Rpm are both f64 inside, but the compiler treats them as different types.
  2. The Mars Climate Orbiter bug is impossible β€” passing Pounds where Newtons is expected is a compile error.
  3. quantity! macro reduces boilerplate β€” stamp out Display, arithmetic, and threshold logic for each unit.
  4. uom crate handles derived units β€” use it when you need Watts = Volts Γ— Amperes automatically.
  5. Threshold is generic over the quantity β€” Threshold<Celsius> can’t accidentally compare to Threshold<Rpm>.

Validated Boundaries β€” Parse, Don’t Validate 🟑

What you’ll learn: How to validate data exactly once at the system boundary, carry the proof of validity in a dedicated type, and never re-check β€” applied to IPMI FRU records (flat bytes), Redfish JSON (structured documents), and IPMI SEL records (polymorphic binary with nested dispatch), with a complete end-to-end walkthrough.

Cross-references: ch02 (typed commands), ch06 (dimensional types), ch11 (trick 2 β€” sealed traits, trick 3 β€” #[non_exhaustive], trick 5 β€” FromStr), ch14 (proptest)

The Problem: Shotgun Validation

In typical code, validation is scattered everywhere. Every function that receives data re-checks it β€œjust in case”:

// C β€” validation scattered across the codebase
int process_fru_data(uint8_t *data, int len) {
    if (data == NULL) return -1;          // check: non-null
    if (len < 8) return -1;              // check: minimum length
    if (data[0] != 0x01) return -1;      // check: format version
    if (checksum(data, len) != 0) return -1; // check: checksum

    // ... 10 more functions that repeat the same checks ...
}

This pattern (β€œshotgun validation”) has two problems:

  1. Redundancy β€” the same checks appear in dozens of places
  2. Incompleteness β€” forget one check in one function and you have a bug

Parse, Don’t Validate

The correct-by-construction approach: validate once at the boundary, then carry the proof of validity in the type.

/// Raw bytes from the wire β€” not yet validated.
#[derive(Debug)]
pub struct RawFruData(Vec<u8>);

Case Study: IPMI FRU Data

#[derive(Debug)]
pub struct RawFruData(Vec<u8>);

/// Validated IPMI FRU data. Can only be created via TryFrom,
/// which enforces all invariants. Once you have a ValidFru,
/// all data is guaranteed correct.
#[derive(Debug)]
pub struct ValidFru {
    format_version: u8,
    internal_area_offset: u8,
    chassis_area_offset: u8,
    board_area_offset: u8,
    product_area_offset: u8,
    data: Vec<u8>,
}

#[derive(Debug)]
pub enum FruError {
    TooShort { actual: usize, minimum: usize },
    BadFormatVersion(u8),
    ChecksumMismatch { expected: u8, actual: u8 },
    InvalidAreaOffset { area: &'static str, offset: u8 },
}

impl std::fmt::Display for FruError {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        match self {
            Self::TooShort { actual, minimum } =>
                write!(f, "FRU data too short: {actual} bytes (minimum {minimum})"),
            Self::BadFormatVersion(v) =>
                write!(f, "unsupported FRU format version: {v}"),
            Self::ChecksumMismatch { expected, actual } =>
                write!(f, "checksum mismatch: expected 0x{expected:02X}, got 0x{actual:02X}"),
            Self::InvalidAreaOffset { area, offset } =>
                write!(f, "invalid {area} area offset: {offset}"),
        }
    }
}

impl TryFrom<RawFruData> for ValidFru {
    type Error = FruError;

    fn try_from(raw: RawFruData) -> Result<Self, FruError> {
        let data = raw.0;

        // 1. Length check
        if data.len() < 8 {
            return Err(FruError::TooShort {
                actual: data.len(),
                minimum: 8,
            });
        }

        // 2. Format version
        if data[0] != 0x01 {
            return Err(FruError::BadFormatVersion(data[0]));
        }

        // 3. Checksum (header is first 8 bytes, checksum at byte 7)
        let checksum: u8 = data[..8].iter().fold(0u8, |acc, &b| acc.wrapping_add(b));
        if checksum != 0 {
            return Err(FruError::ChecksumMismatch {
                expected: 0,
                actual: checksum,
            });
        }

        // 4. Area offsets must be within bounds
        for (name, idx) in [
            ("internal", 1), ("chassis", 2),
            ("board", 3), ("product", 4),
        ] {
            let offset = data[idx];
            if offset != 0 && (offset as usize * 8) >= data.len() {
                return Err(FruError::InvalidAreaOffset {
                    area: name,
                    offset,
                });
            }
        }

        // All checks passed β€” construct the validated type
        Ok(ValidFru {
            format_version: data[0],
            internal_area_offset: data[1],
            chassis_area_offset: data[2],
            board_area_offset: data[3],
            product_area_offset: data[4],
            data,
        })
    }
}

impl ValidFru {
    /// No validation needed β€” the type guarantees correctness.
    pub fn board_area(&self) -> Option<&[u8]> {
        if self.board_area_offset == 0 {
            return None;
        }
        let start = self.board_area_offset as usize * 8;
        Some(&self.data[start..])  // safe β€” bounds checked during parsing
    }

    pub fn product_area(&self) -> Option<&[u8]> {
        if self.product_area_offset == 0 {
            return None;
        }
        let start = self.product_area_offset as usize * 8;
        Some(&self.data[start..])
    }

    pub fn format_version(&self) -> u8 {
        self.format_version
    }
}

Any function that takes &ValidFru knows the data is well-formed. No re-checking:

pub struct ValidFru { board_area_offset: u8, data: Vec<u8> }
impl ValidFru {
    pub fn board_area(&self) -> Option<&[u8]> { None }
}

/// This function does NOT need to validate the FRU data.
/// The type signature guarantees it's already valid.
fn extract_board_serial(fru: &ValidFru) -> Option<String> {
    let board = fru.board_area()?;
    // ... parse serial from board area ...
    // No bounds checks needed β€” ValidFru guarantees offsets are in range
    Some("ABC123".to_string()) // stub
}

fn extract_board_manufacturer(fru: &ValidFru) -> Option<String> {
    let board = fru.board_area()?;
    // Still no validation needed β€” same guarantee
    Some("Acme Corp".to_string()) // stub
}
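What the boundary call itself looks like: a condensed `ValidFru` (header checks only, error type simplified to `String` for brevity) constructed exactly once via `TryFrom`.

```rust
use std::convert::TryFrom;

pub struct RawFruData(pub Vec<u8>);

/// Condensed ValidFru: header checks only, so the snippet stands alone.
pub struct ValidFru { data: Vec<u8> }

impl ValidFru {
    pub fn bytes(&self) -> &[u8] { &self.data }
}

impl TryFrom<RawFruData> for ValidFru {
    type Error = String;
    fn try_from(raw: RawFruData) -> Result<Self, String> {
        let data = raw.0;
        if data.len() < 8 { return Err("too short".into()); }
        if data[0] != 0x01 { return Err("bad format version".into()); }
        // Header bytes (including the checksum byte) must sum to zero mod 256.
        let sum: u8 = data[..8].iter().fold(0u8, |a, &b| a.wrapping_add(b));
        if sum != 0 { return Err("checksum mismatch".into()); }
        Ok(ValidFru { data })
    }
}

/// The boundary: raw wire bytes in, proof-carrying type out, exactly once.
pub fn at_the_boundary(wire_bytes: Vec<u8>) -> Result<ValidFru, String> {
    ValidFru::try_from(RawFruData(wire_bytes))
}
```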

Validated Redfish JSON

The same pattern applies to Redfish API responses. Parse once, carry validity in the type:

use std::collections::HashMap;

/// Raw JSON string from a Redfish endpoint.
pub struct RawRedfishResponse(pub String);

/// A validated Redfish Thermal response.
/// All required fields are guaranteed present and within range.
#[derive(Debug)]
pub struct ValidThermalResponse {
    pub temperatures: Vec<ValidTemperatureReading>,
    pub fans: Vec<ValidFanReading>,
}

#[derive(Debug)]
pub struct ValidTemperatureReading {
    pub name: String,
    pub reading_celsius: f64,     // guaranteed non-NaN, within sensor range
    pub upper_critical: f64,
    pub status: HealthStatus,
}

#[derive(Debug)]
pub struct ValidFanReading {
    pub name: String,
    pub reading_rpm: u32,        // guaranteed > 0 for present fans
    pub status: HealthStatus,
}

#[derive(Debug, Clone, Copy, PartialEq)]
pub enum HealthStatus {
    Ok,
    Warning,
    Critical,
}

#[derive(Debug)]
pub enum RedfishValidationError {
    MissingField(&'static str),
    OutOfRange { field: &'static str, value: f64 },
    InvalidStatus(String),
}

impl std::fmt::Display for RedfishValidationError {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        match self {
            Self::MissingField(name) => write!(f, "missing required field: {name}"),
            Self::OutOfRange { field, value } =>
                write!(f, "field {field} out of range: {value}"),
            Self::InvalidStatus(s) => write!(f, "invalid health status: {s}"),
        }
    }
}

// Once validated, downstream code never re-checks:
fn check_thermal_health(thermal: &ValidThermalResponse) -> bool {
    // No need to check for missing fields or NaN values.
    // ValidThermalResponse guarantees all readings are sensible.
    thermal.temperatures.iter().all(|t| {
        t.reading_celsius < t.upper_critical && t.status != HealthStatus::Critical
    }) && thermal.fans.iter().all(|f| {
        f.reading_rpm > 0 && f.status != HealthStatus::Critical
    })
}
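A small piece of that validation boundary, sketched: parsing the Redfish `Status.Health` string (the accepted values follow Redfish's "OK"/"Warning"/"Critical") and rejecting non-finite readings. The `validate_reading` helper is illustrative, not part of the chapter's API.

```rust
/// Health status from a Redfish `Status.Health` field.
#[derive(Debug, Clone, Copy, PartialEq)]
pub enum HealthStatus { Ok, Warning, Critical }

#[derive(Debug)]
pub enum RedfishValidationError {
    MissingField(&'static str),
    OutOfRange { field: &'static str, value: f64 },
    InvalidStatus(String),
}

impl std::str::FromStr for HealthStatus {
    type Err = RedfishValidationError;
    fn from_str(s: &str) -> Result<Self, Self::Err> {
        match s {
            "OK" => Ok(HealthStatus::Ok),
            "Warning" => Ok(HealthStatus::Warning),
            "Critical" => Ok(HealthStatus::Critical),
            other => Err(RedfishValidationError::InvalidStatus(other.to_string())),
        }
    }
}

/// Readings must be finite; rejecting NaN/infinity here is what lets
/// ValidTemperatureReading promise "non-NaN" to all downstream code.
pub fn validate_reading(
    field: &'static str,
    value: f64,
) -> Result<f64, RedfishValidationError> {
    if value.is_finite() {
        Ok(value)
    } else {
        Err(RedfishValidationError::OutOfRange { field, value })
    }
}
```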

Polymorphic Validation: IPMI SEL Records

The first two case studies validated flat structures β€” a fixed byte layout (FRU) and a known JSON schema (Redfish). Real-world data is often polymorphic: the interpretation of later bytes depends on earlier bytes. IPMI System Event Log (SEL) records are the canonical example.

The Shape of the Problem

Every SEL record is exactly 16 bytes. But what those bytes mean depends on a dispatch chain:

Byte 2: Record Type
  β”œβ”€ 0x02 β†’ System Event
  β”‚    Byte 10[6:4]: Event Type
  β”‚      β”œβ”€ 0x01       β†’ Threshold event (reading + threshold in data bytes 2-3)
  β”‚      β”œβ”€ 0x02-0x0C  β†’ Discrete event (bit in offset field)
  β”‚      └─ 0x6F       β†’ Sensor-specific (meaning depends on Sensor Type in byte 7)
  β”‚           Byte 7: Sensor Type
  β”‚             β”œβ”€ 0x01 β†’ Temperature events
  β”‚             β”œβ”€ 0x02 β†’ Voltage events
  β”‚             β”œβ”€ 0x04 β†’ Fan events
  β”‚             β”œβ”€ 0x07 β†’ Processor events
  β”‚             β”œβ”€ 0x0C β†’ Memory events
  β”‚             β”œβ”€ 0x08 β†’ Power Supply events
  β”‚             └─ ...  β†’ (42 sensor types in IPMI 2.0 Table 42-3)
  β”œβ”€ 0xC0-0xDF β†’ OEM Timestamped
  └─ 0xE0-0xFF β†’ OEM Non-Timestamped

In C, this is a switch inside a switch inside a switch, with each level sharing the same uint8_t *data pointer. Forget one level, misread the spec table, or index the wrong byte β€” the bug is silent.

// C β€” the polymorphic parsing problem
void process_sel_entry(uint8_t *data, int len) {
    if (data[2] == 0x02) {  // system event
        uint8_t event_type = data[10] & 0x7F;  // 7-bit event type field
        if (event_type == 0x01) {  // threshold
            uint8_t reading = data[11];   // πŸ› or is it data[13]?
            uint8_t threshold = data[12]; // πŸ› spec says byte 12 is trigger, not threshold
            printf("Temp: %d crossed %d\n", reading, threshold);
        } else if (event_type == 0x6F) {  // sensor-specific
            uint8_t sensor_type = data[7];
            if (sensor_type == 0x0C) {  // memory
                // πŸ› forgot to check event data 1 offset bits
                printf("Memory ECC error\n");
            }
            // πŸ› no else β€” silently drops 30+ other sensor types
        }
    }
    // πŸ› OEM record types silently ignored
}

Step 1 β€” Parse the Outer Frame

The first TryFrom dispatches on record type β€” the outermost layer of the union:

/// Raw 16-byte SEL record, straight from `Get SEL Entry` (IPMI cmd 0x43).
pub struct RawSelRecord(pub [u8; 16]);

/// Validated SEL record β€” record type dispatched, all fields checked.
pub enum ValidSelRecord {
    SystemEvent(SystemEventRecord),
    OemTimestamped(OemTimestampedRecord),
    OemNonTimestamped(OemNonTimestampedRecord),
}

#[derive(Debug)]
pub struct OemTimestampedRecord {
    pub record_id: u16,
    pub timestamp: u32,
    pub manufacturer_id: [u8; 3],
    pub oem_data: [u8; 6],
}

#[derive(Debug)]
pub struct OemNonTimestampedRecord {
    pub record_id: u16,
    pub oem_data: [u8; 13],
}

#[derive(Debug)]
pub enum SelParseError {
    UnknownRecordType(u8),
    UnknownSensorType(u8),
    UnknownEventType(u8),
    InvalidEventData { reason: &'static str },
}

impl std::fmt::Display for SelParseError {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        match self {
            Self::UnknownRecordType(t) => write!(f, "unknown record type: 0x{t:02X}"),
            Self::UnknownSensorType(t) => write!(f, "unknown sensor type: 0x{t:02X}"),
            Self::UnknownEventType(t) => write!(f, "unknown event type: 0x{t:02X}"),
            Self::InvalidEventData { reason } => write!(f, "invalid event data: {reason}"),
        }
    }
}

impl TryFrom<RawSelRecord> for ValidSelRecord {
    type Error = SelParseError;

    fn try_from(raw: RawSelRecord) -> Result<Self, SelParseError> {
        let d = &raw.0;
        let record_id = u16::from_le_bytes([d[0], d[1]]);

        match d[2] {
            0x02 => {
                let system = parse_system_event(record_id, d)?;
                Ok(ValidSelRecord::SystemEvent(system))
            }
            0xC0..=0xDF => {
                Ok(ValidSelRecord::OemTimestamped(OemTimestampedRecord {
                    record_id,
                    timestamp: u32::from_le_bytes([d[3], d[4], d[5], d[6]]),
                    manufacturer_id: [d[7], d[8], d[9]],
                    oem_data: [d[10], d[11], d[12], d[13], d[14], d[15]],
                }))
            }
            0xE0..=0xFF => {
                Ok(ValidSelRecord::OemNonTimestamped(OemNonTimestampedRecord {
                    record_id,
                    oem_data: [d[3], d[4], d[5], d[6], d[7], d[8], d[9],
                               d[10], d[11], d[12], d[13], d[14], d[15]],
                }))
            }
            other => Err(SelParseError::UnknownRecordType(other)),
        }
    }
}

After this boundary, every consumer matches on the enum. The compiler enforces handling all three record types β€” you can’t β€œforget” OEM records.
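
The rejection path is worth seeing in isolation. Below is the record-type dispatch condensed to a standalone sketch; RecordKind is a stand-in for the full ValidSelRecord enum, and classify mirrors the match in the TryFrom above. Anything outside the three handled ranges is rejected at the boundary, so downstream code never sees an unclassified byte:

```rust
// RecordKind stands in for the full ValidSelRecord enum; classify mirrors
// the record-type match in the TryFrom above.
#[derive(Debug, PartialEq)]
enum RecordKind {
    SystemEvent,
    OemTimestamped,
    OemNonTimestamped,
}

fn classify(record_type: u8) -> Result<RecordKind, u8> {
    match record_type {
        0x02 => Ok(RecordKind::SystemEvent),
        0xC0..=0xDF => Ok(RecordKind::OemTimestamped),
        0xE0..=0xFF => Ok(RecordKind::OemNonTimestamped),
        other => Err(other), // everything else rejected at the boundary
    }
}

fn main() {
    assert_eq!(classify(0x02), Ok(RecordKind::SystemEvent));
    assert_eq!(classify(0xD5), Ok(RecordKind::OemTimestamped));
    assert_eq!(classify(0x03), Err(0x03)); // unhandled type: an error, not a silent drop
    println!("boundary holds");
}
```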

Step 2 β€” Parse the System Event: Sensor Type β†’ Typed Event

The inner dispatch turns the event data bytes into a sum type indexed by sensor type. This is where the C switch-in-a-switch becomes a nested enum:

#[derive(Debug)]
pub struct SystemEventRecord {
    pub record_id: u16,
    pub timestamp: u32,
    pub generator: GeneratorId,
    pub sensor_type: SensorType,
    pub sensor_number: u8,
    pub event_direction: EventDirection,
    pub event: TypedEvent,      // ← the key: event data is TYPED
}

#[derive(Debug)]
pub enum GeneratorId {
    Software(u8),
    Ipmb { slave_addr: u8, channel: u8, lun: u8 },
}

#[derive(Debug, Clone, Copy, PartialEq)]
pub enum EventDirection { Assertion, Deassertion }

// ──── The Sensor/Event Type Hierarchy ────

/// Sensor types from IPMI Table 42-3. Non-exhaustive because future
/// IPMI revisions and OEM ranges will add variants (see ch11 trick 3).
#[non_exhaustive]
#[derive(Debug, Clone, Copy, PartialEq)]
pub enum SensorType {
    Temperature,    // 0x01
    Voltage,        // 0x02
    Current,        // 0x03
    Fan,            // 0x04
    PhysicalSecurity, // 0x05
    Processor,      // 0x07
    PowerSupply,    // 0x08
    Memory,         // 0x0C
    SystemEvent,    // 0x12
    Watchdog2,      // 0x23
}

/// The polymorphic payload β€” each variant carries its own typed data.
#[derive(Debug)]
pub enum TypedEvent {
    Threshold(ThresholdEvent),
    SensorSpecific(SensorSpecificEvent),
    Discrete { offset: u8, event_data: [u8; 3] },
}

/// Threshold events carry the trigger reading and threshold value.
/// Both are raw sensor values (pre-linearization), kept as u8.
/// After SDR linearization, they become dimensional types (ch06).
#[derive(Debug)]
pub struct ThresholdEvent {
    pub crossing: ThresholdCrossing,
    pub trigger_reading: u8,
    pub threshold_value: u8,
}

#[derive(Debug, Clone, Copy, PartialEq)]
pub enum ThresholdCrossing {
    LowerNonCriticalLow,
    LowerNonCriticalHigh,
    LowerCriticalLow,
    LowerCriticalHigh,
    LowerNonRecoverableLow,
    LowerNonRecoverableHigh,
    UpperNonCriticalLow,
    UpperNonCriticalHigh,
    UpperCriticalLow,
    UpperCriticalHigh,
    UpperNonRecoverableLow,
    UpperNonRecoverableHigh,
}

/// Sensor-specific events β€” each sensor type gets its own variant
/// with an exhaustive enum of that sensor's defined events.
#[derive(Debug)]
pub enum SensorSpecificEvent {
    Temperature(TempEvent),
    Voltage(VoltageEvent),
    Fan(FanEvent),
    Processor(ProcessorEvent),
    PowerSupply(PowerSupplyEvent),
    Memory(MemoryEvent),
    PhysicalSecurity(PhysicalSecurityEvent),
    Watchdog(WatchdogEvent),
}

// ──── Per-sensor-type event enums (from IPMI Table 42-3) ────

#[derive(Debug, Clone, Copy, PartialEq)]
pub enum MemoryEvent {
    CorrectableEcc,
    UncorrectableEcc,
    Parity,
    MemoryBoardScrubFailed,
    MemoryDeviceDisabled,
    CorrectableEccLogLimit,
    PresenceDetected,
    ConfigurationError,
    Spare,
    Throttled,
    CriticalOvertemperature,
}

#[derive(Debug, Clone, Copy, PartialEq)]
pub enum PowerSupplyEvent {
    PresenceDetected,
    Failure,
    PredictiveFailure,
    InputLost,
    InputOutOfRange,
    InputLostOrOutOfRange,
    ConfigurationError,
    InactiveStandby,
}

#[derive(Debug, Clone, Copy, PartialEq)]
pub enum TempEvent {
    UpperNonCritical,
    UpperCritical,
    UpperNonRecoverable,
    LowerNonCritical,
    LowerCritical,
    LowerNonRecoverable,
}

#[derive(Debug, Clone, Copy, PartialEq)]
pub enum VoltageEvent {
    UpperNonCritical,
    UpperCritical,
    UpperNonRecoverable,
    LowerNonCritical,
    LowerCritical,
    LowerNonRecoverable,
}

#[derive(Debug, Clone, Copy, PartialEq)]
pub enum FanEvent {
    UpperNonCritical,
    UpperCritical,
    UpperNonRecoverable,
    LowerNonCritical,
    LowerCritical,
    LowerNonRecoverable,
}

#[derive(Debug, Clone, Copy, PartialEq)]
pub enum ProcessorEvent {
    Ierr,
    ThermalTrip,
    Frb1BistFailure,
    Frb2HangInPost,
    Frb3ProcessorStartupFailure,
    ConfigurationError,
    UncorrectableMachineCheck,
    PresenceDetected,
    Disabled,
    TerminatorPresenceDetected,
    Throttled,
}

#[derive(Debug, Clone, Copy, PartialEq)]
pub enum PhysicalSecurityEvent {
    ChassisIntrusion,
    DriveIntrusion,
    IOCardAreaIntrusion,
    ProcessorAreaIntrusion,
    LanLeashedLost,
    UnauthorizedDocking,
    FanAreaIntrusion,
}

#[derive(Debug, Clone, Copy, PartialEq)]
pub enum WatchdogEvent {
    BiosReset,
    OsReset,
    OsShutdown,
    OsPowerDown,
    OsPowerCycle,
    BiosNmi,
    Timer,
}

Step 3 β€” The Parser Wiring

fn parse_system_event(record_id: u16, d: &[u8; 16]) -> Result<SystemEventRecord, SelParseError> {
    let timestamp = u32::from_le_bytes([d[3], d[4], d[5], d[6]]);

    let generator = if d[7] & 0x01 == 0 {
        GeneratorId::Ipmb {
            slave_addr: d[7] & 0xFE,
            channel: (d[8] >> 4) & 0x0F,
            lun: d[8] & 0x03,
        }
    } else {
        GeneratorId::Software(d[7])
    };

    let sensor_type = parse_sensor_type(d[10])?;
    let sensor_number = d[11];
    let event_direction = if d[12] & 0x80 != 0 {
        EventDirection::Deassertion
    } else {
        EventDirection::Assertion
    };

    let event_type_code = d[12] & 0x7F;
    let event_data = [d[13], d[14], d[15]];

    let event = match event_type_code {
        0x01 => {
            // Threshold β€” event data byte 2 is trigger reading, byte 3 is threshold
            let offset = event_data[0] & 0x0F;
            TypedEvent::Threshold(ThresholdEvent {
                crossing: parse_threshold_crossing(offset)?,
                trigger_reading: event_data[1],
                threshold_value: event_data[2],
            })
        }
        0x6F => {
            // Sensor-specific β€” dispatch on sensor type
            let offset = event_data[0] & 0x0F;
            let specific = parse_sensor_specific(&sensor_type, offset)?;
            TypedEvent::SensorSpecific(specific)
        }
        0x02..=0x0C => {
            // Generic discrete
            TypedEvent::Discrete { offset: event_data[0] & 0x0F, event_data }
        }
        other => return Err(SelParseError::UnknownEventType(other)),
    };

    Ok(SystemEventRecord {
        record_id,
        timestamp,
        generator,
        sensor_type,
        sensor_number,
        event_direction,
        event,
    })
}

fn parse_sensor_type(code: u8) -> Result<SensorType, SelParseError> {
    match code {
        0x01 => Ok(SensorType::Temperature),
        0x02 => Ok(SensorType::Voltage),
        0x03 => Ok(SensorType::Current),
        0x04 => Ok(SensorType::Fan),
        0x05 => Ok(SensorType::PhysicalSecurity),
        0x07 => Ok(SensorType::Processor),
        0x08 => Ok(SensorType::PowerSupply),
        0x0C => Ok(SensorType::Memory),
        0x12 => Ok(SensorType::SystemEvent),
        0x23 => Ok(SensorType::Watchdog2),
        other => Err(SelParseError::UnknownSensorType(other)),
    }
}

fn parse_threshold_crossing(offset: u8) -> Result<ThresholdCrossing, SelParseError> {
    match offset {
        0x00 => Ok(ThresholdCrossing::LowerNonCriticalLow),
        0x01 => Ok(ThresholdCrossing::LowerNonCriticalHigh),
        0x02 => Ok(ThresholdCrossing::LowerCriticalLow),
        0x03 => Ok(ThresholdCrossing::LowerCriticalHigh),
        0x04 => Ok(ThresholdCrossing::LowerNonRecoverableLow),
        0x05 => Ok(ThresholdCrossing::LowerNonRecoverableHigh),
        0x06 => Ok(ThresholdCrossing::UpperNonCriticalLow),
        0x07 => Ok(ThresholdCrossing::UpperNonCriticalHigh),
        0x08 => Ok(ThresholdCrossing::UpperCriticalLow),
        0x09 => Ok(ThresholdCrossing::UpperCriticalHigh),
        0x0A => Ok(ThresholdCrossing::UpperNonRecoverableLow),
        0x0B => Ok(ThresholdCrossing::UpperNonRecoverableHigh),
        _ => Err(SelParseError::InvalidEventData {
            reason: "threshold offset out of range",
        }),
    }
}

fn parse_sensor_specific(
    sensor_type: &SensorType,
    offset: u8,
) -> Result<SensorSpecificEvent, SelParseError> {
    match sensor_type {
        SensorType::Memory => {
            let ev = match offset {
                0x00 => MemoryEvent::CorrectableEcc,
                0x01 => MemoryEvent::UncorrectableEcc,
                0x02 => MemoryEvent::Parity,
                0x03 => MemoryEvent::MemoryBoardScrubFailed,
                0x04 => MemoryEvent::MemoryDeviceDisabled,
                0x05 => MemoryEvent::CorrectableEccLogLimit,
                0x06 => MemoryEvent::PresenceDetected,
                0x07 => MemoryEvent::ConfigurationError,
                0x08 => MemoryEvent::Spare,
                0x09 => MemoryEvent::Throttled,
                0x0A => MemoryEvent::CriticalOvertemperature,
                _ => return Err(SelParseError::InvalidEventData {
                    reason: "unknown memory event offset",
                }),
            };
            Ok(SensorSpecificEvent::Memory(ev))
        }
        SensorType::PowerSupply => {
            let ev = match offset {
                0x00 => PowerSupplyEvent::PresenceDetected,
                0x01 => PowerSupplyEvent::Failure,
                0x02 => PowerSupplyEvent::PredictiveFailure,
                0x03 => PowerSupplyEvent::InputLost,
                0x04 => PowerSupplyEvent::InputOutOfRange,
                0x05 => PowerSupplyEvent::InputLostOrOutOfRange,
                0x06 => PowerSupplyEvent::ConfigurationError,
                0x07 => PowerSupplyEvent::InactiveStandby,
                _ => return Err(SelParseError::InvalidEventData {
                    reason: "unknown power supply event offset",
                }),
            };
            Ok(SensorSpecificEvent::PowerSupply(ev))
        }
        SensorType::Processor => {
            let ev = match offset {
                0x00 => ProcessorEvent::Ierr,
                0x01 => ProcessorEvent::ThermalTrip,
                0x02 => ProcessorEvent::Frb1BistFailure,
                0x03 => ProcessorEvent::Frb2HangInPost,
                0x04 => ProcessorEvent::Frb3ProcessorStartupFailure,
                0x05 => ProcessorEvent::ConfigurationError,
                0x06 => ProcessorEvent::UncorrectableMachineCheck,
                0x07 => ProcessorEvent::PresenceDetected,
                0x08 => ProcessorEvent::Disabled,
                0x09 => ProcessorEvent::TerminatorPresenceDetected,
                0x0A => ProcessorEvent::Throttled,
                _ => return Err(SelParseError::InvalidEventData {
                    reason: "unknown processor event offset",
                }),
            };
            Ok(SensorSpecificEvent::Processor(ev))
        }
        // Pattern repeats for Temperature, Voltage, Fan, etc.
        // Each sensor type maps its offsets to a dedicated enum.
        _ => Err(SelParseError::InvalidEventData {
            reason: "sensor-specific dispatch not implemented for this sensor type",
        }),
    }
}

Step 4 β€” Consuming Typed SEL Records

Once parsed, downstream code pattern-matches on the nested enums. The compiler enforces exhaustive handling β€” no silent fallthrough, no forgotten sensor type:

/// Determine whether a SEL event should trigger a hardware alert.
/// The compiler ensures every variant is handled.
fn should_alert(record: &ValidSelRecord) -> bool {
    match record {
        ValidSelRecord::SystemEvent(sys) => match &sys.event {
            TypedEvent::Threshold(t) => {
                // Any critical or non-recoverable threshold crossing β†’ alert
                matches!(t.crossing,
                    ThresholdCrossing::UpperCriticalLow
                    | ThresholdCrossing::UpperCriticalHigh
                    | ThresholdCrossing::LowerCriticalLow
                    | ThresholdCrossing::LowerCriticalHigh
                    | ThresholdCrossing::UpperNonRecoverableLow
                    | ThresholdCrossing::UpperNonRecoverableHigh
                    | ThresholdCrossing::LowerNonRecoverableLow
                    | ThresholdCrossing::LowerNonRecoverableHigh
                )
            }
            TypedEvent::SensorSpecific(ss) => match ss {
                SensorSpecificEvent::Memory(m) => matches!(m,
                    MemoryEvent::UncorrectableEcc
                    | MemoryEvent::Parity
                    | MemoryEvent::CriticalOvertemperature
                ),
                SensorSpecificEvent::PowerSupply(p) => matches!(p,
                    PowerSupplyEvent::Failure
                    | PowerSupplyEvent::InputLost
                ),
                SensorSpecificEvent::Processor(p) => matches!(p,
                    ProcessorEvent::Ierr
                    | ProcessorEvent::ThermalTrip
                    | ProcessorEvent::UncorrectableMachineCheck
                ),
                // All remaining variants are listed explicitly (no wildcard),
                // so a new SensorSpecificEvent variant in a future version is a
                // ❌ compile error: non-exhaustive patterns
                SensorSpecificEvent::Temperature(_)
                | SensorSpecificEvent::Voltage(_)
                | SensorSpecificEvent::Fan(_)
                | SensorSpecificEvent::PhysicalSecurity(_)
                | SensorSpecificEvent::Watchdog(_) => false,
            },
            TypedEvent::Discrete { .. } => false,
        },
        // OEM records are not alertable in this policy
        ValidSelRecord::OemTimestamped(_) => false,
        ValidSelRecord::OemNonTimestamped(_) => false,
    }
}

/// Generate a human-readable description.
/// Every branch produces a specific message β€” no "unknown event" fallback.
fn describe(record: &ValidSelRecord) -> String {
    match record {
        ValidSelRecord::SystemEvent(sys) => {
            let sensor = format!("{:?} sensor #{}", sys.sensor_type, sys.sensor_number);
            let dir = match sys.event_direction {
                EventDirection::Assertion => "asserted",
                EventDirection::Deassertion => "deasserted",
            };
            match &sys.event {
                TypedEvent::Threshold(t) => {
                    format!("{sensor}: {:?} {dir} (reading: 0x{:02X}, threshold: 0x{:02X})",
                        t.crossing, t.trigger_reading, t.threshold_value)
                }
                TypedEvent::SensorSpecific(ss) => {
                    format!("{sensor}: {ss:?} {dir}")
                }
                TypedEvent::Discrete { offset, .. } => {
                    format!("{sensor}: discrete offset {offset:#x} {dir}")
                }
            }
        }
        ValidSelRecord::OemTimestamped(oem) =>
            format!("OEM record 0x{:04X} (mfr {:02X}{:02X}{:02X})",
                oem.record_id,
                oem.manufacturer_id[0], oem.manufacturer_id[1], oem.manufacturer_id[2]),
        ValidSelRecord::OemNonTimestamped(oem) =>
            format!("OEM non-ts record 0x{:04X}", oem.record_id),
    }
}

Walkthrough: End-to-End SEL Processing

Here’s a complete flow β€” from raw bytes off the wire to an alert decision β€” showing every typed handoff:

/// Process all SEL entries from a BMC, producing typed alerts.
fn process_sel_log(raw_entries: &[[u8; 16]]) -> Vec<String> {
    let mut alerts = Vec::new();

    for (i, raw_bytes) in raw_entries.iter().enumerate() {
        // ─── Boundary: raw bytes β†’ validated record ───
        let raw = RawSelRecord(*raw_bytes);
        let record = match ValidSelRecord::try_from(raw) {
            Ok(r) => r,
            Err(e) => {
                eprintln!("SEL entry {i}: parse error: {e}");
                continue;
            }
        };

        // ─── From here, everything is typed ───

        // 1. Describe the event (exhaustive match β€” every variant covered)
        let description = describe(&record);
        println!("SEL[{i}]: {description}");

        // 2. Check alert policy (exhaustive match β€” compiler proves completeness)
        if should_alert(&record) {
            alerts.push(description);
        }

        // 3. Extract dimensional readings from threshold events
        if let ValidSelRecord::SystemEvent(sys) = &record {
            if let TypedEvent::Threshold(t) = &sys.event {
                // The compiler knows t.trigger_reading is a threshold event reading,
                // not an arbitrary byte. After SDR linearization (ch06), this becomes:
                //   let temp: Celsius = linearize(t.trigger_reading, &sdr);
                // And then Celsius can't be compared with Rpm.
                println!(
                    "  β†’ raw reading: 0x{:02X}, raw threshold: 0x{:02X}",
                    t.trigger_reading, t.threshold_value
                );
            }
        }
    }

    alerts
}

fn main() {
    // Example: two SEL entries (fabricated for illustration)
    let sel_data: Vec<[u8; 16]> = vec![
        // Entry 1: System event, Memory sensor #3, sensor-specific,
        //          offset 0x00 = CorrectableEcc, assertion
        [
            0x01, 0x00,       // record ID: 1
            0x02,             // record type: system event
            0x00, 0x00, 0x00, 0x00, // timestamp (stub)
            0x20,             // generator: IPMB slave addr 0x20
            0x00,             // channel/lun
            0x04,             // event message rev
            0x0C,             // sensor type: Memory (0x0C)
            0x03,             // sensor number: 3
            0x6F,             // event dir: assertion, event type: sensor-specific
            0x00,             // event data 1: offset 0x00 = CorrectableEcc
            0x00, 0x00,       // event data 2-3
        ],
        // Entry 2: System event, Temperature sensor #1, threshold,
        //          offset 0x09 = UpperCriticalHigh, reading=95, threshold=90
        [
            0x02, 0x00,       // record ID: 2
            0x02,             // record type: system event
            0x00, 0x00, 0x00, 0x00, // timestamp (stub)
            0x20,             // generator
            0x00,             // channel/lun
            0x04,             // event message rev
            0x01,             // sensor type: Temperature (0x01)
            0x01,             // sensor number: 1
            0x01,             // event dir: assertion, event type: threshold (0x01)
            0x09,             // event data 1: offset 0x09 = UpperCriticalHigh
            0x5F,             // event data 2: trigger reading (95 raw)
            0x5A,             // event data 3: threshold value (90 raw)
        ],
    ];

    let alerts = process_sel_log(&sel_data);
    println!("\n=== ALERTS ({}) ===", alerts.len());
    for alert in &alerts {
        println!("  🚨 {alert}");
    }
}

Expected output:

SEL[0]: Memory sensor #3: Memory(CorrectableEcc) asserted
SEL[1]: Temperature sensor #1: UpperCriticalHigh asserted (reading: 0x5F, threshold: 0x5A)
  β†’ raw reading: 0x5F, raw threshold: 0x5A

=== ALERTS (1) ===
  🚨 Temperature sensor #1: UpperCriticalHigh asserted (reading: 0x5F, threshold: 0x5A)

Entry 0 (correctable ECC) is logged but not alerted. Entry 1 (upper critical temperature) triggers an alert. Both decisions are enforced by exhaustive pattern matching β€” the compiler proves every sensor type and threshold crossing is handled.
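
The event direction/type byte (byte 12) drives the whole dispatch in parse_system_event. Decoded in isolation for both walkthrough entries, bit 7 is the direction flag and bits 6:0 are the event type code:

```rust
// Decode byte 12 (event dir/type): bit 7 is the direction flag,
// bits 6:0 are the event type code.
fn decode(byte12: u8) -> (bool, u8) {
    let deassertion = byte12 & 0x80 != 0;
    let event_type = byte12 & 0x7F;
    (deassertion, event_type)
}

fn main() {
    assert_eq!(decode(0x6F), (false, 0x6F)); // entry 1: asserted, sensor-specific
    assert_eq!(decode(0x01), (false, 0x01)); // entry 2: asserted, threshold
    assert_eq!(decode(0x81), (true, 0x01));  // hypothetical deasserted threshold
    println!("decode ok");
}
```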

From Parsed Events to Redfish Health: The Consumer Pipeline

The walkthrough above ends with alerts β€” but in a real BMC, parsed SEL records flow into the Redfish health rollup (ch18). The current handoff is a lossy bool:

// ❌ Lossy β€” throws away per-subsystem detail
pub struct SelSummary {
    pub has_critical_events: bool,
    pub total_entries: u32,
}

This loses everything the type system just gave us: which subsystem is affected, what severity level, and whether the reading carries dimensional data. Let’s build the full pipeline.
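
To make the lossiness concrete, consider two hypothetical SEL logs: one containing an uncorrectable ECC error, the other a power supply failure. Both collapse to identical SelSummary values, so the Redfish layer cannot tell which subsystem is hurt:

```rust
// Two hypothetical SEL logs collapse to the same summary: the affected
// subsystem and the severity detail are unrecoverable downstream.
#[derive(Debug, PartialEq)]
struct SelSummary {
    has_critical_events: bool,
    total_entries: u32,
}

fn main() {
    let ecc_log = SelSummary { has_critical_events: true, total_entries: 2 };
    let psu_log = SelSummary { has_critical_events: true, total_entries: 2 };
    assert_eq!(ecc_log, psu_log); // indistinguishable: the subsystem is gone
    println!("summaries equal: {}", ecc_log == psu_log);
}
```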

Step 1 β€” SDR Linearization: Raw Bytes β†’ Dimensional Types (ch06)

Threshold SEL events carry raw sensor readings in event data bytes 2-3. The IPMI SDR (Sensor Data Record) provides the linearization formula. After linearization, the raw byte becomes a dimensional type:

/// SDR linearization coefficients for a single sensor.
/// See IPMI spec section 36.3 for the full formula.
pub struct SdrLinearization {
    pub sensor_type: SensorType,
    pub m: i16,        // multiplier
    pub b: i16,        // offset
    pub r_exp: i8,     // result exponent (power-of-10)
    pub b_exp: i8,     // B exponent
}

/// A linearized sensor reading with its unit attached.
/// The return type depends on the sensor type β€” the compiler
/// enforces that temperature sensors produce Celsius, not Rpm.
#[derive(Debug, Clone)]
pub enum LinearizedReading {
    Temperature(Celsius),
    Voltage(Volts),
    Fan(Rpm),
    Current(Amps),
    Power(Watts),
}

#[derive(Debug, Clone, Copy, PartialEq, PartialOrd)]
pub struct Amps(pub f64);

impl SdrLinearization {
    /// Apply the IPMI linearization formula:
    ///   y = (M Γ— raw + B Γ— 10^B_exp) Γ— 10^R_exp
    /// Returns a dimensional type based on the sensor type.
    pub fn linearize(&self, raw: u8) -> LinearizedReading {
        let y = (self.m as f64 * raw as f64
                + self.b as f64 * 10_f64.powi(self.b_exp as i32))
                * 10_f64.powi(self.r_exp as i32);

        match self.sensor_type {
            SensorType::Temperature => LinearizedReading::Temperature(Celsius(y)),
            SensorType::Voltage     => LinearizedReading::Voltage(Volts(y)),
            SensorType::Fan         => LinearizedReading::Fan(Rpm(y as u32)),
            SensorType::Current     => LinearizedReading::Current(Amps(y)),
            SensorType::PowerSupply => LinearizedReading::Power(Watts(y)),
            // Other sensor types β€” placeholder fallback. Extend the enum as
            // needed; silently labeling unknown readings as temperatures is
            // only acceptable in this sketch.
            _ => LinearizedReading::Temperature(Celsius(y)),
        }
    }
}

With this, and an SDR whose coefficients are the identity (M = 1, B = 0, both exponents zero), the raw byte 0x5F (95 decimal) from our SEL walkthrough becomes Celsius(95.0), and the compiler prevents comparing it with Rpm or Watts.
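
A standalone check of that claim, using hypothetical minimal versions of the ch06 newtypes (the real ones carry conversion and formatting machinery this sketch omits) with identity coefficients hard-coded:

```rust
// Hypothetical minimal versions of the ch06 dimensional newtypes.
#[derive(Debug, Clone, Copy, PartialEq, PartialOrd)]
struct Celsius(f64);

#[allow(dead_code)]
#[derive(Debug, Clone, Copy, PartialEq, PartialOrd)]
struct Rpm(u32);

/// The IPMI linearization formula: y = (M * raw + B * 10^B_exp) * 10^R_exp.
fn linearize_temp(m: i16, b: i16, b_exp: i8, r_exp: i8, raw: u8) -> Celsius {
    Celsius(
        (m as f64 * raw as f64 + b as f64 * 10_f64.powi(b_exp as i32))
            * 10_f64.powi(r_exp as i32),
    )
}

fn main() {
    let reading = linearize_temp(1, 0, 0, 0, 0x5F);   // raw trigger byte
    let threshold = linearize_temp(1, 0, 0, 0, 0x5A); // raw threshold byte
    assert_eq!(reading, Celsius(95.0));
    assert_eq!(threshold, Celsius(90.0));
    assert!(reading > threshold);    // same unit: ordered comparison works
    // let _ = reading > Rpm(3000);  // ❌ rejected: mismatched types
    println!("{reading:?} exceeds {threshold:?}");
}
```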

Step 2 β€” Per-Subsystem Health Classification

Instead of collapsing everything into has_critical_events: bool, classify each parsed SEL event into a per-subsystem health bucket:

/// Health contribution from a single SEL event, classified by subsystem.
#[derive(Debug, Clone)]
pub enum SubsystemHealth {
    Processor(HealthValue),
    Memory(HealthValue),
    PowerSupply(HealthValue),
    Thermal(HealthValue),
    Fan(HealthValue),
    Storage(HealthValue),
    Security(HealthValue),
}

/// Classify a typed SEL event into per-subsystem health.
/// Exhaustive matching ensures every sensor type contributes.
fn classify_event_health(record: &SystemEventRecord) -> SubsystemHealth {
    match &record.event {
        TypedEvent::Threshold(t) => {
            // Threshold severity depends on the crossing level
            let health = match t.crossing {
                // Non-critical β†’ Warning
                ThresholdCrossing::UpperNonCriticalLow
                | ThresholdCrossing::UpperNonCriticalHigh
                | ThresholdCrossing::LowerNonCriticalLow
                | ThresholdCrossing::LowerNonCriticalHigh => HealthValue::Warning,

                // Critical or Non-recoverable β†’ Critical
                ThresholdCrossing::UpperCriticalLow
                | ThresholdCrossing::UpperCriticalHigh
                | ThresholdCrossing::LowerCriticalLow
                | ThresholdCrossing::LowerCriticalHigh
                | ThresholdCrossing::UpperNonRecoverableLow
                | ThresholdCrossing::UpperNonRecoverableHigh
                | ThresholdCrossing::LowerNonRecoverableLow
                | ThresholdCrossing::LowerNonRecoverableHigh => HealthValue::Critical,
            };

            // Route to the correct subsystem based on sensor type
            match record.sensor_type {
                SensorType::Temperature => SubsystemHealth::Thermal(health),
                SensorType::Voltage     => SubsystemHealth::PowerSupply(health),
                SensorType::Current     => SubsystemHealth::PowerSupply(health),
                SensorType::Fan         => SubsystemHealth::Fan(health),
                SensorType::Processor   => SubsystemHealth::Processor(health),
                SensorType::PowerSupply => SubsystemHealth::PowerSupply(health),
                SensorType::Memory      => SubsystemHealth::Memory(health),
                _                       => SubsystemHealth::Thermal(health),
            }
        }

        TypedEvent::SensorSpecific(ss) => match ss {
            SensorSpecificEvent::Memory(m) => {
                let health = match m {
                    MemoryEvent::UncorrectableEcc
                    | MemoryEvent::Parity
                    | MemoryEvent::CriticalOvertemperature => HealthValue::Critical,

                    MemoryEvent::CorrectableEccLogLimit
                    | MemoryEvent::MemoryBoardScrubFailed
                    | MemoryEvent::Throttled => HealthValue::Warning,

                    MemoryEvent::CorrectableEcc
                    | MemoryEvent::PresenceDetected
                    | MemoryEvent::MemoryDeviceDisabled
                    | MemoryEvent::ConfigurationError
                    | MemoryEvent::Spare => HealthValue::OK,
                };
                SubsystemHealth::Memory(health)
            }

            SensorSpecificEvent::PowerSupply(p) => {
                let health = match p {
                    PowerSupplyEvent::Failure
                    | PowerSupplyEvent::InputLost => HealthValue::Critical,

                    PowerSupplyEvent::PredictiveFailure
                    | PowerSupplyEvent::InputOutOfRange
                    | PowerSupplyEvent::InputLostOrOutOfRange
                    | PowerSupplyEvent::ConfigurationError => HealthValue::Warning,

                    PowerSupplyEvent::PresenceDetected
                    | PowerSupplyEvent::InactiveStandby => HealthValue::OK,
                };
                SubsystemHealth::PowerSupply(health)
            }

            SensorSpecificEvent::Processor(p) => {
                let health = match p {
                    ProcessorEvent::Ierr
                    | ProcessorEvent::ThermalTrip
                    | ProcessorEvent::UncorrectableMachineCheck => HealthValue::Critical,

                    ProcessorEvent::Frb1BistFailure
                    | ProcessorEvent::Frb2HangInPost
                    | ProcessorEvent::Frb3ProcessorStartupFailure
                    | ProcessorEvent::ConfigurationError
                    | ProcessorEvent::Disabled => HealthValue::Warning,

                    ProcessorEvent::PresenceDetected
                    | ProcessorEvent::TerminatorPresenceDetected
                    | ProcessorEvent::Throttled => HealthValue::OK,
                };
                SubsystemHealth::Processor(health)
            }

            SensorSpecificEvent::PhysicalSecurity(_) =>
                SubsystemHealth::Security(HealthValue::Warning),

            SensorSpecificEvent::Watchdog(_) =>
                SubsystemHealth::Processor(HealthValue::Warning),

            // Temperature, Voltage, Fan sensor-specific events
            SensorSpecificEvent::Temperature(_) =>
                SubsystemHealth::Thermal(HealthValue::Warning),
            SensorSpecificEvent::Voltage(_) =>
                SubsystemHealth::PowerSupply(HealthValue::Warning),
            SensorSpecificEvent::Fan(_) =>
                SubsystemHealth::Fan(HealthValue::Warning),
        },

        TypedEvent::Discrete { .. } => {
            // Generic discrete: known sensor types roll up as Warning; the rest stay OK
            match record.sensor_type {
                SensorType::Processor => SubsystemHealth::Processor(HealthValue::Warning),
                SensorType::Memory    => SubsystemHealth::Memory(HealthValue::Warning),
                _                     => SubsystemHealth::Thermal(HealthValue::OK),
            }
        }
    }
}

Every match arm is exhaustive β€” add a new MemoryEvent variant and the compiler forces you to decide its severity. Add a new SensorSpecificEvent variant and every consumer must classify it. This is the payoff of the enum tree from the parsing section.
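
The aggregation step relies on HealthValue being ordered by severity. The type itself belongs to ch18; here is a minimal sketch of the assumed shape, where deriving Ord over variants declared least-severe-first turns .max() into a severity rollup:

```rust
// Assumed shape of the ch18 HealthValue type (a sketch, not the real
// definition): deriving Ord with variants declared least-severe-first
// makes `max()` a severity rollup.
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
enum HealthValue {
    OK,       // declared first: least severe
    Warning,
    Critical, // declared last: max() always keeps it
}

fn main() {
    // Fold a stream of per-event healths into one subsystem health.
    let events = [HealthValue::OK, HealthValue::Warning, HealthValue::OK];
    let memory = events.iter().fold(HealthValue::OK, |acc, h| acc.max(*h));
    assert_eq!(memory, HealthValue::Warning);
    println!("memory health: {memory:?}");
}
```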

Step 3 β€” Aggregate into a Typed SEL Summary

Replace the lossy bool with a structured summary that preserves per-subsystem health:

use std::collections::HashMap;

/// Rich SEL summary β€” per-subsystem health derived from typed events.
/// This is what gets handed to the Redfish server (ch18) for health rollup.
#[derive(Debug, Clone)]
pub struct TypedSelSummary {
    pub total_entries: u32,
    pub processor_health: HealthValue,
    pub memory_health: HealthValue,
    pub power_health: HealthValue,
    pub thermal_health: HealthValue,
    pub fan_health: HealthValue,
    pub storage_health: HealthValue,
    pub security_health: HealthValue,
    /// Dimensional readings from threshold events (post-linearization).
    pub threshold_readings: Vec<LinearizedThresholdEvent>,
}

/// A threshold event with linearized readings attached.
#[derive(Debug, Clone)]
pub struct LinearizedThresholdEvent {
    pub sensor_type: SensorType,
    pub sensor_number: u8,
    pub crossing: ThresholdCrossing,
    pub trigger_reading: LinearizedReading,
    pub threshold_value: LinearizedReading,
}

/// Build a TypedSelSummary from parsed SEL records.
/// This is the consumer pipeline: parse (the TryFrom boundary above) β†’ classify β†’ aggregate.
pub fn summarize_sel(
    records: &[ValidSelRecord],
    sdr_table: &HashMap<u8, SdrLinearization>,
) -> TypedSelSummary {
    let mut processor = HealthValue::OK;
    let mut memory = HealthValue::OK;
    let mut power = HealthValue::OK;
    let mut thermal = HealthValue::OK;
    let mut fan = HealthValue::OK;
    let mut storage = HealthValue::OK;
    let mut security = HealthValue::OK;
    let mut threshold_readings = Vec::new();
    let mut count = 0u32;

    for record in records {
        count += 1;

        let ValidSelRecord::SystemEvent(sys) = record else {
            continue; // OEM records don't contribute to health
        };

        // ── Classify event β†’ per-subsystem health ──
        let health = classify_event_health(sys);
        match &health {
            SubsystemHealth::Processor(h) => processor = processor.max(*h),
            SubsystemHealth::Memory(h)    => memory = memory.max(*h),
            SubsystemHealth::PowerSupply(h) => power = power.max(*h),
            SubsystemHealth::Thermal(h)   => thermal = thermal.max(*h),
            SubsystemHealth::Fan(h)       => fan = fan.max(*h),
            SubsystemHealth::Storage(h)   => storage = storage.max(*h),
            SubsystemHealth::Security(h)  => security = security.max(*h),
        }

        // ── Linearize threshold readings if SDR is available ──
        if let TypedEvent::Threshold(t) = &sys.event {
            if let Some(sdr) = sdr_table.get(&sys.sensor_number) {
                threshold_readings.push(LinearizedThresholdEvent {
                    sensor_type: sys.sensor_type,
                    sensor_number: sys.sensor_number,
                    crossing: t.crossing,
                    trigger_reading: sdr.linearize(t.trigger_reading),
                    threshold_value: sdr.linearize(t.threshold_value),
                });
            }
        }
    }

    TypedSelSummary {
        total_entries: count,
        processor_health: processor,
        memory_health: memory,
        power_health: power,
        thermal_health: thermal,
        fan_health: fan,
        storage_health: storage,
        security_health: security,
        threshold_readings,
    }
}

Step 4 — The Full Pipeline: Raw Bytes → Redfish Health

Here's the complete consumer pipeline, showing every typed handoff from raw SEL bytes to Redfish-ready health values:

flowchart LR
    RAW["Raw [u8; 16]\nSEL entries"]
    PARSE["TryFrom:\nValidSelRecord\n(enum tree)"]
    CLASSIFY["classify_event_health\n(exhaustive match)"]
    LINEARIZE["SDR linearize\nraw → Celsius/Rpm/Watts"]
    SUMMARY["TypedSelSummary\n(per-subsystem health\n+ dimensional readings)"]
    REDFISH["ch18: health rollup\n→ Status.Health JSON"]

    RAW -->|"ch07 §Parse"| PARSE
    PARSE -->|"typed events"| CLASSIFY
    PARSE -->|"threshold bytes"| LINEARIZE
    CLASSIFY -->|"SubsystemHealth"| SUMMARY
    LINEARIZE -->|"LinearizedReading"| SUMMARY
    SUMMARY -->|"TypedSelSummary"| REDFISH

    style RAW fill:#fff3e0,color:#000
    style PARSE fill:#e1f5fe,color:#000
    style CLASSIFY fill:#f3e5f5,color:#000
    style LINEARIZE fill:#e8f5e9,color:#000
    style SUMMARY fill:#c8e6c9,color:#000
    style REDFISH fill:#bbdefb,color:#000

use std::collections::HashMap;

fn full_sel_pipeline() {
    // ── Raw SEL data from BMC ──
    let raw_entries: Vec<[u8; 16]> = vec![
        // Memory correctable ECC on sensor #3
        [0x01,0x00, 0x02, 0x00,0x00,0x00,0x00,
         0x20,0x00, 0x04, 0x0C, 0x03, 0x6F, 0x00, 0x00,0x00],
        // Temperature upper critical on sensor #1, reading=95, threshold=90
        [0x02,0x00, 0x02, 0x00,0x00,0x00,0x00,
         0x20,0x00, 0x04, 0x01, 0x01, 0x01, 0x09, 0x5F,0x5A],
        // PSU failure on sensor #5
        [0x03,0x00, 0x02, 0x00,0x00,0x00,0x00,
         0x20,0x00, 0x04, 0x08, 0x05, 0x6F, 0x01, 0x00,0x00],
    ];

    // ── Step 0: Parse at the boundary (ch07 TryFrom) ──
    let records: Vec<ValidSelRecord> = raw_entries.iter()
        .filter_map(|raw| ValidSelRecord::try_from(RawSelRecord(*raw)).ok())
        .collect();

    // ── Step 1-3: Classify + linearize + aggregate ──
    let mut sdr_table = HashMap::new();
    sdr_table.insert(1u8, SdrLinearization {
        sensor_type: SensorType::Temperature,
        m: 1, b: 0, r_exp: 0, b_exp: 0,  // 1:1 mapping for this example
    });

    let summary = summarize_sel(&records, &sdr_table);

    // ── Result: structured, typed, Redfish-ready ──
    println!("SEL Summary:");
    println!("  Total entries: {}", summary.total_entries);
    println!("  Processor:  {:?}", summary.processor_health);  // OK
    println!("  Memory:     {:?}", summary.memory_health);      // OK (correctable → OK)
    println!("  Power:      {:?}", summary.power_health);       // Critical (PSU failure)
    println!("  Thermal:    {:?}", summary.thermal_health);     // Critical (upper critical)
    println!("  Fan:        {:?}", summary.fan_health);         // OK
    println!("  Security:   {:?}", summary.security_health);    // OK

    // Dimensional readings preserved from threshold events:
    for r in &summary.threshold_readings {
        println!("  Threshold: sensor {:?} #{} — {:?} crossed {:?}",
            r.sensor_type, r.sensor_number,
            r.trigger_reading, r.crossing);
        // trigger_reading is LinearizedReading::Temperature(Celsius(95.0))
        // — not a raw byte, not an untyped f64
    }

    // ── This summary feeds directly into ch18's health rollup ──
    // compute_system_health() can now use per-subsystem values
    // instead of a single `has_critical_events: bool`
}

Expected output:

SEL Summary:
  Total entries: 3
  Processor:  OK
  Memory:     OK
  Power:      Critical
  Thermal:    Critical
  Fan:        OK
  Security:   OK
  Threshold: sensor Temperature #1 — Temperature(Celsius(95.0)) crossed UpperCriticalHigh

What the Consumer Pipeline Proves

| Stage | Pattern | What's Enforced |
|---|---|---|
| Parse | Validated boundary (ch07) | Every consumer works with typed enums, never raw bytes |
| Classify | Exhaustive matching | Every sensor type and event variant maps to a health value — can't forget one |
| Linearize | Dimensional analysis (ch06) | Raw byte 0x5F becomes Celsius(95.0), not f64 — can't confuse with RPM |
| Aggregate | Typed fold | Per-subsystem health uses HealthValue::max() — Ord guarantees correctness |
| Handoff | Structured summary | ch18 receives TypedSelSummary with 7 subsystem health values, not a bool |

Compare with the untyped C pipeline:

| Step | C | Rust |
|---|---|---|
| Parse record type | switch with possible fallthrough | match on enum — exhaustive |
| Classify severity | manual if chain, forgot PSU | exhaustive match — compiler error on missing variant |
| Linearize reading | double — no unit | Celsius / Rpm / Watts — distinct types |
| Aggregate health | bool has_critical | 7 typed subsystem fields |
| Handoff to Redfish | untyped json_object_set("Health", "OK") | TypedSelSummary → typed health rollup (ch18) |

The Rust pipeline doesn't just prevent more bugs — it produces richer output. The C pipeline loses information at every stage (polymorphic → flat, dimensional → untyped, per-subsystem → single bool). The Rust pipeline preserves it all, because the type system makes it easier to keep the structure than to throw it away.

What the Compiler Proves

| Bug in C | How Rust prevents it |
|---|---|
| Forgot to check record type | match on ValidSelRecord — must handle all three variants |
| Wrong byte index for trigger reading | Parsed once into ThresholdEvent.trigger_reading — consumers never touch raw bytes |
| Missing case for a sensor type | SensorSpecificEvent match is exhaustive — compiler error on missing variant |
| Silently dropped OEM records | Enum variant exists — must be handled or explicitly ignored with `_ =>` |
| Compared threshold reading (°C) with fan offset | After SDR linearization, Celsius ≠ Rpm (ch06) |
| Added new sensor type, forgot alert logic | Adding a variant breaks every exhaustive match at compile time; #[non_exhaustive] additionally forces downstream crates to handle the unknown-variant case |
| Event data parsed differently in two code paths | Single parse_system_event() boundary — one source of truth |

The Three-Beat Pattern

Looking back at this chapter's three case studies, notice the graduated arc:

| Case Study | Input Shape | Parsing Complexity | Key Technique |
|---|---|---|---|
| FRU (bytes) | Flat, fixed layout | One TryFrom, check fields | Validated boundary type |
| Redfish (JSON) | Structured, known schema | One TryFrom, check fields + nesting | Same technique, different transport |
| SEL (polymorphic bytes) | Nested discriminated union | Dispatch chain: record type → event type → sensor type | Enum tree + exhaustive matching |

The principle is identical in all three: validate once at the boundary, carry the proof in the type, never re-check. The SEL case study shows this principle scales to arbitrarily complex polymorphic data — the type system handles nested dispatch just as naturally as flat field validation.

Composing Validated Types

Validated types compose — a struct of validated fields is itself validated:

#[derive(Debug)]
pub struct ValidFru { format_version: u8 }
#[derive(Debug)]
pub struct ValidThermalResponse { }

/// A fully validated system snapshot.
/// Each field was validated independently; the composite is also valid.
#[derive(Debug)]
pub struct ValidSystemSnapshot {
    pub fru: ValidFru,
    pub thermal: ValidThermalResponse,
    // Each field carries its own validity guarantee.
    // No need for a "validate_snapshot()" function.
}

/// Because ValidSystemSnapshot is composed of validated parts,
/// any function that receives it can trust ALL the data.
fn generate_health_report(snapshot: &ValidSystemSnapshot) {
    println!("FRU version: {}", snapshot.fru.format_version);
    // No validation needed — the type guarantees everything
}

The Key Insight

Validate at the boundary. Carry the proof in the type. Never re-check.

This eliminates an entire class of bugs: "forgot to validate in this one function." If a function takes &ValidFru, the data IS valid. Period.

When to Use Validated Boundary Types

| Data Source | Use validated boundary type? |
|---|---|
| IPMI FRU data from BMC | ✅ Always — complex binary format |
| Redfish JSON responses | ✅ Always — many required fields |
| PCIe configuration space | ✅ Always — register layout is strict |
| SMBIOS tables | ✅ Always — versioned format with checksums |
| User-provided test parameters | ✅ Always — prevent injection |
| Internal function calls | ❌ Usually not — types already constrain |
| Log messages | ❌ No — best-effort, not safety-critical |
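The "user-provided test parameters" row applies the same TryFrom boundary to strings. A hypothetical ValidDeviceName newtype (the name and the character policy are invented for this sketch) rejects injection-style input before it can reach a shell command or a sysfs path:

```rust
/// Hypothetical validated parameter: a device name restricted to
/// [a-z0-9_-], so it is safe to splice into a sysfs path.
#[derive(Debug)]
pub struct ValidDeviceName(String);

impl TryFrom<&str> for ValidDeviceName {
    type Error = String;
    fn try_from(raw: &str) -> Result<Self, Self::Error> {
        let ok = !raw.is_empty()
            && raw.len() <= 32
            && raw.chars().all(|c| {
                c.is_ascii_lowercase() || c.is_ascii_digit() || c == '_' || c == '-'
            });
        if ok {
            Ok(ValidDeviceName(raw.to_string()))
        } else {
            Err(format!("invalid device name: {raw:?}"))
        }
    }
}

fn main() {
    assert!(ValidDeviceName::try_from("fan0").is_ok());
    // Path traversal and shell injection fail at the boundary:
    assert!(ValidDeviceName::try_from("../etc/passwd").is_err());
    assert!(ValidDeviceName::try_from("fan0; rm -rf /").is_err());
    println!("boundary holds");
}
```

Any function that takes &ValidDeviceName never needs to re-check the string, exactly as with &ValidFru.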

Validation Boundary Flow

flowchart LR
    RAW["Raw bytes / JSON"] -->|"TryFrom / serde"| V{"Valid?"}
    V -->|Yes| VT["ValidFru / ValidRedfish"]
    V -->|No| E["Err(ParseError)"]
    VT -->|"&ValidFru"| F1["fn process()"] & F2["fn report()"] & F3["fn store()"]
    style RAW fill:#fff3e0,color:#000
    style V fill:#e1f5fe,color:#000
    style VT fill:#c8e6c9,color:#000
    style E fill:#ffcdd2,color:#000
    style F1 fill:#e8f5e9,color:#000
    style F2 fill:#e8f5e9,color:#000
    style F3 fill:#e8f5e9,color:#000

Exercise: Validated SMBIOS Table

Design a ValidSmbiosType17 type for SMBIOS Type 17 (Memory Device) records:

  • Raw input is &[u8]; minimum length 21 bytes, byte 0 must be 0x11.
  • Fields: handle: u16, size_mb: u16, speed_mhz: u16.
  • Use TryFrom<&[u8]> so that all downstream functions take &ValidSmbiosType17.
Solution
#[derive(Debug)]
pub struct ValidSmbiosType17 {
    pub handle: u16,
    pub size_mb: u16,
    pub speed_mhz: u16,
}

impl TryFrom<&[u8]> for ValidSmbiosType17 {
    type Error = String;
    fn try_from(raw: &[u8]) -> Result<Self, Self::Error> {
        if raw.len() < 21 {
            return Err(format!("too short: {} < 21", raw.len()));
        }
        if raw[0] != 0x11 {
            return Err(format!("wrong type: 0x{:02X} != 0x11", raw[0]));
        }
        Ok(ValidSmbiosType17 {
            handle: u16::from_le_bytes([raw[1], raw[2]]),
            size_mb: u16::from_le_bytes([raw[12], raw[13]]),
            speed_mhz: u16::from_le_bytes([raw[19], raw[20]]),
        })
    }
}

// Downstream functions take the validated type β€” no re-checking
pub fn report_dimm(dimm: &ValidSmbiosType17) -> String {
    format!("DIMM handle 0x{:04X}: {}MB @ {}MHz",
        dimm.handle, dimm.size_mb, dimm.speed_mhz)
}

Key Takeaways

  1. Parse once at the boundary — TryFrom validates raw data exactly once; all downstream code trusts the type.
  2. Eliminate shotgun validation — if a function takes &ValidFru, the data IS valid. Period.
  3. The pattern scales from flat to polymorphic — FRU (flat bytes), Redfish (structured JSON), and SEL (nested discriminated union) all use the same technique at increasing complexity.
  4. Exhaustive matching is validation — for polymorphic data like SEL, the compiler's enum exhaustiveness check prevents the "forgot a sensor type" class of bugs with zero runtime cost.
  5. The consumer pipeline preserves structure — parsing → classification → linearization → aggregation keeps per-subsystem health and dimensional readings intact, where C lossy-reduces to a single bool. The type system makes it easier to keep information than to throw it away.
  6. serde is a natural boundary — #[derive(Deserialize)] with #[serde(try_from)] validates JSON at parse time.
  7. Compose validated types — a ValidServerHealth can require ValidFru + ValidThermal + ValidPower.
  8. Pair with proptest (ch14) — fuzz the TryFrom boundary to ensure no valid input is rejected and no invalid input sneaks through.
  9. These patterns compose into full Redfish workflows — ch17 applies validated boundaries on the client side (parsing JSON responses into typed structs), while ch18 inverts the pattern on the server side (builder type-state ensures every required field is present before serialization). The SEL consumer pipeline built here feeds directly into ch18's TypedSelSummary health rollup.

Capability Mixins — Compile-Time Hardware Contracts 🟡

What you'll learn: How ingredient traits (bus capabilities) combined with mixin traits and blanket impls eliminate diagnostic code duplication while guaranteeing every hardware dependency is satisfied at compile time.

Cross-references: ch04 (capability tokens), ch09 (phantom types), ch10 (integration)

The Problem: Diagnostic Code Duplication

Server platforms share diagnostic patterns across subsystems. Fan diagnostics, temperature monitoring, and power sequencing all follow similar workflows but operate on different hardware buses. Without abstraction, you get copy-paste:

// C β€” duplicated logic across subsystems
int run_fan_diag(spi_bus_t *spi, i2c_bus_t *i2c) {
    // ... 50 lines of SPI sensor read ...
    // ... 30 lines of I2C register check ...
    // ... 20 lines of threshold comparison (same as CPU diag) ...
}

int run_cpu_temp_diag(i2c_bus_t *i2c, gpio_t *gpio) {
    // ... 30 lines of I2C register check (same as fan diag) ...
    // ... 15 lines of GPIO alert check ...
    // ... 20 lines of threshold comparison (same as fan diag) ...
}

The threshold comparison logic is identical, but you can't extract it because the bus types differ. With capability mixins, each hardware bus is an ingredient trait, and diagnostic behaviors are automatically provided when the right ingredients are present.

Ingredient Traits (Hardware Capabilities)

Each bus or peripheral is exposed through a capability trait with an associated type. A diagnostic controller declares which buses it has by implementing the matching Has* traits:

/// SPI bus capability.
pub trait HasSpi {
    type Spi: SpiBus;
    fn spi(&self) -> &Self::Spi;
}

/// I2C bus capability.
pub trait HasI2c {
    type I2c: I2cBus;
    fn i2c(&self) -> &Self::I2c;
}

/// GPIO pin access capability.
pub trait HasGpio {
    type Gpio: GpioController;
    fn gpio(&self) -> &Self::Gpio;
}

/// IPMI access capability.
pub trait HasIpmi {
    type Ipmi: IpmiClient;
    fn ipmi(&self) -> &Self::Ipmi;
}

// Bus trait definitions:
pub trait SpiBus {
    fn transfer(&self, data: &[u8]) -> Vec<u8>;
}

pub trait I2cBus {
    fn read_register(&self, addr: u8, reg: u8) -> u8;
    fn write_register(&self, addr: u8, reg: u8, value: u8);
}

pub trait GpioController {
    fn read_pin(&self, pin: u32) -> bool;
    fn set_pin(&self, pin: u32, value: bool);
}

pub trait IpmiClient {
    fn send_raw(&self, netfn: u8, cmd: u8, data: &[u8]) -> Vec<u8>;
}

Mixin Traits (Diagnostic Behaviors)

A mixin provides behavior automatically to any type that has the required capabilities:

pub trait SpiBus { fn transfer(&self, data: &[u8]) -> Vec<u8>; }
pub trait I2cBus {
    fn read_register(&self, addr: u8, reg: u8) -> u8;
    fn write_register(&self, addr: u8, reg: u8, value: u8);
}
pub trait GpioController { fn read_pin(&self, pin: u32) -> bool; }
pub trait IpmiClient { fn send_raw(&self, netfn: u8, cmd: u8, data: &[u8]) -> Vec<u8>; }
pub trait HasSpi { type Spi: SpiBus; fn spi(&self) -> &Self::Spi; }
pub trait HasI2c { type I2c: I2cBus; fn i2c(&self) -> &Self::I2c; }
pub trait HasGpio { type Gpio: GpioController; fn gpio(&self) -> &Self::Gpio; }
pub trait HasIpmi { type Ipmi: IpmiClient; fn ipmi(&self) -> &Self::Ipmi; }

/// Fan diagnostic mixin — auto-implemented for anything with SPI + I2C.
pub trait FanDiagMixin: HasSpi + HasI2c {
    fn read_fan_speed(&self, fan_id: u8) -> u32 {
        // Read tachometer via SPI
        let cmd = [0x80 | fan_id, 0x00];
        let response = self.spi().transfer(&cmd);
        u32::from_be_bytes([0, 0, response[0], response[1]])
    }

    fn set_fan_pwm(&self, fan_id: u8, duty_percent: u8) {
        // Set PWM via I2C controller
        self.i2c().write_register(0x2E, fan_id, duty_percent);
    }

    fn run_fan_diagnostic(&self) -> bool {
        // Full diagnostic: read all fans, check thresholds
        for fan_id in 0..6 {
            let speed = self.read_fan_speed(fan_id);
            if speed < 1000 || speed > 20000 {
                println!("Fan {fan_id}: FAIL ({speed} RPM)");
                return false;
            }
        }
        true
    }
}

// Blanket implementation — ANY type with SPI + I2C gets FanDiagMixin for free
impl<T: HasSpi + HasI2c> FanDiagMixin for T {}

/// Temperature monitoring mixin — requires I2C + GPIO.
pub trait TempMonitorMixin: HasI2c + HasGpio {
    fn read_temperature(&self, sensor_addr: u8) -> f64 {
        let raw = self.i2c().read_register(sensor_addr, 0x00);
        raw as f64 * 0.5  // 0.5°C per LSB
    }

    fn check_thermal_alert(&self, alert_pin: u32) -> bool {
        self.gpio().read_pin(alert_pin)
    }

    fn run_thermal_diagnostic(&self) -> bool {
        for addr in [0x48, 0x49, 0x4A] {
            let temp = self.read_temperature(addr);
            if temp > 95.0 {
                println!("Sensor 0x{addr:02X}: CRITICAL ({temp}°C)");
                return false;
            }
            if self.check_thermal_alert(addr as u32) {
                println!("Sensor 0x{addr:02X}: ALERT pin asserted");
                return false;
            }
        }
        true
    }
}

impl<T: HasI2c + HasGpio> TempMonitorMixin for T {}

/// Power sequencing mixin — requires I2C + IPMI.
pub trait PowerSeqMixin: HasI2c + HasIpmi {
    fn read_voltage_rail(&self, rail: u8) -> f64 {
        let raw = self.i2c().read_register(0x40, rail);
        raw as f64 * 0.01  // 10mV per LSB
    }

    fn check_power_good(&self) -> bool {
        let resp = self.ipmi().send_raw(0x04, 0x2D, &[0x01]);
        !resp.is_empty() && resp[0] == 0x00
    }
}

impl<T: HasI2c + HasIpmi> PowerSeqMixin for T {}

Concrete Controller — Mix and Match

A concrete diagnostic controller declares its capabilities, and automatically inherits all matching mixins:

pub trait SpiBus { fn transfer(&self, data: &[u8]) -> Vec<u8>; }
pub trait I2cBus {
    fn read_register(&self, addr: u8, reg: u8) -> u8;
    fn write_register(&self, addr: u8, reg: u8, value: u8);
}
pub trait GpioController {
    fn read_pin(&self, pin: u32) -> bool;
    fn set_pin(&self, pin: u32, value: bool);
}
pub trait IpmiClient { fn send_raw(&self, netfn: u8, cmd: u8, data: &[u8]) -> Vec<u8>; }
pub trait HasSpi { type Spi: SpiBus; fn spi(&self) -> &Self::Spi; }
pub trait HasI2c { type I2c: I2cBus; fn i2c(&self) -> &Self::I2c; }
pub trait HasGpio { type Gpio: GpioController; fn gpio(&self) -> &Self::Gpio; }
pub trait HasIpmi { type Ipmi: IpmiClient; fn ipmi(&self) -> &Self::Ipmi; }
pub trait FanDiagMixin: HasSpi + HasI2c {}
impl<T: HasSpi + HasI2c> FanDiagMixin for T {}
pub trait TempMonitorMixin: HasI2c + HasGpio {}
impl<T: HasI2c + HasGpio> TempMonitorMixin for T {}
pub trait PowerSeqMixin: HasI2c + HasIpmi {}
impl<T: HasI2c + HasIpmi> PowerSeqMixin for T {}

// Concrete bus implementations (stubs for illustration)
pub struct LinuxSpi { bus: u8 }
impl SpiBus for LinuxSpi {
    fn transfer(&self, data: &[u8]) -> Vec<u8> { vec![0; data.len()] }
}

pub struct LinuxI2c { bus: u8 }
impl I2cBus for LinuxI2c {
    fn read_register(&self, _addr: u8, _reg: u8) -> u8 { 42 }
    fn write_register(&self, _addr: u8, _reg: u8, _value: u8) {}
}

pub struct LinuxGpio;
impl GpioController for LinuxGpio {
    fn read_pin(&self, _pin: u32) -> bool { false }
    fn set_pin(&self, _pin: u32, _value: bool) {}
}

pub struct IpmiToolClient;
impl IpmiClient for IpmiToolClient {
    fn send_raw(&self, _netfn: u8, _cmd: u8, _data: &[u8]) -> Vec<u8> { vec![0x00] }
}

/// BaseBoardController has ALL buses → gets ALL mixins.
pub struct BaseBoardController {
    spi: LinuxSpi,
    i2c: LinuxI2c,
    gpio: LinuxGpio,
    ipmi: IpmiToolClient,
}

impl HasSpi for BaseBoardController {
    type Spi = LinuxSpi;
    fn spi(&self) -> &LinuxSpi { &self.spi }
}

impl HasI2c for BaseBoardController {
    type I2c = LinuxI2c;
    fn i2c(&self) -> &LinuxI2c { &self.i2c }
}

impl HasGpio for BaseBoardController {
    type Gpio = LinuxGpio;
    fn gpio(&self) -> &LinuxGpio { &self.gpio }
}

impl HasIpmi for BaseBoardController {
    type Ipmi = IpmiToolClient;
    fn ipmi(&self) -> &IpmiToolClient { &self.ipmi }
}

// BaseBoardController now automatically has:
// - FanDiagMixin    (because it HasSpi + HasI2c)
// - TempMonitorMixin (because it HasI2c + HasGpio)
// - PowerSeqMixin   (because it HasI2c + HasIpmi)
// No manual implementation needed — blanket impls do it all.

Correct-by-Construction Aspect

The mixin pattern is correct-by-construction because:

  1. You can't call read_fan_speed() without SPI — the method only exists on types that implement HasSpi + HasI2c
  2. You can't forget a bus — if you remove HasSpi from BaseBoardController, FanDiagMixin methods disappear at compile time
  3. Mock testing is automatic — replace LinuxSpi with MockSpi and all mixin logic works with the mock
  4. New platforms just declare capabilities — a GPU daughter card with only I2C gets TempMonitorMixin (if it also has GPIO) but not FanDiagMixin (no SPI)
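Point 4 fits in a compressed sketch. Assuming simplified ingredient traits (method-only here, without the associated types used above), a hypothetical GPU daughter card with I2C and GPIO but no SPI picks up the thermal mixin and nothing else:

```rust
// Compressed sketch; the chapter's ingredient traits use associated types,
// these method-only versions are simplified for illustration.
pub trait HasSpi { fn spi_id(&self) -> u8; }
pub trait HasI2c { fn i2c_id(&self) -> u8; }
pub trait HasGpio { fn gpio_id(&self) -> u8; }

pub trait FanDiagMixin: HasSpi + HasI2c {
    fn run_fan_diagnostic(&self) -> bool { self.spi_id() > 0 && self.i2c_id() > 0 }
}
impl<T: HasSpi + HasI2c> FanDiagMixin for T {}

pub trait TempMonitorMixin: HasI2c + HasGpio {
    fn run_thermal_diagnostic(&self) -> bool { self.i2c_id() > 0 && self.gpio_id() > 0 }
}
impl<T: HasI2c + HasGpio> TempMonitorMixin for T {}

/// Hypothetical GPU daughter card: I2C + GPIO, but no SPI.
pub struct GpuCard;
impl HasI2c for GpuCard { fn i2c_id(&self) -> u8 { 1 } }
impl HasGpio for GpuCard { fn gpio_id(&self) -> u8 { 2 } }

fn main() {
    let card = GpuCard;
    assert!(card.run_thermal_diagnostic()); // TempMonitorMixin: available
    // card.run_fan_diagnostic();           // would not compile: GpuCard lacks HasSpi
    println!("thermal diagnostic passed");
}
```

The absent capability is not a runtime error or a stubbed method; the fan-diagnostic method simply does not exist on GpuCard.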

When to Use Capability Mixins

| Scenario | Use mixins? |
|---|---|
| Cross-cutting diagnostic behaviors | ✅ Yes — prevent copy-paste |
| Multi-bus hardware controllers | ✅ Yes — declare capabilities, get behaviors |
| Platform-specific test harnesses | ✅ Yes — mock capabilities for testing |
| Single-bus simple peripherals | ⚠️ Overhead may not be worth it |
| Pure business logic (no hardware) | ❌ Simpler patterns suffice |

Mixin Trait Architecture

flowchart TD
    subgraph "Ingredient Traits"
        SPI["HasSpi"]
        I2C["HasI2c"]
        GPIO["HasGpio"]
    end
    subgraph "Mixin Traits (blanket impls)"
        FAN["FanDiagMixin"]
        TEMP["TempMonitorMixin"]
    end
    SPI & I2C -->|"requires both"| FAN
    I2C & GPIO -->|"requires both"| TEMP
    subgraph "Concrete Types"
        BBC["BaseBoardController"]
    end
    BBC -->|"impl HasSpi + HasI2c + HasGpio"| FAN & TEMP
    style SPI fill:#e1f5fe,color:#000
    style I2C fill:#e1f5fe,color:#000
    style GPIO fill:#e1f5fe,color:#000
    style FAN fill:#c8e6c9,color:#000
    style TEMP fill:#c8e6c9,color:#000
    style BBC fill:#fff3e0,color:#000

Exercise: Network Diagnostic Mixins

Design a mixin system for network diagnostics:

  • Ingredient traits: HasEthernet, HasIpmi
  • Mixin: LinkHealthMixin (requires HasEthernet) with check_link_status(&self)
  • Mixin: RemoteDiagMixin (requires HasEthernet + HasIpmi) with remote_health_check(&self)
  • Concrete type: NicController that implements both ingredients.
Solution
pub trait HasEthernet {
    fn eth_link_up(&self) -> bool;
}

pub trait HasIpmi {
    fn ipmi_ping(&self) -> bool;
}

pub trait LinkHealthMixin: HasEthernet {
    fn check_link_status(&self) -> &'static str {
        if self.eth_link_up() { "link: UP" } else { "link: DOWN" }
    }
}
impl<T: HasEthernet> LinkHealthMixin for T {}

pub trait RemoteDiagMixin: HasEthernet + HasIpmi {
    fn remote_health_check(&self) -> &'static str {
        if self.eth_link_up() && self.ipmi_ping() {
            "remote: HEALTHY"
        } else {
            "remote: DEGRADED"
        }
    }
}
impl<T: HasEthernet + HasIpmi> RemoteDiagMixin for T {}

pub struct NicController;
impl HasEthernet for NicController {
    fn eth_link_up(&self) -> bool { true }
}
impl HasIpmi for NicController {
    fn ipmi_ping(&self) -> bool { true }
}
// NicController automatically gets both mixin methods

Key Takeaways

  1. Ingredient traits declare hardware capabilities — HasSpi, HasI2c, HasGpio are associated-type traits.
  2. Mixin traits provide behavior via blanket impls — impl<T: HasSpi + HasI2c> FanDiagMixin for T {}.
  3. Adding a new platform = listing its capabilities — the compiler provides all matching mixin methods.
  4. Removing a bus = compile errors everywhere it's used — you can't forget to update downstream code.
  5. Mock testing is free — swap LinuxSpi for MockSpi; all mixin logic works unchanged.
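Takeaway 5 in miniature: a MockSpi that satisfies the same SpiBus contract slots straight under the blanket impl. (SpiBus and HasSpi mirror the chapter's definitions; the probe mixin and the 0xAA command byte are invented for this sketch.)

```rust
pub trait SpiBus { fn transfer(&self, data: &[u8]) -> Vec<u8>; }
pub trait HasSpi { type Spi: SpiBus; fn spi(&self) -> &Self::Spi; }

pub trait SpiProbeMixin: HasSpi {
    /// Send a one-byte probe and return the first response byte.
    fn probe(&self) -> u8 { self.spi().transfer(&[0xAA])[0] }
}
impl<T: HasSpi> SpiProbeMixin for T {}

/// Mock bus: no hardware, just a canned response byte.
pub struct MockSpi { pub canned: u8 }
impl SpiBus for MockSpi {
    fn transfer(&self, data: &[u8]) -> Vec<u8> { vec![self.canned; data.len()] }
}

pub struct TestController { spi: MockSpi }
impl HasSpi for TestController {
    type Spi = MockSpi;
    fn spi(&self) -> &MockSpi { &self.spi }
}

fn main() {
    let ctrl = TestController { spi: MockSpi { canned: 0x42 } };
    // The mixin's logic runs unchanged against the mock:
    assert_eq!(ctrl.probe(), 0x42);
    println!("probe = 0x{:02X}", ctrl.probe());
}
```

No cfg flags, no dependency injection framework: the associated type is the seam, and the blanket impl does the wiring.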

Phantom Types for Resource Tracking 🟡

What you'll learn: How PhantomData markers encode register width, DMA direction, and file-descriptor state at the type level — preventing an entire class of resource-mismatch bugs at zero runtime cost.

Cross-references: ch05 (type-state), ch06 (dimensional types), ch08 (mixins), ch10 (integration)

The Problem: Mixing Up Resources

Hardware resources look alike in code but aren't interchangeable:

  • A 32-bit register and a 16-bit register are both "registers"
  • A DMA buffer for read and a DMA buffer for write both look like *mut u8
  • An open file descriptor and a closed one are both i32

In C:

// C β€” all registers look the same
uint32_t read_reg32(volatile void *base, uint32_t offset);
uint16_t read_reg16(volatile void *base, uint32_t offset);

// Bug: reading a 16-bit register with the 32-bit function
uint32_t status = read_reg32(pcie_bar, LINK_STATUS_REG);  // should be reg16!

Phantom Type Parameters

A phantom type is a type parameter that carries no runtime data: it appears in the struct's signature but is anchored only by a zero-sized PhantomData field. It exists purely to carry type-level information:

use std::marker::PhantomData;

// Register width markers — zero-sized
pub struct Width8;
pub struct Width16;
pub struct Width32;
pub struct Width64;

/// A register handle parameterised by its width.
/// PhantomData<W> costs zero bytes — it's a compile-time-only marker.
pub struct Register<W> {
    base: usize,
    offset: usize,
    _width: PhantomData<W>,
}

impl Register<Width8> {
    pub fn read(&self) -> u8 {
        // ... read 1 byte from base + offset ...
        0 // stub
    }
    pub fn write(&self, _value: u8) {
        // ... write 1 byte ...
    }
}

impl Register<Width16> {
    pub fn read(&self) -> u16 {
        // ... read 2 bytes from base + offset ...
        0 // stub
    }
    pub fn write(&self, _value: u16) {
        // ... write 2 bytes ...
    }
}

impl Register<Width32> {
    pub fn read(&self) -> u32 {
        // ... read 4 bytes from base + offset ...
        0 // stub
    }
    pub fn write(&self, _value: u32) {
        // ... write 4 bytes ...
    }
}

/// PCIe config space register definitions.
pub struct PcieConfig {
    base: usize,
}

impl PcieConfig {
    pub fn vendor_id(&self) -> Register<Width16> {
        Register { base: self.base, offset: 0x00, _width: PhantomData }
    }

    pub fn device_id(&self) -> Register<Width16> {
        Register { base: self.base, offset: 0x02, _width: PhantomData }
    }

    pub fn command(&self) -> Register<Width16> {
        Register { base: self.base, offset: 0x04, _width: PhantomData }
    }

    pub fn status(&self) -> Register<Width16> {
        Register { base: self.base, offset: 0x06, _width: PhantomData }
    }

    pub fn bar0(&self) -> Register<Width32> {
        Register { base: self.base, offset: 0x10, _width: PhantomData }
    }
}

fn pcie_example() {
    let cfg = PcieConfig { base: 0xFE00_0000 };

    let vid: u16 = cfg.vendor_id().read();    // returns u16 ✅
    let bar: u32 = cfg.bar0().read();         // returns u32 ✅

    // Can't mix them up:
    // let bad: u32 = cfg.vendor_id().read(); // ❌ ERROR: expected u16
    // cfg.bar0().write(0u16);                // ❌ ERROR: expected u32
}

DMA Buffer Access Control

DMA buffers have direction: some are for device-to-host (read), others for host-to-device (write). Using the wrong direction corrupts data or causes bus errors:

use std::marker::PhantomData;

// Direction markers
pub struct ToDevice;     // host writes, device reads
pub struct FromDevice;   // device writes, host reads

/// A DMA buffer with direction enforcement.
pub struct DmaBuffer<Dir> {
    ptr: *mut u8,
    len: usize,
    dma_addr: u64,  // physical address for the device
    _dir: PhantomData<Dir>,
}

impl DmaBuffer<ToDevice> {
    /// Fill the buffer with data to send to the device.
    pub fn write_data(&mut self, data: &[u8]) {
        assert!(data.len() <= self.len);
        // SAFETY: ptr is valid for self.len bytes (allocated at construction),
        // and data.len() <= self.len (asserted above).
        unsafe { std::ptr::copy_nonoverlapping(data.as_ptr(), self.ptr, data.len()) }
    }

    /// Get the DMA address for the device to read from.
    pub fn device_addr(&self) -> u64 {
        self.dma_addr
    }
}

impl DmaBuffer<FromDevice> {
    /// Read data that the device wrote into the buffer.
    pub fn read_data(&self) -> &[u8] {
        // SAFETY: ptr is valid for self.len bytes, and the device
        // has finished writing (caller ensures DMA transfer is complete).
        unsafe { std::slice::from_raw_parts(self.ptr, self.len) }
    }

    /// Get the DMA address for the device to write to.
    pub fn device_addr(&self) -> u64 {
        self.dma_addr
    }
}

// Can't write to a FromDevice buffer:
// fn oops(buf: &mut DmaBuffer<FromDevice>) {
//     buf.write_data(&[1, 2, 3]);  // ❌ no method `write_data` on DmaBuffer<FromDevice>
// }

// Can't read from a ToDevice buffer:
// fn oops2(buf: &DmaBuffer<ToDevice>) {
//     let data = buf.read_data();  // ❌ no method `read_data` on DmaBuffer<ToDevice>
// }

File Descriptor Ownership

A common bug: using a file descriptor after it's been closed. Phantom types can track open/closed state:

use std::marker::PhantomData;

pub struct Open;
pub struct Closed;

/// A file descriptor with state tracking.
pub struct Fd<State> {
    raw: i32,
    _state: PhantomData<State>,
}

impl Fd<Open> {
    pub fn open(_path: &str) -> Result<Self, String> {
        // ... open the file ...
        Ok(Fd { raw: 3, _state: PhantomData }) // stub
    }

    pub fn read(&self, _buf: &mut [u8]) -> Result<usize, String> {
        // ... read from fd ...
        Ok(0) // stub
    }

    pub fn write(&self, data: &[u8]) -> Result<usize, String> {
        // ... write to fd ...
        Ok(data.len()) // stub
    }

    /// Close the fd — returns a Closed handle.
    /// The Open handle is consumed, preventing use-after-close.
    pub fn close(self) -> Fd<Closed> {
        // ... close the fd ...
        Fd { raw: self.raw, _state: PhantomData }
    }
}

impl Fd<Closed> {
    // No read() or write() methods β€” they don't exist on Fd<Closed>.
    // This makes use-after-close a compile error.

    pub fn raw_fd(&self) -> i32 {
        self.raw
    }
}

fn fd_example() -> Result<(), String> {
    let fd = Fd::open("/dev/ipmi0")?;
    let mut buf = [0u8; 256];
    fd.read(&mut buf)?;

    let closed = fd.close();

    // closed.read(&mut buf)?;  // ❌ no method `read` on Fd<Closed>
    // closed.write(&[1])?;     // ❌ no method `write` on Fd<Closed>

    Ok(())
}

Combining Phantom Types with Earlier Patterns

Phantom types compose with everything we've seen:

use std::marker::PhantomData;
pub struct Width32;
pub struct Width16;
pub struct Register<W> { _w: PhantomData<W> }
impl Register<Width16> { pub fn read(&self) -> u16 { 0 } }
impl Register<Width32> { pub fn read(&self) -> u32 { 0 } }
#[derive(Debug, Clone, Copy, PartialEq, PartialOrd)]
pub struct Celsius(pub f64);

/// Combine phantom types (register width) with dimensional types (Celsius).
fn read_temp_sensor(reg: &Register<Width16>) -> Celsius {
    let raw = reg.read();  // guaranteed u16 by phantom type
    Celsius(raw as f64 * 0.0625)  // guaranteed Celsius by return type
}

// The compiler enforces:
// 1. The register is 16-bit (phantom type)
// 2. The result is Celsius (newtype)
// Both at zero runtime cost.
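The zero-cost claim is directly checkable with size_of: the phantom parameter leaves the struct layout untouched, and every width instantiation has the identical size.

```rust
use std::marker::PhantomData;
use std::mem::size_of;

pub struct Width16;
pub struct Width32;

#[allow(dead_code)] // fields are illustrative; we only inspect the layout
pub struct Register<W> {
    base: usize,
    offset: usize,
    _width: PhantomData<W>, // zero-sized: contributes no bytes
}

fn main() {
    // Same layout as a plain (usize, usize) pair...
    assert_eq!(size_of::<Register<Width16>>(), size_of::<(usize, usize)>());
    // ...and identical regardless of which width marker is chosen.
    assert_eq!(size_of::<Register<Width16>>(), size_of::<Register<Width32>>());
    println!("Register<W>: {} bytes", size_of::<Register<Width32>>());
}
```

The marker exists only in the compiler's view of the program; the generated machine code for Register<Width16> and Register<Width32> accessors differs only where the read/write widths genuinely differ.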

When to Use Phantom Types

| Scenario | Use phantom parameter? |
|----------|------------------------|
| Register width encoding | βœ… Always β€” prevents width mismatch |
| DMA buffer direction | βœ… Always β€” prevents data corruption |
| File descriptor state | βœ… Always β€” prevents use-after-close |
| Memory region permissions (R/W/X) | βœ… Always β€” enforces access control |
| Generic container (Vec, HashMap) | ❌ No β€” use concrete type parameters |
| Runtime-variable attributes | ❌ No β€” phantom types are compile-time only |
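For contrast, when the attribute is only known at runtime β€” say, a port mode loaded from configuration at startup β€” an enum plus a runtime check is the right tool, since a phantom parameter must be fixed at compile time. A minimal sketch (the `Mode` and `Port` types here are illustrative, not from earlier chapters):

```rust
/// Runtime-variable attribute: the mode arrives from configuration at
/// startup, so it cannot be a phantom type parameter.
#[derive(Debug, Clone, Copy, PartialEq)]
pub enum Mode {
    ReadOnly,
    ReadWrite,
}

pub struct Port {
    mode: Mode,
}

impl Port {
    pub fn new(mode: Mode) -> Self {
        Self { mode }
    }

    pub fn write(&self, _byte: u8) -> Result<(), String> {
        // The check happens at runtime because the mode does too.
        match self.mode {
            Mode::ReadWrite => Ok(()),
            Mode::ReadOnly => Err("port is read-only".into()),
        }
    }
}

fn main() {
    let rw = Port::new(Mode::ReadWrite);
    let ro = Port::new(Mode::ReadOnly);
    assert!(rw.write(0x42).is_ok());
    assert!(ro.write(0x42).is_err());
    println!("runtime mode checks passed");
}
```

The trade-off is explicit: the enum version compiles a rejected write into an `Err` at runtime, whereas the phantom version from earlier makes the same mistake fail to compile β€” but only when the mode is statically known.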

Phantom Type Resource Matrix

flowchart TD
    subgraph "Width Markers"
        W8["Width8"] 
        W16["Width16"]
        W32["Width32"]
    end
    subgraph "Direction Markers"
        RD["Read"]
        WR["Write"]
    end
    subgraph "Typed Resources"
        R1["Register<Width16>"]
        R2["DmaBuffer<Read>"]
        R3["DmaBuffer<Write>"]
    end
    W16 --> R1
    RD --> R2
    WR --> R3
    R2 -.->|"write attempt"| ERR["❌ Compile Error"]
    style W8 fill:#e1f5fe,color:#000
    style W16 fill:#e1f5fe,color:#000
    style W32 fill:#e1f5fe,color:#000
    style RD fill:#c8e6c9,color:#000
    style WR fill:#fff3e0,color:#000
    style R1 fill:#e8eaf6,color:#000
    style R2 fill:#c8e6c9,color:#000
    style R3 fill:#fff3e0,color:#000
    style ERR fill:#ffcdd2,color:#000

Exercise: Memory Region Permissions

Design phantom types for memory regions with read, write, and execute permissions:

  • MemRegion<ReadOnly> has fn read(&self, offset: usize) -> u8
  • MemRegion<ReadWrite> has both read and write
  • MemRegion<Executable> has read and fn execute(&self)
  • Writing to ReadOnly or executing ReadWrite should not compile.
Solution
use std::marker::PhantomData;

pub struct ReadOnly;
pub struct ReadWrite;
pub struct Executable;

pub struct MemRegion<Perm> {
    base: *mut u8,
    len: usize,
    _perm: PhantomData<Perm>,
}

// Read available on all permission types
impl<P> MemRegion<P> {
    pub fn read(&self, offset: usize) -> u8 {
        assert!(offset < self.len);
        // SAFETY: offset < self.len (asserted above), base is valid for len bytes.
        unsafe { *self.base.add(offset) }
    }
}

impl MemRegion<ReadWrite> {
    pub fn write(&mut self, offset: usize, val: u8) {
        assert!(offset < self.len);
        // SAFETY: offset < self.len (asserted above), base is valid for len bytes,
        // and &mut self ensures exclusive access.
        unsafe { *self.base.add(offset) = val; }
    }
}

impl MemRegion<Executable> {
    pub fn execute(&self) {
        // Jump to base address (conceptual)
    }
}

// ❌ region_ro.write(0, 0xFF);  // Compile error: no method `write`
// ❌ region_rw.execute();       // Compile error: no method `execute`

Key Takeaways

  1. PhantomData carries type-level information at zero size β€” the marker exists only for the compiler.
  2. Register width mismatches become compile errors β€” Register<Width16> returns u16, not u32.
  3. DMA direction is enforced structurally β€” DmaBuffer<Read> has no write() method.
  4. Combine with dimensional types (ch06) β€” Register<Width16> can return Celsius via the parse step.
  5. Phantom types are compile-time only β€” they don’t work for runtime-variable attributes; use enums for those.

Const Fn β€” Compile-Time Correctness Proofs πŸ”΄

What you’ll learn: How const fn and assert! turn the compiler into a proof engine β€” verifying SRAM memory maps, register layouts, protocol frames, bitfield masks, clock trees, and lookup tables at compile time with zero runtime cost.

Cross-references: ch04 (capability tokens), ch06 (dimensional analysis), ch09 (phantom types)

The Problem: Memory Maps That Lie

In embedded and systems programming, memory maps are the foundation of everything β€” they define where bootloaders, firmware, data sections, and stacks live. Get a boundary wrong, and two subsystems silently corrupt each other. In C, these maps are typically #define constants with no structural relationship:

/* STM32F4 SRAM layout β€” 256 KB at 0x20000000 */
#define SRAM_BASE       0x20000000
#define SRAM_SIZE       (256 * 1024)

#define BOOT_BASE       0x20000000
#define BOOT_SIZE       (16 * 1024)

#define FW_BASE         0x20004000
#define FW_SIZE         (128 * 1024)

#define DATA_BASE       0x20024000
#define DATA_SIZE       (80 * 1024)     /* Someone bumped this from 64K to 80K */

#define STACK_BASE      0x20038000
#define STACK_SIZE      (48 * 1024)     /* 0x20038000 + 48K = 0x20044000 β€” past SRAM end! */

The bug: 16 + 128 + 80 + 48 = 272 KB, but SRAM is only 256 KB. The stack extends 16 KB past the end of physical memory. No compiler warning, no linker error, no runtime check β€” just silent corruption when the stack grows into unmapped space.

Every failure mode is discovered after deployment β€” potentially as a mysterious crash that only happens under heavy stack usage, weeks after the data section was resized.

Const Fn: Turning the Compiler into a Proof Engine

Rust’s const fn functions can run at compile time. When a const fn panics during compile-time evaluation, the panic becomes a compile error. Combined with assert!, this turns the compiler into a theorem prover for your invariants:

pub const fn checked_add(a: u32, b: u32) -> u32 {
    let sum = a as u64 + b as u64;
    assert!(sum <= u32::MAX as u64, "overflow");
    sum as u32
}

// βœ… Compiles β€” 100 + 200 fits in u32
const X: u32 = checked_add(100, 200);

// ❌ Compile error: "overflow"
// const Y: u32 = checked_add(u32::MAX, 1);

fn main() {
    println!("{X}");
}

The key insight: const fn + assert! = a proof obligation. Each assertion is a theorem that the compiler must verify. If the proof fails, the program does not compile. No test suite needed, no code review catch β€” the compiler itself is the auditor.

Building a Verified SRAM Memory Map

The Region Type

A Region represents a contiguous block of memory. Its constructor is a const fn that enforces basic validity:

#[derive(Debug, Clone, Copy)]
pub struct Region {
    pub base: u32,
    pub size: u32,
}

impl Region {
    /// Create a region. Panics at compile time if invariants fail.
    pub const fn new(base: u32, size: u32) -> Self {
        assert!(size > 0, "region size must be non-zero");
        assert!(
            base as u64 + size as u64 <= u32::MAX as u64,
            "region overflows 32-bit address space"
        );
        Self { base, size }
    }

    pub const fn end(&self) -> u32 {
        self.base + self.size
    }

    /// True if `inner` fits entirely within `self`.
    pub const fn contains(&self, inner: &Region) -> bool {
        inner.base >= self.base && inner.end() <= self.end()
    }

    /// True if two regions share any addresses.
    pub const fn overlaps(&self, other: &Region) -> bool {
        self.base < other.end() && other.base < self.end()
    }

    /// True if `addr` falls within this region.
    pub const fn contains_addr(&self, addr: u32) -> bool {
        addr >= self.base && addr < self.end()
    }
}

// Every Region is born valid β€” you cannot construct an invalid one
const R: Region = Region::new(0x2000_0000, 1024);

fn main() {
    println!("Region: {:#010X}..{:#010X}", R.base, R.end());
}

The Verified Memory Map

Now we compose regions into a full SRAM map. The constructor proves six overlap-freedom invariants and four containment invariants β€” all at compile time:

#[derive(Debug, Clone, Copy)]
pub struct Region { pub base: u32, pub size: u32 }
impl Region {
    pub const fn new(base: u32, size: u32) -> Self {
        assert!(size > 0, "region size must be non-zero");
        assert!(base as u64 + size as u64 <= u32::MAX as u64, "overflow");
        Self { base, size }
    }
    pub const fn end(&self) -> u32 { self.base + self.size }
    pub const fn contains(&self, inner: &Region) -> bool {
        inner.base >= self.base && inner.end() <= self.end()
    }
    pub const fn overlaps(&self, other: &Region) -> bool {
        self.base < other.end() && other.base < self.end()
    }
}
pub struct SramMap {
    pub total:      Region,
    pub bootloader: Region,
    pub firmware:   Region,
    pub data:       Region,
    pub stack:      Region,
}

impl SramMap {
    pub const fn verified(
        total: Region,
        bootloader: Region,
        firmware: Region,
        data: Region,
        stack: Region,
    ) -> Self {
        // ── Containment: every sub-region fits within total SRAM ──
        assert!(total.contains(&bootloader), "bootloader exceeds SRAM");
        assert!(total.contains(&firmware),   "firmware exceeds SRAM");
        assert!(total.contains(&data),       "data section exceeds SRAM");
        assert!(total.contains(&stack),      "stack exceeds SRAM");

        // ── Overlap freedom: no pair of sub-regions shares an address ──
        assert!(!bootloader.overlaps(&firmware), "bootloader/firmware overlap");
        assert!(!bootloader.overlaps(&data),     "bootloader/data overlap");
        assert!(!bootloader.overlaps(&stack),    "bootloader/stack overlap");
        assert!(!firmware.overlaps(&data),       "firmware/data overlap");
        assert!(!firmware.overlaps(&stack),      "firmware/stack overlap");
        assert!(!data.overlaps(&stack),          "data/stack overlap");

        Self { total, bootloader, firmware, data, stack }
    }
}

// βœ… All 10 invariants verified at compile time β€” zero runtime cost
const SRAM: SramMap = SramMap::verified(
    Region::new(0x2000_0000, 256 * 1024),   // 256 KB total SRAM
    Region::new(0x2000_0000,  16 * 1024),   // bootloader: 16 KB
    Region::new(0x2000_4000, 128 * 1024),   // firmware:  128 KB
    Region::new(0x2002_4000,  64 * 1024),   // data:       64 KB
    Region::new(0x2003_4000,  48 * 1024),   // stack:      48 KB
);

fn main() {
    println!("SRAM:  {:#010X} β€” {} KB", SRAM.total.base, SRAM.total.size / 1024);
    println!("Boot:  {:#010X} β€” {} KB", SRAM.bootloader.base, SRAM.bootloader.size / 1024);
    println!("FW:    {:#010X} β€” {} KB", SRAM.firmware.base, SRAM.firmware.size / 1024);
    println!("Data:  {:#010X} β€” {} KB", SRAM.data.base, SRAM.data.size / 1024);
    println!("Stack: {:#010X} β€” {} KB", SRAM.stack.base, SRAM.stack.size / 1024);
}

Ten compile-time checks, zero runtime instructions. The binary contains only the verified constants.

Breaking the Map

Suppose someone increases the data section from 64 KB to 80 KB without adjusting anything else:

// ❌ Does not compile
const BAD_SRAM: SramMap = SramMap::verified(
    Region::new(0x2000_0000, 256 * 1024),
    Region::new(0x2000_0000,  16 * 1024),
    Region::new(0x2000_4000, 128 * 1024),
    Region::new(0x2002_4000,  80 * 1024),   // 80 KB β€” 16 KB too large
    Region::new(0x2003_8000,  48 * 1024),   // stack pushed past SRAM end
);

The compiler reports:

error[E0080]: evaluation of constant value failed
  --> src/main.rs:38:9
   |
38 |         assert!(total.contains(&stack), "stack exceeds SRAM");
   |         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
   |         the evaluated program panicked at 'stack exceeds SRAM'

The bug that would have been a mysterious field failure is now a compile error. No unit test needed, no code review catch β€” the compiler proves it impossible. Compare this to C, where the same bug would ship silently and surface as a stack corruption months later in the field.

Layering Access Control with Phantom Types

Combine const fn verification with phantom-typed access permissions (ch09) to enforce read/write constraints at the type level:

use std::marker::PhantomData;

pub struct ReadOnly;
pub struct ReadWrite;

pub struct TypedRegion<Access> {
    base: u32,
    size: u32,
    _access: PhantomData<Access>,
}

impl<A> TypedRegion<A> {
    pub const fn new(base: u32, size: u32) -> Self {
        assert!(size > 0, "region size must be non-zero");
        Self { base, size, _access: PhantomData }
    }
}

// Read is available for any access level
fn read_word<A>(region: &TypedRegion<A>, offset: u32) -> u32 {
    assert!(offset + 4 <= region.size, "read out of bounds");
    // In real firmware: unsafe { core::ptr::read_volatile((region.base + offset) as *const u32) }
    0 // stub
}

// Write requires ReadWrite β€” the function signature enforces it
fn write_word(region: &TypedRegion<ReadWrite>, offset: u32, value: u32) {
    assert!(offset + 4 <= region.size, "write out of bounds");
    // In real firmware: unsafe { core::ptr::write_volatile(...) }
    let _ = value; // stub
}

const BOOTLOADER: TypedRegion<ReadOnly>  = TypedRegion::new(0x2000_0000, 16 * 1024);
const DATA:       TypedRegion<ReadWrite> = TypedRegion::new(0x2002_4000, 64 * 1024);

fn main() {
    read_word(&BOOTLOADER, 0);      // βœ… read from read-only region
    read_word(&DATA, 0);            // βœ… read from read-write region
    write_word(&DATA, 0, 42);       // βœ… write to read-write region
    // write_word(&BOOTLOADER, 0, 42); // ❌ Compile error: expected ReadWrite, found ReadOnly
}

The bootloader region is physically writeable (it’s SRAM), but the type system prevents accidental writes. This distinction between hardware capability and software permission is exactly what correct-by-construction means.

Pointer Provenance: Proving Addresses Belong to Regions

Taking it further, we can create verified addresses β€” values that are statically proven to lie within a specific region:

#[derive(Debug, Clone, Copy)]
pub struct Region { pub base: u32, pub size: u32 }
impl Region {
    pub const fn new(base: u32, size: u32) -> Self {
        assert!(size > 0);
        assert!(base as u64 + size as u64 <= u32::MAX as u64);
        Self { base, size }
    }
    pub const fn end(&self) -> u32 { self.base + self.size }
    pub const fn contains_addr(&self, addr: u32) -> bool {
        addr >= self.base && addr < self.end()
    }
}
/// An address proven at compile time to lie within a Region.
pub struct VerifiedAddr {
    addr: u32, // private β€” can only be created through the checked constructor
}

impl VerifiedAddr {
    /// Panics at compile time if `addr` is outside `region`.
    pub const fn new(region: &Region, addr: u32) -> Self {
        assert!(region.contains_addr(addr), "address outside region");
        Self { addr }
    }

    pub const fn raw(&self) -> u32 {
        self.addr
    }
}

const DATA: Region = Region::new(0x2002_4000, 64 * 1024);

// βœ… Proven at compile time to be inside the data region
const STATUS_WORD: VerifiedAddr = VerifiedAddr::new(&DATA, 0x2002_4000);
const CONFIG_WORD: VerifiedAddr = VerifiedAddr::new(&DATA, 0x2002_5000);

// ❌ Would not compile: address is in the bootloader region, not data
// const BAD_ADDR: VerifiedAddr = VerifiedAddr::new(&DATA, 0x2000_0000);

fn main() {
    println!("Status register at {:#010X}", STATUS_WORD.raw());
    println!("Config register at {:#010X}", CONFIG_WORD.raw());
}

Provenance established at compile time β€” no runtime bounds check needed when accessing these addresses. The constructor is private, so a VerifiedAddr can only exist if the compiler has proven it valid.
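Functions can then demand this proof in their signatures: anything that accepts a `VerifiedAddr` knows the address is in range without re-checking it. A self-contained sketch that repeats the `Region`/`VerifiedAddr` definitions from above (the `read_word` helper and its stubbed body are illustrative β€” real firmware would do a volatile read):

```rust
#[derive(Debug, Clone, Copy)]
pub struct Region { pub base: u32, pub size: u32 }
impl Region {
    pub const fn new(base: u32, size: u32) -> Self {
        assert!(size > 0);
        assert!(base as u64 + size as u64 <= u32::MAX as u64);
        Self { base, size }
    }
    pub const fn end(&self) -> u32 { self.base + self.size }
    pub const fn contains_addr(&self, addr: u32) -> bool {
        addr >= self.base && addr < self.end()
    }
}

pub struct VerifiedAddr { addr: u32 }
impl VerifiedAddr {
    pub const fn new(region: &Region, addr: u32) -> Self {
        assert!(region.contains_addr(addr), "address outside region");
        Self { addr }
    }
    pub const fn raw(&self) -> u32 { self.addr }
}

const DATA: Region = Region::new(0x2002_4000, 64 * 1024);
const STATUS_WORD: VerifiedAddr = VerifiedAddr::new(&DATA, 0x2002_4000);

/// No bounds check in the body: a VerifiedAddr is in range by construction.
/// In real firmware: unsafe { core::ptr::read_volatile(addr.raw() as *const u32) }
fn read_word(addr: &VerifiedAddr) -> u32 {
    let _ = addr.raw();
    0 // stub
}

fn main() {
    assert_eq!(read_word(&STATUS_WORD), 0);
    println!("read status word at {:#010X}", STATUS_WORD.raw());
}
```

The signature `fn read_word(addr: &VerifiedAddr)` is itself the contract: callers cannot fabricate an unchecked address, so the function body needs no defensive check.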

Beyond Memory Maps

The const fn proof pattern applies wherever you have compile-time-known values with structural invariants. The SRAM map above proved inter-region properties (containment, non-overlap). The same technique scales to increasingly fine-grained domains:

flowchart TD
    subgraph coarse["Coarse-Grained"]
        MEM["Memory Maps<br/>regions don't overlap"]
        REG["Register Maps<br/>offsets are aligned & disjoint"]
    end

    subgraph fine["Fine-Grained"]
        BIT["Bitfield Layouts<br/>masks are disjoint within a register"]
        FRAME["Protocol Frames<br/>fields are contiguous, total ≀ max"]
    end

    subgraph derived["Derived-Value Chains"]
        PLL["Clock Trees / PLL<br/>each intermediate freq in range"]
        LUT["Lookup Tables<br/>computed & verified at compile time"]
    end

    MEM --> REG --> BIT
    MEM --> FRAME
    REG --> PLL
    PLL --> LUT

    style MEM fill:#c8e6c9,color:#000
    style REG fill:#c8e6c9,color:#000
    style BIT fill:#e1f5fe,color:#000
    style FRAME fill:#e1f5fe,color:#000
    style PLL fill:#fff3e0,color:#000
    style LUT fill:#fff3e0,color:#000

Each subsection below follows the same pattern: define a type with a const fn constructor that encodes the invariants, then use const _: () = { ... } or a const binding to trigger verification.

Register Maps

Hardware register blocks have fixed offsets and widths. A misaligned or overlapping register definition is always a bug:

#[derive(Debug, Clone, Copy)]
pub struct Register {
    pub offset: u32,
    pub width: u32,
}

impl Register {
    pub const fn new(offset: u32, width: u32) -> Self {
        assert!(
            width == 1 || width == 2 || width == 4,
            "register width must be 1, 2, or 4 bytes"
        );
        assert!(offset % width == 0, "register must be naturally aligned");
        Self { offset, width }
    }

    pub const fn end(&self) -> u32 {
        self.offset + self.width
    }
}

const fn disjoint(a: &Register, b: &Register) -> bool {
    a.end() <= b.offset || b.end() <= a.offset
}

// UART peripheral registers
const DATA:   Register = Register::new(0x00, 4);
const STATUS: Register = Register::new(0x04, 4);
const CTRL:   Register = Register::new(0x08, 4);
const BAUD:   Register = Register::new(0x0C, 4);

// Compile-time proof: no register overlaps another
const _: () = {
    assert!(disjoint(&DATA,   &STATUS));
    assert!(disjoint(&DATA,   &CTRL));
    assert!(disjoint(&DATA,   &BAUD));
    assert!(disjoint(&STATUS, &CTRL));
    assert!(disjoint(&STATUS, &BAUD));
    assert!(disjoint(&CTRL,   &BAUD));
};

fn main() {
    println!("UART DATA:   offset={:#04X}, width={}", DATA.offset, DATA.width);
    println!("UART STATUS: offset={:#04X}, width={}", STATUS.offset, STATUS.width);
}

Note the const _: () = { ... }; idiom β€” an unnamed constant whose only purpose is to run compile-time assertions. If any assertion fails, the constant can’t be evaluated and compilation stops.

Mini-Exercise: SPI Register Bank

Given these SPI controller registers, add const fn assertions proving:

  1. Every register is naturally aligned (offset % width == 0)
  2. No two registers overlap
  3. All registers fit within a 64-byte register block
Hint

Reuse the Register and disjoint functions from the UART example above. Define three or four const Register values (e.g., CTRL at offset 0x00 width 4, STATUS at 0x04 width 4, TX_DATA at 0x08 width 1, RX_DATA at 0x0C width 1) and assert the three properties.
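One possible solution, reusing the `Register` and `disjoint` definitions and the offsets suggested in the hint (the 64-byte block size comes from the exercise's third requirement):

```rust
#[derive(Debug, Clone, Copy)]
pub struct Register { pub offset: u32, pub width: u32 }
impl Register {
    pub const fn new(offset: u32, width: u32) -> Self {
        assert!(width == 1 || width == 2 || width == 4,
            "register width must be 1, 2, or 4 bytes");
        // Property 1 (natural alignment) is enforced right here.
        assert!(offset % width == 0, "register must be naturally aligned");
        Self { offset, width }
    }
    pub const fn end(&self) -> u32 { self.offset + self.width }
}

const fn disjoint(a: &Register, b: &Register) -> bool {
    a.end() <= b.offset || b.end() <= a.offset
}

// SPI controller registers from the hint
const CTRL:    Register = Register::new(0x00, 4);
const STATUS:  Register = Register::new(0x04, 4);
const TX_DATA: Register = Register::new(0x08, 1);
const RX_DATA: Register = Register::new(0x0C, 1);

const SPI_BLOCK_SIZE: u32 = 64;

// Properties 2 and 3: pairwise disjointness and block containment
const _: () = {
    assert!(disjoint(&CTRL,    &STATUS));
    assert!(disjoint(&CTRL,    &TX_DATA));
    assert!(disjoint(&CTRL,    &RX_DATA));
    assert!(disjoint(&STATUS,  &TX_DATA));
    assert!(disjoint(&STATUS,  &RX_DATA));
    assert!(disjoint(&TX_DATA, &RX_DATA));
    assert!(CTRL.end()    <= SPI_BLOCK_SIZE);
    assert!(STATUS.end()  <= SPI_BLOCK_SIZE);
    assert!(TX_DATA.end() <= SPI_BLOCK_SIZE);
    assert!(RX_DATA.end() <= SPI_BLOCK_SIZE);
};

fn main() {
    println!("SPI register bank verified within {} bytes", SPI_BLOCK_SIZE);
}
```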

Protocol Frame Layouts

Network or bus protocol frames have fields at specific offsets. The then() method makes contiguity structural β€” gaps and overlaps are impossible by construction:

#[derive(Debug, Clone, Copy)]
pub struct Field {
    pub offset: usize,
    pub size: usize,
}

impl Field {
    pub const fn new(offset: usize, size: usize) -> Self {
        assert!(size > 0, "field size must be non-zero");
        Self { offset, size }
    }

    pub const fn end(&self) -> usize {
        self.offset + self.size
    }

    /// Create the next field immediately after this one.
    pub const fn then(&self, size: usize) -> Field {
        Field::new(self.end(), size)
    }
}

const MAX_FRAME: usize = 256;

const HEADER:  Field = Field::new(0, 4);
const SEQ_NUM: Field = HEADER.then(2);
const PAYLOAD: Field = SEQ_NUM.then(246);
const CRC:     Field = PAYLOAD.then(4);

// Compile-time proof: frame fits within maximum size
const _: () = assert!(CRC.end() <= MAX_FRAME, "frame exceeds maximum size");

fn main() {
    println!("Header:  [{}..{})", HEADER.offset, HEADER.end());
    println!("SeqNum:  [{}..{})", SEQ_NUM.offset, SEQ_NUM.end());
    println!("Payload: [{}..{})", PAYLOAD.offset, PAYLOAD.end());
    println!("CRC:     [{}..{})", CRC.offset, CRC.end());
    println!("Total:   {}/{} bytes", CRC.end(), MAX_FRAME);
}

Fields are contiguous by construction β€” each starts exactly where the previous one ends. The final assertion proves the frame fits within the protocol’s maximum size.

Inline Const Blocks for Generic Validation

Since Rust 1.79, const { ... } blocks let you validate const generic parameters at the point of use β€” perfect for DMA buffer size constraints or alignment requirements:

fn dma_transfer<const N: usize>(buf: &[u8; N]) {
    const { assert!(N % 4 == 0, "DMA buffer must be 4-byte aligned in size") };
    const { assert!(N <= 65536, "DMA transfer exceeds maximum size") };
    // ... initiate transfer ...
}

fn main() {
    dma_transfer(&[0u8; 1024]);   // βœ… 1024 is divisible by 4 and ≀ 65536
    // dma_transfer(&[0u8; 1023]); // ❌ Compile error: not 4-byte aligned
}

The assertions are evaluated when the function is monomorphized β€” each call site with a different N gets its own compile-time check.

Bitfield Layouts Within a Register

Register maps prove that registers don’t overlap each other β€” but what about the bits within a single register? Control registers pack multiple fields into one word. If two fields share a bit position, reads and writes silently corrupt each other. In C, this is typically caught (or not) by manual review of mask constants.

A const fn can prove that every field’s mask/shift pair is disjoint from every other field in the same register:

#[derive(Debug, Clone, Copy)]
pub struct BitField {
    pub mask: u32,
    pub shift: u8,
}

impl BitField {
    pub const fn new(shift: u8, width: u8) -> Self {
        assert!(width > 0, "bit field width must be non-zero");
        assert!(shift as u32 + width as u32 <= 32, "bit field exceeds 32-bit register");
        // Build mask: `width` ones starting at bit `shift`
        let mask = ((1u64 << width as u64) - 1) as u32;
        Self { mask: mask << shift as u32, shift }
    }

    pub const fn positioned_mask(&self) -> u32 {
        self.mask
    }

    pub const fn encode(&self, value: u32) -> u32 {
        assert!((value & !(self.mask >> self.shift as u32)) == 0, "value exceeds field width");
        value << self.shift as u32
    }
}

const fn fields_disjoint(a: &BitField, b: &BitField) -> bool {
    a.positioned_mask() & b.positioned_mask() == 0
}

// SPI Control Register fields: enable[0], mode[1:2], clock_div[4:7], irq_en[8]
const SPI_EN:     BitField = BitField::new(0, 1);   // bit 0
const SPI_MODE:   BitField = BitField::new(1, 2);   // bits 1–2
const SPI_CLKDIV: BitField = BitField::new(4, 4);   // bits 4–7
const SPI_IRQ:    BitField = BitField::new(8, 1);   // bit 8

// Compile-time proof: no field shares a bit position
const _: () = {
    assert!(fields_disjoint(&SPI_EN,   &SPI_MODE));
    assert!(fields_disjoint(&SPI_EN,   &SPI_CLKDIV));
    assert!(fields_disjoint(&SPI_EN,   &SPI_IRQ));
    assert!(fields_disjoint(&SPI_MODE, &SPI_CLKDIV));
    assert!(fields_disjoint(&SPI_MODE, &SPI_IRQ));
    assert!(fields_disjoint(&SPI_CLKDIV, &SPI_IRQ));
};

fn main() {
    let ctrl = SPI_EN.encode(1)
             | SPI_MODE.encode(0b10)
             | SPI_CLKDIV.encode(0b0110)
             | SPI_IRQ.encode(1);
    println!("SPI_CTRL = {:#010b} ({:#06X})", ctrl, ctrl);
}

This complements the register map pattern above β€” register maps prove inter-register disjointness while bitfield layouts prove intra-register disjointness. Together they provide full coverage from the register block down to individual bits.
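The inverse operation β€” extracting a field from a full register word β€” follows the same mask/shift discipline. A condensed sketch adding a `decode` method (this variant's `encode` masks the value instead of asserting, a simplification of the version above):

```rust
#[derive(Debug, Clone, Copy)]
pub struct BitField {
    pub mask: u32,
    pub shift: u8,
}

impl BitField {
    pub const fn new(shift: u8, width: u8) -> Self {
        assert!(width > 0 && shift as u32 + width as u32 <= 32);
        let mask = (((1u64 << width) - 1) as u32) << shift as u32;
        Self { mask, shift }
    }

    /// Position a value into this field (excess bits masked off).
    pub const fn encode(&self, value: u32) -> u32 {
        (value << self.shift as u32) & self.mask
    }

    /// Extract this field's value from a full register word.
    pub const fn decode(&self, reg: u32) -> u32 {
        (reg & self.mask) >> self.shift as u32
    }
}

const SPI_MODE:   BitField = BitField::new(1, 2);   // bits 1–2
const SPI_CLKDIV: BitField = BitField::new(4, 4);   // bits 4–7

fn main() {
    let ctrl = SPI_MODE.encode(0b10) | SPI_CLKDIV.encode(0b0110);
    // Because the fields are disjoint, each round-trips independently.
    assert_eq!(SPI_MODE.decode(ctrl), 0b10);
    assert_eq!(SPI_CLKDIV.decode(ctrl), 0b0110);
    println!("round-trip ok: ctrl = {:#010b}", ctrl);
}
```

Disjointness is what makes the round-trip sound: if two masks shared a bit, one field's `encode` could change what the other's `decode` returns.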

Clock Tree / PLL Configuration

Microcontrollers derive peripheral clocks through multiplier/divider chains. A PLL produces f_vco = f_in Γ— N / M, and the VCO frequency must stay within a hardware-specified range. Get any parameter wrong for a specific board, and the chip outputs garbage clocks or refuses to lock. These constraints are perfect for const fn:

#[derive(Debug, Clone, Copy)]
pub struct PllConfig {
    pub input_khz: u32,     // external oscillator
    pub m: u32,             // input divider
    pub n: u32,             // VCO multiplier
    pub p: u32,             // system clock divider
}

impl PllConfig {
    pub const fn verified(input_khz: u32, m: u32, n: u32, p: u32) -> Self {
        // Input divider produces the PLL input frequency
        let pll_input = input_khz / m;
        assert!(pll_input >= 1_000 && pll_input <= 2_000,
            "PLL input must be 1–2 MHz");

        // VCO frequency must be within hardware limits
        let vco = pll_input as u64 * n as u64;
        assert!(vco >= 192_000 && vco <= 432_000,
            "VCO must be 192–432 MHz");

        // System clock divider must be even (hardware constraint)
        assert!(p == 2 || p == 4 || p == 6 || p == 8,
            "P must be 2, 4, 6, or 8");

        // Final system clock
        let sysclk = vco / p as u64;
        assert!(sysclk <= 168_000,
            "system clock exceeds 168 MHz maximum");

        Self { input_khz, m, n, p }
    }

    pub const fn vco_khz(&self) -> u32 {
        (self.input_khz / self.m) * self.n
    }

    pub const fn sysclk_khz(&self) -> u32 {
        self.vco_khz() / self.p
    }
}

// STM32F4 with 8 MHz HSE crystal β†’ 168 MHz system clock
const PLL: PllConfig = PllConfig::verified(8_000, 8, 336, 2);

// ❌ Would not compile: VCO = 480 MHz exceeds 432 MHz limit
// const BAD: PllConfig = PllConfig::verified(8_000, 8, 480, 2);

fn main() {
    println!("VCO:    {} MHz", PLL.vco_khz() / 1_000);
    println!("SYSCLK: {} MHz", PLL.sysclk_khz() / 1_000);
}

Uncommenting the BAD constant produces a compile-time error that pinpoints the violated constraint:

error[E0080]: evaluation of constant value failed
  --> src/main.rs:18:9
   |
18 |         assert!(vco >= 192_000 && vco <= 432_000,
   |         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
   |         the evaluated program panicked at 'VCO must be 192–432 MHz'

The compiler catches the constraint violation in the middle of the derivation chain β€” not at the end. If you had instead violated the system clock limit (sysclk > 168 MHz), the error message would point to that assertion instead.

Derived-value constraint chains turn a single const fn into a multi-stage proof: the VCO frequency depends on input / M Γ— N, the system clock depends on VCO / P, and each intermediate value has its own hardware-mandated range. Changing one parameter (e.g., swapping to a 25 MHz crystal) immediately surfaces any downstream violation.
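For example, moving to a 25 MHz crystal needs only M = 25 to keep the PLL input at 1 MHz, so N and P can stay unchanged β€” and the verified constructor re-proves the whole chain. A sketch using a condensed `PllConfig` equivalent to the one above:

```rust
#[derive(Debug, Clone, Copy)]
pub struct PllConfig {
    pub input_khz: u32,
    pub m: u32,
    pub n: u32,
    pub p: u32,
}

impl PllConfig {
    pub const fn verified(input_khz: u32, m: u32, n: u32, p: u32) -> Self {
        let pll_input = input_khz / m;
        assert!(pll_input >= 1_000 && pll_input <= 2_000, "PLL input must be 1-2 MHz");
        let vco = pll_input as u64 * n as u64;
        assert!(vco >= 192_000 && vco <= 432_000, "VCO must be 192-432 MHz");
        assert!(p == 2 || p == 4 || p == 6 || p == 8, "P must be 2, 4, 6, or 8");
        assert!(vco / p as u64 <= 168_000, "system clock exceeds 168 MHz maximum");
        Self { input_khz, m, n, p }
    }

    pub const fn sysclk_khz(&self) -> u32 {
        (self.input_khz / self.m) * self.n / self.p
    }
}

// 25 MHz HSE crystal: M = 25 restores the 1 MHz PLL input; N and P unchanged.
const PLL_25MHZ: PllConfig = PllConfig::verified(25_000, 25, 336, 2);

fn main() {
    assert_eq!(PLL_25MHZ.sysclk_khz(), 168_000);
    println!("SYSCLK: {} MHz", PLL_25MHZ.sysclk_khz() / 1_000);
}
```

Had M been left at 8, the first assertion would fail at compile time (25 MHz / 8 β‰ˆ 3.1 MHz is outside the 1–2 MHz window), pinpointing exactly which stage of the chain broke.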

Compile-Time Lookup Tables

const fn can compute entire lookup tables at compile time, placing them in .rodata with zero startup cost. This is especially valuable for CRC tables, trigonometry, encoding maps, and error-correction codes β€” anywhere you’d normally use a build script or code generation:

const fn crc32_table() -> [u32; 256] {
    let mut table = [0u32; 256];
    let mut i: usize = 0;
    while i < 256 {
        let mut crc = i as u32;
        let mut j = 0;
        while j < 8 {
            if crc & 1 != 0 {
                crc = (crc >> 1) ^ 0xEDB8_8320; // standard CRC-32 polynomial
            } else {
                crc >>= 1;
            }
            j += 1;
        }
        table[i] = crc;
        i += 1;
    }
    table
}

/// Full CRC-32 table β€” computed at compile time, placed in .rodata
const CRC32_TABLE: [u32; 256] = crc32_table();

/// Compute CRC-32 over a byte slice at runtime using the precomputed table.
fn crc32(data: &[u8]) -> u32 {
    let mut crc: u32 = !0;
    for &byte in data {
        let index = ((crc ^ byte as u32) & 0xFF) as usize;
        crc = (crc >> 8) ^ CRC32_TABLE[index];
    }
    !crc
}

// Compile-time smoke test on the generated table
const _: () = {
    // Verify known entries against the reference CRC-32 table
    assert!(CRC32_TABLE[0] == 0x0000_0000);
    assert!(CRC32_TABLE[1] == 0x7707_3096);
};

fn main() {
    let check = crc32(b"123456789");
    // Known CRC-32 of "123456789" is 0xCBF43926
    assert_eq!(check, 0xCBF4_3926);
    println!("CRC-32 of '123456789' = {:#010X} βœ“", check);
    println!("Table size: {} entries Γ— 4 bytes = {} bytes in .rodata",
        CRC32_TABLE.len(), CRC32_TABLE.len() * 4);
}

The crc32_table() function runs entirely during compilation. The resulting 1 KB table is baked into the binary’s read-only data section β€” no allocator, no initialization code, no startup cost. Compare this with a C approach that either uses a code generator or computes the table at startup. The Rust version needs neither, and the const _ assertions spot-check the generated table against known reference values at compile time.
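The smoke test can be pushed further: because while loops and slice indexing work in const fn, the entire check-value computation can run at compile time, turning the well-known CRC of "123456789" into a proof. A sketch reusing the table function from above (crc32_const mirrors the runtime crc32 but is const):

```rust
const fn crc32_table() -> [u32; 256] {
    let mut table = [0u32; 256];
    let mut i: usize = 0;
    while i < 256 {
        let mut crc = i as u32;
        let mut j = 0;
        while j < 8 {
            if crc & 1 != 0 {
                crc = (crc >> 1) ^ 0xEDB8_8320; // standard CRC-32 polynomial
            } else {
                crc >>= 1;
            }
            j += 1;
        }
        table[i] = crc;
        i += 1;
    }
    table
}

const CRC32_TABLE: [u32; 256] = crc32_table();

/// Same algorithm as the runtime crc32(), but const β€” usable in const items.
const fn crc32_const(data: &[u8]) -> u32 {
    let mut crc: u32 = !0;
    let mut i = 0;
    while i < data.len() {
        crc = (crc >> 8) ^ CRC32_TABLE[((crc ^ data[i] as u32) & 0xFF) as usize];
        i += 1;
    }
    !crc
}

// The reference check value is now itself verified at compile time.
const _: () = assert!(crc32_const(b"123456789") == 0xCBF4_3926);

fn main() {
    println!("compile-time CRC check: {:#010X}", crc32_const(b"123456789"));
}
```

If anyone alters the polynomial or the table generator, the `const _` assertion fails and the program stops compiling β€” the check value acts as a regression test with no test harness.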

When to Use Const Fn Proofs

| Scenario | Recommendation |
|----------|----------------|
| Memory maps, register offsets, partition tables | βœ… Always |
| Protocol frame layouts with fixed fields | βœ… Always |
| Bitfield masks within a register | βœ… Always |
| Clock tree / PLL parameter chains | βœ… Always |
| Lookup tables (CRC, trig, encoding) | βœ… Always β€” zero startup cost |
| Constants with cross-value invariants (non-overlap, sum ≀ bound) | βœ… Always |
| Configuration values with domain constraints | βœ… When values are known at compile time |
| Values computed from user input or files | ❌ Use runtime validation |
| Highly dynamic structures (trees, graphs) | ❌ Use property-based testing |
| Single-value range checks | ⚠️ Consider newtype + From instead (ch07) |

Cost Summary

| What | Runtime cost |
|------|--------------|
| const fn assertions (assert!, panic!) | Compile time only β€” 0 instructions |
| const _: () = { ... } validation blocks | Compile time only β€” not in binary |
| Region, Register, Field structs | Plain data β€” same layout as raw integers |
| Inline const { } generic validation | Monomorphised at compile time β€” 0 cost |
| Lookup tables (crc32_table()) | Computed at compile time β€” placed in .rodata |
| Phantom-typed access markers (TypedRegion<RW>) | Zero-sized β€” optimised away |

Every row is zero runtime cost β€” the proofs exist only during compilation. The resulting binary contains only the verified constants and lookup tables, with no assertion-checking code.

Exercise: Flash Partition Map

Design a verified flash partition map for a 1 MB NOR flash starting at 0x0800_0000. Requirements:

  1. Four partitions: bootloader (64 KB), application (640 KB), config (64 KB), OTA staging (256 KB)
  2. Every partition must be 4 KB aligned (flash erase granularity): both base and size must be multiples of 4096
  3. No partition may overlap another
  4. All partitions must fit within flash
  5. Add a const fn total_used() that returns the sum of all partition sizes and assert it equals 1 MB
Solution
#[derive(Debug, Clone, Copy)]
pub struct FlashRegion {
    pub base: u32,
    pub size: u32,
}

impl FlashRegion {
    pub const fn new(base: u32, size: u32) -> Self {
        assert!(size > 0, "partition size must be non-zero");
        assert!(base % 4096 == 0, "partition base must be 4 KB aligned");
        assert!(size % 4096 == 0, "partition size must be 4 KB aligned");
        assert!(
            base as u64 + size as u64 <= u32::MAX as u64,
            "partition overflows address space"
        );
        Self { base, size }
    }

    pub const fn end(&self) -> u32 { self.base + self.size }

    pub const fn contains(&self, inner: &FlashRegion) -> bool {
        inner.base >= self.base && inner.end() <= self.end()
    }

    pub const fn overlaps(&self, other: &FlashRegion) -> bool {
        self.base < other.end() && other.base < self.end()
    }
}

pub struct FlashMap {
    pub total:  FlashRegion,
    pub boot:   FlashRegion,
    pub app:    FlashRegion,
    pub config: FlashRegion,
    pub ota:    FlashRegion,
}

impl FlashMap {
    pub const fn verified(
        total: FlashRegion,
        boot: FlashRegion,
        app: FlashRegion,
        config: FlashRegion,
        ota: FlashRegion,
    ) -> Self {
        assert!(total.contains(&boot),   "bootloader exceeds flash");
        assert!(total.contains(&app),    "application exceeds flash");
        assert!(total.contains(&config), "config exceeds flash");
        assert!(total.contains(&ota),    "OTA staging exceeds flash");

        assert!(!boot.overlaps(&app),    "boot/app overlap");
        assert!(!boot.overlaps(&config), "boot/config overlap");
        assert!(!boot.overlaps(&ota),    "boot/ota overlap");
        assert!(!app.overlaps(&config),  "app/config overlap");
        assert!(!app.overlaps(&ota),     "app/ota overlap");
        assert!(!config.overlaps(&ota),  "config/ota overlap");

        Self { total, boot, app, config, ota }
    }

    pub const fn total_used(&self) -> u32 {
        self.boot.size + self.app.size + self.config.size + self.ota.size
    }
}

const FLASH: FlashMap = FlashMap::verified(
    FlashRegion::new(0x0800_0000, 1024 * 1024),  // 1 MB total
    FlashRegion::new(0x0800_0000,   64 * 1024),   // bootloader: 64 KB
    FlashRegion::new(0x0801_0000,  640 * 1024),   // application: 640 KB
    FlashRegion::new(0x080B_0000,   64 * 1024),   // config: 64 KB
    FlashRegion::new(0x080C_0000,  256 * 1024),   // OTA staging: 256 KB
);

// Every byte of flash is accounted for
const _: () = assert!(
    FLASH.total_used() == 1024 * 1024,
    "partitions must exactly fill flash"
);

fn main() {
    println!("Flash map: {} KB used / {} KB total",
        FLASH.total_used() / 1024,
        FLASH.total.size / 1024);
}
flowchart LR
    subgraph compile["Compile Time β€” zero runtime cost"]
        direction TB
        RGN["Region::new()<br/>βœ… size &gt; 0<br/>βœ… no overflow"]
        MAP["SramMap::verified()<br/>βœ… containment<br/>βœ… non-overlap"]
        ACC["TypedRegion&lt;RW&gt;<br/>βœ… access control"]
        PROV["VerifiedAddr::new()<br/>βœ… provenance"]
    end

    subgraph runtime["Runtime"]
        HW["Hardware access<br/>No bounds checks<br/>No permission checks"]
    end

    RGN --> MAP --> ACC --> PROV --> HW

    style RGN fill:#c8e6c9,color:#000
    style MAP fill:#c8e6c9,color:#000
    style ACC fill:#e1f5fe,color:#000
    style PROV fill:#e1f5fe,color:#000
    style HW fill:#fff3e0,color:#000

Key Takeaways

  1. const fn + assert! = compile-time proof obligation β€” if the assertion fails during const evaluation, the program does not compile. No test needed, no code review catch β€” the compiler proves it.

  2. Memory maps are ideal candidates β€” sub-region containment, overlap freedom, total-size bounds, and alignment constraints are all expressible as const fn assertions. The C #define approach offers none of these guarantees.

  3. Phantom types layer on top β€” combine const fn (value verification) with phantom-typed access markers (permission verification) for defense in depth at zero runtime cost.

  4. Provenance can be established at compile time β€” VerifiedAddr proves at compile time that an address belongs to a specific region, eliminating runtime bounds checks on every access.

  5. The pattern generalizes beyond memory β€” register maps, bitfield masks, protocol frames, clock trees, DMA parameters β€” anywhere you have compile-time-known values with structural invariants.

  6. Bitfields and clock trees are ideal candidates β€” intra-register bit disjointness and derived-value constraint chains (VCO range, divider limits) are exactly the kind of invariant that const fn proves effortlessly.

  7. const fn replaces code generators and build scripts for lookup tables β€” CRC tables, trigonometry, encoding maps β€” computed at compile time, placed in .rodata, with zero startup cost and no external tooling.

  8. Inline const { } blocks validate generic parameters β€” since Rust 1.79, you can enforce constraints on const generics at the call site, catching misuse before any code runs.

Send & Sync β€” Compile-Time Concurrency Proofs 🟑

What you’ll learn: How Rust’s Send and Sync auto-traits turn the compiler into a concurrency auditor β€” proving at compile time which types can cross thread boundaries and which can be shared, with zero runtime cost.

Cross-references: ch04 (capability tokens), ch09 (phantom types), ch15 (const fn proofs)

The Problem: Concurrent Access Without a Safety Net

In systems programming, peripherals, shared buffers, and global state are accessed from multiple contexts β€” main loops, interrupt handlers, DMA callbacks, and worker threads. In C, the compiler offers no enforcement whatsoever:

/* Shared sensor buffer β€” accessed from main loop and ISR */
volatile uint32_t sensor_buf[64];
volatile uint32_t buf_index = 0;

void SENSOR_IRQHandler(void) {
    sensor_buf[buf_index++] = read_sensor();  /* Race: buf_index read + write */
}

void process_sensors(void) {
    for (uint32_t i = 0; i < buf_index; i++) {  /* buf_index changes mid-loop */
        process(sensor_buf[i]);                   /* Data overwritten mid-read */
    }
    buf_index = 0;                                /* ISR fires between these lines */
}

The volatile keyword prevents the compiler from optimizing away the reads, but it does nothing about data races. Two contexts can read and write buf_index simultaneously, producing torn values, lost updates, or buffer overruns. The same problem appears with pthread_mutex_t β€” the compiler will happily let you forget to lock:

pthread_mutex_t lock;
int shared_counter;

void increment(void) {
    shared_counter++;  /* Oops β€” forgot pthread_mutex_lock(&lock) */
}

Every concurrent bug is discovered at runtime β€” typically under load, in production, and intermittently.

What Send and Sync Prove

Rust defines two marker traits that the compiler derives automatically:

TraitProofInformal meaning
SendA value of type T can be safely moved to another threadβ€œThis can cross a thread boundary”
SyncA shared reference &T can be safely used by multiple threadsβ€œThis can be read from multiple threads”

These are auto-traits β€” the compiler derives them by inspecting every field. A struct is Send if all its fields are Send. A struct is Sync if all its fields are Sync. If any field opts out, the entire struct opts out. No annotation needed, no runtime overhead β€” the proof is structural.

flowchart TD
    STRUCT["Your struct"]
    INSPECT["Compiler inspects<br/>every field"]
    ALL_SEND{"All fields<br/>Send?"}
    ALL_SYNC{"All fields<br/>Sync?"}
    SEND_YES["Send βœ…<br/><i>can cross thread boundaries</i>"]
    SEND_NO["!Send ❌<br/><i>confined to one thread</i>"]
    SYNC_YES["Sync βœ…<br/><i>shareable across threads</i>"]
    SYNC_NO["!Sync ❌<br/><i>no concurrent references</i>"]

    STRUCT --> INSPECT
    INSPECT --> ALL_SEND
    INSPECT --> ALL_SYNC
    ALL_SEND -->|Yes| SEND_YES
    ALL_SEND -->|"Any field !Send<br/>(e.g., Rc, *const T)"| SEND_NO
    ALL_SYNC -->|Yes| SYNC_YES
    ALL_SYNC -->|"Any field !Sync<br/>(e.g., Cell, RefCell)"| SYNC_NO

    style SEND_YES fill:#c8e6c9,color:#000
    style SYNC_YES fill:#c8e6c9,color:#000
    style SEND_NO fill:#ffcdd2,color:#000
    style SYNC_NO fill:#ffcdd2,color:#000

The compiler is the auditor. In C, thread-safety annotations live in comments and header documentation β€” advisory, never enforced. In Rust, Send and Sync are derived from the structure of the type itself. Adding a single Cell<f32> field automatically makes the containing struct !Sync. No programmer action required, no way to forget.

The two traits are linked by a key identity:

T is Sync if and only if &T is Send.

This makes intuitive sense: if a shared reference can be safely sent to another thread, then the underlying type is safe for concurrent reads.

Types That Opt Out

Certain types are deliberately !Send or !Sync:

TypeSendSyncWhy
u32, String, Vec<T>βœ…βœ…No interior mutability, no raw pointers
Cell<T>, RefCell<T>βœ…βŒInterior mutability without synchronization
Rc<T>❌❌Reference count is not atomic
*const T, *mut T❌❌Raw pointers have no safety guarantees
Arc<T> (where T: Send + Sync)βœ…βœ…Atomic reference count
Mutex<T> (where T: Send)βœ…βœ…Lock serializes all access

Every ❌ in this table is a compile-time invariant. You cannot accidentally send an Rc to another thread β€” the compiler rejects it.

!Send Peripheral Handles

In embedded systems, a peripheral register block lives at a fixed memory address and should only be accessed from a single execution context. Raw pointers are inherently !Send and !Sync, so wrapping one automatically opts the containing type out of both traits:

/// A handle to a memory-mapped UART peripheral.
/// The raw pointer makes this automatically !Send and !Sync.
pub struct Uart {
    regs: *const u32,
}

impl Uart {
    pub fn new(base: usize) -> Self {
        Self { regs: base as *const u32 }
    }

    pub fn write_byte(&self, byte: u8) {
        // In real firmware: unsafe { write_volatile(self.regs.add(DATA_OFFSET).cast_mut(), byte as u32) }
        println!("UART TX: {:#04X}", byte);
    }
}

fn main() {
    let uart = Uart::new(0x4000_1000);
    uart.write_byte(b'A');  // βœ… Use on the creating thread

    // ❌ Would not compile: Uart is !Send
    // std::thread::spawn(move || {
    //     uart.write_byte(b'B');
    // });
}

The commented-out thread::spawn would produce:

error[E0277]: `*const u32` cannot be sent between threads safely
   |
   |     std::thread::spawn(move || {
   |     ^^^^^^^^^^^^^^^^^^ within `Uart`, the trait `Send` is not
   |                        implemented for `*const u32`

No raw pointer? Use PhantomData. Sometimes a type has no raw pointer but should still be confined to one thread β€” for example, a file descriptor index or a handle obtained from a C library:

use std::marker::PhantomData;

/// An opaque handle from a C library. PhantomData<*const ()> makes it
/// !Send + !Sync even though the inner fd is just a plain integer.
pub struct LibHandle {
    fd: i32,
    _not_send: PhantomData<*const ()>,
}

impl LibHandle {
    pub fn open(path: &str) -> Self {
        let _ = path;
        Self { fd: 42, _not_send: PhantomData }
    }

    pub fn fd(&self) -> i32 { self.fd }
}

fn main() {
    let handle = LibHandle::open("/dev/sensor0");
    println!("fd = {}", handle.fd());

    // ❌ Would not compile: LibHandle is !Send
    // std::thread::spawn(move || { let _ = handle.fd(); });
}

This is the compile-time equivalent of C’s β€œplease read the documentation that says this handle isn’t thread-safe.” In Rust, the compiler enforces it.

Mutex Transforms !Sync into Sync

Cell<T> and RefCell<T> provide interior mutability without any synchronization β€” so they’re !Sync. But sometimes you genuinely need to share mutable state across threads. Mutex<T> adds the missing synchronization, and the compiler recognizes this:

If T: Send, then Mutex<T>: Send + Sync.

The lock serializes all access, so the !Sync inner type becomes safe to share. The compiler proves this structurally β€” no runtime check for β€œdid the programmer remember to lock”:

use std::sync::{Arc, Mutex};
use std::cell::Cell;

/// A sensor cache using Cell for interior mutability.
/// Cell<u32> is !Sync β€” can't be shared across threads directly.
struct SensorCache {
    last_reading: Cell<u32>,
    reading_count: Cell<u32>,
}

fn main() {
    // Mutex makes SensorCache safe to share β€” compiler proves it
    let cache = Arc::new(Mutex::new(SensorCache {
        last_reading: Cell::new(0),
        reading_count: Cell::new(0),
    }));

    let handles: Vec<_> = (0..4).map(|i| {
        let c = Arc::clone(&cache);
        std::thread::spawn(move || {
            let guard = c.lock().unwrap();  // Must lock before access
            guard.last_reading.set(i * 10);
            guard.reading_count.set(guard.reading_count.get() + 1);
        })
    }).collect();

    for h in handles { h.join().unwrap(); }

    let guard = cache.lock().unwrap();
    println!("Last reading: {}", guard.last_reading.get());
    println!("Total reads:  {}", guard.reading_count.get());
}

Compare to the C version: pthread_mutex_lock is a runtime call that the programmer can forget. Here, the type system makes it impossible to access SensorCache without going through the Mutex. The proof is structural β€” the only runtime cost is the lock itself.

Mutex doesn’t just synchronize β€” it proves synchronization. Mutex::lock() returns a MutexGuard that Derefs to &T. There is no way to obtain a reference to the inner data without going through the lock. The API makes β€œforgot to lock” structurally unrepresentable.

Function Bounds as Theorems

std::thread::spawn has this signature:

pub fn spawn<F, T>(f: F) -> JoinHandle<T>
where
    F: FnOnce() -> T + Send + 'static,
    T: Send + 'static,

The Send + 'static bound isn’t just an implementation detail β€” it’s a theorem:

β€œAny closure and return value passed to spawn is proven at compile time to be safe to run on another thread, with no dangling references.”

You can apply the same pattern to your own APIs:

use std::sync::mpsc;

/// Run a task on a background thread and return its result.
/// The bounds prove: the closure and its result are thread-safe.
fn run_on_background<F, T>(task: F) -> T
where
    F: FnOnce() -> T + Send + 'static,
    T: Send + 'static,
{
    let (tx, rx) = mpsc::channel();
    std::thread::spawn(move || {
        let _ = tx.send(task());
    });
    rx.recv().expect("background task panicked")
}

fn main() {
    // βœ… u32 is Send, closure captures nothing non-Send
    let result = run_on_background(|| 6 * 7);
    println!("Result: {result}");

    // βœ… String is Send
    let greeting = run_on_background(|| String::from("hello from background"));
    println!("{greeting}");

    // ❌ Would not compile: Rc is !Send
    // use std::rc::Rc;
    // let data = Rc::new(42);
    // run_on_background(move || *data);
}

Uncommenting the Rc example produces a precise diagnostic:

error[E0277]: `Rc<i32>` cannot be sent between threads safely
   --> src/main.rs
    |
    |     run_on_background(move || *data);
    |     ^^^^^^^^^^^^^^^^^^ `Rc<i32>` cannot be sent between threads safely
    |
note: required by a bound in `run_on_background`
    |
    |     F: FnOnce() -> T + Send + 'static,
    |                        ^^^^ required by this bound

The compiler traces the violation back to the exact bound β€” and tells the programmer why. Compare to C’s pthread_create:

int pthread_create(pthread_t *thread, const pthread_attr_t *attr,
                   void *(*start_routine)(void *), void *arg);

The void *arg accepts anything β€” thread-safe or not. The C compiler can’t distinguish a non-atomic refcount from a plain integer. Rust’s trait bounds make the distinction at the type level.

When to Use Send/Sync Proofs

ScenarioApproach
Peripheral handle wrapping a raw pointerAutomatic !Send + !Sync β€” nothing to do
Handle from C library (integer fd/handle)Add PhantomData<*const ()> for !Send + !Sync
Shared config behind a lockArc<Mutex<T>> β€” compiler proves access is safe
Cross-thread message passingmpsc::channel β€” Send bound enforced automatically
Task spawner or thread pool APIRequire F: Send + 'static in signature
Single-threaded resource (e.g., GPU context)PhantomData<*const ()> to prevent sharing
Type should be Send but contains a raw pointerunsafe impl Send with documented safety justification

Cost Summary

WhatRuntime cost
Send / Sync auto-derivationCompile time only β€” 0 bytes
PhantomData<*const ()> fieldZero-sized β€” optimized away
!Send / !Sync enforcementCompile time only β€” no runtime check
F: Send + 'static function boundsMonomorphized β€” static dispatch, no boxing
Mutex<T> lockRuntime lock (unavoidable for shared mutation)
Arc<T> reference countingAtomic increment/decrement (unavoidable for shared ownership)

The first four rows are zero-cost β€” they exist only in the type system and vanish after compilation. Mutex and Arc carry unavoidable runtime costs, but those costs are the minimum any correct concurrent program must pay β€” Rust just makes sure you pay them.

Exercise: DMA Transfer Guard

Design a DmaTransfer<T> that holds a buffer while a DMA transfer is in flight. Requirements:

  1. DmaTransfer must be !Send β€” the DMA controller uses physical addresses tied to this core’s memory bus
  2. DmaTransfer must be !Sync β€” concurrent reads while DMA is writing would see torn data
  3. Provide a wait() method that consumes the guard and returns the buffer β€” ownership proves the transfer is complete
  4. The buffer type T must implement a DmaSafe marker trait
Solution
use std::marker::PhantomData;

/// Marker trait for types that can be used as DMA buffers.
/// In real firmware: type must be repr(C) with no padding.
trait DmaSafe {}

impl DmaSafe for [u8; 64] {}
impl DmaSafe for [u8; 256] {}

/// A guard representing an in-flight DMA transfer.
/// !Send + !Sync: can't be sent to another thread or shared.
pub struct DmaTransfer<T: DmaSafe> {
    buffer: T,
    channel: u8,
    _no_send_sync: PhantomData<*const ()>,
}

impl<T: DmaSafe> DmaTransfer<T> {
    /// Start a DMA transfer. The buffer is consumed β€” no one else can touch it.
    pub fn start(buffer: T, channel: u8) -> Self {
        // In real firmware: configure DMA channel, set source/dest, start transfer
        println!("DMA channel {} started", channel);
        Self {
            buffer,
            channel,
            _no_send_sync: PhantomData,
        }
    }

    /// Wait for the transfer to complete and return the buffer.
    /// Consumes self β€” the guard no longer exists after this.
    pub fn wait(self) -> T {
        // In real firmware: poll DMA status register until complete
        println!("DMA channel {} complete", self.channel);
        self.buffer
    }
}

fn main() {
    let buf = [0u8; 64];

    // Start transfer β€” buf is moved into the guard
    let transfer = DmaTransfer::start(buf, 2);

    // ❌ buf is no longer accessible β€” ownership prevents use-during-DMA
    // println!("{:?}", buf);

    // ❌ Would not compile: DmaTransfer is !Send
    // std::thread::spawn(move || { transfer.wait(); });

    // βœ… Wait on the original thread, get the buffer back
    let buf = transfer.wait();
    println!("Buffer recovered: {} bytes", buf.len());
}
flowchart TB
    subgraph compiler["Compile Time β€” Auto-Derived Proofs"]
        direction TB
        SEND["Send<br/>βœ… safe to move across threads"]
        SYNC["Sync<br/>βœ… safe to share references"]
        NOTSEND["!Send<br/>❌ confined to one thread"]
        NOTSYNC["!Sync<br/>❌ no concurrent sharing"]
    end

    subgraph types["Type Taxonomy"]
        direction TB
        PLAIN["Primitives, String, Vec<br/>Send + Sync"]
        CELL["Cell, RefCell<br/>Send + !Sync"]
        RC["Rc, raw pointers<br/>!Send + !Sync"]
        MUTEX["Mutex&lt;T&gt;<br/>restores Sync"]
        ARC["Arc&lt;T&gt;<br/>shared ownership + Send"]
    end

    subgraph runtime["Runtime"]
        SAFE["Thread-safe access<br/>No data races<br/>No forgotten locks"]
    end

    SEND --> PLAIN
    NOTSYNC --> CELL
    NOTSEND --> RC
    CELL --> MUTEX --> SAFE
    RC --> ARC --> SAFE
    PLAIN --> SAFE

    style SEND fill:#c8e6c9,color:#000
    style SYNC fill:#c8e6c9,color:#000
    style NOTSEND fill:#ffcdd2,color:#000
    style NOTSYNC fill:#ffcdd2,color:#000
    style PLAIN fill:#c8e6c9,color:#000
    style CELL fill:#fff3e0,color:#000
    style RC fill:#ffcdd2,color:#000
    style MUTEX fill:#e1f5fe,color:#000
    style ARC fill:#e1f5fe,color:#000
    style SAFE fill:#c8e6c9,color:#000

Key Takeaways

  1. Send and Sync are compile-time proofs about concurrency safety β€” the compiler derives them structurally by inspecting every field. No annotation, no runtime cost, no opt-in needed.

  2. Raw pointers automatically opt out β€” any type containing *const T or *mut T becomes !Send + !Sync. This makes peripheral handles naturally thread-confined.

  3. PhantomData<*const ()> is the explicit opt-out β€” when a type has no raw pointer but should still be thread-confined (C library handles, GPU contexts), a phantom field does the job.

  4. Mutex<T> restores Sync with proof β€” the compiler structurally proves that all access goes through the lock. Unlike C’s pthread_mutex_t, you cannot forget to lock.

  5. Function bounds are theorems β€” F: Send + 'static in a spawner’s signature is a compile-time proof obligation: every call site must prove its closure is thread-safe. Compare to C’s void *arg which accepts anything.

  6. The pattern complements all other correctness techniques β€” typestate proves protocol sequencing, phantom types prove permissions, const fn proves value invariants, and Send/Sync prove concurrency safety. Together they cover the full correctness surface.

Putting It All Together β€” A Complete Diagnostic Platform 🟑

What you’ll learn: How all seven core patterns (ch02–ch09) compose into a single diagnostic workflow β€” authentication, sessions, typed commands, audit tokens, dimensional results, validated data, and phantom-typed registers β€” with zero total runtime overhead.

Cross-references: Every core pattern chapter (ch02–ch09), ch14 (testing these guarantees)

Goal

This chapter combines seven patterns from chapters 2–9 into a single, realistic diagnostic workflow. We’ll build a server health check that:

  1. Authenticates (capability token β€” ch04)
  2. Opens an IPMI session (type-state β€” ch05)
  3. Sends typed commands (typed commands β€” ch02)
  4. Uses single-use tokens for audit logging (single-use types β€” ch03)
  5. Returns dimensional results (dimensional analysis β€” ch06)
  6. Validates FRU data (validated boundaries β€” ch07)
  7. Reads typed registers (phantom types β€” ch09)
use std::marker::PhantomData;
use std::io;
// ──── Pattern 1: Dimensional Types (ch06) ────

#[derive(Debug, Clone, Copy, PartialEq, PartialOrd)]
pub struct Celsius(pub f64);

#[derive(Debug, Clone, Copy, PartialEq, PartialOrd)]
pub struct Rpm(pub f64);

#[derive(Debug, Clone, Copy, PartialEq, PartialOrd)]
pub struct Volts(pub f64);

// ──── Pattern 2: Typed Commands (ch02) ────

/// Same trait shape as ch02, using methods (not associated constants)
/// for consistency. Associated constants (`const NETFN: u8`) are an
/// equally valid alternative when the value is truly fixed per type.
pub trait IpmiCmd {
    type Response;
    fn net_fn(&self) -> u8;
    fn cmd_byte(&self) -> u8;
    fn payload(&self) -> Vec<u8>;
    fn parse_response(&self, raw: &[u8]) -> io::Result<Self::Response>;
}

pub struct ReadTemp { pub sensor_id: u8 }
impl IpmiCmd for ReadTemp {
    type Response = Celsius;   // ← dimensional type!
    fn net_fn(&self) -> u8 { 0x04 }
    fn cmd_byte(&self) -> u8 { 0x2D }
    fn payload(&self) -> Vec<u8> { vec![self.sensor_id] }
    fn parse_response(&self, raw: &[u8]) -> io::Result<Celsius> {
        if raw.is_empty() {
            return Err(io::Error::new(io::ErrorKind::InvalidData, "empty"));
        }
        Ok(Celsius(raw[0] as f64))
    }
}

pub struct ReadFanSpeed { pub fan_id: u8 }
impl IpmiCmd for ReadFanSpeed {
    type Response = Rpm;
    fn net_fn(&self) -> u8 { 0x04 }
    fn cmd_byte(&self) -> u8 { 0x2D }
    fn payload(&self) -> Vec<u8> { vec![self.fan_id] }
    fn parse_response(&self, raw: &[u8]) -> io::Result<Rpm> {
        if raw.len() < 2 {
            return Err(io::Error::new(io::ErrorKind::InvalidData, "need 2 bytes"));
        }
        Ok(Rpm(u16::from_le_bytes([raw[0], raw[1]]) as f64))
    }
}

// ──── Pattern 3: Capability Token (ch04) ────

pub struct AdminToken { _private: () }

pub fn authenticate(user: &str, pass: &str) -> Result<AdminToken, &'static str> {
    if user == "admin" && pass == "secret" {
        Ok(AdminToken { _private: () })
    } else {
        Err("authentication failed")
    }
}

// ──── Pattern 4: Type-State Session (ch05) ────

pub struct Idle;
pub struct Active;

pub struct Session<State> {
    host: String,
    _state: PhantomData<State>,
}

impl Session<Idle> {
    pub fn connect(host: &str) -> Self {
        Session { host: host.to_string(), _state: PhantomData }
    }

    pub fn activate(
        self,
        _admin: &AdminToken,  // ← requires capability token
    ) -> Result<Session<Active>, String> {
        println!("Session activated on {}", self.host);
        Ok(Session { host: self.host, _state: PhantomData })
    }
}

impl Session<Active> {
    /// Execute a typed command β€” only available on Active sessions.
    /// Returns io::Result to propagate transport errors (consistent with ch02).
    pub fn execute<C: IpmiCmd>(&mut self, cmd: &C) -> io::Result<C::Response> {
        let raw_response = self.raw_send(cmd.net_fn(), cmd.cmd_byte(), &cmd.payload())?;
        cmd.parse_response(&raw_response)
    }

    fn raw_send(&self, _nf: u8, _cmd: u8, _data: &[u8]) -> io::Result<Vec<u8>> {
        Ok(vec![42, 0x1E]) // stub: raw IPMI response
    }

    pub fn close(self) { println!("Session closed"); }
}

// ──── Pattern 5: Single-Use Audit Token (ch03) ────

/// Each diagnostic run gets a unique audit token.
/// Not Clone, not Copy β€” ensures each audit entry is unique.
pub struct AuditToken {
    run_id: u64,
}

impl AuditToken {
    pub fn issue(run_id: u64) -> Self {
        AuditToken { run_id }
    }

    /// Consume the token to write an audit log entry.
    pub fn log(self, message: &str) {
        println!("[AUDIT run_id={}] {}", self.run_id, message);
        // token is consumed β€” can't log the same run_id twice
    }
}

// ──── Pattern 6: Validated Boundary (ch07) ────
// Simplified from ch07's full ValidFru β€” only the fields needed for this
// composite example.  See ch07 for the complete TryFrom<RawFruData> version.

pub struct ValidFru {
    pub board_serial: String,
    pub product_name: String,
}

impl ValidFru {
    pub fn parse(raw: &[u8]) -> Result<Self, &'static str> {
        if raw.len() < 8 { return Err("FRU too short"); }
        if raw[0] != 0x01 { return Err("bad FRU version"); }
        Ok(ValidFru {
            board_serial: "SN12345".to_string(),  // stub
            product_name: "ServerX".to_string(),
        })
    }
}

// ──── Pattern 7: Phantom-Typed Registers (ch09) ────

pub struct Width16;
pub struct Reg<W> { offset: u16, _w: PhantomData<W> }

impl Reg<Width16> {
    pub fn read(&self) -> u16 { 0x8086 } // stub
}

pub struct PcieDev {
    pub vendor_id: Reg<Width16>,
    pub device_id: Reg<Width16>,
}

impl PcieDev {
    pub fn new() -> Self {
        PcieDev {
            vendor_id: Reg { offset: 0x00, _w: PhantomData },
            device_id: Reg { offset: 0x02, _w: PhantomData },
        }
    }
}

// ──── Composite Workflow ────

fn full_diagnostic() -> Result<(), String> {
    // 1. Authenticate β†’ get capability token
    let admin = authenticate("admin", "secret")
        .map_err(|e| e.to_string())?;

    // 2. Connect and activate session (type-state: Idle β†’ Active)
    let session = Session::connect("192.168.1.100");
    let mut session = session.activate(&admin)?;  // requires AdminToken

    // 3. Send typed commands (response type matches command)
    let temp: Celsius = session.execute(&ReadTemp { sensor_id: 0 })
        .map_err(|e| e.to_string())?;
    let fan: Rpm = session.execute(&ReadFanSpeed { fan_id: 1 })
        .map_err(|e| e.to_string())?;

    // Type mismatch would be caught:
    // let wrong: Volts = session.execute(&ReadTemp { sensor_id: 0 })?;
    //  ❌ ERROR: expected Celsius, found Volts

    // 4. Read phantom-typed PCIe registers
    let pcie = PcieDev::new();
    let vid: u16 = pcie.vendor_id.read();  // guaranteed u16

    // 5. Validate FRU data at the boundary
    let raw_fru = vec![0x01, 0x00, 0x00, 0x01, 0x01, 0x00, 0x00, 0xFD];
    let fru = ValidFru::parse(&raw_fru)
        .map_err(|e| e.to_string())?;

    // 6. Issue single-use audit token
    let audit = AuditToken::issue(1001);

    // 7. Generate report (all data is typed and validated)
    let report = format!(
        "Server: {} (SN: {}), VID: 0x{:04X}, CPU: {:?}, Fan: {:?}",
        fru.product_name, fru.board_serial, vid, temp, fan,
    );

    // 8. Consume audit token β€” can't log twice
    audit.log(&report);
    // audit.log("oops");  // ❌ use of moved value

    // 9. Close session (type-state: Active β†’ dropped)
    session.close();

    Ok(())
}

What the Compiler Proves

Bug classHow it’s preventedPattern
Unauthenticated accessactivate() requires &AdminTokenCapability token
Command in wrong session stateexecute() only exists on Session<Active>Type-state
Wrong response typeReadTemp::Response = Celsius, fixed by traitTyped commands
Unit confusion (Β°C vs RPM)Celsius β‰  Rpm β‰  VoltsDimensional types
Register width mismatchReg<Width16> returns u16Phantom types
Processing unvalidated dataMust call ValidFru::parse() firstValidated boundary
Duplicate audit entriesAuditToken is consumed on logSingle-use type
Out-of-order power sequencingEach step requires previous tokenCapability tokens (ch04)

Total runtime overhead of ALL these guarantees: zero.

Every check happens at compile time. The generated assembly is identical to hand-written C code with no checks at all β€” but where the C version can harbor any of these bugs, this code cannot.

Key Takeaways

  1. Seven patterns compose seamlessly β€” capability tokens, type-state, typed commands, single-use types, dimensional types, validated boundaries, and phantom types all work together.
  2. The compiler proves eight bug classes impossible β€” see the β€œWhat the Compiler Proves” table above.
  3. Zero total runtime overhead β€” the generated assembly is identical to unchecked C code.
  4. Each pattern is independently useful β€” you don’t need all seven; adopt them incrementally.
  5. The integration chapter is a design template β€” use it as a starting point for your own typed diagnostic workflows.
  6. From IPMI to Redfish at scale β€” ch17 and ch18 apply these same seven patterns (plus capability mixins from ch08) to a full Redfish client and server. The IPMI workflow here is the foundation; the Redfish walkthroughs show how the composition scales to production systems with multiple data sources and schema-version constraints.

Applied Walkthrough β€” Type-Safe Redfish Client 🟑

What you’ll learn: How to compose type-state sessions, capability tokens, phantom-typed resource navigation, dimensional analysis, validated boundaries, builder type-state, and single-use types into a complete, zero-overhead Redfish client β€” where every protocol violation is a compile error.

Cross-references: ch02 (typed commands), ch03 (single-use types), ch04 (capability tokens), ch05 (type-state), ch06 (dimensional types), ch07 (validated boundaries), ch09 (phantom types), ch10 (IPMI integration), ch11 (trick 4 β€” builder type-state)

Why Redfish Deserves Its Own Chapter

Chapter 10 composes the core patterns around IPMI β€” a byte-level protocol. But most BMC platforms now expose a Redfish REST API alongside (or instead of) IPMI, and Redfish introduces its own category of correctness hazards:

HazardExampleConsequence
Malformed URIGET /redfish/v1/Chassis/1/Processors (wrong parent)404 or wrong data silently returned
Action on wrong power stateReset(ForceOff) on an already-off systemBMC returns error, or worse, races with another operation
Missing privilegeOperator-level code calls Manager.ResetToDefaults403 in production, security audit finding
Incomplete PATCHOmit a required BIOS attribute from a PATCH bodySilent no-op or partial config corruption
Unverified firmware applySimpleUpdate invoked before image integrity checkBricked BMC
Schema version mismatchAccess LastResetTime on a v1.5 BMC (added in v1.13)null field β†’ runtime panic
Unit confusion in telemetryCompare inlet temperature (Β°C) to power draw (W)Nonsensical threshold decisions

In C, Python, or untyped Rust, every one of these is prevented by discipline and testing alone. This chapter makes them compile errors.

The Untyped Redfish Client

A typical Redfish client looks like this:

use std::collections::HashMap;

struct RedfishClient {
    base_url: String,
    token: Option<String>,
}

impl RedfishClient {
    fn get(&self, path: &str) -> Result<serde_json::Value, String> {
        // ... HTTP GET ...
        Ok(serde_json::json!({})) // stub
    }

    fn patch(&self, path: &str, body: &serde_json::Value) -> Result<(), String> {
        // ... HTTP PATCH ...
        Ok(()) // stub
    }

    fn post_action(&self, path: &str, body: &serde_json::Value) -> Result<(), String> {
        // ... HTTP POST ...
        Ok(()) // stub
    }
}

fn check_thermal(client: &RedfishClient) -> Result<(), String> {
    let resp = client.get("/redfish/v1/Chassis/1/Thermal")?;

    // πŸ› Is this field always present? What if the BMC returns null?
    let cpu_temp = resp["Temperatures"][0]["ReadingCelsius"]
        .as_f64().unwrap();

    let fan_rpm = resp["Fans"][0]["Reading"]
        .as_f64().unwrap();

    // πŸ› Comparing Β°C to RPM β€” both are f64
    if cpu_temp > fan_rpm {
        println!("thermal issue");
    }

    // πŸ› Is this the right path? No compile-time check.
    client.post_action(
        "/redfish/v1/Systems/1/Actions/ComputerSystem.Reset",
        &serde_json::json!({"ResetType": "ForceOff"})
    )?;

    Ok(())
}

This β€œworks” β€” until it doesn’t. Every unwrap() is a potential panic, every string path is an unchecked assumption, and unit confusion is invisible.


Section 1 β€” Session Lifecycle (Type-State, ch05)

A Redfish session has a strict lifecycle: connect β†’ authenticate β†’ use β†’ close. Encode each state as a distinct type.

stateDiagram-v2
    [*] --> Disconnected
    Disconnected --> Connected : connect(host)
    Connected --> Authenticated : login(user, pass)
    Authenticated --> Authenticated : get() / patch() / post_action()
    Authenticated --> Closed : logout()
    Closed --> [*]

    note right of Authenticated : API calls only exist here
    note right of Connected : get() β†’ compile error
use std::marker::PhantomData;

// ──── Session States ────

pub struct Disconnected;
pub struct Connected;
pub struct Authenticated;

pub struct RedfishSession<S> {
    base_url: String,
    auth_token: Option<String>,
    _state: PhantomData<S>,
}

impl RedfishSession<Disconnected> {
    pub fn new(host: &str) -> Self {
        RedfishSession {
            base_url: format!("https://{}", host),
            auth_token: None,
            _state: PhantomData,
        }
    }

    /// Transition: Disconnected β†’ Connected.
    /// Verifies the service root is reachable.
    pub fn connect(self) -> Result<RedfishSession<Connected>, RedfishError> {
        // GET /redfish/v1 β€” verify service root
        println!("Connecting to {}/redfish/v1", self.base_url);
        Ok(RedfishSession {
            base_url: self.base_url,
            auth_token: None,
            _state: PhantomData,
        })
    }
}

impl RedfishSession<Connected> {
    /// Transition: Connected β†’ Authenticated.
    /// Creates a session via POST /redfish/v1/SessionService/Sessions.
    pub fn login(
        self,
        user: &str,
        _pass: &str,
    ) -> Result<(RedfishSession<Authenticated>, LoginToken), RedfishError> {
        // POST /redfish/v1/SessionService/Sessions
        println!("Authenticated as {}", user);
        let token = "X-Auth-Token-abc123".to_string();
        Ok((
            RedfishSession {
                base_url: self.base_url,
                auth_token: Some(token),
                _state: PhantomData,
            },
            LoginToken { _private: () },
        ))
    }
}

impl RedfishSession<Authenticated> {
    /// Only available on Authenticated sessions.
    fn http_get(&self, path: &str) -> Result<serde_json::Value, RedfishError> {
        let _url = format!("{}{}", self.base_url, path);
        // ... HTTP GET with auth_token header ...
        Ok(serde_json::json!({})) // stub
    }

    fn http_patch(
        &self,
        path: &str,
        body: &serde_json::Value,
    ) -> Result<serde_json::Value, RedfishError> {
        let _url = format!("{}{}", self.base_url, path);
        let _ = body;
        Ok(serde_json::json!({})) // stub
    }

    fn http_post(
        &self,
        path: &str,
        body: &serde_json::Value,
    ) -> Result<serde_json::Value, RedfishError> {
        let _url = format!("{}{}", self.base_url, path);
        let _ = body;
        Ok(serde_json::json!({})) // stub
    }

    /// Transition: Authenticated β†’ Closed (session consumed).
    pub fn logout(self) {
        // DELETE /redfish/v1/SessionService/Sessions/{id}
        println!("Session closed");
        // self is consumed β€” can't use the session after logout
    }
}

// Attempting to call http_get on a non-Authenticated session:
//
//   let session = RedfishSession::new("bmc01").connect()?;
//   session.http_get("/redfish/v1/Systems");
//   ❌ ERROR: method `http_get` not found for `RedfishSession<Connected>`

#[derive(Debug)]
pub enum RedfishError {
    ConnectionFailed(String),
    AuthenticationFailed(String),
    HttpError { status: u16, message: String },
    ValidationError(String),
}

impl std::fmt::Display for RedfishError {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        match self {
            Self::ConnectionFailed(msg) => write!(f, "connection failed: {msg}"),
            Self::AuthenticationFailed(msg) => write!(f, "auth failed: {msg}"),
            Self::HttpError { status, message } =>
                write!(f, "HTTP {status}: {message}"),
            Self::ValidationError(msg) => write!(f, "validation: {msg}"),
        }
    }
}

Bug class eliminated: sending requests on a disconnected or unauthenticated session. The method simply doesn’t exist β€” no runtime check to forget.
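The state parameter is also free at runtime: `PhantomData` contributes nothing to the struct's layout, so every session state shares one representation and a transition is just a move. A minimal standalone sketch (the types are redefined locally so it compiles on its own):

```rust
use std::marker::PhantomData;

pub struct Disconnected;
pub struct Connected;
pub struct Authenticated;

#[allow(dead_code)]
pub struct RedfishSession<S> {
    base_url: String,
    auth_token: Option<String>,
    _state: PhantomData<S>,
}

fn main() {
    // PhantomData<S> is zero-sized, so all states share one layout:
    // the state exists only at compile time.
    assert_eq!(
        std::mem::size_of::<RedfishSession<Disconnected>>(),
        std::mem::size_of::<RedfishSession<Authenticated>>(),
    );
    assert_eq!(
        std::mem::size_of::<RedfishSession<Connected>>(),
        std::mem::size_of::<RedfishSession<Authenticated>>(),
    );
    println!("all session states share one layout");
}
```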


Section 2 β€” Privilege Tokens (Capability Tokens, ch04)

Redfish defines a set of standard privilege levels, including Login, ConfigureComponents, ConfigureManager, and ConfigureSelf. Rather than checking permissions at runtime, encode the ones a client uses as zero-sized proof tokens.

// ──── Privilege Tokens (zero-sized) ────

/// Proof the caller has Login privilege.
/// Returned by successful login β€” the only way to obtain one.
pub struct LoginToken { _private: () }

/// Proof the caller has ConfigureComponents privilege.
/// Only obtainable by admin-level authentication.
pub struct ConfigureComponentsToken { _private: () }

/// Proof the caller has ConfigureManager privilege (firmware updates, etc.).
pub struct ConfigureManagerToken { _private: () }

// Extend login to return privilege tokens based on role:

impl RedfishSession<Connected> {
    /// Admin login β€” returns all privilege tokens.
    pub fn login_admin(
        self,
        user: &str,
        pass: &str,
    ) -> Result<(
        RedfishSession<Authenticated>,
        LoginToken,
        ConfigureComponentsToken,
        ConfigureManagerToken,
    ), RedfishError> {
        let (session, login_tok) = self.login(user, pass)?;
        Ok((
            session,
            login_tok,
            ConfigureComponentsToken { _private: () },
            ConfigureManagerToken { _private: () },
        ))
    }

    /// Operator login β€” returns Login + ConfigureComponents only.
    pub fn login_operator(
        self,
        user: &str,
        pass: &str,
    ) -> Result<(
        RedfishSession<Authenticated>,
        LoginToken,
        ConfigureComponentsToken,
    ), RedfishError> {
        let (session, login_tok) = self.login(user, pass)?;
        Ok((
            session,
            login_tok,
            ConfigureComponentsToken { _private: () },
        ))
    }

    /// Read-only login β€” returns Login token only.
    pub fn login_readonly(
        self,
        user: &str,
        pass: &str,
    ) -> Result<(RedfishSession<Authenticated>, LoginToken), RedfishError> {
        self.login(user, pass)
    }
}

Now privilege requirements are part of the function signature:

use std::marker::PhantomData;
pub struct Authenticated;
pub struct RedfishSession<S> { base_url: String, auth_token: Option<String>, _state: PhantomData<S> }
pub struct LoginToken { _private: () }
pub struct ConfigureComponentsToken { _private: () }
pub struct ConfigureManagerToken { _private: () }
#[derive(Debug)] pub enum RedfishError { HttpError { status: u16, message: String } }

/// Anyone with Login can read thermal data.
fn get_thermal(
    session: &RedfishSession<Authenticated>,
    _proof: &LoginToken,
) -> Result<serde_json::Value, RedfishError> {
    let _ = session; // stub: session unused until HTTP is wired up
    // GET /redfish/v1/Chassis/1/Thermal
    Ok(serde_json::json!({})) // stub
}

/// Changing boot order requires ConfigureComponents.
fn set_boot_order(
    session: &RedfishSession<Authenticated>,
    _proof: &ConfigureComponentsToken,
    order: &[&str],
) -> Result<(), RedfishError> {
    let _ = (session, order);
    // PATCH /redfish/v1/Systems/1
    Ok(())
}

/// Factory reset requires ConfigureManager.
fn reset_to_defaults(
    session: &RedfishSession<Authenticated>,
    _proof: &ConfigureManagerToken,
) -> Result<(), RedfishError> {
    let _ = session;
    // POST .../Actions/Manager.ResetToDefaults
    Ok(())
}

// Operator code calling reset_to_defaults:
//
//   let (session, login, configure) = session.login_operator("op", "pass")?;
//   reset_to_defaults(&session, &???);
//   ❌ ERROR: no ConfigureManagerToken available β€” operator can't do this

Bug class eliminated: privilege escalation. An operator-level login physically cannot produce a ConfigureManagerToken β€” the compiler won’t let the code reference one. Zero runtime cost: for the compiled binary, these tokens don’t exist.
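The zero-cost claim is easy to verify. A standalone sketch (the token types are redefined locally so the snippet compiles on its own):

```rust
// Local redefinitions of the chapter's proof tokens, so this
// snippet is self-contained.
pub struct LoginToken { _private: () }
pub struct ConfigureComponentsToken { _private: () }
pub struct ConfigureManagerToken { _private: () }

fn main() {
    // Zero-sized: the tokens occupy no memory, and the optimizer
    // erases them from the compiled binary entirely.
    assert_eq!(std::mem::size_of::<LoginToken>(), 0);
    assert_eq!(std::mem::size_of::<ConfigureComponentsToken>(), 0);
    assert_eq!(std::mem::size_of::<ConfigureManagerToken>(), 0);
    println!("all proof tokens are zero-sized");
}
```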


Section 3 β€” Typed Resource Navigation (Phantom Types, ch09)

Redfish resources form a tree. Encoding the hierarchy as types prevents constructing illegal URIs:

graph TD
    SR[ServiceRoot] --> Systems
    SR --> Chassis
    SR --> Managers
    SR --> UpdateService
    Systems --> CS[ComputerSystem]
    CS --> Processors
    CS --> Memory
    CS --> Bios
    Chassis --> Ch1[Chassis Instance]
    Ch1 --> Thermal
    Ch1 --> Power
    Managers --> Mgr[Manager Instance]
use std::marker::PhantomData;

// ──── Resource Type Markers ────

pub struct ServiceRoot;
pub struct SystemsCollection;
pub struct ComputerSystem;
pub struct ChassisCollection;
pub struct ChassisInstance;
pub struct ThermalResource;
pub struct PowerResource;
pub struct BiosResource;
pub struct ManagersCollection;
pub struct ManagerInstance;
pub struct UpdateServiceResource;

// ──── Typed Resource Path ────

pub struct RedfishPath<R> {
    uri: String,
    _resource: PhantomData<R>,
}

impl RedfishPath<ServiceRoot> {
    pub fn root() -> Self {
        RedfishPath {
            uri: "/redfish/v1".to_string(),
            _resource: PhantomData,
        }
    }

    pub fn systems(&self) -> RedfishPath<SystemsCollection> {
        RedfishPath {
            uri: format!("{}/Systems", self.uri),
            _resource: PhantomData,
        }
    }

    pub fn chassis(&self) -> RedfishPath<ChassisCollection> {
        RedfishPath {
            uri: format!("{}/Chassis", self.uri),
            _resource: PhantomData,
        }
    }

    pub fn managers(&self) -> RedfishPath<ManagersCollection> {
        RedfishPath {
            uri: format!("{}/Managers", self.uri),
            _resource: PhantomData,
        }
    }

    pub fn update_service(&self) -> RedfishPath<UpdateServiceResource> {
        RedfishPath {
            uri: format!("{}/UpdateService", self.uri),
            _resource: PhantomData,
        }
    }
}

impl RedfishPath<SystemsCollection> {
    pub fn system(&self, id: &str) -> RedfishPath<ComputerSystem> {
        RedfishPath {
            uri: format!("{}/{}", self.uri, id),
            _resource: PhantomData,
        }
    }
}

impl RedfishPath<ComputerSystem> {
    pub fn bios(&self) -> RedfishPath<BiosResource> {
        RedfishPath {
            uri: format!("{}/Bios", self.uri),
            _resource: PhantomData,
        }
    }
}

impl RedfishPath<ChassisCollection> {
    pub fn instance(&self, id: &str) -> RedfishPath<ChassisInstance> {
        RedfishPath {
            uri: format!("{}/{}", self.uri, id),
            _resource: PhantomData,
        }
    }
}

impl RedfishPath<ChassisInstance> {
    pub fn thermal(&self) -> RedfishPath<ThermalResource> {
        RedfishPath {
            uri: format!("{}/Thermal", self.uri),
            _resource: PhantomData,
        }
    }

    pub fn power(&self) -> RedfishPath<PowerResource> {
        RedfishPath {
            uri: format!("{}/Power", self.uri),
            _resource: PhantomData,
        }
    }
}

impl RedfishPath<ManagersCollection> {
    pub fn manager(&self, id: &str) -> RedfishPath<ManagerInstance> {
        RedfishPath {
            uri: format!("{}/{}", self.uri, id),
            _resource: PhantomData,
        }
    }
}

impl<R> RedfishPath<R> {
    pub fn uri(&self) -> &str {
        &self.uri
    }
}

// ── Usage ──

fn build_paths() {
    let root = RedfishPath::root();

    // βœ… Valid navigation
    let thermal = root.chassis().instance("1").thermal();
    assert_eq!(thermal.uri(), "/redfish/v1/Chassis/1/Thermal");

    let bios = root.systems().system("1").bios();
    assert_eq!(bios.uri(), "/redfish/v1/Systems/1/Bios");

    // ❌ Compile error: ServiceRoot has no .thermal() method
    // root.thermal();

    // ❌ Compile error: SystemsCollection has no .bios() method
    // root.systems().bios();

    // ❌ Compile error: ChassisInstance has no .bios() method
    // root.chassis().instance("1").bios();
}

Bug class eliminated: malformed URIs, navigating to a child resource that doesn’t exist under the given parent. The hierarchy is enforced structurally β€” you can only reach Thermal through Chassis β†’ Instance β†’ Thermal.
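Extending the tree is mechanical: add a marker type and one navigation method on the parent. As a sketch, here is how a Processors collection under ComputerSystem would slot in (types redefined locally so the snippet stands alone):

```rust
use std::marker::PhantomData;

// Local redefinitions so the sketch compiles on its own.
pub struct ComputerSystem;
pub struct ProcessorsCollection;

pub struct RedfishPath<R> {
    uri: String,
    _resource: PhantomData<R>,
}

impl<R> RedfishPath<R> {
    pub fn uri(&self) -> &str { &self.uri }
}

impl RedfishPath<ComputerSystem> {
    /// New navigation edge: ComputerSystem -> Processors.
    pub fn processors(&self) -> RedfishPath<ProcessorsCollection> {
        RedfishPath {
            uri: format!("{}/Processors", self.uri),
            _resource: PhantomData,
        }
    }
}

fn main() {
    // Stand-in for root().systems().system("1") from the chapter.
    let system = RedfishPath::<ComputerSystem> {
        uri: "/redfish/v1/Systems/1".to_string(),
        _resource: PhantomData,
    };
    let procs = system.processors();
    assert_eq!(procs.uri(), "/redfish/v1/Systems/1/Processors");
    // A ChassisInstance path would not get .processors(): the edge
    // exists only under ComputerSystem, matching the schema.
}
```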


Section 4 β€” Typed Telemetry Reads (Typed Commands + Dimensional Analysis, ch02 + ch06)

Combine typed resource paths with dimensional return types so the compiler knows what unit every reading carries:

use std::marker::PhantomData;

// ──── Dimensional Types (ch06) ────

#[derive(Debug, Clone, Copy, PartialEq, PartialOrd)]
pub struct Celsius(pub f64);

#[derive(Debug, Clone, Copy, PartialEq, PartialOrd)]
pub struct Rpm(pub u32);

#[derive(Debug, Clone, Copy, PartialEq, PartialOrd)]
pub struct Watts(pub f64);

#[derive(Debug, Clone, Copy, PartialEq, PartialOrd)]
pub struct Volts(pub f64);

// ──── Typed Redfish GET (ch02 pattern applied to REST) ────

/// A Redfish resource type determines its parsed response.
pub trait RedfishResource {
    type Response;
    fn parse(json: &serde_json::Value) -> Result<Self::Response, RedfishError>;
}

// ──── Validated Thermal Response (ch07) ────

#[derive(Debug)]
pub struct ValidThermalResponse {
    pub temperatures: Vec<TemperatureReading>,
    pub fans: Vec<FanReading>,
}

#[derive(Debug)]
pub struct TemperatureReading {
    pub name: String,
    pub reading: Celsius,           // ← dimensional type, not f64
    pub upper_critical: Celsius,
    pub status: HealthStatus,
}

#[derive(Debug)]
pub struct FanReading {
    pub name: String,
    pub reading: Rpm,               // ← dimensional type, not u32
    pub status: HealthStatus,
}

#[derive(Debug, Clone, Copy, PartialEq)]
pub enum HealthStatus { Ok, Warning, Critical }

impl RedfishResource for ThermalResource {
    type Response = ValidThermalResponse;

    fn parse(json: &serde_json::Value) -> Result<ValidThermalResponse, RedfishError> {
        // Parse and validate in one pass β€” boundary validation (ch07)
        let temps = json["Temperatures"]
            .as_array()
            .ok_or_else(|| RedfishError::ValidationError(
                "missing Temperatures array".into(),
            ))?
            .iter()
            .map(|t| {
                Ok(TemperatureReading {
                    name: t["Name"]
                        .as_str()
                        .ok_or_else(|| RedfishError::ValidationError(
                            "missing Name".into(),
                        ))?
                        .to_string(),
                    reading: Celsius(
                        t["ReadingCelsius"]
                            .as_f64()
                            .ok_or_else(|| RedfishError::ValidationError(
                                "missing ReadingCelsius".into(),
                            ))?,
                    ),
                    upper_critical: Celsius(
                        t["UpperThresholdCritical"]
                            .as_f64()
                            .unwrap_or(105.0), // safe default for missing threshold
                    ),
                    status: parse_health(
                        t["Status"]["Health"]
                            .as_str()
                            .unwrap_or("OK"),
                    ),
                })
            })
            .collect::<Result<Vec<_>, _>>()?;

        let fans = json["Fans"]
            .as_array()
            .ok_or_else(|| RedfishError::ValidationError(
                "missing Fans array".into(),
            ))?
            .iter()
            .map(|f| {
                Ok(FanReading {
                    name: f["Name"]
                        .as_str()
                        .ok_or_else(|| RedfishError::ValidationError(
                            "missing Name".into(),
                        ))?
                        .to_string(),
                    reading: Rpm(
                        f["Reading"]
                            .as_u64()
                            .ok_or_else(|| RedfishError::ValidationError(
                                "missing Reading".into(),
                            ))? as u32,
                    ),
                    status: parse_health(
                        f["Status"]["Health"]
                            .as_str()
                            .unwrap_or("OK"),
                    ),
                })
            })
            .collect::<Result<Vec<_>, _>>()?;

        Ok(ValidThermalResponse { temperatures: temps, fans })
    }
}

fn parse_health(s: &str) -> HealthStatus {
    match s {
        "OK" => HealthStatus::Ok,
        "Warning" => HealthStatus::Warning,
        _ => HealthStatus::Critical,
    }
}

// ──── Typed GET on Authenticated Session ────

impl RedfishSession<Authenticated> {
    pub fn get_resource<R: RedfishResource>(
        &self,
        path: &RedfishPath<R>,
    ) -> Result<R::Response, RedfishError> {
        let json = self.http_get(path.uri())?;
        R::parse(&json)
    }
}

// ── Usage ──

fn read_thermal(
    session: &RedfishSession<Authenticated>,
    _proof: &LoginToken,
) -> Result<(), RedfishError> {
    let path = RedfishPath::root().chassis().instance("1").thermal();

    // Response type is inferred: ValidThermalResponse
    let thermal = session.get_resource(&path)?;

    for t in &thermal.temperatures {
        // t.reading is Celsius β€” can only compare with Celsius
        if t.reading > t.upper_critical {
            println!("CRITICAL: {} at {:?}", t.name, t.reading);
        }

        // ❌ Compile error: cannot compare Celsius with Rpm
        // if t.reading > thermal.fans[0].reading { }

        // ❌ Compile error: cannot compare Celsius with Watts
        // if t.reading > Watts(350.0) { }
    }

    Ok(())
}

Bug classes eliminated:

  • Unit confusion: Celsius β‰  Rpm β‰  Watts β€” the compiler rejects comparisons.
  • Missing field panics: parse() validates at the boundary; ValidThermalResponse guarantees all fields are present.
  • Wrong response type: get_resource(&thermal_path) returns ValidThermalResponse, not raw JSON. The resource type determines the response type at compile time.

Section 5 β€” PATCH with Builder Type-State (ch11, Trick 4)

Redfish PATCH payloads must contain specific fields. A builder that gates .apply() on required fields being set prevents incomplete or empty patches:

use std::marker::PhantomData;

// ──── Type-level booleans for required fields ────

pub struct FieldUnset;
pub struct FieldSet;

// ──── BIOS Settings PATCH Builder ────

pub struct BiosPatchBuilder<BootOrder, TpmState> {
    boot_order: Option<Vec<String>>,
    tpm_enabled: Option<bool>,
    _markers: PhantomData<(BootOrder, TpmState)>,
}

impl BiosPatchBuilder<FieldUnset, FieldUnset> {
    pub fn new() -> Self {
        BiosPatchBuilder {
            boot_order: None,
            tpm_enabled: None,
            _markers: PhantomData,
        }
    }
}

impl<T> BiosPatchBuilder<FieldUnset, T> {
    /// Set boot order β€” transitions the BootOrder marker to FieldSet.
    pub fn boot_order(self, order: Vec<String>) -> BiosPatchBuilder<FieldSet, T> {
        BiosPatchBuilder {
            boot_order: Some(order),
            tpm_enabled: self.tpm_enabled,
            _markers: PhantomData,
        }
    }
}

impl<B> BiosPatchBuilder<B, FieldUnset> {
    /// Set TPM state β€” transitions the TpmState marker to FieldSet.
    pub fn tpm_enabled(self, enabled: bool) -> BiosPatchBuilder<B, FieldSet> {
        BiosPatchBuilder {
            boot_order: self.boot_order,
            tpm_enabled: Some(enabled),
            _markers: PhantomData,
        }
    }
}

impl BiosPatchBuilder<FieldSet, FieldSet> {
    /// .apply() only exists when ALL required fields are set.
    pub fn apply(
        self,
        session: &RedfishSession<Authenticated>,
        _proof: &ConfigureComponentsToken,
        system: &RedfishPath<ComputerSystem>,
    ) -> Result<(), RedfishError> {
        let body = serde_json::json!({
            "Boot": {
                "BootOrder": self.boot_order.unwrap(),
            },
            "Oem": {
                "TpmEnabled": self.tpm_enabled.unwrap(),
            }
        });
        session.http_patch(
            &format!("{}/Bios/Settings", system.uri()),
            &body,
        )?;
        Ok(())
    }
}

// ── Usage ──

fn configure_bios(
    session: &RedfishSession<Authenticated>,
    configure: &ConfigureComponentsToken,
) -> Result<(), RedfishError> {
    let system = RedfishPath::root().systems().system("1");

    // βœ… Both required fields set β€” .apply() is available
    BiosPatchBuilder::new()
        .boot_order(vec!["Pxe".into(), "Hdd".into()])
        .tpm_enabled(true)
        .apply(session, configure, &system)?;

    // ❌ Compile error: .apply() not found on BiosPatchBuilder<FieldSet, FieldUnset>
    // BiosPatchBuilder::new()
    //     .boot_order(vec!["Pxe".into()])
    //     .apply(session, configure, &system)?;

    // ❌ Compile error: .apply() not found on BiosPatchBuilder<FieldUnset, FieldUnset>
    // BiosPatchBuilder::new()
    //     .apply(session, configure, &system)?;

    Ok(())
}

Bug classes eliminated:

  • Empty PATCH: Can’t call .apply() without setting every required field.
  • Missing privilege: .apply() requires &ConfigureComponentsToken.
  • Wrong resource: Takes a &RedfishPath<ComputerSystem>, not a raw string.

Section 6 β€” Firmware Update Lifecycle (Single-Use + Type-State, ch03 + ch05)

The Redfish UpdateService has a strict sequence: push image β†’ verify β†’ apply β†’ reboot. Each phase must happen exactly once, in order.

stateDiagram-v2
    [*] --> Idle
    Idle --> Uploading : push_image()
    Uploading --> Uploaded : upload completes
    Uploaded --> Verified : verify() βœ“
    Uploaded --> Failed : verify() βœ—
    Verified --> Applying : apply() β€” consumes Verified
    Applying --> NeedsReboot : apply completes
    NeedsReboot --> [*] : reboot()
    Failed --> [*]

    note right of Verified : apply() consumes this state β€”
    note right of Verified : can't apply twice
use std::marker::PhantomData;

// ──── Firmware Update States ────

pub struct FwIdle;
pub struct FwUploaded;
pub struct FwVerified;
pub struct FwApplying;
pub struct FwNeedsReboot;

pub struct FirmwareUpdate<S> {
    task_uri: String,
    image_hash: String,
    _phase: PhantomData<S>,
}

impl FirmwareUpdate<FwIdle> {
    pub fn push_image(
        session: &RedfishSession<Authenticated>,
        _proof: &ConfigureManagerToken,
        image: &[u8],
    ) -> Result<FirmwareUpdate<FwUploaded>, RedfishError> {
        // POST /redfish/v1/UpdateService/Actions/UpdateService.SimpleUpdate
        // or multipart push to /redfish/v1/UpdateService/upload
        let _ = session; // stub: real code would POST via the session
        println!("Image uploaded ({} bytes)", image.len());
        Ok(FirmwareUpdate {
            task_uri: "/redfish/v1/TaskService/Tasks/1".to_string(),
            image_hash: "sha256:abc123".to_string(),
            _phase: PhantomData,
        })
    }
}

impl FirmwareUpdate<FwUploaded> {
    /// Verify image integrity. Returns FwVerified on success.
    pub fn verify(self) -> Result<FirmwareUpdate<FwVerified>, RedfishError> {
        // Poll task until verification complete
        println!("Image verified: {}", self.image_hash);
        Ok(FirmwareUpdate {
            task_uri: self.task_uri,
            image_hash: self.image_hash,
            _phase: PhantomData,
        })
    }
}

impl FirmwareUpdate<FwVerified> {
    /// Apply the update. Consumes self β€” can't apply twice.
    /// This is the single-use pattern from ch03.
    pub fn apply(self) -> Result<FirmwareUpdate<FwNeedsReboot>, RedfishError> {
        // PATCH /redfish/v1/UpdateService β€” set ApplyTime
        println!("Firmware applied from {}", self.task_uri);
        // self is moved β€” calling apply() again is a compile error
        Ok(FirmwareUpdate {
            task_uri: self.task_uri,
            image_hash: self.image_hash,
            _phase: PhantomData,
        })
    }
}

impl FirmwareUpdate<FwNeedsReboot> {
    /// Reboot to activate the new firmware.
    pub fn reboot(
        self,
        session: &RedfishSession<Authenticated>,
        _proof: &ConfigureManagerToken,
    ) -> Result<(), RedfishError> {
        // POST .../Actions/Manager.Reset {"ResetType": "GracefulRestart"}
        let _ = session;
        println!("BMC rebooting to activate firmware");
        Ok(())
    }
}

// ── Usage ──

fn update_bmc_firmware(
    session: &RedfishSession<Authenticated>,
    manager_proof: &ConfigureManagerToken,
    image: &[u8],
) -> Result<(), RedfishError> {
    // Each step returns the next state β€” the old state is consumed
    let uploaded = FirmwareUpdate::push_image(session, manager_proof, image)?;
    let verified = uploaded.verify()?;
    let needs_reboot = verified.apply()?;
    needs_reboot.reboot(session, manager_proof)?;

    // ❌ Compile error: use of moved value `verified`
    // verified.apply()?;

    // ❌ Compile error: FirmwareUpdate<FwUploaded> has no .apply() method
    // uploaded.apply()?;      // must verify first!

    // ❌ Compile error: push_image requires &ConfigureManagerToken
    // FirmwareUpdate::push_image(session, &login_token, image)?;

    Ok(())
}

Bug classes eliminated:

  • Applying unverified firmware: .apply() only exists on FwVerified.
  • Double apply: apply() consumes self β€” moved value can’t be reused.
  • Skipping reboot: FwNeedsReboot is a distinct type; you can’t accidentally continue normal operations while firmware is staged.
  • Unauthorized update: push_image() requires &ConfigureManagerToken.

Section 7 β€” Putting It All Together

Here’s the full diagnostic workflow composing all six sections:

fn full_redfish_diagnostic() -> Result<(), RedfishError> {
    // ── 1. Session lifecycle (Section 1) ──
    let session = RedfishSession::new("bmc01.lab.local");
    let session = session.connect()?;

    // ── 2. Privilege tokens (Section 2) ──
    // Admin login β€” receives all capability tokens
    let (session, _login, configure, manager) =
        session.login_admin("admin", "p@ssw0rd")?;

    // ── 3. Typed navigation (Section 3) ──
    let thermal_path = RedfishPath::root()
        .chassis()
        .instance("1")
        .thermal();

    // ── 4. Typed telemetry read (Section 4) ──
    let thermal: ValidThermalResponse = session.get_resource(&thermal_path)?;

    for t in &thermal.temperatures {
        // Celsius can only compare with Celsius β€” dimensional safety
        if t.reading > t.upper_critical {
            println!("πŸ”₯ {} is critical: {:?}", t.name, t.reading);
        }
    }

    for f in &thermal.fans {
        if f.reading < Rpm(1000) {
            println!("⚠ {} below threshold: {:?}", f.name, f.reading);
        }
    }

    // ── 5. Type-safe PATCH (Section 5) ──
    let system_path = RedfishPath::root().systems().system("1");

    BiosPatchBuilder::new()
        .boot_order(vec!["Pxe".into(), "Hdd".into()])
        .tpm_enabled(true)
        .apply(&session, &configure, &system_path)?;

    // ── 6. Firmware update lifecycle (Section 6) ──
    let firmware_image = include_bytes!("bmc_firmware.bin");
    let uploaded = FirmwareUpdate::push_image(&session, &manager, firmware_image)?;
    let verified = uploaded.verify()?;
    let needs_reboot = verified.apply()?;

    // ── 7. Clean shutdown ──
    needs_reboot.reboot(&session, &manager)?;
    session.logout();

    Ok(())
}

What the Compiler Proves

| # | Bug class | How it's prevented | Pattern (Section) |
|---|---|---|---|
| 1 | Request on unauthenticated session | `http_get()` only exists on `Session<Authenticated>` | Type-state (§1) |
| 2 | Privilege escalation | `ConfigureManagerToken` not returned by operator login | Capability tokens (§2) |
| 3 | Malformed Redfish URI | Navigation methods enforce parent→child hierarchy | Phantom types (§3) |
| 4 | Unit confusion (°C vs RPM vs W) | `Celsius`, `Rpm`, `Watts` are distinct types | Dimensional analysis (§4) |
| 5 | Missing JSON field → panic | `ValidThermalResponse` validates at parse boundary | Validated boundaries (§4) |
| 6 | Wrong response type | `RedfishResource::Response` is fixed per resource | Typed commands (§4) |
| 7 | Incomplete PATCH payload | `.apply()` only exists when all fields are `FieldSet` | Builder type-state (§5) |
| 8 | Missing privilege for PATCH | `.apply()` requires `&ConfigureComponentsToken` | Capability tokens (§5) |
| 9 | Applying unverified firmware | `.apply()` only exists on `FwVerified` | Type-state (§6) |
| 10 | Double firmware apply | `apply()` consumes `self` β€” value is moved | Single-use types (Β§6) |
| 11 | Firmware update without authority | `push_image()` requires `&ConfigureManagerToken` | Capability tokens (Β§6) |
| 12 | Use-after-logout | `logout()` consumes the session | Ownership (Β§1) |

Total runtime overhead of ALL twelve guarantees: zero.

The generated binary makes the same HTTP calls as the untyped version β€” but the untyped version can have 12 classes of bugs. This version can’t.


Comparison: IPMI Integration (ch10) vs. Redfish Integration

| Dimension | ch10 (IPMI) | This chapter (Redfish) |
|---|---|---|
| Transport | Raw bytes over KCS/LAN | JSON over HTTPS |
| Navigation | Flat command codes (NetFn/Cmd) | Hierarchical URI tree |
| Response binding | `IpmiCmd::Response` | `RedfishResource::Response` |
| Privilege model | Single `AdminToken` | Role-based multi-token |
| Payload construction | Byte arrays | Builder type-state for JSON |
| Update lifecycle | Not covered | Full type-state chain |
| Patterns exercised | 7 | 8 (adds builder type-state) |

The two chapters are complementary: ch10 shows the patterns work at the byte level, this chapter shows they work identically at the REST/JSON level. The type system doesn’t care about the transport β€” it proves correctness either way.

Key Takeaways

  1. Eight patterns compose into one Redfish client β€” session type-state, capability tokens, phantom-typed URIs, typed commands, dimensional analysis, validated boundaries, builder type-state, and single-use firmware apply.
  2. Twelve bug classes become compile errors β€” see the table above.
  3. Zero runtime overhead β€” every proof token, phantom type, and type-state marker compiles away. The binary is identical to hand-rolled untyped code.
  4. REST APIs benefit as much as byte protocols β€” the patterns from ch02–ch09 apply equally to JSON-over-HTTPS (Redfish) and bytes-over-KCS (IPMI).
  5. Privilege enforcement is structural, not procedural β€” the function signature declares what’s required; the compiler enforces it.
  6. This is a design template β€” adapt the resource type markers, capability tokens, and builder for your specific Redfish schema and organizational role hierarchy.

Applied Walkthrough β€” Type-Safe Redfish Server 🟑

What you’ll learn: How to compose response builder type-state, source-availability tokens, dimensional serialization, health rollup, schema versioning, and typed action dispatch into a Redfish server that cannot produce a schema-non-compliant response β€” the mirror of the client walkthrough in ch17.

Cross-references: ch02 (typed commands β€” inverted for action dispatch), ch04 (capability tokens β€” source availability), ch06 (dimensional types β€” serialization side), ch07 (validated boundaries β€” inverted: β€œconstruct, don’t serialize”), ch09 (phantom types β€” schema versioning), ch11 (trick 3 β€” #[non_exhaustive], trick 4 β€” builder type-state), ch17 (client counterpart)

The Mirror Problem

Chapter 17 asks: β€œHow do I consume Redfish correctly?” This chapter asks the mirror question: β€œHow do I produce Redfish correctly?”

On the client side, the danger is trusting bad data. On the server side, the danger is emitting bad data β€” and every client in the fleet trusts what you send.

A single GET /redfish/v1/Systems/1 response must fuse data from many sources:

flowchart LR
    subgraph Sources
        SMBIOS["SMBIOS\nType 1, Type 17"]
        SDR["IPMI Sensors\n(SDR + readings)"]
        SEL["IPMI SEL\n(critical events)"]
        PCIe["PCIe Config\nSpace"]
        FW["Firmware\nVersion Table"]
        PWR["Power State\nRegister"]
    end

    subgraph Server["Redfish Server"]
        Handler["GET handler"]
        Builder["ComputerSystem\nBuilder"]
    end

    SMBIOS -->|"Name, UUID, Serial"| Handler
    SDR -->|"Temperatures, Fans"| Handler
    SEL -->|"Health escalation"| Handler
    PCIe -->|"Device links"| Handler
    FW -->|"BIOS version"| Handler
    PWR -->|"PowerState"| Handler
    Handler --> Builder
    Builder -->|".build()"| JSON["Schema-compliant\nJSON response"]

    style JSON fill:#c8e6c9,color:#000
    style Builder fill:#e1f5fe,color:#000

In C, this is a 500-line handler that calls into six subsystems, manually builds a JSON tree with json_object_set_new(), and hopes every required field was populated. Forget one? The response violates the Redfish schema. Get the unit wrong? Every client sees corrupted telemetry.

// C β€” the assembly problem
json_t *get_computer_system(const char *id) {
    json_t *obj = json_object();
    json_object_set_new(obj, "@odata.type",
        json_string("#ComputerSystem.v1_13_0.ComputerSystem"));

    // πŸ› Forgot to set "Name" β€” schema requires it
    // πŸ› Forgot to set "UUID" β€” schema requires it

    smbios_type1_t *t1 = smbios_get_type1();
    if (t1) {
        json_object_set_new(obj, "Manufacturer",
            json_string(t1->manufacturer));
    }

    json_object_set_new(obj, "PowerState",
        json_string(get_power_state()));  // at least this one is always available

    // πŸ› Reading is in raw ADC counts, not Celsius β€” no type to catch it
    double cpu_temp = read_sensor(SENSOR_CPU_TEMP);
    // This number ends up in a Thermal response somewhere else...
    // but nothing ties it to "Celsius" at the type level

    // πŸ› Health is manually computed β€” forgot to include PSU status
    json_object_set_new(obj, "Status",
        build_status("Enabled", "OK")); // should be "Critical" β€” PSU is failing

    return obj; // missing 2 required fields, wrong health, raw units
}

Four bugs in one handler. On the client side, each bug affects one client. On the server side, each bug affects every client that queries this BMC.


Section 1 β€” Response Builder Type-State: β€œConstruct, Don’t Serialize” (ch07 Inverted)

Chapter 7 teaches β€œparse, don’t validate” β€” validate inbound data once, carry the proof in a type. The server-side mirror is β€œconstruct, don’t serialize” β€” build the outbound response through a builder that gates .build() on all required fields being present.

use std::marker::PhantomData;

// ──── Type-level field tracking ────

pub struct HasField;
pub struct MissingField;

// ──── Response Builder ────

/// Builder for a ComputerSystem Redfish resource.
/// Type parameters track which REQUIRED fields have been supplied.
/// Optional fields don't need type-level tracking.
pub struct ComputerSystemBuilder<Name, Uuid, PowerState, Status> {
    // Required fields β€” tracked at the type level
    name: Option<String>,
    uuid: Option<String>,
    power_state: Option<PowerStateValue>,
    status: Option<ResourceStatus>,
    // Optional fields β€” not tracked (always settable)
    manufacturer: Option<String>,
    model: Option<String>,
    serial_number: Option<String>,
    bios_version: Option<String>,
    processor_summary: Option<ProcessorSummary>,
    memory_summary: Option<MemorySummary>,
    _markers: PhantomData<(Name, Uuid, PowerState, Status)>,
}

#[derive(Debug, Clone, serde::Serialize)]
pub enum PowerStateValue { On, Off, PoweringOn, PoweringOff }

#[derive(Debug, Clone, serde::Serialize)]
pub struct ResourceStatus {
    #[serde(rename = "State")]
    pub state: StatusState,
    #[serde(rename = "Health")]
    pub health: HealthValue,
    #[serde(rename = "HealthRollup", skip_serializing_if = "Option::is_none")]
    pub health_rollup: Option<HealthValue>,
}

#[derive(Debug, Clone, Copy, serde::Serialize)]
pub enum StatusState { Enabled, Disabled, Absent, StandbyOffline, Starting }

#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord, serde::Serialize)]
pub enum HealthValue { OK, Warning, Critical }

#[derive(Debug, Clone, serde::Serialize)]
pub struct ProcessorSummary {
    #[serde(rename = "Count")]
    pub count: u32,
    #[serde(rename = "Status")]
    pub status: ResourceStatus,
}

#[derive(Debug, Clone, serde::Serialize)]
pub struct MemorySummary {
    #[serde(rename = "TotalSystemMemoryGiB")]
    pub total_gib: f64,
    #[serde(rename = "Status")]
    pub status: ResourceStatus,
}

// ──── Constructor: all fields start MissingField ────

impl ComputerSystemBuilder<MissingField, MissingField, MissingField, MissingField> {
    pub fn new() -> Self {
        ComputerSystemBuilder {
            name: None, uuid: None, power_state: None, status: None,
            manufacturer: None, model: None, serial_number: None,
            bios_version: None, processor_summary: None, memory_summary: None,
            _markers: PhantomData,
        }
    }
}

// ──── Required field setters β€” each transitions one type parameter ────

impl<U, P, S> ComputerSystemBuilder<MissingField, U, P, S> {
    pub fn name(self, name: String) -> ComputerSystemBuilder<HasField, U, P, S> {
        ComputerSystemBuilder {
            name: Some(name), uuid: self.uuid,
            power_state: self.power_state, status: self.status,
            manufacturer: self.manufacturer, model: self.model,
            serial_number: self.serial_number, bios_version: self.bios_version,
            processor_summary: self.processor_summary,
            memory_summary: self.memory_summary, _markers: PhantomData,
        }
    }
}

impl<N, P, S> ComputerSystemBuilder<N, MissingField, P, S> {
    pub fn uuid(self, uuid: String) -> ComputerSystemBuilder<N, HasField, P, S> {
        ComputerSystemBuilder {
            name: self.name, uuid: Some(uuid),
            power_state: self.power_state, status: self.status,
            manufacturer: self.manufacturer, model: self.model,
            serial_number: self.serial_number, bios_version: self.bios_version,
            processor_summary: self.processor_summary,
            memory_summary: self.memory_summary, _markers: PhantomData,
        }
    }
}

impl<N, U, S> ComputerSystemBuilder<N, U, MissingField, S> {
    pub fn power_state(self, ps: PowerStateValue)
        -> ComputerSystemBuilder<N, U, HasField, S>
    {
        ComputerSystemBuilder {
            name: self.name, uuid: self.uuid,
            power_state: Some(ps), status: self.status,
            manufacturer: self.manufacturer, model: self.model,
            serial_number: self.serial_number, bios_version: self.bios_version,
            processor_summary: self.processor_summary,
            memory_summary: self.memory_summary, _markers: PhantomData,
        }
    }
}

impl<N, U, P> ComputerSystemBuilder<N, U, P, MissingField> {
    pub fn status(self, status: ResourceStatus)
        -> ComputerSystemBuilder<N, U, P, HasField>
    {
        ComputerSystemBuilder {
            name: self.name, uuid: self.uuid,
            power_state: self.power_state, status: Some(status),
            manufacturer: self.manufacturer, model: self.model,
            serial_number: self.serial_number, bios_version: self.bios_version,
            processor_summary: self.processor_summary,
            memory_summary: self.memory_summary, _markers: PhantomData,
        }
    }
}

// ──── Optional field setters β€” available in any state ────

impl<N, U, P, S> ComputerSystemBuilder<N, U, P, S> {
    pub fn manufacturer(mut self, m: String) -> Self {
        self.manufacturer = Some(m); self
    }
    pub fn model(mut self, m: String) -> Self {
        self.model = Some(m); self
    }
    pub fn serial_number(mut self, s: String) -> Self {
        self.serial_number = Some(s); self
    }
    pub fn bios_version(mut self, v: String) -> Self {
        self.bios_version = Some(v); self
    }
    pub fn processor_summary(mut self, ps: ProcessorSummary) -> Self {
        self.processor_summary = Some(ps); self
    }
    pub fn memory_summary(mut self, ms: MemorySummary) -> Self {
        self.memory_summary = Some(ms); self
    }
}

// ──── .build() ONLY exists when all required fields are HasField ────

impl ComputerSystemBuilder<HasField, HasField, HasField, HasField> {
    pub fn build(self, id: &str) -> serde_json::Value {
        let mut obj = serde_json::json!({
            "@odata.id": format!("/redfish/v1/Systems/{id}"),
            "@odata.type": "#ComputerSystem.v1_13_0.ComputerSystem",
            "Id": id,
            // Type-state guarantees these are Some β€” .unwrap() is safe here.
            // In production, prefer .expect("guaranteed by type state").
            "Name": self.name.unwrap(),
            "UUID": self.uuid.unwrap(),
            "PowerState": self.power_state.unwrap(),
            "Status": self.status.unwrap(),
        });

        // Optional fields β€” included only if present
        if let Some(m) = self.manufacturer {
            obj["Manufacturer"] = serde_json::json!(m);
        }
        if let Some(m) = self.model {
            obj["Model"] = serde_json::json!(m);
        }
        if let Some(s) = self.serial_number {
            obj["SerialNumber"] = serde_json::json!(s);
        }
        if let Some(v) = self.bios_version {
            obj["BiosVersion"] = serde_json::json!(v);
        }
        // NOTE: .unwrap() on to_value() is used for brevity.
        // Production code should propagate serialization errors with `?`.
        if let Some(ps) = self.processor_summary {
            obj["ProcessorSummary"] = serde_json::to_value(ps).unwrap();
        }
        if let Some(ms) = self.memory_summary {
            obj["MemorySummary"] = serde_json::to_value(ms).unwrap();
        }

        obj
    }
}

//
// ── The Compiler Enforces Completeness ──
//
// βœ… All required fields set β€” .build() is available:
// ComputerSystemBuilder::new()
//     .name("PowerEdge R750".into())
//     .uuid("4c4c4544-...".into())
//     .power_state(PowerStateValue::On)
//     .status(ResourceStatus { ... })
//     .manufacturer("Dell".into())        // optional β€” fine to include
//     .build("1")
//
// ❌ Missing "Name" β€” compile error:
// ComputerSystemBuilder::new()
//     .uuid("4c4c4544-...".into())
//     .power_state(PowerStateValue::On)
//     .status(ResourceStatus { ... })
//     .build("1")
//   ERROR: method `build` not found for
//   `ComputerSystemBuilder<MissingField, HasField, HasField, HasField>`

Bug class eliminated: schema-non-compliant responses. The handler physically cannot serialize a ComputerSystem without supplying every required field. The compiler error message even tells you which field is missing β€” it’s right there in the type parameter: MissingField in the Name position.


Section 2 β€” Source-Availability Tokens (Capability Tokens, ch04 β€” New Twist)

In ch04 and ch17, capability tokens prove authorization β€” β€œthe caller is allowed to do this.” On the server side, the same pattern proves availability β€” β€œthis data source was successfully initialized.”

Each subsystem the BMC queries can fail independently. SMBIOS tables might be corrupt. The sensor subsystem might still be initializing. PCIe bus scan might have timed out. Encode each as a proof token:

/// Proof that SMBIOS tables were successfully parsed.
/// Only produced by the SMBIOS init function.
pub struct SmbiosReady {
    _private: (),
}

/// Proof that IPMI sensor subsystem is responsive.
pub struct SensorsReady {
    _private: (),
}

/// Proof that PCIe bus scan completed.
pub struct PcieReady {
    _private: (),
}

/// Proof that the SEL was successfully read.
pub struct SelReady {
    _private: (),
}

// ──── Data source initialization ────

pub struct SmbiosTables {
    pub product_name: String,
    pub manufacturer: String,
    pub serial_number: String,
    pub uuid: String,
}

pub struct SensorCache {
    pub cpu_temp: Celsius,
    pub inlet_temp: Celsius,
    pub fan_readings: Vec<(String, Rpm)>,
    pub psu_power: Vec<(String, Watts)>,
}

/// Rich SEL summary β€” per-subsystem health derived from typed events.
/// Built by the consumer pipeline in ch07's SEL section.
/// Replaces the lossy `has_critical_events: bool` with typed granularity.
pub struct TypedSelSummary {
    pub total_entries: u32,
    pub processor_health: HealthValue,
    pub memory_health: HealthValue,
    pub power_health: HealthValue,
    pub thermal_health: HealthValue,
    pub fan_health: HealthValue,
    pub storage_health: HealthValue,
    pub security_health: HealthValue,
}

pub fn init_smbios() -> Option<(SmbiosReady, SmbiosTables)> {
    // Read SMBIOS entry point, parse tables...
    // Returns None if tables are absent or corrupt
    Some((
        SmbiosReady { _private: () },
        SmbiosTables {
            product_name: "PowerEdge R750".into(),
            manufacturer: "Dell Inc.".into(),
            serial_number: "SVC1234567".into(),
            uuid: "4c4c4544-004d-5610-804c-b2c04f435031".into(),
        },
    ))
}

pub fn init_sensors() -> Option<(SensorsReady, SensorCache)> {
    // Initialize SDR repository, read all sensors...
    // Returns None if IPMI subsystem is not responsive
    Some((
        SensorsReady { _private: () },
        SensorCache {
            cpu_temp: Celsius(68.0),
            inlet_temp: Celsius(24.0),
            fan_readings: vec![
                ("Fan1".into(), Rpm(8400)),
                ("Fan2".into(), Rpm(8200)),
            ],
            psu_power: vec![
                ("PSU1".into(), Watts(285.0)),
                ("PSU2".into(), Watts(290.0)),
            ],
        },
    ))
}

pub fn init_sel() -> Option<(SelReady, TypedSelSummary)> {
    // In production: read SEL entries, parse via ch07's TryFrom,
    // classify via classify_event_health(), aggregate via summarize_sel().
    Some((
        SelReady { _private: () },
        TypedSelSummary {
            total_entries: 42,
            processor_health: HealthValue::OK,
            memory_health: HealthValue::OK,
            power_health: HealthValue::OK,
            thermal_health: HealthValue::OK,
            fan_health: HealthValue::OK,
            storage_health: HealthValue::OK,
            security_health: HealthValue::OK,
        },
    ))
}

Now, functions that populate builder fields from a data source require the corresponding proof token:

/// Populate SMBIOS-sourced fields. Requires proof SMBIOS is available.
fn populate_from_smbios<P, S>(
    builder: ComputerSystemBuilder<MissingField, MissingField, P, S>,
    _proof: &SmbiosReady,
    tables: &SmbiosTables,
) -> ComputerSystemBuilder<HasField, HasField, P, S> {
    builder
        .name(tables.product_name.clone())
        .uuid(tables.uuid.clone())
        .manufacturer(tables.manufacturer.clone())
        .serial_number(tables.serial_number.clone())
}

/// Fallback when SMBIOS is unavailable β€” supplies required fields
/// with safe defaults.
fn populate_smbios_fallback<P, S>(
    builder: ComputerSystemBuilder<MissingField, MissingField, P, S>,
) -> ComputerSystemBuilder<HasField, HasField, P, S> {
    builder
        .name("Unknown System".into())
        .uuid("00000000-0000-0000-0000-000000000000".into())
}

The handler chooses the path based on which tokens are available:

fn build_computer_system(
    smbios: &Option<(SmbiosReady, SmbiosTables)>,
    power_state: PowerStateValue,
    health: ResourceStatus,
) -> serde_json::Value {
    let builder = ComputerSystemBuilder::new()
        .power_state(power_state)
        .status(health);

    let builder = match smbios {
        Some((proof, tables)) => populate_from_smbios(builder, proof, tables),
        None => populate_smbios_fallback(builder),
    };

    // Both paths produce HasField for Name and UUID.
    // .build() is available either way.
    builder.build("1")
}

Bug class eliminated: calling into a subsystem that failed initialization. If SMBIOS didn’t parse, you don’t have a SmbiosReady token β€” the compiler forces you through the fallback path. No runtime if (smbios != NULL) to forget.

Combining Source Tokens with Capability Mixins (ch08)

With multiple Redfish resource types to serve (ComputerSystem, Chassis, Manager, Thermal, Power), source-population logic repeats across handlers. The mixin pattern from ch08 eliminates this duplication. Declare what sources a handler has, and blanket impls provide the population methods automatically:

/// ── Ingredient Traits (ch08) for data sources ──

pub trait HasSmbios {
    fn smbios(&self) -> &(SmbiosReady, SmbiosTables);
}

pub trait HasSensors {
    fn sensors(&self) -> &(SensorsReady, SensorCache);
}

pub trait HasSel {
    fn sel(&self) -> &(SelReady, TypedSelSummary);
}

/// ── Mixin: any handler with SMBIOS gets identity population ──

pub trait IdentityMixin: HasSmbios {
    fn populate_identity<P, S>(
        &self,
        builder: ComputerSystemBuilder<MissingField, MissingField, P, S>,
    ) -> ComputerSystemBuilder<HasField, HasField, P, S> {
        let (_, tables) = self.smbios();
        builder
            .name(tables.product_name.clone())
            .uuid(tables.uuid.clone())
            .manufacturer(tables.manufacturer.clone())
            .serial_number(tables.serial_number.clone())
    }
}

/// Auto-implement for any type that has SMBIOS capability.
impl<T: HasSmbios> IdentityMixin for T {}

/// ── Mixin: any handler with Sensors + SEL gets health rollup ──

pub trait HealthMixin: HasSensors + HasSel {
    fn compute_health(&self) -> ResourceStatus {
        // The ingredient traits return exactly the (proof, data) pairs that
        // compute_system_health expects: pass them through directly rather
        // than cloning the data or re-forging fresh proof tokens.
        compute_system_health(Some(self.sensors()), Some(self.sel()))
    }
}

impl<T: HasSensors + HasSel> HealthMixin for T {}

/// ── Concrete handler owns available sources ──

struct FullPlatformHandler {
    smbios: (SmbiosReady, SmbiosTables),
    sensors: (SensorsReady, SensorCache),
    sel: (SelReady, TypedSelSummary),
}

impl HasSmbios  for FullPlatformHandler {
    fn smbios(&self) -> &(SmbiosReady, SmbiosTables) { &self.smbios }
}
impl HasSensors for FullPlatformHandler {
    fn sensors(&self) -> &(SensorsReady, SensorCache) { &self.sensors }
}
impl HasSel     for FullPlatformHandler {
    fn sel(&self) -> &(SelReady, TypedSelSummary) { &self.sel }
}

// FullPlatformHandler automatically gets:
//   IdentityMixin::populate_identity()   (via HasSmbios)
//   HealthMixin::compute_health()        (via HasSensors + HasSel)
//
// A SensorsOnlyHandler that impls HasSensors but NOT HasSel
// would get IdentityMixin (if it has SMBIOS) but NOT HealthMixin.
// Calling .compute_health() on it β†’ compile error.

This directly mirrors ch08’s BaseBoardController pattern: ingredient traits declare what you have, mixin traits provide behavior via blanket impls, and the compiler gates each mixin on its prerequisites. Adding a new data source (e.g., HasNvme) plus a mixin (e.g., StorageMixin: HasNvme + HasSel) gives health rollup for storage to every handler that has both β€” automatically.


Section 3 β€” Dimensional Types at the Serialization Boundary (ch06)

On the client side (ch17 Β§4), dimensional types prevent reading Β°C as RPM. On the server side, they prevent writing RPM into a Celsius JSON field. This is arguably more dangerous β€” a wrong value on the server propagates to every client.

use serde::Serialize;

// ──── Dimensional types from ch06, with Serialize ────

#[derive(Debug, Clone, Copy, PartialEq, PartialOrd, Serialize)]
pub struct Celsius(pub f64);

#[derive(Debug, Clone, Copy, PartialEq, PartialOrd, Serialize)]
pub struct Rpm(pub u32);

#[derive(Debug, Clone, Copy, PartialEq, PartialOrd, Serialize)]
pub struct Watts(pub f64);

// ──── Redfish Thermal response members ────
// Field types enforce which unit belongs in which JSON property.

#[derive(Serialize)]
#[serde(rename_all = "PascalCase")]
pub struct TemperatureMember {
    pub member_id: String,
    pub name: String,
    pub reading_celsius: Celsius,           // ← must be Celsius
    #[serde(skip_serializing_if = "Option::is_none")]
    pub upper_threshold_critical: Option<Celsius>,
    #[serde(skip_serializing_if = "Option::is_none")]
    pub upper_threshold_fatal: Option<Celsius>,
    pub status: ResourceStatus,
}

#[derive(Serialize)]
#[serde(rename_all = "PascalCase")]
pub struct FanMember {
    pub member_id: String,
    pub name: String,
    pub reading: Rpm,                       // ← must be Rpm
    pub reading_units: &'static str,        // always "RPM"
    pub status: ResourceStatus,
}

#[derive(Serialize)]
#[serde(rename_all = "PascalCase")]
pub struct PowerControlMember {
    pub member_id: String,
    pub name: String,
    pub power_consumed_watts: Watts,        // ← must be Watts
    #[serde(skip_serializing_if = "Option::is_none")]
    pub power_capacity_watts: Option<Watts>,
    pub status: ResourceStatus,
}

// ──── Building a Thermal response from sensor cache ────

fn build_thermal_response(
    _proof: &SensorsReady,
    cache: &SensorCache,
) -> serde_json::Value {
    let temps = vec![
        TemperatureMember {
            member_id: "0".into(),
            name: "CPU Temp".into(),
            reading_celsius: cache.cpu_temp,     // Celsius β†’ Celsius βœ…
            upper_threshold_critical: Some(Celsius(95.0)),
            upper_threshold_fatal: Some(Celsius(105.0)),
            status: ResourceStatus {
                state: StatusState::Enabled,
                health: if cache.cpu_temp < Celsius(95.0) {
                    HealthValue::OK
                } else {
                    HealthValue::Critical
                },
                health_rollup: None,
            },
        },
        TemperatureMember {
            member_id: "1".into(),
            name: "Inlet Temp".into(),
            reading_celsius: cache.inlet_temp,   // Celsius β†’ Celsius βœ…
            upper_threshold_critical: Some(Celsius(42.0)),
            upper_threshold_fatal: None,
            status: ResourceStatus {
                state: StatusState::Enabled,
                health: HealthValue::OK,
                health_rollup: None,
            },
        },

        // ❌ Compile error β€” can't put Rpm in a Celsius field:
        // TemperatureMember {
        //     reading_celsius: cache.fan_readings[0].1,  // Rpm β‰  Celsius
        //     ...
        // }
    ];

    let fans: Vec<FanMember> = cache.fan_readings.iter().enumerate().map(|(i, (name, rpm))| {
        FanMember {
            member_id: i.to_string(),
            name: name.clone(),
            reading: *rpm,                       // Rpm β†’ Rpm βœ…
            reading_units: "RPM",
            status: ResourceStatus {
                state: StatusState::Enabled,
                health: if *rpm > Rpm(1000) { HealthValue::OK } else { HealthValue::Critical },
                health_rollup: None,
            },
        }
    }).collect();

    serde_json::json!({
        "@odata.type": "#Thermal.v1_7_0.Thermal",
        "Temperatures": temps,
        "Fans": fans,
    })
}

Bug class eliminated: unit confusion at serialization. The Redfish schema says ReadingCelsius is in Β°C. The Rust type system says reading_celsius must be Celsius. If a developer accidentally passes Rpm(8400) or Watts(285.0), the compiler catches it before the value ever reaches JSON.


Section 4 β€” Health Rollup as a Typed Fold

Redfish Status.Health is a rollup β€” the worst health of all sub-components. In C, this is typically a series of if checks that inevitably misses a source. With typed enums and Ord, the rollup is a one-line fold β€” and the compiler ensures every source contributes:

/// Roll up health from multiple sources.
/// Ord on HealthValue: OK < Warning < Critical.
/// Returns the worst (max) value.
fn rollup(sources: &[HealthValue]) -> HealthValue {
    sources.iter().copied().max().unwrap_or(HealthValue::OK)
}

/// Compute system-level health from all sub-components.
/// Takes explicit references to every source β€” the caller must provide ALL of them.
fn compute_system_health(
    sensors: Option<&(SensorsReady, SensorCache)>,
    sel: Option<&(SelReady, TypedSelSummary)>,
) -> ResourceStatus {
    let mut inputs = Vec::new();

    // ── Live sensor readings ──
    if let Some((_proof, cache)) = sensors {
        // Temperature health (dimensional: Celsius comparison)
        if cache.cpu_temp > Celsius(95.0) {
            inputs.push(HealthValue::Critical);
        } else if cache.cpu_temp > Celsius(85.0) {
            inputs.push(HealthValue::Warning);
        } else {
            inputs.push(HealthValue::OK);
        }

        // Fan health (dimensional: Rpm comparison)
        for (_name, rpm) in &cache.fan_readings {
            if *rpm < Rpm(500) {
                inputs.push(HealthValue::Critical);
            } else if *rpm < Rpm(1000) {
                inputs.push(HealthValue::Warning);
            } else {
                inputs.push(HealthValue::OK);
            }
        }

        // PSU health (dimensional: Watts comparison)
        for (_name, watts) in &cache.psu_power {
            if *watts > Watts(800.0) {
                inputs.push(HealthValue::Critical);
            } else {
                inputs.push(HealthValue::OK);
            }
        }
    }

    // ── SEL per-subsystem health (from ch07's TypedSelSummary) ──
    // Each subsystem's health was derived by exhaustive matching over
    // every sensor type and event variant. No information was lost.
    if let Some((_proof, sel_summary)) = sel {
        inputs.push(sel_summary.processor_health);
        inputs.push(sel_summary.memory_health);
        inputs.push(sel_summary.power_health);
        inputs.push(sel_summary.thermal_health);
        inputs.push(sel_summary.fan_health);
        inputs.push(sel_summary.storage_health);
        inputs.push(sel_summary.security_health);
    }

    let health = rollup(&inputs);

    ResourceStatus {
        state: StatusState::Enabled,
        health,
        health_rollup: Some(health),
    }
}

Bug class eliminated: incomplete health rollup. In C, forgetting to include PSU status in the health calculation is a silent bug β€” the system reports β€œOK” while a PSU is failing. Here, compute_system_health takes explicit references to every data source. The SEL contribution is no longer a lossy bool β€” it’s seven per-subsystem HealthValue fields derived by exhaustive matching in ch07’s consumer pipeline. Adding a new SEL sensor type forces the classifier to handle it; adding a new subsystem field forces the rollup to include it.


Section 5 β€” Schema Versioning with Phantom Types (ch09)

If the BMC advertises ComputerSystem.v1_13_0, the response must include properties introduced in that schema version (LastResetTime, BootProgress). Advertising v1.13 without those fields is a Redfish Interop Validator failure. Phantom version markers make this a compile-time contract:

use std::marker::PhantomData;

// ──── Schema Version Markers ────

pub struct V1_5;
pub struct V1_13;

// ──── Version-Aware Response ────

pub struct ComputerSystemResponse<V> {
    pub base: ComputerSystemBase,
    _version: PhantomData<V>,
}

pub struct ComputerSystemBase {
    pub id: String,
    pub name: String,
    pub uuid: String,
    pub power_state: PowerStateValue,
    pub status: ResourceStatus,
    pub manufacturer: Option<String>,
    pub serial_number: Option<String>,
    pub bios_version: Option<String>,
}

// Methods available on ALL versions:
impl<V> ComputerSystemResponse<V> {
    pub fn base_json(&self) -> serde_json::Value {
        serde_json::json!({
            "Id": self.base.id,
            "Name": self.base.name,
            "UUID": self.base.uuid,
            "PowerState": self.base.power_state,
            "Status": self.base.status,
        })
    }
}

// ──── v1.13-specific fields ────

/// Date and time of the last system reset.
pub struct LastResetTime(pub String);

/// Boot progress information.
pub struct BootProgress {
    pub last_state: String,
    pub last_state_time: String,
}

impl ComputerSystemResponse<V1_13> {
    /// LastResetTime β€” REQUIRED in v1.13+.
    /// This method only exists on V1_13. If the BMC advertises v1.13
    /// and the handler doesn't call this, the field is missing.
    pub fn last_reset_time(&self) -> LastResetTime {
        // Read from RTC or boot timestamp register
        LastResetTime("2026-03-16T08:30:00Z".to_string())
    }

    /// BootProgress β€” REQUIRED in v1.13+.
    pub fn boot_progress(&self) -> BootProgress {
        BootProgress {
            last_state: "OSRunning".to_string(),
            last_state_time: "2026-03-16T08:32:00Z".to_string(),
        }
    }

    /// Build the full v1.13 JSON response, including version-specific fields.
    pub fn to_json(&self) -> serde_json::Value {
        let mut obj = self.base_json();
        obj["@odata.type"] =
            serde_json::json!("#ComputerSystem.v1_13_0.ComputerSystem");

        let reset_time = self.last_reset_time();
        obj["LastResetTime"] = serde_json::json!(reset_time.0);

        let boot = self.boot_progress();
        obj["BootProgress"] = serde_json::json!({
            "LastState": boot.last_state,
            "LastStateTime": boot.last_state_time,
        });

        obj
    }
}

impl ComputerSystemResponse<V1_5> {
    /// v1.5 JSON β€” no LastResetTime, no BootProgress.
    pub fn to_json(&self) -> serde_json::Value {
        let mut obj = self.base_json();
        obj["@odata.type"] =
            serde_json::json!("#ComputerSystem.v1_5_0.ComputerSystem");
        obj
    }

    // last_reset_time() doesn't exist here.
    // Calling it β†’ compile error:
    //   let resp: ComputerSystemResponse<V1_5> = ...;
    //   resp.last_reset_time();
    //   ❌ ERROR: method `last_reset_time` not found for
    //            `ComputerSystemResponse<V1_5>`
}

Bug class eliminated: schema version mismatch. If the BMC is configured to advertise v1.13, use ComputerSystemResponse<V1_13> and the compiler ensures every v1.13-required field is produced. Downgrade to v1.5? Change the type parameter β€” the v1.13 methods vanish, and no dead fields leak into the response.


Section 6 β€” Typed Action Dispatch (ch02 Inverted)

In ch02, the typed command pattern binds Request β†’ Response on the client side. On the server side, the same pattern validates incoming action payloads and dispatches them type-safely β€” the inverse direction.

use serde::Deserialize;

// ──── Action Trait (mirror of ch02's IpmiCmd trait) ────

/// A Redfish action: the framework deserializes Params from the POST body,
/// then calls execute(). If the JSON doesn't match Params, deserialization
/// fails β€” execute() is never called with bad input.
pub trait RedfishAction {
    /// The expected JSON body structure.
    type Params: serde::de::DeserializeOwned;
    /// The result of executing the action.
    type Result: serde::Serialize;

    fn execute(&self, params: Self::Params) -> Result<Self::Result, RedfishError>;
}

#[derive(Debug)]
pub enum RedfishError {
    InvalidPayload(String),
    ActionFailed(String),
}

// ──── ComputerSystem.Reset ────

pub struct ComputerSystemReset;

#[derive(Debug, Deserialize)]
pub enum ResetType {
    On,
    ForceOff,
    GracefulShutdown,
    GracefulRestart,
    ForceRestart,
    ForceOn,
    PushPowerButton,
}

#[derive(Debug, Deserialize)]
#[serde(rename_all = "PascalCase")]
pub struct ResetParams {
    pub reset_type: ResetType,
}

impl RedfishAction for ComputerSystemReset {
    type Params = ResetParams;
    type Result = ();

    fn execute(&self, params: ResetParams) -> Result<(), RedfishError> {
        match params.reset_type {
            ResetType::GracefulShutdown => {
                // Send ACPI shutdown to host
                println!("Initiating ACPI shutdown");
                Ok(())
            }
            ResetType::ForceOff => {
                // Assert power-off to host
                println!("Forcing power off");
                Ok(())
            }
            ResetType::On | ResetType::ForceOn => {
                println!("Powering on");
                Ok(())
            }
            ResetType::GracefulRestart => {
                println!("ACPI restart");
                Ok(())
            }
            ResetType::ForceRestart => {
                println!("Forced restart");
                Ok(())
            }
            ResetType::PushPowerButton => {
                println!("Simulating power button press");
                Ok(())
            }
            // Exhaustive — compiler catches missing variants
        }
    }
}

// ──── Manager.ResetToDefaults ────

pub struct ManagerResetToDefaults;

#[derive(Debug, Deserialize)]
pub enum ResetToDefaultsType {
    ResetAll,
    PreserveNetworkAndUsers,
    PreserveNetwork,
}

#[derive(Debug, Deserialize)]
#[serde(rename_all = "PascalCase")]
pub struct ResetToDefaultsParams {
    pub reset_to_defaults_type: ResetToDefaultsType,
}

impl RedfishAction for ManagerResetToDefaults {
    type Params = ResetToDefaultsParams;
    type Result = ();

    fn execute(&self, params: ResetToDefaultsParams) -> Result<(), RedfishError> {
        match params.reset_to_defaults_type {
            ResetToDefaultsType::ResetAll => {
                println!("Full factory reset");
                Ok(())
            }
            ResetToDefaultsType::PreserveNetworkAndUsers => {
                println!("Reset preserving network + users");
                Ok(())
            }
            ResetToDefaultsType::PreserveNetwork => {
                println!("Reset preserving network config");
                Ok(())
            }
        }
    }
}

// ──── Generic Action Dispatcher ────

fn dispatch_action<A: RedfishAction>(
    action: &A,
    raw_body: &str,
) -> Result<A::Result, RedfishError> {
    // Deserialization validates the payload structure.
    // If the JSON doesn't match A::Params, this fails
    // and execute() is never called.
    let params: A::Params = serde_json::from_str(raw_body)
        .map_err(|e| RedfishError::InvalidPayload(e.to_string()))?;

    action.execute(params)
}

// ── Usage ──

fn handle_reset_action(body: &str) -> Result<(), RedfishError> {
    // Type-safe: ResetParams is validated by serde before execute()
    dispatch_action(&ComputerSystemReset, body)?;
    Ok(())

    // Invalid JSON: {"ResetType": "Explode"}
    // → serde error: "unknown variant `Explode`"
    // → execute() never called

    // Missing field: {}
    // → serde error: "missing field `ResetType`"
    // → execute() never called
}

Bug classes eliminated:

  • Invalid action payload: serde rejects unknown enum variants and missing fields before execute() is called. No manual if (body["ResetType"] == ...) chains.
  • Missing variant handling: match params.reset_type is exhaustive — adding a new ResetType variant forces every action handler to be updated.
  • Type confusion: ComputerSystemReset expects ResetParams; ManagerResetToDefaults expects ResetToDefaultsParams. The trait system prevents passing one action’s params to another action’s handler.

Section 7 — Putting It All Together: The GET Handler

Here’s the complete handler that composes all six sections into a single schema-compliant response:

/// Complete GET /redfish/v1/Systems/1 handler.
///
/// Every required field is enforced by the builder type-state.
/// Every data source is gated by availability tokens.
/// Every unit is locked to its dimensional type.
/// Every health input feeds the typed rollup.
fn handle_get_computer_system(
    smbios: &Option<(SmbiosReady, SmbiosTables)>,
    sensors: &Option<(SensorsReady, SensorCache)>,
    sel: &Option<(SelReady, TypedSelSummary)>,
    power_state: PowerStateValue,
    bios_version: Option<String>,
) -> serde_json::Value {
    // ── 1. Health rollup (Section 4) ──
    // Folds health from sensors + SEL into a single typed status
    let health = compute_system_health(
        sensors.as_ref(),
        sel.as_ref(),
    );

    // ── 2. Builder type-state (Section 1) ──
    let builder = ComputerSystemBuilder::new()
        .power_state(power_state)
        .status(health);

    // ── 3. Source-availability tokens (Section 2) ──
    let builder = match smbios {
        Some((proof, tables)) => {
            // SMBIOS available — populate from hardware
            populate_from_smbios(builder, proof, tables)
        }
        None => {
            // SMBIOS unavailable — safe defaults
            populate_smbios_fallback(builder)
        }
    };

    // ── 4. Optional enrichment from sensors (Section 3) ──
    let builder = if let Some((_proof, cache)) = sensors {
        builder
            .processor_summary(ProcessorSummary {
                count: 2,
                status: ResourceStatus {
                    state: StatusState::Enabled,
                    health: if cache.cpu_temp < Celsius(95.0) {
                        HealthValue::OK
                    } else {
                        HealthValue::Critical
                    },
                    health_rollup: None,
                },
            })
    } else {
        builder
    };

    let builder = match bios_version {
        Some(v) => builder.bios_version(v),
        None => builder,
    };

    // ── 5. Build (Section 1) ──
    // .build() is available because both paths (SMBIOS present / absent)
    // produce HasField for Name and UUID. The compiler verified this.
    builder.build("1")
}

// ──── Server Startup ────

fn main() {
    // Initialize all data sources — each returns an availability token
    let smbios = init_smbios();
    let sensors = init_sensors();
    let sel = init_sel();

    // Simulate handler call
    let response = handle_get_computer_system(
        &smbios,
        &sensors,
        &sel,
        PowerStateValue::On,
        Some("2.10.1".into()),
    );

    // NOTE: .unwrap() is used for brevity — handle errors in production.
    println!("{}", serde_json::to_string_pretty(&response).unwrap());
}

Expected output:

{
  "@odata.id": "/redfish/v1/Systems/1",
  "@odata.type": "#ComputerSystem.v1_13_0.ComputerSystem",
  "Id": "1",
  "Name": "PowerEdge R750",
  "UUID": "4c4c4544-004d-5610-804c-b2c04f435031",
  "PowerState": "On",
  "Status": {
    "State": "Enabled",
    "Health": "OK",
    "HealthRollup": "OK"
  },
  "Manufacturer": "Dell Inc.",
  "SerialNumber": "SVC1234567",
  "BiosVersion": "2.10.1",
  "ProcessorSummary": {
    "Count": 2,
    "Status": {
      "State": "Enabled",
      "Health": "OK"
    }
  }
}

What the Compiler Proves (Server Side)

| # | Bug class | How it’s prevented | Pattern (Section) |
|---|---|---|---|
| 1 | Missing required field in response | .build() requires all type-state markers to be HasField | Builder type-state (§1) |
| 2 | Calling into failed subsystem | Source-availability tokens gate data access | Capability tokens (§2) |
| 3 | No fallback for unavailable source | Both match arms (present/absent) must produce HasField | Type-state + exhaustive match (§2) |
| 4 | Wrong unit in JSON field | reading_celsius: Celsius ≠ Rpm ≠ Watts | Dimensional types (§3) |
| 5 | Incomplete health rollup | compute_system_health takes explicit source refs; SEL provides per-subsystem HealthValue via ch07’s TypedSelSummary | Typed function signature + exhaustive matching (§4) |
| 6 | Schema version mismatch | ComputerSystemResponse<V1_13> has last_reset_time(); V1_5 doesn’t | Phantom types (§5) |
| 7 | Invalid action payload accepted | serde rejects unknown/missing fields before execute() | Typed action dispatch (§6) |
| 8 | Missing action variant handling | match params.reset_type is exhaustive | Enum exhaustiveness (§6) |
| 9 | Wrong action params to wrong handler | RedfishAction::Params is an associated type | Typed commands inverted (§6) |

Total runtime overhead: zero. The builder markers, availability tokens, phantom version types, and dimensional newtypes all compile away. The JSON produced is identical to the hand-rolled C version — minus nine classes of bugs.
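That zero-overhead claim is directly checkable: a phantom marker adds no bytes to the struct it tags. A minimal standalone sketch (VersionedResponse here is an illustrative reduction, not this chapter's full response type):

```rust
use std::marker::PhantomData;

// Illustrative version markers, reduced from this chapter's V1_5 / V1_13.
pub struct V1_5;
pub struct V1_13;

// A response tagged with a schema version. The marker is zero-sized,
// so the tagged struct has the same layout as the untagged data.
pub struct VersionedResponse<V> {
    pub power_state: u8,
    _version: PhantomData<V>,
}

impl<V> VersionedResponse<V> {
    pub fn new(power_state: u8) -> Self {
        VersionedResponse { power_state, _version: PhantomData }
    }
}

// Bytes added by the version marker: zero.
pub fn marker_overhead() -> usize {
    std::mem::size_of::<VersionedResponse<V1_13>>() - std::mem::size_of::<u8>()
}
```

Both VersionedResponse<V1_5> and VersionedResponse<V1_13> are exactly one byte here; the version distinction lives only in the type checker.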


The Mirror: Client vs. Server Pattern Map

| Concern | Client (ch17) | Server (this chapter) |
|---|---|---|
| Boundary direction | Inbound: JSON → typed values | Outbound: typed values → JSON |
| Core principle | “Parse, don’t validate” | “Construct, don’t serialize” |
| Field completeness | TryFrom validates required fields are present | Builder type-state gates .build() on required fields |
| Unit safety | Celsius ≠ Rpm when reading | Celsius ≠ Rpm when writing |
| Privilege / availability | Capability tokens gate requests | Availability tokens gate data source access |
| Data sources | Single source (BMC) | Multiple sources (SMBIOS, sensors, SEL, PCIe, …) |
| Schema version | Phantom types prevent accessing unsupported fields | Phantom types enforce providing version-required fields |
| Actions | Client sends typed action POST | Server validates + dispatches via RedfishAction trait |
| Health | Read and trust Status.Health | Compute Status.Health via typed rollup |
| Failure propagation | One bad parse → one client error | One bad serialization → every client sees wrong data |

The two chapters form a complete story. Ch17: “Every response I consume is type-checked.” This chapter: “Every response I produce is type-checked.” The same patterns flow in both directions — the type system doesn’t know or care which end of the wire you’re on.

Key Takeaways

  1. “Construct, don’t serialize” is the server-side mirror of “parse, don’t validate” — use builder type-state so .build() only exists when all required fields are present.
  2. Source-availability tokens prove initialization — the same capability token pattern from ch04, repurposed to prove a data source is ready.
  3. Dimensional types protect producers and consumers — putting Rpm in a ReadingCelsius field is a compile error, not a customer-reported bug.
  4. Health rollup is a typed fold — Ord on HealthValue plus explicit source references mean the compiler catches “forgot to include PSU status.”
  5. Schema versioning at the type level — phantom type parameters make version-specific fields appear and disappear at compile time.
  6. Action dispatch inverts ch02 — serde deserializes the payload into a typed Params struct, and exhaustive matching on enum variants means adding a new ResetType forces every handler to be updated.
  7. Server-side bugs propagate to every client — that’s why compile-time correctness on the producer side is even more critical than on the consumer side.

Fourteen Tricks from the Trenches 🟡

What you’ll learn: Fourteen smaller correct-by-construction techniques — from sentinel elimination and sealed traits to session types, Pin, RAII, and #[must_use] — each eliminating a specific bug class for near-zero effort.

Cross-references: ch02 (sealed traits), ch05 (typestate builder), ch07 (FromStr)

Fourteen Tricks from the Trenches

The eight core patterns (ch02–ch09) cover the major correct-by-construction techniques. This chapter collects fourteen smaller but high-value tricks that show up repeatedly in production Rust code — each one eliminates a specific class of bug for zero or near-zero effort.

Trick 1 — Sentinel → Option at the Boundary

Hardware protocols are full of sentinel values: IPMI uses 0xFF for “sensor not present,” PCI uses 0xFFFF for “no device,” and SMBIOS uses 0x00 for “unknown.” If you carry these sentinels through your code as plain integers, every consumer must remember to check for the magic value. If even one comparison forgets, you get a phantom 255 °C reading or a spurious vendor-ID match.

The rule: Convert sentinels to Option at the very first parse boundary, and convert back to the sentinel only at the serialization boundary.

The anti-pattern (from pcie_tree/src/lspci.rs)

// Sentinel carried internally — every comparison must remember
let mut current_vendor_id: u16 = 0xFFFF;
let mut current_device_id: u16 = 0xFFFF;

// ... later, parsing fails silently ...
current_vendor_id = u16::from_str_radix(hex, 16)
    .unwrap_or(0xFFFF);  // sentinel hides the error

Every function that receives current_vendor_id must know that 0xFFFF is special. If someone writes if vendor_id == target_id without checking for 0xFFFF first, a missing device silently matches when the target also happens to be parsed from bad input as 0xFFFF.

The correct pattern (from nic_sel/src/events.rs)

pub struct ThermalEvent {
    pub record_id: u16,
    pub temperature: Option<u8>,  // None if sensor reports 0xFF
}

impl ThermalEvent {
    pub fn from_raw(record_id: u16, raw_temp: u8) -> Self {
        ThermalEvent {
            record_id,
            temperature: if raw_temp != 0xFF {
                Some(raw_temp)
            } else {
                None
            },
        }
    }
}

Now every consumer must handle the None case — the compiler forces it:

// Safe — compiler ensures we handle missing temps
fn is_overtemp(temp: Option<u8>, threshold: u8) -> bool {
    temp.map_or(false, |t| t > threshold)
}

// Forgetting to handle None is a compile error:
// fn bad_check(temp: Option<u8>, threshold: u8) -> bool {
//     temp > threshold  // ERROR: can't compare Option<u8> with u8
// }

Real-world impact

inventory/src/events.rs uses the same pattern for GPU thermal alerts:

temperature: if data[1] != 0xFF {
    Some(data[1] as i8)
} else {
    None
},

The refactoring for pcie_tree/src/lspci.rs is straightforward: change current_vendor_id: u16 to current_vendor_id: Option<u16>, replace 0xFFFF with None, and let the compiler find every site that needs updating.

| Before | After |
|---|---|
| let mut vendor_id: u16 = 0xFFFF | let mut vendor_id: Option<u16> = None |
| .unwrap_or(0xFFFF) | .ok() (already returns Option) |
| if vendor_id != 0xFFFF { ... } | if let Some(vid) = vendor_id { ... } |
| Serialization: vendor_id | vendor_id.unwrap_or(0xFFFF) |
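The whole round trip the rule describes (decode the sentinel once when parsing, re-encode it once when serializing) fits in two small functions; a sketch, with illustrative function names not taken from pcie_tree:

```rust
/// Parse boundary: 0xFFFF means "no device", so map it to None once, here.
pub fn parse_vendor_id(raw: u16) -> Option<u16> {
    if raw == 0xFFFF { None } else { Some(raw) }
}

/// Serialization boundary: reintroduce the sentinel only when writing back out.
pub fn encode_vendor_id(vendor_id: Option<u16>) -> u16 {
    vendor_id.unwrap_or(0xFFFF)
}
```

Everything between the two boundaries works with Option<u16>, so a forgotten sentinel check is a type error rather than a silent bad match.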

Trick 2 — Sealed Traits

Chapter 2 introduced IpmiCmd with an associated type that binds each command to its response. But there’s a loophole: if any code can implement IpmiCmd, someone could write a MaliciousCmd whose parse_response returns the wrong type or panics. The type safety of the entire system rests on every implementation being correct.

A sealed trait closes this loophole. The idea is simple: make the trait require a private supertrait that only your crate can implement.

// — Private module: not exported from the crate —
mod private {
    pub trait Sealed {}
}

// — Public trait: requires Sealed, which outsiders can't implement —
pub trait IpmiCmd: private::Sealed {
    type Response;
    fn net_fn(&self) -> u8;
    fn cmd_byte(&self) -> u8;
    fn payload(&self) -> Vec<u8>;
    fn parse_response(&self, raw: &[u8]) -> io::Result<Self::Response>;
}

Inside your crate, you implement Sealed for each approved command type:

pub struct ReadTemp { pub sensor_id: u8 }
impl private::Sealed for ReadTemp {}

impl IpmiCmd for ReadTemp {
    type Response = Celsius;
    fn net_fn(&self) -> u8 { 0x04 }
    fn cmd_byte(&self) -> u8 { 0x2D }
    fn payload(&self) -> Vec<u8> { vec![self.sensor_id] }
    fn parse_response(&self, raw: &[u8]) -> io::Result<Celsius> {
        if raw.is_empty() { return Err(io::Error::new(io::ErrorKind::InvalidData, "empty")); }
        Ok(Celsius(raw[0] as f64))
    }
}

External code sees IpmiCmd and can call execute(), but cannot implement it:

// In another crate:
struct EvilCmd;
// impl private::Sealed for EvilCmd {}  // ERROR: module `private` is private
// impl IpmiCmd for EvilCmd { ... }     // ERROR: `Sealed` is not satisfied

When to seal

| Seal when… | Don’t seal when… |
|---|---|
| Safety depends on correct implementation (IpmiCmd, DiagModule) | Users should extend the system (custom report formatters) |
| Associated types must satisfy invariants | The trait is a simple capability marker (HasIpmi) |
| You own the canonical set of implementations | Third-party plugins are a design goal |

Real-world candidates

  • IpmiCmd β€” incorrect parse could corrupt typed responses
  • DiagModule β€” framework assumes run() returns valid DER records
  • SelEventFilter β€” broken filter could swallow critical SEL events

Trick 3 — #[non_exhaustive] for Evolving Enums

SkuVariant in inventory/src/types.rs today has five variants:

pub enum SkuVariant {
    S1001, S2001, S2002, S2003, S3001,
}

When the next generation ships and you add S4001, any external crate that matches on SkuVariant without a wildcard arm stops compiling — adding a variant is a semver-breaking change for every downstream consumer. Inside your own crate, that breakage is exactly what you want: the compiler points you at every match that must handle the new variant.

Marking the enum #[non_exhaustive] gives you both behaviors at once. External crates that match on it are forced to include a wildcard arm, so adding S4001 doesn’t break them. Within the defining crate, #[non_exhaustive] has no effect — you can still write exhaustive matches, and adding the variant flags every internal match that needs updating.

Why this is useful: When you publish SkuVariant from a library crate (or a shared sub-crate in a workspace), downstream code is forced to handle unknown future variants up front. When S4001 ships next generation, downstream code already compiles — it has a wildcard arm.

// In the inventory crate (the defining crate):
#[non_exhaustive]
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
pub enum SkuVariant {
    S1001,
    S2001,
    S2002,
    S2003,
    S3001,
    // When the next SKU ships, add it here.
    // External consumers already have a wildcard — zero breakage for them.
}

// Within inventory itself — exhaustive match is allowed (no wildcard needed):
fn diag_path_internal(sku: SkuVariant) -> &'static str {
    match sku {
        SkuVariant::S1001 => "legacy_gen1",
        SkuVariant::S2001 => "gen2_accel_diag",
        SkuVariant::S2002 => "gen2_alt_diag",
        SkuVariant::S2003 => "gen2_alt_hf_diag",
        SkuVariant::S3001 => "gen3_accel_diag",
        // No wildcard needed inside the defining crate.
        // Adding S4001 here will cause a compile error at this match,
        // which is exactly what you want — it forces you to update it.
    }
}

// In the binary crate (a downstream crate that depends on inventory):
fn diag_path_external(sku: inventory::SkuVariant) -> &'static str {
    match sku {
        inventory::SkuVariant::S1001 => "legacy_gen1",
        inventory::SkuVariant::S2001 => "gen2_accel_diag",
        inventory::SkuVariant::S2002 => "gen2_alt_diag",
        inventory::SkuVariant::S2003 => "gen2_alt_hf_diag",
        inventory::SkuVariant::S3001 => "gen3_accel_diag",
        _ => "generic_diag",  // REQUIRED by #[non_exhaustive] for external crates
    }
}

Workspace tip: If all your code is in a single crate, #[non_exhaustive] won’t help — it only affects cross-crate boundaries. For the project’s large workspace, place evolving enums in a shared crate (core_lib or inventory) so the attribute protects consumers in other workspace crates.

Candidates

| Enum | Module | Why |
|---|---|---|
| SkuVariant | inventory, net_inventory | New SKUs every generation |
| SensorType | protocol_lib | IPMI spec reserves 0xC0–0xFF for OEM |
| CompletionCode | protocol_lib | Custom BMC vendors add codes |
| Component | event_handler | New hardware categories (NewSoC was recently added) |

Trick 4 — Typestate Builder

Chapter 5 showed type-state for protocols (session lifecycles, link training). The same idea applies to builders — structs whose build() / finish() can only be called when all required fields have been set.

The problem with fluent builders

DerBuilder in diag_framework/src/der.rs today looks like this (simplified):

// Current fluent builder β€” finish() always available
pub struct DerBuilder {
    der: Der,
}

impl DerBuilder {
    pub fn new(marker: &str, fault_code: u32) -> Self { ... }
    pub fn mnemonic(mut self, m: &str) -> Self { ... }
    pub fn fault_class(mut self, fc: &str) -> Self { ... }
    pub fn finish(self) -> Der { self.der }  // ← always callable!
}

This compiles without error, but produces an incomplete DER record:

let bad = DerBuilder::new("CSI_ERR", 62691)
    .finish();  // oops — no mnemonic, no fault_class

Typestate builder: finish() requires both fields

pub struct Missing;
pub struct Set<T>(T);

pub struct DerBuilder<Mnemonic, FaultClass> {
    marker: String,
    fault_code: u32,
    mnemonic: Mnemonic,
    fault_class: FaultClass,
    description: Option<String>,
}

// Constructor: starts with both required fields Missing
impl DerBuilder<Missing, Missing> {
    pub fn new(marker: &str, fault_code: u32) -> Self {
        DerBuilder {
            marker: marker.to_string(),
            fault_code,
            mnemonic: Missing,
            fault_class: Missing,
            description: None,
        }
    }
}

// Set mnemonic (works regardless of fault_class's state)
impl<FC> DerBuilder<Missing, FC> {
    pub fn mnemonic(self, m: &str) -> DerBuilder<Set<String>, FC> {
        DerBuilder {
            marker: self.marker, fault_code: self.fault_code,
            mnemonic: Set(m.to_string()),
            fault_class: self.fault_class,
            description: self.description,
        }
    }
}

// Set fault_class (works regardless of mnemonic's state)
impl<MN> DerBuilder<MN, Missing> {
    pub fn fault_class(self, fc: &str) -> DerBuilder<MN, Set<String>> {
        DerBuilder {
            marker: self.marker, fault_code: self.fault_code,
            mnemonic: self.mnemonic,
            fault_class: Set(fc.to_string()),
            description: self.description,
        }
    }
}

// Optional fields — available in ANY state
impl<MN, FC> DerBuilder<MN, FC> {
    pub fn description(mut self, desc: &str) -> Self {
        self.description = Some(desc.to_string());
        self
    }
}

/// The fully-built DER record.
pub struct Der {
    pub marker: String,
    pub fault_code: u32,
    pub mnemonic: String,
    pub fault_class: String,
    pub description: Option<String>,
}

// finish() ONLY available when both required fields are Set
impl DerBuilder<Set<String>, Set<String>> {
    pub fn finish(self) -> Der {
        Der {
            marker: self.marker,
            fault_code: self.fault_code,
            mnemonic: self.mnemonic.0,
            fault_class: self.fault_class.0,
            description: self.description,
        }
    }
}

Now the buggy call is a compile error:

// ✅ Compiles — both required fields set (in any order)
let der = DerBuilder::new("CSI_ERR", 62691)
    .fault_class("GPU Module")   // order doesn't matter
    .mnemonic("ACCEL_CARD_ER691")
    .description("Thermal throttle")
    .finish();

// ❌ Compile error — finish() doesn't exist on DerBuilder<Set<String>, Missing>
let bad = DerBuilder::new("CSI_ERR", 62691)
    .mnemonic("ACCEL_CARD_ER691")
    .finish();  // ERROR: method `finish` not found

When to use typestate builders

| Use when… | Don’t bother when… |
|---|---|
| Omitting a field causes silent bugs (DER missing mnemonic) | All fields have sensible defaults |
| The builder is part of a public API | The builder is test-only scaffolding |
| More than 2–3 required fields | Single required field (just take it in new()) |

Trick 5 — FromStr as a Validation Boundary

Chapter 7 showed TryFrom<&[u8]> for binary data (FRU records, SEL entries). For string inputs — config files, CLI arguments, JSON fields — the analogous boundary is FromStr.

The problem

// C++ / unvalidated Rust: silently falls through to a default
fn route_diag(level: &str) -> DiagMode {
    if level == "quick" { ... }
    else if level == "standard" { ... }
    else { QuickMode }  // typo in config?  ¯\_(ツ)_/¯
}

A config file with "diag_level": "extendedd" (typo) silently gets QuickMode.

The pattern (from config_loader/src/diag.rs)

use std::str::FromStr;

#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum DiagLevel {
    Quick,
    Standard,
    Extended,
    Stress,
}

impl FromStr for DiagLevel {
    type Err = String;
    fn from_str(s: &str) -> Result<Self, Self::Err> {
        match s.to_lowercase().as_str() {
            "quick"    | "1" => Ok(DiagLevel::Quick),
            "standard" | "2" => Ok(DiagLevel::Standard),
            "extended" | "3" => Ok(DiagLevel::Extended),
            "stress"   | "4" => Ok(DiagLevel::Stress),
            other => Err(format!("unknown diag level: '{other}'")),
        }
    }
}

Now a typo is caught immediately:

let level: DiagLevel = "extendedd".parse()?;
// Err("unknown diag level: 'extendedd'")

The three benefits

  1. Fail-fast: Bad input is caught at the parsing boundary, not three layers deep in diagnostic logic.
  2. Aliases are explicit: "MEM", "DIMM", and "MEMORY" all map to Component::Memory — the match arms document the mapping.
  3. .parse() is ergonomic: Because FromStr integrates with str::parse(), you get clean one-liners: let level: DiagLevel = config["level"].parse()?;
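As a concrete illustration of benefit 2, here is a sketch of an alias-mapping parser in the same style, reduced to two variants (the real Component enum in event_handler has more):

```rust
use std::str::FromStr;

// Illustrative reduction: the real Component enum has more variants.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum Component {
    Memory,
    Disk,
}

impl FromStr for Component {
    type Err = String;
    fn from_str(s: &str) -> Result<Self, Self::Err> {
        // The match arms document the alias mapping explicitly.
        match s.to_uppercase().as_str() {
            "MEM" | "DIMM" | "MEMORY" => Ok(Component::Memory),
            "SSD" | "NVME" | "DISK" => Ok(Component::Disk),
            other => Err(format!("unknown component: '{other}'")),
        }
    }
}
```

The to_uppercase() normalization means "dimm" and "DIMM" both parse, while anything outside the alias list fails loudly at the boundary.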

Real codebase usage

The project already has 8 FromStr implementations:

| Type | Module | Notable aliases |
|---|---|---|
| DiagLevel | config_loader | "1" = Quick, "4" = Stress |
| Component | event_handler | "MEM" / "DIMM" = Memory, "SSD" / "NVME" = Disk |
| SkuVariant | net_inventory | "Accel-X1" = S2001, "Accel-M1" = S2002, "Accel-Z1" = S3001 |
| SkuVariant | inventory | Same aliases (separate module, same pattern) |
| FaultStatus | config_loader | Fault lifecycle states |
| DiagAction | config_loader | Remediation action types |
| ActionType | config_loader | Action categories |
| DiagMode | cluster_diag | Multi-node test modes |

The contrast with TryFrom:

| | TryFrom<&[u8]> | FromStr |
|---|---|---|
| Input | Raw bytes (binary protocols) | Strings (configs, CLI, JSON) |
| Typical source | IPMI, PCIe config space, FRU | JSON fields, env vars, user input |
| Chapter | ch07 | ch11 |

Both use Result — forcing the caller to handle invalid input.

Trick 6 — Const Generics for Compile-Time Size Validation

When hardware buffers, register banks, or protocol frames have fixed sizes, const generics let the compiler enforce them:

/// A fixed-size register bank. The size is part of the type.
/// `RegisterBank<256>` and `RegisterBank<4096>` are different types.
pub struct RegisterBank<const N: usize> {
    data: [u8; N],
}

impl<const N: usize> RegisterBank<N> {
    /// Read a register at the given offset.
    /// Compile-time: N is known, so the array size is fixed.
    /// Runtime: only the offset is checked.
    pub fn read(&self, offset: usize) -> Option<u8> {
        self.data.get(offset).copied()
    }
}

// PCIe conventional config space: 256 bytes
type PciConfigSpace = RegisterBank<256>;

// PCIe extended config space: 4096 bytes
type PcieExtConfigSpace = RegisterBank<4096>;

// These are different types — can't accidentally pass one for the other:
fn read_extended_cap(config: &PcieExtConfigSpace, offset: usize) -> Option<u8> {
    config.read(offset)
}
// read_extended_cap(&pci_config, 0x100);
//                   ^^^^^^^^^^^ expected RegisterBank<4096>, found RegisterBank<256> ❌

Asserting the allowed sizes with const generics:

/// NVMe buffers here are 512 or 4096 bytes. Enforce the allowed sizes.
pub struct NvmeBuffer<const N: usize> {
    data: Box<[u8; N]>,
}

impl<const N: usize> NvmeBuffer<N> {
    pub fn new() -> Self {
        // Runtime assertion: only 512 or 4096 allowed
        assert!(N == 4096 || N == 512, "NVMe buffers must be 512 or 4096 bytes");
        NvmeBuffer { data: Box::new([0u8; N]) }
    }
}
// NvmeBuffer::<1024>::new();  // panics at runtime with this form
// For true compile-time enforcement, see Trick 9 (const assertions).

When to use: Fixed-size protocol buffers (NVMe, PCIe config space), DMA descriptors, hardware FIFO depths. Anywhere the size is a hardware constant that should never vary at runtime.
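One handy companion at the runtime/compile-time boundary: the standard library's TryFrom<&[u8]> impl for [u8; N] converts a runtime slice into a fixed-size array exactly once, after which every downstream signature carries the size guarantee. A sketch (frame_checksum is a hypothetical consumer):

```rust
/// Convert a runtime slice into a fixed-size frame exactly once.
/// After this boundary, `[u8; N]` consumers can never see a wrong-sized buffer.
pub fn fixed_frame<const N: usize>(raw: &[u8]) -> Option<[u8; N]> {
    raw.try_into().ok()
}

/// A hypothetical consumer that needs exactly 4 bytes; the signature says so.
pub fn frame_checksum(frame: [u8; 4]) -> u8 {
    frame.iter().fold(0u8, |acc, b| acc.wrapping_add(*b))
}
```

The length check happens once in fixed_frame; frame_checksum and friends need no runtime validation at all.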


Trick 7 — Safe Wrappers Around unsafe

The project currently has zero unsafe blocks. But when you add MMIO register access, DMA, or FFI to accel-mgmt/accel-query, you’ll need unsafe. The correct-by-construction approach: wrap every unsafe block in a safe abstraction so the unsafety is contained and auditable.

/// MMIO-mapped register. The pointer is valid for the lifetime of the mapping.
/// All unsafe is contained in this module — callers use safe methods.
pub struct MmioRegion {
    base: *mut u8,
    len: usize,
}

impl MmioRegion {
    /// # Safety
    /// - `base` must be a valid pointer to an MMIO-mapped region
    /// - The region must remain mapped for the lifetime of this struct
    /// - No other code may alias this region
    pub unsafe fn new(base: *mut u8, len: usize) -> Self {
        MmioRegion { base, len }
    }

    /// Safe read — bounds checking prevents out-of-bounds MMIO access.
    pub fn read_u32(&self, offset: usize) -> Option<u32> {
        // checked_add avoids integer wrap-around on pathological offsets;
        // volatile u32 access also requires 4-byte alignment (base is assumed
        // at least 4-byte aligned, as MMIO mappings are).
        if offset % 4 != 0 || offset.checked_add(4).map_or(true, |end| end > self.len) {
            return None;
        }
        // SAFETY: offset is aligned and bounds-checked above, base is valid per new() contract
        Some(unsafe {
            core::ptr::read_volatile(self.base.add(offset) as *const u32)
        })
    }

    /// Safe write — bounds checking prevents out-of-bounds MMIO access.
    pub fn write_u32(&self, offset: usize, value: u32) -> bool {
        // Same alignment + overflow-safe bounds check as read_u32.
        if offset % 4 != 0 || offset.checked_add(4).map_or(true, |end| end > self.len) {
            return false;
        }
        // SAFETY: offset is aligned and bounds-checked above, base is valid per new() contract
        unsafe {
            core::ptr::write_volatile(self.base.add(offset) as *mut u32, value);
        }
        true
    }
}

Combine with phantom types (ch09) for typed MMIO:

use std::marker::PhantomData;

pub struct ReadOnly;
pub struct ReadWrite;

pub struct TypedMmio<Perm> {
    region: MmioRegion,
    _perm: PhantomData<Perm>,
}

impl TypedMmio<ReadOnly> {
    pub fn read_u32(&self, offset: usize) -> Option<u32> {
        self.region.read_u32(offset)
    }
    // No write method — compile error if you try to write to a ReadOnly region
}

impl TypedMmio<ReadWrite> {
    pub fn read_u32(&self, offset: usize) -> Option<u32> {
        self.region.read_u32(offset)
    }
    pub fn write_u32(&self, offset: usize, value: u32) -> bool {
        self.region.write_u32(offset, value)
    }
}

Guidelines for unsafe wrappers:

| Rule | Why |
|---|---|
| One unsafe fn new() with documented # Safety invariants | Caller takes responsibility once |
| All other methods are safe | Callers can’t trigger UB |
| // SAFETY: comment on every unsafe block | Auditors can verify locally |
| Wrap in a module with #[deny(unsafe_op_in_unsafe_fn)] | Even inside unsafe fn, individual ops need unsafe |
| Run cargo +nightly miri test on the wrapper | Verify memory model compliance |

✅ Checkpoint: Tricks 1–7

You now have seven everyday tricks. Here’s a quick scorecard:

| Trick | Bug class eliminated | Effort to adopt |
|---|---|---|
| 1 | Sentinel confusion (0xFF) | Low — one match at the boundary |
| 2 | Unauthorized trait impls | Low — add Sealed supertrait |
| 3 | Broken consumers after enum growth | Low — one-line attribute |
| 4 | Missing builder fields | Medium — extra type parameters |
| 5 | Typos in string-typed config | Low — impl FromStr |
| 6 | Wrong buffer sizes | Low — const generic parameter |
| 7 | Unsafe scattered across codebase | Medium — wrapper module |

Tricks 8–14 are more advanced — they touch async, const evaluation, session types, Pin, and Drop. Take a break here if you need one; the techniques above are already high-value, low-effort wins you can adopt tomorrow.


Trick 8 — Async Type-State Machines

When hardware drivers use async (e.g., async BMC communication, async NVMe I/O), type-state still works — but ownership across .await points needs care:

use std::marker::PhantomData;

pub struct Idle;
pub struct Authenticating;
pub struct Active;

pub struct AsyncSession<S> {
    host: String,
    _state: PhantomData<S>,
}

impl AsyncSession<Idle> {
    pub fn new(host: &str) -> Self {
        AsyncSession { host: host.to_string(), _state: PhantomData }
    }

    /// Transition Idle → Authenticating → Active.
    /// The Session is consumed (moved into the future) across the .await.
    pub async fn authenticate(self, user: &str, pass: &str)
        -> Result<AsyncSession<Active>, String>
    {
        // Phase 1: send credentials (consumes Idle session)
        let pending: AsyncSession<Authenticating> = AsyncSession {
            host: self.host,
            _state: PhantomData,
        };

        // Simulate async BMC authentication
        // tokio::time::sleep(Duration::from_secs(1)).await;

        // Phase 2: return Active session
        Ok(AsyncSession {
            host: pending.host,
            _state: PhantomData,
        })
    }
}

impl AsyncSession<Active> {
    pub async fn send_command(&mut self, cmd: &[u8]) -> Vec<u8> {
        // async I/O here...
        vec![0x00]
    }
}

// Usage:
// let session = AsyncSession::new("192.168.1.100");
// let mut session = session.authenticate("admin", "pass").await?;
// let resp = session.send_command(&[0x04, 0x2D]).await;

Key rules for async type-state:

| Rule | Why |
|---|---|
| Transition methods take self (by value), not &mut self | Ownership transfer works across .await |
| Return Result<NextState, (Error, PrevState)> for recoverable errors | Caller can retry from the previous state |
| Don't split state across multiple futures | One future owns one session |
| Use Send + 'static bounds if using tokio::spawn | The session must be movable across threads |

Caveat: If you need the previous state back on error (to retry), return Result<AsyncSession<Active>, (Error, AsyncSession<Idle>)> so the caller gets ownership back. Without this, a failed .await drops the session permanently.
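The caveat above, sketched end-to-end. The tiny `block_on` executor is only a stand-in for tokio so the example is self-contained, and the password check is a placeholder for real BMC authentication:

```rust
use std::future::Future;
use std::marker::PhantomData;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

pub struct Idle;
pub struct Active;

#[derive(Debug)]
pub struct AuthError;

pub struct AsyncSession<S> {
    host: String,
    _state: PhantomData<S>,
}

impl AsyncSession<Idle> {
    pub fn new(host: &str) -> Self {
        AsyncSession { host: host.to_string(), _state: PhantomData }
    }

    /// On failure, hand the Idle session back so the caller can retry.
    pub async fn authenticate(self, pass: &str)
        -> Result<AsyncSession<Active>, (AuthError, AsyncSession<Idle>)>
    {
        // Placeholder for real async authentication I/O.
        if pass == "correct" {
            Ok(AsyncSession { host: self.host, _state: PhantomData })
        } else {
            Err((AuthError, self)) // give ownership back to the caller
        }
    }
}

/// Minimal single-future executor — a stand-in for tokio/async-std.
fn block_on<F: Future>(mut fut: F) -> F::Output {
    fn raw() -> RawWaker {
        fn clone(_: *const ()) -> RawWaker { raw() }
        fn noop(_: *const ()) {}
        static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    let waker = unsafe { Waker::from_raw(raw()) };
    let mut cx = Context::from_waker(&waker);
    // SAFETY: `fut` is never moved after being pinned here.
    let mut fut = unsafe { Pin::new_unchecked(&mut fut) };
    loop {
        if let Poll::Ready(out) = fut.as_mut().poll(&mut cx) {
            return out;
        }
    }
}

fn main() {
    let session = AsyncSession::<Idle>::new("192.168.1.100");
    // First attempt fails — we get the Idle session back and retry.
    let session = match block_on(session.authenticate("wrong")) {
        Ok(active) => active,
        Err((_err, idle)) => match block_on(idle.authenticate("correct")) {
            Ok(active) => active,
            Err(_) => panic!("retry failed"),
        },
    };
    assert_eq!(session.host, "192.168.1.100");
}
```

The key line is the error type: `(AuthError, AsyncSession<Idle>)` returns ownership of the previous state, so a failed attempt is recoverable instead of silently dropping the session.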


Trick 9 — Refinement Types via Const Assertions

When a numeric constraint is a compile-time invariant (not runtime data), use const evaluation to enforce it. This differs from Trick 6 (which provides type-level size distinctions) — here we reject invalid values at compile time:

/// A sensor ID that must be in the IPMI SDR range (0x01..=0xFE).
/// The constraint is checked at compile time when `N` is const.
pub struct SdrSensorId<const N: u8>;

impl<const N: u8> SdrSensorId<N> {
    /// Compile-time validation: panics during compilation if N is out of range.
    pub const fn validate() {
        assert!(N >= 0x01, "Sensor ID must be >= 0x01");
        assert!(N <= 0xFE, "Sensor ID must be <= 0xFE (0xFF is reserved)");
    }

    pub const VALIDATED: () = Self::validate();

    pub const fn value() -> u8 { N }
}

// Usage:
fn read_sensor_const<const N: u8>() -> f64 {
    let _ = SdrSensorId::<N>::VALIDATED;  // compile-time check
    // read sensor N...
    42.0
}

// read_sensor_const::<0x20>();   // ✅ compiles — 0x20 is valid
// read_sensor_const::<0x00>();   // ❌ compile error — "Sensor ID must be >= 0x01"
// read_sensor_const::<0xFF>();   // ❌ compile error — 0xFF is reserved

Simpler form — bounded fan IDs:

pub struct BoundedFanId<const N: u8>;

impl<const N: u8> BoundedFanId<N> {
    pub const VALIDATED: () = assert!(N < 8, "Server has at most 8 fans (0..7)");

    pub const fn id() -> u8 {
        let _ = Self::VALIDATED;
        N
    }
}

// BoundedFanId::<3>::id();   // ✅
// BoundedFanId::<10>::id();  // ❌ compile error

When to use: Hardware-defined fixed IDs (sensor IDs, fan slots, PCIe slot numbers) known at compile time. When the value comes from runtime data (config file, user input), use TryFrom / FromStr (ch07, Trick 5) instead.
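For contrast, a sketch of the runtime-data path that last paragraph points to. The `SensorId` newtype and its bounds mirror the const example above; the `TryFrom` impl is the ch07-style validated boundary (names here are illustrative):

```rust
/// Runtime counterpart to the const-generic check: when the sensor ID
/// arrives as data (config file, CLI flag), validate it once at the edge.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub struct SensorId(u8);

impl TryFrom<u8> for SensorId {
    type Error = String;
    fn try_from(n: u8) -> Result<Self, Self::Error> {
        match n {
            // Same IPMI SDR range as the const version: 0x01..=0xFE.
            0x01..=0xFE => Ok(SensorId(n)),
            _ => Err(format!("sensor ID {n:#04x} out of range 0x01..=0xFE")),
        }
    }
}

fn main() {
    assert!(SensorId::try_from(0x20).is_ok());
    assert!(SensorId::try_from(0x00).is_err()); // rejected at runtime
    assert!(SensorId::try_from(0xFF).is_err()); // 0xFF is reserved
}
```

Past the boundary, every `SensorId` is known-valid, so downstream code takes `SensorId` instead of re-checking a bare `u8`.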


Trick 10 — Session Types for Channel Communication

When two components communicate over a channel (e.g., diagnostic orchestrator ↔ worker thread), session types encode the protocol in the type system:

use std::marker::PhantomData;

// Protocol: Client sends Request, Server sends Response, then done.
pub struct SendRequest;
pub struct RecvResponse;
pub struct Done;

/// A typed channel endpoint. `S` is the current protocol state.
pub struct Chan<S> {
    // In real code: wraps a mpsc::Sender/Receiver pair
    _state: PhantomData<S>,
}

impl Chan<SendRequest> {
    /// Send a request — transitions to RecvResponse state.
    pub fn send(self, request: DiagRequest) -> Chan<RecvResponse> {
        // ... send on channel ...
        Chan { _state: PhantomData }
    }
}

impl Chan<RecvResponse> {
    /// Receive a response — transitions to Done state.
    pub fn recv(self) -> (DiagResponse, Chan<Done>) {
        // ... recv from channel ...
        (DiagResponse { passed: true }, Chan { _state: PhantomData })
    }
}

impl Chan<Done> {
    /// Closing the channel — only possible when the protocol is complete.
    pub fn close(self) { /* drop */ }
}

pub struct DiagRequest { pub test_name: String }
pub struct DiagResponse { pub passed: bool }

// The protocol MUST be followed in order:
fn orchestrator(chan: Chan<SendRequest>) {
    let chan = chan.send(DiagRequest { test_name: "gpu_stress".into() });
    let (response, chan) = chan.recv();
    chan.close();
    println!("Result: {}", if response.passed { "PASS" } else { "FAIL" });
}

// Can't recv before send:
// fn wrong_order(chan: Chan<SendRequest>) {
//     chan.recv();  // ❌ no method `recv` on Chan<SendRequest>
// }

When to use: Inter-thread diagnostic protocols, BMC command sequences, any request-response pattern where order matters. For complex multi-message protocols, consider the session-types or rumpsteak crates.
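To make the `Chan` sketch concrete, here is one way to back each endpoint with `std::sync::mpsc` so the typed protocol actually moves data between threads. The `Client`/`Server`/`session` names are assumptions for this sketch, not from the text:

```rust
use std::marker::PhantomData;
use std::sync::mpsc;
use std::thread;

pub struct DiagRequest { pub test_name: String }
pub struct DiagResponse { pub passed: bool }

// Client-side protocol states and their server-side duals.
pub struct SendRequest;
pub struct RecvResponse;
pub struct RecvRequest;
pub struct SendResponse;

pub struct Client<S> {
    tx: mpsc::Sender<DiagRequest>,
    rx: mpsc::Receiver<DiagResponse>,
    _s: PhantomData<S>,
}

pub struct Server<S> {
    tx: mpsc::Sender<DiagResponse>,
    rx: mpsc::Receiver<DiagRequest>,
    _s: PhantomData<S>,
}

/// Create a connected, typed endpoint pair.
pub fn session() -> (Client<SendRequest>, Server<RecvRequest>) {
    let (req_tx, req_rx) = mpsc::channel();
    let (resp_tx, resp_rx) = mpsc::channel();
    (
        Client { tx: req_tx, rx: resp_rx, _s: PhantomData },
        Server { tx: resp_tx, rx: req_rx, _s: PhantomData },
    )
}

impl Client<SendRequest> {
    pub fn send(self, req: DiagRequest) -> Client<RecvResponse> {
        self.tx.send(req).expect("worker hung up");
        Client { tx: self.tx, rx: self.rx, _s: PhantomData }
    }
}

impl Client<RecvResponse> {
    pub fn recv(self) -> DiagResponse {
        self.rx.recv().expect("worker hung up")
    }
}

impl Server<RecvRequest> {
    pub fn recv(self) -> (DiagRequest, Server<SendResponse>) {
        let req = self.rx.recv().expect("client hung up");
        (req, Server { tx: self.tx, rx: self.rx, _s: PhantomData })
    }
}

impl Server<SendResponse> {
    pub fn send(self, resp: DiagResponse) {
        let _ = self.tx.send(resp); // client may already be done
    }
}

fn main() {
    let (client, server) = session();
    let worker = thread::spawn(move || {
        let (req, server) = server.recv();
        server.send(DiagResponse { passed: req.test_name == "gpu_stress" });
    });
    let resp = client.send(DiagRequest { test_name: "gpu_stress".into() }).recv();
    worker.join().unwrap();
    assert!(resp.passed);
}
```

Note the duality: each client state has a mirror-image server state, so neither side can get ahead of the protocol.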


Trick 11 — Pin for Self-Referential State Machines

Some type-state machines need to hold references into their own data (e.g., a parser that tracks a position within its owned buffer). Rust normally forbids this because moving the struct would invalidate the internal pointer. Pinning the value behind Pin<Box<T>> (or another pinned pointer) solves this by guaranteeing the pointee will never be moved again:

use std::pin::Pin;
use std::marker::PhantomPinned;

/// A streaming parser that holds a reference into its own buffer.
/// Once pinned, it cannot be moved — the internal reference stays valid.
pub struct StreamParser {
    buffer: Vec<u8>,
    /// Points into `buffer`. Only valid while pinned.
    cursor: *const u8,
    _pin: PhantomPinned,  // opts out of Unpin — prevents accidental unpinning
}

impl StreamParser {
    pub fn new(data: Vec<u8>) -> Pin<Box<Self>> {
        let parser = StreamParser {
            buffer: data,
            cursor: std::ptr::null(),
            _pin: PhantomPinned,
        };
        let mut boxed = Box::pin(parser);

        // Set cursor to point into the pinned buffer
        let cursor = boxed.buffer.as_ptr();
        // SAFETY: we have exclusive access and the parser is pinned
        unsafe {
            let mut_ref = Pin::as_mut(&mut boxed);
            Pin::get_unchecked_mut(mut_ref).cursor = cursor;
        }

        boxed
    }

    /// Read the next byte — only callable through Pin<&mut Self>.
    pub fn next_byte(self: Pin<&mut Self>) -> Option<u8> {
        // The parser can't be moved, so cursor remains valid
        if self.cursor.is_null() { return None; }
        // ... advance cursor through buffer ...
        Some(42) // stub
    }
}

// Usage:
// let mut parser = StreamParser::new(vec![0x01, 0x02, 0x03]);
// let byte = parser.as_mut().next_byte();

Key insight: Pin is the correct-by-construction solution to the self-referential struct problem. Without it, every user of the struct would have to uphold the no-move invariant by convention; with it, the unsafe is confined to construction, the compiler rejects moves, and the internal pointer invariant is maintained.

| Use Pin when… | Don't use Pin when… |
|---|---|
| State machine holds intra-struct references | All fields are independently owned |
| Async futures that borrow across .await | No self-referencing needed |
| DMA descriptors that must not relocate in memory | Data can be freely moved |
| Hardware ring buffers with internal cursor | Simple index-based iteration works |
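The right-hand column in practice: when the parser only needs a position, an index replaces the internal pointer, and the whole Pin machinery (and its unsafe) disappears. A minimal sketch (the `IndexParser` name is illustrative):

```rust
/// No self-reference: `pos` indexes into `buffer`, so the struct can be
/// moved freely and needs neither Pin nor unsafe.
pub struct IndexParser {
    buffer: Vec<u8>,
    pos: usize,
}

impl IndexParser {
    pub fn new(data: Vec<u8>) -> Self {
        IndexParser { buffer: data, pos: 0 }
    }

    pub fn next_byte(&mut self) -> Option<u8> {
        let b = self.buffer.get(self.pos).copied()?;
        self.pos += 1;
        Some(b)
    }
}

fn main() {
    let mut p = IndexParser::new(vec![1, 2]);
    assert_eq!(p.next_byte(), Some(1));
    let mut moved = p; // moving is fine — no internal pointers to invalidate
    assert_eq!(moved.next_byte(), Some(2));
    assert_eq!(moved.next_byte(), None);
}
```

Reach for Pin only when an index genuinely cannot express the relationship (e.g., borrows across .await or hardware that captures the address).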

Trick 12 — RAII / Drop as a Correctness Guarantee

Rust's Drop trait is a correct-by-construction mechanism: cleanup code cannot be forgotten because the compiler inserts it automatically. This is especially valuable for hardware resources that must be released exactly once.

use std::io;

/// An IPMI session that MUST be closed when done.
/// The `Drop` impl guarantees cleanup even on panic or early `?` return.
pub struct IpmiSession {
    handle: u32,
}

impl IpmiSession {
    pub fn open(host: &str) -> io::Result<Self> {
        // ... negotiate IPMI session ...
        Ok(IpmiSession { handle: 42 })
    }

    pub fn send_raw(&self, _data: &[u8]) -> io::Result<Vec<u8>> {
        Ok(vec![0x00])
    }
}

impl Drop for IpmiSession {
    fn drop(&mut self) {
        // Close Session command: always runs, even on panic/early-return.
        // In C, forgetting CloseSession() leaks a BMC session slot.
        let _ = self.send_raw(&[0x06, 0x3C]);
        eprintln!("[RAII] session {} closed", self.handle);
    }
}
// Usage:
fn diagnose(host: &str) -> io::Result<()> {
    let session = IpmiSession::open(host)?;
    session.send_raw(&[0x04, 0x2D, 0x20])?;
    // No explicit close needed — Drop runs here automatically
    Ok(())
    // Even if send_raw returns Err(...), the session is still closed.
}

The C/C++ failure mode that RAII eliminates:

C:     session = ipmi_open(host);
       ipmi_send(session, data);
       if (error) return -1;        // 🐛 leaked session — forgot close()
       ipmi_close(session);

Rust:  let session = IpmiSession::open(host)?;
       session.send_raw(data)?;     // ✅ Drop runs on ? return
       // Drop always runs — leak is impossible

Combine RAII with type-state (ch05) for ordered cleanup:

You cannot specialize Drop on a generic parameter (Rust error E0366). Instead, use separate wrapper types per state:

use std::marker::PhantomData;

pub struct Open;
pub struct Locked;

pub struct GpuContext<S> {
    device_id: u32,
    _state: PhantomData<S>,
}

impl GpuContext<Open> {
    pub fn lock_clocks(self) -> LockedGpu {
        // ... lock GPU clocks for stable benchmarking ...
        LockedGpu { device_id: self.device_id }
    }
}

/// Separate type for the locked state — has its own Drop.
/// We can't do `impl Drop for GpuContext<Locked>` (E0366),
/// so we use a distinct wrapper that owns the locked resource.
pub struct LockedGpu {
    device_id: u32,
}

impl LockedGpu {
    pub fn run_benchmark(&self) -> f64 {
        // ... benchmark with locked clocks ...
        42.0
    }
}

impl Drop for LockedGpu {
    fn drop(&mut self) {
        // Unlock clocks on drop — only fires for the locked wrapper.
        eprintln!("[RAII] GPU {} clocks unlocked", self.device_id);
    }
}

// GpuContext<Open> has no special Drop — no clocks to unlock.
// LockedGpu always unlocks on drop, even on panic or early return.

Why not impl Drop for GpuContext<Locked>? Rust requires Drop impls to apply to all instantiations of a generic type. To get state-specific cleanup, use one of:

| Approach | Pros | Cons |
|---|---|---|
| Separate wrapper type (above) | Clean, zero-cost | Extra type name |
| Generic Drop + runtime TypeId check | Single type | Requires 'static, runtime cost |
| enum state with exhaustive match in Drop | Single generic type | Runtime dispatch, less type safety |
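A sketch of the third option — one non-generic type whose Drop branches on a runtime state enum. The atomic counter is an assumption added purely so the cleanup is observable in a test:

```rust
use std::sync::atomic::{AtomicU32, Ordering};

/// Counts unlock operations so the behavior below is observable.
static UNLOCKS: AtomicU32 = AtomicU32::new(0);

enum ClockState { Unlocked, Locked }

struct Gpu {
    device_id: u32,
    state: ClockState,
}

impl Drop for Gpu {
    fn drop(&mut self) {
        // Runtime dispatch: only the Locked state needs cleanup.
        if matches!(self.state, ClockState::Locked) {
            UNLOCKS.fetch_add(1, Ordering::SeqCst);
            eprintln!("[RAII] GPU {} clocks unlocked", self.device_id);
        }
    }
}

fn main() {
    { let _g = Gpu { device_id: 0, state: ClockState::Locked }; }   // unlocks
    { let _g = Gpu { device_id: 1, state: ClockState::Unlocked }; } // no-op
    assert_eq!(UNLOCKS.load(Ordering::SeqCst), 1);
}
```

The trade-off from the table is visible here: a single type, but the "locked implies cleanup" rule is now a runtime branch rather than a compile-time guarantee.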

When to use: BMC sessions, GPU clock locks, DMA buffer mappings, file handles, mutex guards, any resource with a mandatory release step. If you find yourself writing fn close(&mut self) or fn cleanup(), it should almost certainly be Drop instead.


Trick 13 — Error Type Hierarchies as Correctness

Well-designed error types prevent silent error swallowing and ensure callers handle each failure mode appropriately. Using thiserror for structured errors is a correct-by-construction pattern: the compiler forces exhaustive matching.

# Cargo.toml
[dependencies]
thiserror = "1"
# For application-level error handling (optional):
# anyhow = "1"

use thiserror::Error;

#[derive(Debug, Error)]
pub enum DiagError {
    #[error("IPMI communication failed: {0}")]
    Ipmi(#[from] IpmiError),

    #[error("sensor {sensor_id:#04x} reading out of range: {value}")]
    SensorRange { sensor_id: u8, value: f64 },

    #[error("GPU {gpu_id} not responding")]
    GpuTimeout { gpu_id: u32 },

    #[error("configuration invalid: {0}")]
    Config(String),
}

#[derive(Debug, Error)]
pub enum IpmiError {
    #[error("session authentication failed")]
    AuthFailed,

    #[error("command {net_fn:#04x}/{cmd:#04x} timed out")]
    Timeout { net_fn: u8, cmd: u8 },

    #[error("completion code {0:#04x}")]
    CompletionCode(u8),
}

// Callers MUST handle each variant — no silent swallowing:
fn run_thermal_check() -> Result<(), DiagError> {
    // If this returns IpmiError, it's automatically converted to DiagError::Ipmi
    // via the #[from] attribute.
    let temp = read_cpu_temp()?;
    if temp > 105.0 {
        return Err(DiagError::SensorRange {
            sensor_id: 0x20,
            value: temp,
        });
    }
    Ok(())
}

fn read_cpu_temp() -> Result<f64, DiagError> { Ok(42.0) }

Why this is correct-by-construction:

| Without structured errors | With thiserror enums |
|---|---|
| fn op() -> Result<T, String> | fn op() -> Result<T, DiagError> |
| Caller gets opaque string | Caller matches on specific variants |
| Can't distinguish auth failure from timeout | DiagError::Ipmi(IpmiError::AuthFailed) vs Timeout |
| Logging swallows the error | match forces handling each case |
| New error variant → nobody notices | New variant → exhaustive matches fail to compile |

The anyhow vs thiserror decision:

| Use thiserror when… | Use anyhow when… |
|---|---|
| Writing a library/crate | Writing a binary/CLI |
| Callers need to match on error variants | Callers just log and exit |
| Error types are part of the public API | Internal error plumbing |
| protocol_lib, accel_diag, thermal_diag | diag_tool main binary |

When to use: Every crate in the workspace should define its own error enum with thiserror. The top-level binary crate can use anyhow to aggregate them. This gives library callers compile-time error handling guarantees while keeping the binary ergonomic.
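To see what the derive buys you, here is a hand-written equivalent of one #[error]/#[from] pair from the DiagError example above, using only std. This is illustrative, not thiserror's exact macro expansion:

```rust
use std::error::Error;
use std::fmt;

#[derive(Debug)]
pub enum IpmiError { AuthFailed }

// What `#[error("session authentication failed")]` generates: Display.
impl fmt::Display for IpmiError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            IpmiError::AuthFailed => write!(f, "session authentication failed"),
        }
    }
}

impl Error for IpmiError {}

#[derive(Debug)]
pub enum DiagError { Ipmi(IpmiError) }

impl fmt::Display for DiagError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            DiagError::Ipmi(e) => write!(f, "IPMI communication failed: {e}"),
        }
    }
}

impl Error for DiagError {
    // `source()` is what lets callers (and anyhow) walk the cause chain.
    fn source(&self) -> Option<&(dyn Error + 'static)> {
        match self {
            DiagError::Ipmi(e) => Some(e),
        }
    }
}

// What `#[from]` generates — this powers the `?` conversion.
impl From<IpmiError> for DiagError {
    fn from(e: IpmiError) -> Self { DiagError::Ipmi(e) }
}

fn main() {
    let e: DiagError = IpmiError::AuthFailed.into();
    assert_eq!(
        e.to_string(),
        "IPMI communication failed: session authentication failed"
    );
}
```

Nothing here is magic: thiserror just spares you the Display/Error/From boilerplate while keeping the compile-time exhaustiveness of plain enums.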


Trick 14 — #[must_use] for Enforcing Consumption

The #[must_use] attribute turns ignored return values into compiler warnings. This is a lightweight correct-by-construction tool that pairs with every pattern in this guide:

/// A calibration token that MUST be used — dropping it silently is a bug.
#[must_use = "calibration token must be passed to calibrate(), not dropped"]
pub struct CalibrationToken {
    _private: (),
}

/// A diagnostic result that MUST be checked — ignoring failures is a bug.
#[must_use = "diagnostic result must be inspected for failures"]
pub struct DiagResult {
    pub passed: bool,
    pub details: String,
}

/// Functions that return important values should be marked too:
#[must_use = "the authenticated session must be used or explicitly closed"]
pub fn authenticate(user: &str, pass: &str) -> Result<Session, AuthError> {
    // ...
    unimplemented!()
}

pub struct Session;
pub struct AuthError;

What the compiler tells you:

warning: unused `CalibrationToken` that must be used
  --> src/main.rs:5:5
   |
5  |     CalibrationToken { _private: () };
   |     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
   |
   = note: calibration token must be passed to calibrate(), not dropped

Apply #[must_use] to these patterns:

| Pattern | What to annotate | Why |
|---|---|---|
| Single-Use Tokens (ch03) | CalibrationToken, FusePayload | Dropping without use = logic bug |
| Capability Tokens (ch04) | AdminToken | Authenticating but ignoring the token |
| Type-State transitions | Return type of authenticate(), activate() | Session created but never used |
| Results | DiagResult, SensorReading | Silent failure swallowing |
| RAII handles (Trick 12) | IpmiSession, LockedGpu | Opening but not using a resource |

Rule of thumb: If dropping a value without using it is always a bug, add #[must_use]. If it's sometimes intentional (e.g., a Vec), don't. The _ prefix (let _ = foo()) explicitly acknowledges and silences the warning — this is fine when the drop is intentional.

Key Takeaways

  1. Sentinel → Option at the boundary — convert magic values to Option on parse; the compiler forces callers to handle None.
  2. Sealed traits close the implementation loophole — a private supertrait means only your crate can implement the trait.
  3. #[non_exhaustive] + #[must_use] are one-line, high-value annotations — add them to evolving enums and consumed tokens.
  4. Typestate builders enforce required fields — finish() only exists when all required type parameters are Set.
  5. Each trick targets a specific bug class — adopt them incrementally; no trick requires rewriting your architecture.

Exercises 🟡

What you'll learn: Hands-on practice applying correct-by-construction patterns to realistic hardware scenarios — NVMe admin commands, firmware update state machines, sensor pipelines, PCIe phantom types, multi-protocol health checks, and session-typed diagnostic protocols.

Cross-references: ch02 (exercise 1), ch05 (exercise 2), ch06 (exercise 3), ch09 (exercise 4), ch10 (exercise 5)

Practice Problems

Exercise 1: NVMe Admin Command (Typed Commands)

Design a typed command interface for NVMe admin commands:

  • Identify β†’ IdentifyResponse (model number, serial, firmware rev)
  • GetLogPage β†’ SmartLog (temperature, available spare, data units read)
  • GetFeature β†’ feature-specific response

Requirements:

  1. The command type determines the response type
  2. No runtime dispatch β€” static dispatch only
  3. Add a NamespaceId newtype that prevents mixing namespace IDs with other u32s

Hint: Follow the IpmiCmd trait pattern from ch02, but use NVMe-specific constants.

Sample Solution (Exercise 1)
use std::io;

#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Hash)]
pub struct NamespaceId(pub u32);

#[derive(Debug, Clone, PartialEq)]
pub struct IdentifyResponse {
    pub model: String,
    pub serial: String,
    pub firmware_rev: String,
}

#[derive(Debug, Clone, PartialEq)]
pub struct SmartLog {
    pub temperature_kelvin: u16,
    pub available_spare_pct: u8,
    pub data_units_read: u64,
}

#[derive(Debug, Clone, PartialEq)]
pub struct ArbitrationFeature {
    pub high_priority_weight: u8,
    pub medium_priority_weight: u8,
    pub low_priority_weight: u8,
}

/// The core pattern: associated type pins each command's response.
pub trait NvmeAdminCmd {
    type Response;
    fn opcode(&self) -> u8;
    fn nsid(&self) -> Option<NamespaceId>;
    fn parse_response(&self, raw: &[u8]) -> io::Result<Self::Response>;
}

pub struct Identify { pub nsid: NamespaceId }

impl NvmeAdminCmd for Identify {
    type Response = IdentifyResponse;
    fn opcode(&self) -> u8 { 0x06 }
    fn nsid(&self) -> Option<NamespaceId> { Some(self.nsid) }
    fn parse_response(&self, raw: &[u8]) -> io::Result<IdentifyResponse> {
        if raw.len() < 12 {
            return Err(io::Error::new(io::ErrorKind::InvalidData, "too short"));
        }
        Ok(IdentifyResponse {
            model: String::from_utf8_lossy(&raw[0..4]).trim().to_string(),
            serial: String::from_utf8_lossy(&raw[4..8]).trim().to_string(),
            firmware_rev: String::from_utf8_lossy(&raw[8..12]).trim().to_string(),
        })
    }
}

pub struct GetLogPage { pub log_id: u8 }

impl NvmeAdminCmd for GetLogPage {
    type Response = SmartLog;
    fn opcode(&self) -> u8 { 0x02 }
    fn nsid(&self) -> Option<NamespaceId> { None }
    fn parse_response(&self, raw: &[u8]) -> io::Result<SmartLog> {
        if raw.len() < 11 {
            return Err(io::Error::new(io::ErrorKind::InvalidData, "too short"));
        }
        Ok(SmartLog {
            temperature_kelvin: u16::from_le_bytes([raw[0], raw[1]]),
            available_spare_pct: raw[2],
            data_units_read: u64::from_le_bytes(raw[3..11].try_into().unwrap()),
        })
    }
}

pub struct GetFeature { pub feature_id: u8 }

impl NvmeAdminCmd for GetFeature {
    type Response = ArbitrationFeature;
    fn opcode(&self) -> u8 { 0x0A }
    fn nsid(&self) -> Option<NamespaceId> { None }
    fn parse_response(&self, raw: &[u8]) -> io::Result<ArbitrationFeature> {
        if raw.len() < 3 {
            return Err(io::Error::new(io::ErrorKind::InvalidData, "too short"));
        }
        Ok(ArbitrationFeature {
            high_priority_weight: raw[0],
            medium_priority_weight: raw[1],
            low_priority_weight: raw[2],
        })
    }
}

/// Static dispatch — the compiler monomorphises per command type.
pub struct NvmeController;

impl NvmeController {
    pub fn execute<C: NvmeAdminCmd>(&self, cmd: &C) -> io::Result<C::Response> {
        // Build SQE from cmd.opcode()/cmd.nsid(),
        // submit to SQ, wait for CQ, then:
        let raw = self.submit_and_read(cmd.opcode())?;
        cmd.parse_response(&raw)
    }

    fn submit_and_read(&self, _opcode: u8) -> io::Result<Vec<u8>> {
        // Real implementation talks to /dev/nvme0
        Ok(vec![0; 512])
    }
}

Key points:

  • NamespaceId(u32) prevents mixing namespace IDs with arbitrary u32 values.
  • NvmeAdminCmd::Response is the β€œtype index” β€” execute() returns exactly C::Response.
  • Fully static dispatch: no Box<dyn …>, no runtime downcasting.

Exercise 2: Firmware Update State Machine (Type-State)

Model a BMC firmware update lifecycle:

stateDiagram-v2
    [*] --> Idle
    Idle --> Uploading : begin_upload()
    Uploading --> Uploading : send_chunk(data)
    Uploading --> Verifying : finish_upload()
    Uploading --> Idle : abort()
    Verifying --> Applying : verify() ✅ + VerifiedImage token
    Verifying --> Idle : verify() ❌ or abort()
    Applying --> Rebooting : apply(token)
    Rebooting --> Complete : reboot_complete()
    Complete --> [*]

    note right of Applying : No abort() — irreversible
    note right of Verifying : VerifiedImage is a proof token

Requirements:

  1. Each state is a distinct type
  2. Upload can only begin from Idle
  3. Verification requires upload to be complete
  4. Apply can only happen after successful verification — take a VerifiedImage proof token
  5. Reboot is the only option after applying
  6. Add an abort() method available in Uploading and Verifying (but not Applying — too late)

Hint: Combine type-state (ch05) with capability tokens (ch04).

Sample Solution (Exercise 2)
// --- State types ---
// Design choice: here we store state inline (`_state: S`) rather than using
// `PhantomData<S>` (ch05's approach). This lets states carry data β€”
// e.g., `Uploading { bytes_sent: usize }` tracks progress. Use `PhantomData`
// when states are pure markers (zero-sized); use inline storage when
// states carry meaningful runtime data.
pub struct Idle;
pub struct Uploading { bytes_sent: usize }  // not ZST — carries progress data
pub struct Verifying;
pub struct Applying;
pub struct Rebooting;
pub struct Complete;

/// Proof token: only constructed inside verify().
pub struct VerifiedImage { _private: () }

pub struct FwUpdate<S> {
    bmc_addr: String,
    _state: S,
}

impl FwUpdate<Idle> {
    pub fn new(bmc_addr: &str) -> Self {
        FwUpdate { bmc_addr: bmc_addr.to_string(), _state: Idle }
    }
    pub fn begin_upload(self) -> FwUpdate<Uploading> {
        FwUpdate { bmc_addr: self.bmc_addr, _state: Uploading { bytes_sent: 0 } }
    }
}

impl FwUpdate<Uploading> {
    pub fn send_chunk(mut self, chunk: &[u8]) -> Self {
        self._state.bytes_sent += chunk.len();
        self
    }
    pub fn finish_upload(self) -> FwUpdate<Verifying> {
        FwUpdate { bmc_addr: self.bmc_addr, _state: Verifying }
    }
    /// Abort available during upload — returns to Idle.
    pub fn abort(self) -> FwUpdate<Idle> {
        FwUpdate { bmc_addr: self.bmc_addr, _state: Idle }
    }
}

impl FwUpdate<Verifying> {
    /// On success, returns the next state AND a VerifiedImage proof token.
    pub fn verify(self) -> Result<(FwUpdate<Applying>, VerifiedImage), FwUpdate<Idle>> {
        // Real: check CRC, signature, compatibility
        let token = VerifiedImage { _private: () };
        Ok((
            FwUpdate { bmc_addr: self.bmc_addr, _state: Applying },
            token,
        ))
    }
    /// Abort available during verification.
    pub fn abort(self) -> FwUpdate<Idle> {
        FwUpdate { bmc_addr: self.bmc_addr, _state: Idle }
    }
}

impl FwUpdate<Applying> {
    /// Consumes the VerifiedImage proof — can't apply without verification.
    /// Note: NO abort() method here — once flashing starts, it's too dangerous.
    pub fn apply(self, _proof: VerifiedImage) -> FwUpdate<Rebooting> {
        FwUpdate { bmc_addr: self.bmc_addr, _state: Rebooting }
    }
}

impl FwUpdate<Rebooting> {
    pub fn wait_for_reboot(self) -> FwUpdate<Complete> {
        FwUpdate { bmc_addr: self.bmc_addr, _state: Complete }
    }
}

impl FwUpdate<Complete> {
    pub fn version(&self) -> &str { "2.1.0" }
}

// Usage:
// let fw = FwUpdate::new("192.168.1.100")
//     .begin_upload()
//     .send_chunk(b"image_data")
//     .finish_upload();
// let (fw, proof) = fw.verify().map_err(|_| "verify failed")?;
// let fw = fw.apply(proof).wait_for_reboot();
// println!("New version: {}", fw.version());

Key points:

  • abort() exists only on FwUpdate<Uploading> and FwUpdate<Verifying> β€” calling it on FwUpdate<Applying> is a compile error, not a runtime check.
  • VerifiedImage has a private field, so only verify() can create one.
  • apply() consumes the proof token β€” you can’t skip verification.

Exercise 3: Sensor Reading Pipeline (Dimensional Analysis)

Build a complete sensor pipeline:

  1. Define newtypes: RawAdc, Celsius, Fahrenheit, Volts, Millivolts, Watts
  2. Implement From<Celsius> for Fahrenheit and vice versa
  3. Create impl Mul<Amperes, Output = Watts> for Volts (P = V × I)
  4. Build a Threshold<T> generic checker
  5. Write a pipeline: ADC β†’ calibration β†’ threshold check β†’ result

The compiler should reject: comparing Celsius to Volts, adding Watts to Rpm, passing Millivolts where Volts is expected.

Sample Solution (Exercise 3)
use std::ops::{Add, Sub, Mul};

#[derive(Debug, Clone, Copy, PartialEq, PartialOrd)]
pub struct RawAdc(pub u16);

#[derive(Debug, Clone, Copy, PartialEq, PartialOrd)]
pub struct Celsius(pub f64);

#[derive(Debug, Clone, Copy, PartialEq, PartialOrd)]
pub struct Fahrenheit(pub f64);

#[derive(Debug, Clone, Copy, PartialEq, PartialOrd)]
pub struct Volts(pub f64);

#[derive(Debug, Clone, Copy, PartialEq, PartialOrd)]
pub struct Millivolts(pub f64);

#[derive(Debug, Clone, Copy, PartialEq, PartialOrd)]
pub struct Amperes(pub f64);

#[derive(Debug, Clone, Copy, PartialEq, PartialOrd)]
pub struct Watts(pub f64);

// --- Safe conversions ---
impl From<Celsius> for Fahrenheit {
    fn from(c: Celsius) -> Self { Fahrenheit(c.0 * 9.0 / 5.0 + 32.0) }
}
impl From<Fahrenheit> for Celsius {
    fn from(f: Fahrenheit) -> Self { Celsius((f.0 - 32.0) * 5.0 / 9.0) }
}
impl From<Millivolts> for Volts {
    fn from(mv: Millivolts) -> Self { Volts(mv.0 / 1000.0) }
}
impl From<Volts> for Millivolts {
    fn from(v: Volts) -> Self { Millivolts(v.0 * 1000.0) }
}

// --- Arithmetic on same-unit types ---
// NOTE: Adding absolute temperatures (25°C + 30°C) is physically
// questionable — see ch06's discussion of ΔT newtypes for a more
// rigorous approach.  Here we keep it simple for the exercise.
impl Add for Celsius {
    type Output = Celsius;
    fn add(self, rhs: Self) -> Celsius { Celsius(self.0 + rhs.0) }
}
impl Sub for Celsius {
    type Output = Celsius;
    fn sub(self, rhs: Self) -> Celsius { Celsius(self.0 - rhs.0) }
}

// P = V × I  (cross-unit multiplication)
impl Mul<Amperes> for Volts {
    type Output = Watts;
    fn mul(self, rhs: Amperes) -> Watts { Watts(self.0 * rhs.0) }
}

// --- Generic threshold checker ---
// Exercise 3 extends ch06's Threshold with a generic ThresholdResult<T>
// that carries the triggering reading — an evolution of ch06's simpler
// ThresholdResult { Normal, Warning, Critical } enum.
pub enum ThresholdResult<T> {
    Normal(T),
    Warning(T),
    Critical(T),
}

pub struct Threshold<T> {
    pub warning: T,
    pub critical: T,
}

// Generic impl β€” works for any unit type that supports PartialOrd.
impl<T: PartialOrd + Copy> Threshold<T> {
    pub fn check(&self, reading: T) -> ThresholdResult<T> {
        if reading >= self.critical {
            ThresholdResult::Critical(reading)
        } else if reading >= self.warning {
            ThresholdResult::Warning(reading)
        } else {
            ThresholdResult::Normal(reading)
        }
    }
}
// Now `Threshold<Rpm>`, `Threshold<Volts>`, etc. all work automatically.

// --- Pipeline: ADC β†’ calibration β†’ threshold β†’ result ---
pub struct CalibrationParams {
    pub scale: f64,  // ADC counts per °C
    pub offset: f64, // °C at ADC 0
}

pub fn calibrate(raw: RawAdc, params: &CalibrationParams) -> Celsius {
    Celsius(raw.0 as f64 / params.scale + params.offset)
}

pub fn sensor_pipeline(
    raw: RawAdc,
    params: &CalibrationParams,
    threshold: &Threshold<Celsius>,
) -> ThresholdResult<Celsius> {
    let temp = calibrate(raw, params);
    threshold.check(temp)
}

// Compile-time safety β€” these would NOT compile:
// let _ = Celsius(25.0) + Volts(12.0);   // ERROR: mismatched types
// let _: Millivolts = Volts(1.0);         // ERROR: no implicit coercion
// let _ = Watts(100.0) + Rpm(3000);       // ERROR: mismatched types

Key points:

  • Each physical unit is a distinct type β€” no accidental mixing.
  • Mul<Amperes> for Volts yields Watts, encoding P = V Γ— I in the type system.
  • Explicit From conversions for related units (mV ↔ V, Β°C ↔ Β°F).
  • Threshold<Celsius> only accepts Celsius β€” can’t accidentally threshold-check RPM.

Exercise 4: PCIe Capability Walk (Phantom Types + Validated Boundary)

Model the PCIe capability linked list:

  1. RawCapability β€” unvalidated bytes from config space
  2. ValidCapability β€” parsed and validated (via TryFrom)
  3. Each capability type (MSI, MSI-X, PCIe Express, Power Management) has its own phantom-typed register layout
  4. Walking the list returns an iterator of ValidCapability values

Hint: Combine validated boundaries (ch07) with phantom types (ch09).

Sample Solution (Exercise 4)
use std::marker::PhantomData;

// --- Phantom markers for capability types ---
pub struct Msi;
pub struct MsiX;
pub struct PciExpress;
pub struct PowerMgmt;

// PCI capability IDs from the spec
const CAP_ID_PM:   u8 = 0x01;
const CAP_ID_MSI:  u8 = 0x05;
const CAP_ID_PCIE: u8 = 0x10;
const CAP_ID_MSIX: u8 = 0x11;

/// Unvalidated bytes β€” may be garbage.
#[derive(Debug)]
pub struct RawCapability {
    pub id: u8,
    pub next_ptr: u8,
    pub data: Vec<u8>,
}

/// Validated and type-tagged capability.
#[derive(Debug)]
pub struct ValidCapability<Kind> {
    id: u8,
    next_ptr: u8,
    data: Vec<u8>,
    _kind: PhantomData<Kind>,
}

// --- TryFrom: parse-don't-validate boundary ---
impl TryFrom<RawCapability> for ValidCapability<PowerMgmt> {
    type Error = &'static str;
    fn try_from(raw: RawCapability) -> Result<Self, Self::Error> {
        if raw.id != CAP_ID_PM { return Err("not a PM capability"); }
        if raw.data.len() < 2 { return Err("PM data too short"); }
        Ok(ValidCapability {
            id: raw.id, next_ptr: raw.next_ptr,
            data: raw.data, _kind: PhantomData,
        })
    }
}

impl TryFrom<RawCapability> for ValidCapability<Msi> {
    type Error = &'static str;
    fn try_from(raw: RawCapability) -> Result<Self, Self::Error> {
        if raw.id != CAP_ID_MSI { return Err("not an MSI capability"); }
        if raw.data.len() < 6 { return Err("MSI data too short"); }
        Ok(ValidCapability {
            id: raw.id, next_ptr: raw.next_ptr,
            data: raw.data, _kind: PhantomData,
        })
    }
}

// (Similar TryFrom impls for MsiX, PciExpress β€” omitted for brevity)

// --- Type-safe accessors: only available on the correct capability ---
impl ValidCapability<PowerMgmt> {
    pub fn pm_control(&self) -> u16 {
        u16::from_le_bytes([self.data[0], self.data[1]])
    }
}

impl ValidCapability<Msi> {
    pub fn message_control(&self) -> u16 {
        u16::from_le_bytes([self.data[0], self.data[1]])
    }
    pub fn vectors_requested(&self) -> u32 {
        1 << ((self.message_control() >> 1) & 0x07)
    }
}

impl ValidCapability<MsiX> {
    pub fn table_size(&self) -> u16 {
        (u16::from_le_bytes([self.data[0], self.data[1]]) & 0x07FF) + 1
    }
}

// --- Capability walker: iterates the linked list ---
pub struct CapabilityWalker<'a> {
    config_space: &'a [u8],
    next_ptr: u8,
}

impl<'a> CapabilityWalker<'a> {
    pub fn new(config_space: &'a [u8]) -> Self {
        // Capability pointer lives at offset 0x34 in PCI config space
        let first_ptr = if config_space.len() > 0x34 {
            config_space[0x34]
        } else { 0 };
        CapabilityWalker { config_space, next_ptr: first_ptr }
    }
}

impl<'a> Iterator for CapabilityWalker<'a> {
    type Item = RawCapability;
    fn next(&mut self) -> Option<RawCapability> {
        if self.next_ptr == 0 { return None; }
        let off = self.next_ptr as usize;
        if off + 2 > self.config_space.len() { return None; }
        let id = self.config_space[off];
        let next = self.config_space[off + 1];
        // Heuristic: assume capabilities are laid out in ascending order, so
        // the current one ends where the next begins. Clamp both branches so
        // a bogus next_ptr can't make the slice below panic.
        let end = if (next as usize) > off + 2 {
            (next as usize).min(self.config_space.len())
        } else {
            (off + 16).min(self.config_space.len())
        };
        let data = self.config_space[off + 2..end].to_vec();
        self.next_ptr = next;
        Some(RawCapability { id, next_ptr: next, data })
    }
}

// Usage:
// for raw_cap in CapabilityWalker::new(&config_space) {
//     if let Ok(pm) = ValidCapability::<PowerMgmt>::try_from(raw_cap) {
//         println!("PM control: 0x{:04X}", pm.pm_control());
//     }
// }

Key points:

  • RawCapability β†’ ValidCapability<Kind> is the parse-don’t-validate boundary.
  • pm_control() only exists on ValidCapability<PowerMgmt> β€” calling it on an MSI capability is a compile error.
  • The CapabilityWalker iterator yields raw capabilities; the caller validates the ones they care about with TryFrom.

Exercise 5: Multi-Protocol Health Check (Capability Mixins)

Create a health-check framework:

  1. Define ingredient traits: HasIpmi, HasRedfish, HasNvmeCli, HasGpio
  2. Create mixin traits:
    • ThermalHealthMixin (requires HasIpmi + HasGpio) β€” reads temps, checks alerts
    • StorageHealthMixin (requires HasNvmeCli) β€” SMART data checks
    • BmcHealthMixin (requires HasIpmi + HasRedfish) β€” cross-validates BMC data
  3. Build a FullPlatformController that implements all ingredient traits
  4. Build a StorageOnlyController that only implements HasNvmeCli
  5. Verify that StorageOnlyController gets StorageHealthMixin but NOT the others
Sample Solution (Exercise 5)
// --- Ingredient traits ---
pub trait HasIpmi {
    fn ipmi_read_sensor(&self, id: u8) -> f64;
}
pub trait HasRedfish {
    fn redfish_get(&self, path: &str) -> String;
}
pub trait HasNvmeCli {
    fn nvme_smart_log(&self, dev: &str) -> SmartData;
}
pub trait HasGpio {
    fn gpio_read_alert(&self, pin: u8) -> bool;
}

pub struct SmartData {
    pub temperature_kelvin: u16,
    pub spare_pct: u8,
}

// --- Mixin traits with blanket impls ---
pub trait ThermalHealthMixin: HasIpmi + HasGpio {
    fn thermal_check(&self) -> ThermalStatus {
        let temp = self.ipmi_read_sensor(0x01);
        let alert = self.gpio_read_alert(12);
        ThermalStatus { temperature: temp, alert_active: alert }
    }
}
impl<T: HasIpmi + HasGpio> ThermalHealthMixin for T {}

pub trait StorageHealthMixin: HasNvmeCli {
    fn storage_check(&self) -> StorageStatus {
        let smart = self.nvme_smart_log("/dev/nvme0");
        StorageStatus {
            temperature_ok: smart.temperature_kelvin < 343, // 70 °C
            spare_ok: smart.spare_pct > 10,
        }
    }
}
impl<T: HasNvmeCli> StorageHealthMixin for T {}

pub trait BmcHealthMixin: HasIpmi + HasRedfish {
    fn bmc_health(&self) -> BmcStatus {
        let ipmi_temp = self.ipmi_read_sensor(0x01);
        let rf_temp = self.redfish_get("/Thermal/Temperatures/0");
        BmcStatus { ipmi_temp, redfish_temp: rf_temp, consistent: true }
    }
}
impl<T: HasIpmi + HasRedfish> BmcHealthMixin for T {}

pub struct ThermalStatus { pub temperature: f64, pub alert_active: bool }
pub struct StorageStatus { pub temperature_ok: bool, pub spare_ok: bool }
pub struct BmcStatus { pub ipmi_temp: f64, pub redfish_temp: String, pub consistent: bool }

// --- Full platform: all ingredients → all three mixins for free ---
pub struct FullPlatformController;

impl HasIpmi for FullPlatformController {
    fn ipmi_read_sensor(&self, _id: u8) -> f64 { 42.0 }
}
impl HasRedfish for FullPlatformController {
    fn redfish_get(&self, _path: &str) -> String { "42.0".into() }
}
impl HasNvmeCli for FullPlatformController {
    fn nvme_smart_log(&self, _dev: &str) -> SmartData {
        SmartData { temperature_kelvin: 310, spare_pct: 95 }
    }
}
impl HasGpio for FullPlatformController {
    fn gpio_read_alert(&self, _pin: u8) -> bool { false }
}

// --- Storage-only: only HasNvmeCli → only StorageHealthMixin ---
pub struct StorageOnlyController;

impl HasNvmeCli for StorageOnlyController {
    fn nvme_smart_log(&self, _dev: &str) -> SmartData {
        SmartData { temperature_kelvin: 315, spare_pct: 80 }
    }
}

// StorageOnlyController automatically gets storage_check().
// Calling thermal_check() or bmc_health() on it is a COMPILE ERROR.

Key points:

  • Blanket impl<T: HasIpmi + HasGpio> ThermalHealthMixin for T {} β€” any type that implements both ingredients automatically gets the mixin.
  • StorageOnlyController only implements HasNvmeCli, so the compiler grants it StorageHealthMixin but rejects thermal_check() and bmc_health() β€” zero runtime checks needed.
  • Adding a new mixin (e.g., NetworkHealthMixin: HasRedfish + HasGpio) is one trait
    • one blanket impl β€” existing controllers pick it up automatically if they qualify.

Exercise 6: Session-Typed Diagnostic Protocol (Single-Use + Type-State)

Design a diagnostic session with single-use test execution tokens:

  1. DiagSession starts in Setup state
  2. Transition to Running state — issues N execution tokens (one per test case)
  3. Each TestToken is consumed when the test runs — prevents running the same test twice
  4. After all tokens are consumed, transition to Complete state
  5. Generate a report (only in Complete state)

Advanced: Use a const generic N to track how many tests remain at the type level.

Sample Solution (Exercise 6)
// --- State types ---
pub struct Setup;
pub struct Running;
pub struct Complete;

/// Single-use test token. NOT Clone, NOT Copy — consumed on use.
pub struct TestToken {
    test_name: String,
}

#[derive(Debug)]
pub struct TestResult {
    pub test_name: String,
    pub passed: bool,
}

pub struct DiagSession<S> {
    name: String,
    results: Vec<TestResult>,
    _state: S,
}

impl DiagSession<Setup> {
    pub fn new(name: &str) -> Self {
        DiagSession {
            name: name.to_string(),
            results: Vec::new(),
            _state: Setup,
        }
    }

    /// Transition to Running — issues one token per test case.
    pub fn start(self, test_names: &[&str]) -> (DiagSession<Running>, Vec<TestToken>) {
        let tokens = test_names.iter()
            .map(|n| TestToken { test_name: n.to_string() })
            .collect();
        (
            DiagSession {
                name: self.name,
                results: Vec::new(),
                _state: Running,
            },
            tokens,
        )
    }
}

impl DiagSession<Running> {
    /// Consume a token to run one test. The move prevents double-running.
    pub fn run_test(mut self, token: TestToken) -> Self {
        let passed = true; // real code runs actual diagnostics here
        self.results.push(TestResult {
            test_name: token.test_name,
            passed,
        });
        self
    }

    /// Transition to Complete.
    ///
    /// **Note:** This solution does NOT enforce that all tokens have been
    /// consumed — `finish()` can be called with tokens still outstanding.
    /// The tokens will simply be dropped (they're not `#[must_use]`).
    /// For full compile-time enforcement, use the const-generic variant
    /// described in the "Advanced" note below, where `finish()` is only
    /// available on `DiagSession<Running, 0>`.
    pub fn finish(self) -> DiagSession<Complete> {
        DiagSession {
            name: self.name,
            results: self.results,
            _state: Complete,
        }
    }
}

impl DiagSession<Complete> {
    /// Report is ONLY available in Complete state.
    pub fn report(&self) -> String {
        let total = self.results.len();
        let passed = self.results.iter().filter(|r| r.passed).count();
        format!("{}: {}/{} passed", self.name, passed, total)
    }
}

// Usage:
// let session = DiagSession::new("GPU stress");
// let (mut session, tokens) = session.start(&["vram", "compute", "thermal"]);
// for token in tokens {
//     session = session.run_test(token);
// }
// let session = session.finish();
// println!("{}", session.report());  // "GPU stress: 3/3 passed"
//
// // These would NOT compile:
// // session.run_test(used_token);  →  ERROR: use of moved value
// // running_session.report();      →  ERROR: no method `report` on DiagSession<Running>

Key points:

  • TestToken is not Clone or Copy β€” consuming it via run_test(token) moves it, so re-running the same test is a compile error.
  • report() only exists on DiagSession<Complete> β€” calling it mid-run is impossible.
  • The Advanced variant would use DiagSession<Running, N> with const generics where run_test returns DiagSession<Running, {N-1}> and finish is only available on DiagSession<Running, 0> β€” that ensures all tokens are consumed before finishing.

Key Takeaways

  1. Practice with realistic protocols — NVMe, firmware update, sensor pipelines, PCIe are all real-world targets for these patterns.
  2. Each exercise maps to a core chapter — use the cross-references to review the pattern before attempting.
  3. Solutions use expandable details — try each exercise before revealing the solution.
  4. Compose patterns in exercise 5 — multi-protocol health checks combine typed commands, dimensional types, and validated boundaries.
  5. Session types (exercise 6) are the frontier — they enforce message ordering across channels, extending type-state to distributed systems.

Reference Card

Quick-reference for all 22 correct-by-construction patterns with selection flowchart, pattern catalogue, composition rules, crate mapping, and types-as-guarantees cheat sheet.

Cross-references: Every chapter — this is the lookup table for the entire book.

Quick Reference: Correct-by-Construction Patterns

Pattern Selection Guide

Is the bug catastrophic if missed?
├── Yes → Can it be encoded in types?
│         ├── Yes → USE CORRECT-BY-CONSTRUCTION
│         └── No  → Runtime check + extensive testing
└── No  → Runtime check is fine

Pattern Catalogue

| # | Pattern | Key Trait/Type | Prevents | Runtime Cost | Chapter |
|---|---------|----------------|----------|--------------|---------|
| 1 | Typed Commands | `trait IpmiCmd { type Response; }` | Wrong response type | Zero | ch02 |
| 2 | Single-Use Types | `struct Nonce` (not Clone/Copy) | Nonce/key reuse | Zero | ch03 |
| 3 | Capability Tokens | `struct AdminToken { _private: () }` | Unauthorised access | Zero | ch04 |
| 4 | Type-State | `Session<Active>` | Protocol violations | Zero | ch05 |
| 5 | Dimensional Types | `struct Celsius(f64)` | Unit confusion | Zero | ch06 |
| 6 | Validated Boundaries | `struct ValidFru` (via TryFrom) | Unvalidated data use | Parse once | ch07 |
| 7 | Capability Mixins | `trait FanDiagMixin: HasSpi + HasI2c` | Missing bus access | Zero | ch08 |
| 8 | Phantom Types | `Register<Width16>` | Width/direction mismatch | Zero | ch09 |
| 9 | Sentinel → Option | `Option<u8>` (not 0xFF) | Sentinel-as-value bugs | Zero | ch11 |
| 10 | Sealed Traits | `trait Cmd: private::Sealed` | Unsound external impls | Zero | ch11 |
| 11 | Non-Exhaustive Enums | `#[non_exhaustive] enum Sku` | Silent match fallthrough | Zero | ch11 |
| 12 | Typestate Builder | `DerBuilder<Set, Missing>` | Incomplete construction | Zero | ch11 |
| 13 | FromStr Validation | `impl FromStr for DiagLevel` | Unvalidated string input | Parse once | ch11 |
| 14 | Const-Generic Size | `RegisterBank<const N: usize>` | Buffer size mismatch | Zero | ch11 |
| 15 | Safe unsafe Wrapper | `MmioRegion::read_u32()` | Unchecked MMIO/FFI | Zero | ch11 |
| 16 | Async Type-State | `AsyncSession<Active>` | Async protocol violations | Zero | ch11 |
| 17 | Const Assertions | `SdrSensorId<const N: u8>` | Invalid compile-time IDs | Zero | ch11 |
| 18 | Session Types | `Chan<SendRequest>` | Out-of-order channel ops | Zero | ch11 |
| 19 | Pin Self-Referential | `Pin<Box<StreamParser>>` | Dangling intra-struct pointer | Zero | ch11 |
| 20 | RAII / Drop | `impl Drop for Session` | Resource leak on any exit path | Zero | ch11 |
| 21 | Error Type Hierarchy | `#[derive(Error)] enum DiagError` | Silent error swallowing | Zero | ch11 |
| 22 | `#[must_use]` | `#[must_use] struct Token` | Silently dropped values | Zero | ch11 |

Composition Rules

Capability Token + Type-State = Authorised state transitions
Typed Command + Dimensional Type = Physically-typed responses
Validated Boundary + Phantom Type = Typed register access on validated config
Capability Mixin + Typed Command = Bus-aware typed operations
Single-Use Type + Type-State = Consume-on-transition protocols
Sealed Trait + Typed Command = Closed, sound command set
Sentinel → Option + Validated Boundary = Clean parse-once pipeline
Typestate Builder + Capability Token = Proof-of-complete construction
FromStr + #[non_exhaustive] = Evolvable, fail-fast enum parsing
Const-Generic Size + Validated Boundary = Sized, validated protocol buffers
Safe unsafe Wrapper + Phantom Type = Typed, safe MMIO access
Async Type-State + Capability Token = Authorised async transitions
Session Types + Typed Command = Fully-typed request-response channels
Pin + Type-State = Self-referential state machines that can't move
RAII (Drop) + Type-State = State-dependent cleanup guarantees
Error Hierarchy + Validated Boundary = Typed parse errors with exhaustive handling
#[must_use] + Single-Use Type = Hard-to-ignore, hard-to-reuse tokens

Anti-Patterns to Avoid

| Anti-Pattern | Why It’s Wrong | Correct Alternative |
|--------------|----------------|---------------------|
| `fn read_sensor() -> f64` | Unitless — could be °C, °F, or RPM | `fn read_sensor() -> Celsius` |
| `fn encrypt(nonce: &[u8; 12])` | Nonce can be reused (borrow) | `fn encrypt(nonce: Nonce)` (move) |
| `fn admin_op(is_admin: bool)` | Caller can lie (`true`) | `fn admin_op(_: &AdminToken)` |
| `fn send(session: &Session)` | No state guarantee | `fn send(session: &Session<Active>)` |
| `fn process(data: &[u8])` | Not validated | `fn process(data: &ValidFru)` |
| `Clone` on ephemeral keys | Defeats single-use guarantee | Don’t derive `Clone` |
| `let vendor_id: u16 = 0xFFFF` | Sentinel carried internally | `let vendor_id: Option<u16> = None` |
| `fn route(level: &str)` with fallback | Typos silently default | `let level: DiagLevel = s.parse()?` |
| `Builder::new().finish()` without fields | Incomplete object constructed | Typestate builder: `finish()` gated on `Set` |
| `let buf: Vec<u8>` for fixed-size HW buffer | Size only checked at runtime | `RegisterBank<4096>` (const generic) |
| Raw `unsafe { ptr::read(...) }` scattered | UB risk, unauditable | `MmioRegion::read_u32()` safe wrapper |
| `async fn transition(&mut self)` | Mutable borrows don’t enforce state | `async fn transition(self) -> NextState` |
| `fn cleanup()` called manually | Forgotten on early return / panic | `impl Drop` — compiler inserts call |
| `fn op() -> Result<T, String>` | Opaque error, no variant matching | `fn op() -> Result<T, DiagError>` enum |

Mapping to a Diagnostics Codebase

| Module | Applicable Pattern(s) |
|--------|-----------------------|
| `protocol_lib` | Typed commands, type-state sessions |
| `thermal_diag` | Capability mixins, dimensional types |
| `accel_diag` | Validated boundaries, phantom registers |
| `network_diag` | Type-state (link training), capability tokens |
| `pci_topology` | Phantom types (register width), validated config, sentinel → Option |
| `event_handler` | Single-use audit tokens, capability tokens, FromStr (Component) |
| `event_log` | Validated boundaries (SEL record parsing) |
| `compute_diag` | Dimensional types (temperature, frequency) |
| `memory_diag` | Validated boundaries (SPD data), dimensional types |
| `switch_diag` | Type-state (port enumeration), phantom types |
| `config_loader` | FromStr (DiagLevel, FaultStatus, DiagAction) |
| `log_analyzer` | Validated boundaries (CompiledPatterns) |
| `diag_framework` | Typestate builder (DerBuilder), session types (orchestrator ↔ worker) |
| `topology_lib` | Const-generic register banks, safe MMIO wrappers |

Types as Guarantees — Quick Mapping

| Guarantee | Rust Equivalent | Example |
|-----------|-----------------|---------|
| “This proof exists” | A type | `AdminToken` |
| “I have the proof” | A value of that type | `let tok = authenticate()?;` |
| “A implies B” | Function `fn(A) -> B` | `fn activate(AdminToken) -> Session<Active>` |
| “Both A and B” | Tuple `(A, B)` or multi-param | `fn op(a: &AdminToken, b: &LinkTrained)` |
| “Either A or B” | `enum { A(A), B(B) }` or `Result<A, B>` | `Result<Session<Active>, Error>` |
| “Always true” | `()` (unit type) | Always constructible |
| “Impossible” | `!` (never type) or `enum Void {}` | Can never be constructed |

Testing Type-Level Guarantees 🟡

What you’ll learn: How to test that invalid code fails to compile (trybuild), fuzz validated boundaries (proptest), verify RAII invariants, and prove zero-cost abstraction via cargo-show-asm.

Cross-references: ch03 (compile-fail for nonces), ch07 (proptest for boundaries), ch05 (RAII for sessions)

Testing Type-Level Guarantees

Correct-by-construction patterns shift bugs from runtime to compile time. But how do you test that invalid code actually fails to compile? And how do you ensure validated boundaries hold under fuzzing? This chapter covers the testing tools that complement type-level correctness.

Compile-Fail Tests with trybuild

The trybuild crate lets you assert that certain code should not compile. This is essential for maintaining type-level invariants across refactors — if someone accidentally adds Clone to your single-use Nonce, the compile-fail test catches it.

Setup:

# Cargo.toml
[dev-dependencies]
trybuild = "1"

Test file (tests/compile_fail.rs):

#[test]
fn type_safety_tests() {
    let t = trybuild::TestCases::new();
    t.compile_fail("tests/ui/*.rs");
}

Test case: Nonce reuse must not compile (tests/ui/nonce_reuse.rs):

// tests/ui/nonce_reuse.rs
use my_crate::Nonce;

fn main() {
    let nonce = Nonce::new();
    encrypt(nonce);
    encrypt(nonce); // should fail: use of moved value
}

fn encrypt(_n: Nonce) {}

Expected error (tests/ui/nonce_reuse.stderr):

error[E0382]: use of moved value: `nonce`
 --> tests/ui/nonce_reuse.rs:6:13
  |
4 |     let nonce = Nonce::new();
  |         ----- move occurs because `nonce` has type `Nonce`, which does not implement the `Copy` trait
5 |     encrypt(nonce);
  |             ----- value moved here
6 |     encrypt(nonce); // should fail: use of moved value
  |             ^^^^^ value used here after move

More compile-fail test cases per chapter:

| Pattern (Chapter) | Test assertion | File |
|-------------------|----------------|------|
| Single-Use Nonce (ch03) | Can’t use nonce twice | `nonce_reuse.rs` |
| Capability Token (ch04) | Can’t call `admin_op()` without token | `missing_token.rs` |
| Type-State (ch05) | Can’t `send_command()` on `Session<Idle>` | `wrong_state.rs` |
| Dimensional (ch06) | Can’t add `Celsius` + `Rpm` | `unit_mismatch.rs` |
| Sealed Trait (Trick 2) | External crate can’t impl sealed trait | `unseal_attempt.rs` |
| Non-Exhaustive (Trick 3) | External match without wildcard fails | `missing_wildcard.rs` |

CI integration:

# .github/workflows/ci.yml
- name: Run compile-fail tests
  run: cargo test --test compile_fail

Property-Based Testing of Validated Boundaries

Validated boundaries (ch07) parse data once and reject invalid input. But how do you know your validation catches all invalid inputs? Property-based testing with proptest generates thousands of random inputs to stress the boundary:

# Cargo.toml
[dev-dependencies]
proptest = "1"

Test code:

use proptest::prelude::*;

/// From ch07: ValidFru wraps a spec-compliant FRU payload.
/// These tests use the full ch07 ValidFru with board_area(),
/// product_area(), and format_version() methods.
/// Note: ch07 defines TryFrom<RawFruData>, so we wrap raw bytes first.

proptest! {
    /// Any byte sequence that passes validation must be usable without panic.
    #[test]
    fn valid_fru_never_panics(data in proptest::collection::vec(any::<u8>(), 0..1024)) {
        if let Ok(fru) = ValidFru::try_from(RawFruData(data)) {
            // These must never panic on a validated FRU
            // (methods from ch07's ValidFru impl):
            let _ = fru.format_version();
            let _ = fru.board_area();
            let _ = fru.product_area();
        }
    }

    /// Round-trip: format_version is preserved through reparsing.
    #[test]
    fn fru_round_trip(data in valid_fru_strategy()) {
        let raw = RawFruData(data.clone());
        let fru = ValidFru::try_from(raw).unwrap();
        let version = fru.format_version();
        // Re-parse the same bytes — version must be identical
        let reparsed = ValidFru::try_from(RawFruData(data)).unwrap();
        prop_assert_eq!(version, reparsed.format_version());
    }
}

/// Custom strategy: generates byte vectors that satisfy the FRU spec header.
/// The header format matches ch07's `TryFrom<RawFruData>` validation:
///   - Byte 0: version = 0x01
///   - Bytes 1-6: area offsets (×8 = actual byte offset)
///   - Byte 7: checksum (sum of bytes 0-7 = 0 mod 256)
/// The body is random but large enough for the offsets to be in-bounds.
fn valid_fru_strategy() -> impl Strategy<Value = Vec<u8>> {
    let header = vec![0x01, 0x00, 0x01, 0x02, 0x00, 0x00, 0x00];
    proptest::collection::vec(any::<u8>(), 64..256)
        .prop_map(move |body| {
            let mut fru = header.clone();
            let sum: u8 = fru.iter().fold(0u8, |a, &b| a.wrapping_add(b));
            fru.push(0u8.wrapping_sub(sum));
            fru.extend_from_slice(&body);
            fru
        })
}

The testing pyramid for correct-by-construction code:

┌──────────────────────────────────────────┐
│ Compile-Fail Tests (trybuild)            │ ← “Invalid code must not compile”
├──────────────────────────────────────────┤
│ Property Tests (proptest/quickcheck)     │ ← “Valid inputs never panic”
├──────────────────────────────────────────┤
│ Unit Tests (#[test])                     │ ← “Specific inputs produce expected outputs”
├──────────────────────────────────────────┤
│ Type System (patterns ch02–13)           │ ← “Entire classes of bugs can’t exist”
└──────────────────────────────────────────┘

RAII Verification

RAII (Trick 12) guarantees cleanup. To test this, verify that the Drop impl actually fires:

use std::sync::atomic::{AtomicBool, Ordering};

// NOTE: These tests use a global AtomicBool, so they must not run in
// parallel with each other. Use `#[serial_test::serial]` or run with
// `cargo test -- --test-threads=1`. Alternatively, use a per-test
// `Arc<AtomicBool>` passed via closure to avoid the global entirely.
static DROPPED: AtomicBool = AtomicBool::new(false);

struct TestSession;
impl Drop for TestSession {
    fn drop(&mut self) {
        DROPPED.store(true, Ordering::SeqCst);
    }
}

#[test]
fn session_drops_on_early_return() {
    DROPPED.store(false, Ordering::SeqCst);
    let result: Result<(), &str> = (|| {
        let _session = TestSession;
        Err("simulated failure")?;
        Ok(())
    })();
    assert!(result.is_err());
    assert!(DROPPED.load(Ordering::SeqCst), "Drop must fire on early return");
}

#[test]
fn session_drops_on_panic() {
    DROPPED.store(false, Ordering::SeqCst);
    let result = std::panic::catch_unwind(|| {
        let _session = TestSession;
        panic!("simulated panic");
    });
    assert!(result.is_err());
    assert!(DROPPED.load(Ordering::SeqCst), "Drop must fire on panic");
}

Applying to Your Codebase

Here’s a prioritized plan for adding type-level tests to the workspace:

| Crate | Test type | What to test |
|-------|-----------|--------------|
| `protocol_lib` | Compile-fail | `Session<Idle>` can’t `send_command()` |
| `protocol_lib` | Property | Any byte seq → `TryFrom` either succeeds or returns `Err` (no panic) |
| `thermal_diag` | Compile-fail | Can’t construct `FanReading` without `HasSpi` mixin |
| `accel_diag` | Property | GPU sensor parsing: random bytes → validated-or-rejected |
| `config_loader` | Property | Random strings → `FromStr` for `DiagLevel` never panics |
| `pci_topology` | Compile-fail | `Register<Width16>` can’t be passed where `Width32` expected |
| `event_handler` | Compile-fail | Audit token can’t be cloned |
| `diag_framework` | Compile-fail | `DerBuilder<Missing, _>` can’t call `finish()` |

Zero-Cost Abstraction: Proof by Assembly

A common concern: “Do newtypes and phantom types add runtime overhead?” The answer is no — they compile to assembly identical to raw primitives. Here’s how to verify:

Setup:

cargo install cargo-show-asm

Example: Newtype vs raw u32:

// src/lib.rs
#[derive(Clone, Copy)]
pub struct Rpm(pub u32);

#[derive(Clone, Copy)]
pub struct Celsius(pub f64);

// Newtype arithmetic
#[inline(never)]
pub fn add_rpm(a: Rpm, b: Rpm) -> Rpm {
    Rpm(a.0 + b.0)
}

// Raw arithmetic (for comparison)
#[inline(never)]
pub fn add_raw(a: u32, b: u32) -> u32 {
    a + b
}

Run:

cargo asm my_crate::add_rpm
cargo asm my_crate::add_raw

Result — identical assembly:

; add_rpm (newtype)           ; add_raw (raw u32)
my_crate::add_rpm:            my_crate::add_raw:
  lea eax, [rdi + rsi]         lea eax, [rdi + rsi]
  ret                          ret

The Rpm wrapper is completely erased at compile time. The same holds for PhantomData<S> (zero bytes), ZST tokens (zero bytes), and all other type-level markers used throughout this guide.

Verify for your own types:

# Show assembly for a specific function
cargo asm --lib ipmi_lib::session::execute

# Show that PhantomData adds zero bytes
cargo asm --lib --rust ipmi_lib::session::IpmiSession

Key takeaway: Every pattern in this guide has zero runtime cost. The type system does all the work and is erased completely during compilation. You get the safety of Haskell with the performance of C.

Key Takeaways

  1. trybuild tests that invalid code won’t compile — essential for maintaining type-level invariants across refactors.
  2. proptest fuzzes validation boundaries — generates thousands of random inputs to stress TryFrom implementations.
  3. RAII verification tests that Drop runs — Arc counters or mock flags prove cleanup happened.
  4. cargo-show-asm proves zero-cost — phantom types, ZSTs, and newtypes produce the same assembly as raw C.
  5. Add compile-fail tests for every “impossible” state — if someone accidentally derives Clone on a single-use type, the test catches it.

End of Type-Driven Correctness in Rust