Tactical Trust (1 of 2): Platform Crypto for Developers

07-06-2025 | Bite-sized Rust solutions to cryptographic API and supply-chain challenges.


The digital systems our society relies on all have some notion of trust. A communicating party can identify, with confidence, who they are "talking" to (authentication). And they can rest assured that their "conversation" is private (confidentiality). Even non-networked systems will validate that code they flash or execute hasn't been modified or corrupted (integrity).

Cryptographic libraries are the technical mechanism underpinning properties like authentication, confidentiality, and integrity. These imperfect software components are the foundation on which societal trust is built and maintained. Thus exploitable flaws in crypto libs tend to have severe and widespread impact - e.g. [Durumeric, 2014].

Now this two-part post series isn't about applied cryptography in the proper academic sense: we won't explain cryptographic primitives or protocol design from the ground up. Let's assume those more formal concepts live in an ivory tower. We're medieval peasants fighting in the mud that is long-lived production software - shipping, patching, refactoring.

These posts are concerned with the brutal realities of deploying [theoretically] sound designs - we aim to reduce certain risks inherent to real-world software. It's one interpretation of what platform security engineering entails when shipping trust at scale.

How are we defining "platform security engineering"?

Building libraries, frameworks, and tools that allow feature teams to ship both securely and quickly. Essentially providing a "solid security foundation", in terms of code-level consistency, for high-velocity software.

We'll use Rust, an increasingly popular systems programming language that guarantees memory safety and tends to encourage functional correctness in limited yet substantial ways. Prior Rust experience isn't required, but will help maximize understanding (and enjoyment?). Concepts we cover are language-agnostic: they likely apply to your problem domain and tech stack of choice.

So what's the agenda for this two-part series?

Part 1 (this post) focuses on code (full, runnable source here). Our proof-of-concept programs aim to raise the bar for "shift left" automation, even if modestly (give the attacker an inch, they'll take a mile). We'll sample solutions to two cryptographic platform security problems at different levels of the stack:
  • 🔐 API-level: preventing nonce reuse with stronger types.

  • 🔗 Supply-chain-level: allowlisting trusted crypto publishers and banning duplicate dependencies.

Part 2 will focus on concepts but still include plenty of code. The emphasis is higher-level exploration of a {problem,solution} space. We'll narrow scope to the problem of information disclosure, deep-diving vulnerabilities and state-of-the-art mitigations through the lens of two general threat models:

What if I'm interested less in software engineering, more in cryptographic design?

Good news, everyone! Although we're focused on code-level tactics, there are several quality, strategy-focused resources to meet you wherever you currently are and help you construct correct designs. Here's a sample:

🔐 API: Prevent Nonce Reuse with Stronger Types

"Nonce" is a portmanteau of "number used only once". As the name implies: accidentally using the same nonce multiple times, aka nonce reuse, is a devastating footgun for many widely-used cryptographic algorithms. Common operations rely on a random nonce as input in order to uphold critical security properties:


Nonce reuse in context of encryption
Fig. 1: Nonce reuse: a single nonce used for multiple encryption operations (red input, step 3+).

So then: how do we prove that, in some arbitrarily-large codebase, all nonces are both random and single-use? By encoding safety invariants into the language's type system. We can create APIs that are nearly impossible to misuse, and we get automatic static verification of that correctness just by compiling a program which uses exclusively the safe APIs!

Bold claim, yet relatively straightforward implementation:

use aead::{
    Aead, AeadCore, Nonce, Payload,
    rand_core::{CryptoRng, RngCore},
};
use core::error::Error;

/// Can be used in arbitrarily many decryption operations.
/// Its counterpart, [`EncryptionNonce`], can only be used for one encryption operation.
pub type DecryptionNonce<A> = Nonce<A>;

/// A safer nonce type for AEAD. See trait [`NonceSafeAead`].
//
// SECURITY: Intentionally opaque and unique. Do not derive/implement any of:
// `Default`, `Copy`, `Clone`, `Ord`, `Eq`, `Debug`, etc.
pub struct EncryptionNonce<A: AeadCore>(Nonce<A>);

impl<A: AeadCore> EncryptionNonce<A> {
    /// Generate a new random nonce for AEAD-specific encryption.
    pub fn generate_nonce(rng: impl CryptoRng + RngCore) -> Self {
        EncryptionNonce(<A as AeadCore>::generate_nonce(rng))
    }

    /// Crate-private conversion into [`aead::Nonce`].
    //
    // SECURITY: Do not make `pub`, risks reuse with `aead::Aead` APIs.
    fn less_safe_to_raw_nonce(self) -> Nonce<A> {
        self.0
    }
}

/// Nonce-safe AEAD. Guarantees the following properties:
///
/// 1. Nonce is random.
///     * Opaque type with rand-only constructor.
/// 2. Nonce is used in exactly one encryption operation.
///     * Pass-by-value consumption.
///
/// See also: [`EncryptionNonce`] and [`DecryptionNonce`].
pub trait NonceSafeAead {
    /// Encrypt plaintext payload with a random, single-use nonce.
    /// Returns ciphertext bytes and decryption-only nonce.
    fn nonce_safe_encrypt<'msg, 'aad>(
        &self,
        enc_nonce: EncryptionNonce<Self>,
        plaintext: impl Into<Payload<'msg, 'aad>>,
    ) -> Result<(Vec<u8>, DecryptionNonce<Self>), impl Error>
    where
        Self: AeadCore + Aead + Sized,
    {
        let nonce = enc_nonce.less_safe_to_raw_nonce();
        self.encrypt(&nonce, plaintext)
            .map(|ciphertext| (ciphertext, nonce))
    }

    /// Decrypt ciphertext.
    /// Identical to [`aead::Aead::decrypt`], defined so that [`aead::Aead`]
    /// doesn't have to be brought in-scope when using [`NonceSafeAead`].
    //
    // SECURITY: ban import of less safe `aead::Aead` trait.
    fn decrypt<'msg, 'aad>(
        &self,
        dec_nonce: &DecryptionNonce<Self>,
        ciphertext: impl Into<Payload<'msg, 'aad>>,
    ) -> Result<Vec<u8>, impl Error>
    where
        Self: AeadCore + Aead + Sized,
    {
        <Self as Aead>::decrypt(self, dec_nonce, ciphertext)
    }
}

// Use above default impl for below algorithms
impl NonceSafeAead for chacha20poly1305::XChaCha20Poly1305 {}
impl NonceSafeAead for aes_gcm::Aes256Gcm {}
impl NonceSafeAead for aes_siv::Aes256SivAead {}

Aead is a widely-used trait in the Rust cryptography ecosystem. It defines a common interface to the encrypt and decrypt operations of Authenticated Encryption with Associated Data (AEAD) algorithms like AES-256-GCM and XChaCha20Poly1305. This class of algorithms provides both confidentiality and integrity, plus optionally allows binding unencrypted, "associated" metadata (think network headers, UUIDs, or contextual info). Basically, an AEAD should be your preferred all-in-one solution for most day-to-day encryption problems.

Now the Aead enc/decrypt APIs both take a single nonce type by reference: &Nonce<A: AeadCore>. So a programmer is free to encrypt new data with the same nonce they used for decryption earlier (see Figure 1 above).

The crux of our above reuse solution is this: we use two distinct nonce types, EncryptionNonce<A: AeadCore> for encrypt and DecryptionNonce<A: AeadCore> for decrypt. This bifurcation prevents nonce-reuse vulnerabilities, again at compile-time (before shipping, and systematically across the entire codebase), because:

  • EncryptionNonce is opaque and can only be constructed via its random generate_nonce constructor - so every encryption nonce is random.

  • nonce_safe_encrypt consumes its EncryptionNonce by value - so the compiler's move semantics forbid a second use.

What about "nonce misuse-resistant" algorithms? And size limitations?

Strong typing isn't the only possible solution for nonce-reuse. Defenses can also be implemented in the design of the algorithm itself, see AES-GCM-SIV. A "Synthetic Initialization Vector" (SIV) uses inputs, including plaintext, to derive the final IV/nonce - effectively forcing two different plaintexts to use two different nonces.

However: if the same message is encrypted with the same nonce twice under the same key, an attacker will learn that the two messages are equivalent (but not their contents). That equivalence leak could have serious implications in context of a larger threat model, so preventing reuse with strong typing is still the higher assurance option.

But we're not out of the woods yet. AES-256-GCM can only safely encrypt 2^32 (~4.3 billion) messages under the same key using random nonces - beyond that we risk nonce collision (chance reuse). XChaCha20Poly1305 bumps that safe limit to 2^80 (practically infinite!) and is faster on devices without hardware support for AES.
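To see where those limits come from, here's a back-of-the-envelope birthday-bound calculation. Note this is an approximation (p ≈ n²/2^(b+1)), not an exact derivation, and the specific message counts are illustrative:

```rust
/// Birthday-bound approximation: probability of at least one collision
/// among `n` random nonces of `bits` bits is roughly n^2 / 2^(bits + 1).
fn collision_probability(n: f64, bits: i32) -> f64 {
    (n * n) / (2.0 * (2f64).powi(bits))
}

fn main() {
    // AES-256-GCM: 96-bit random nonces. After 2^32 messages,
    // collision odds are ~2^-33 (about 1 in 8.6 billion).
    let p_gcm = collision_probability((2f64).powi(32), 96);

    // XChaCha20Poly1305: 192-bit nonces. Hitting the same ~2^-33 odds
    // takes 2^80 messages instead - hence "practically infinite".
    let p_xchacha = collision_probability((2f64).powi(80), 192);

    println!("AES-GCM   @ 2^32 msgs: p ~= {p_gcm:e}");
    println!("XChaCha20 @ 2^80 msgs: p ~= {p_xchacha:e}");

    // Both limits target the same residual collision probability.
    assert!((p_gcm - p_xchacha).abs() < f64::EPSILON);
}
```

The takeaway: the "safe message limit" is just the point where random-nonce collision odds stop being negligible for a given nonce width.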

We can verify that the NonceSafeAead trait enc/decrypts as expected with the below unit test:

use aead::{KeyInit, OsRng};
use nonce_typing::{EncryptionNonce, NonceSafeAead};

const PLAINTEXT_MSG: &[u8; 86] =
    b"Two cryptographers walk into a bar. Nobody else has a clue what they're talking about.";

#[test]
fn nonce_safe_xchacha20poly1305() {
    use chacha20poly1305::XChaCha20Poly1305;

    let key = XChaCha20Poly1305::generate_key(&mut OsRng);
    let cipher = XChaCha20Poly1305::new(&key);
    let enc_nonce = EncryptionNonce::<XChaCha20Poly1305>::generate_nonce(&mut OsRng);

    let (ciphertext, dec_nonce) = cipher
        .nonce_safe_encrypt(enc_nonce, PLAINTEXT_MSG.as_ref())
        .unwrap();

    let plaintext = cipher.decrypt(&dec_nonce, ciphertext.as_ref()).unwrap();

    assert_eq!(&plaintext, PLAINTEXT_MSG);
}

But does it actually prevent reuse? You're welcome to try passing the same enc_nonce to two different nonce_safe_encrypt calls - the compiler error should look familiar!
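If you'd rather not set up the AEAD crates to see that error, the move semantics doing the heavy lifting can be demonstrated with a toy stand-in type (OneShotToken below is hypothetical, not part of any crate):

```rust
// A toy stand-in for `EncryptionNonce`: opaque, non-`Clone`, non-`Copy`.
struct OneShotToken(u64);

impl OneShotToken {
    fn generate() -> Self {
        OneShotToken(42) // imagine a CSPRNG call here
    }
}

// Takes the token by value, consuming it - just like `nonce_safe_encrypt`.
fn use_once(token: OneShotToken) -> u64 {
    token.0
}

fn main() {
    let token = OneShotToken::generate();
    let first = use_once(token); // `token` is moved here...
    assert_eq!(first, 42);

    // ...so a second use is a compile error, not a runtime bug:
    // let second = use_once(token);
    // error[E0382]: use of moved value: `token`
}
```

The same three ingredients (opaque field, no `Clone`/`Copy`, pass-by-value) are what make EncryptionNonce single-use.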

Where do I start with "formally verified" cryptography?

Proving that a program satisfies a specific property, for any input, is the goal of formal verification. Rust's type system, which guarantees that data is "shared XOR mutable", is particularly amenable to certain formal techniques - less reasoning about the state of memory is needed. Cryptography is also lower-cost to verify: detailed specifications exist, data structures are statically-allocated, and input size is bounded.
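That "shared XOR mutable" guarantee is easy to see in a toy example - the compiler statically rejects aliased mutation, which is exactly the kind of memory-state reasoning formal tools would otherwise have to do themselves:

```rust
fn main() {
    let mut balance = vec![100, 250];

    let reader = &balance; // shared (read-only) borrow
    // let writer = &mut balance; // error[E0502]: cannot borrow `balance` as
    //                            // mutable because it is also borrowed as immutable
    assert_eq!(reader.iter().sum::<i32>(), 350);

    // Once the shared borrow ends, exclusive mutation is allowed again.
    let writer = &mut balance;
    writer.push(50);
    assert_eq!(balance.len(), 3);
}
```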

Verification techniques vary widely (theorem proving, model checking, abstract interpretation, symbolic execution, etc.) and the corresponding tools typically require significant expertise to leverage. But as busy (read: lazy) developers, we can readily integrate and benefit from already-formally-verified libraries. Two contenders for native cryptography are:

  1. aws-lc-rs (Amazon) - Symbolic execution of source code is used to prove that a program matches a machine-readable specification manually encoded from an algorithm's human-readable specification.

  2. symcrypt (Microsoft) - Source is translated to a model for an interactive (meaning semi-manual) theorem prover. Additionally, a combination of fuzzing and model-based testing is used to detect timing side-channels.

Keep in mind that formal verification is not a panacea: specifications can be incomplete and implementations can deviate from models. The WPA2 4-way handshake, for example, was formally verified yet still exploitable! Its proof failed to specify when a negotiated key should be installed, implicitly allowing multiple installations and thus nonce reset on the next install [Vanhoef, 2017].

🔗 Supply-chain: Allowlist Crypto Publishers and Ban Duplicates

Programming languages with official package registries are a joy to use: easily finding and integrating 3rd-party libraries means faster delivery speed and greater focus on your problem/business domain. But all convenience has a cost. Here:

Supply-chain assurance is particularly important for cryptographic dependencies, which likely have an out-sized impact on the security properties of an overall system. Application logic higher up the stack tends to rely on crypto libraries, implicitly or explicitly.

Imagine you've been handed a strict mandate: the two requirements below must hold for your entire million-plus line monorepo.

  1. Trusted Publishers - All direct (i.e. non-transitive) cryptographic dependencies must be sourced from a small allowlist of trusted publishers, initially only the RustCrypto organization.

    • Rationale: Minimize both RUSTSEC alert volume and backdoor introduction risk.

    • Scope: Direct dependencies only. Publishers we explicitly trust can still select their own dependencies.

  2. No Duplicates - All direct and indirect cryptographic dependencies must have exactly one version in-tree at any time.

    • Rationale: Minimize both bloat and programmer error (e.g. unclear behavior divergence between API versions).

    • Scope: All dependencies. Duplicate bloat is likely avoidable - some crate owner should consider updating to latest.


Before supply-chain policy enforcement
Fig. 2: No supply-chain policy. Tolerate organic dependency sprawl.
After supply-chain policy enforcement
Fig. 3: Policy enforced: only trusted publisher, no duplicates. Leaner app overall.

How do you enforce this policy (which nicely complements our previous NonceSafeAead APIs)? Unfortunately these specific requirements can't be encoded with cargo deny, a popular and mature dependency graph linter, at the time of this writing (v0.18). We need to roll some custom kit atop cargo_metadata!

Let's start with builder-pattern boilerplate (our public API):

use cargo_metadata::{semver::Version, CargoOpt, Metadata, MetadataCommand, Package};
use std::{
    cell::OnceCell,
    collections::{BTreeMap, BTreeSet, HashMap},
    fs,
    path::{Path, PathBuf},
};

/// A [`Policy`] violation.
/// Note: error variants do not expose/re-export error enums from 3rd-party crates.
#[derive(Debug, Clone, PartialEq, Eq, PartialOrd, Ord)]
#[allow(missing_docs)]
pub enum PolicyViolationError {
    DuplicateCrateVersions(Vec<String>),
    DisallowedCategoryPublisher(String, String),
    MetadataReadError(String),
}

/// A builder for supply-chain policies.
#[derive(Default)]
pub struct Policy {
    // Path to `Cargo.toml` we're analyzing
    manifest_path: PathBuf,
    // Workaround for `OnceCell::get_or_try_init` being nightly-only in Rust 1.88
    cargo_metadata_result: OnceCell<Result<Metadata, PolicyViolationError>>,
    // {category}
    // `String`s lower-cased at construction time
    no_dup_cats: Option<BTreeSet<String>>,
    // category: {publisher}
    // `String`s lower-cased at construction time
    cat_pubs: Option<BTreeMap<String, BTreeSet<String>>>,
}

impl Policy {
    /// Create a new policy, construct with path to workspace or crate-specific `Cargo.toml`.
    pub fn new<P>(manifest_path: P) -> Result<Policy, std::io::Error>
    where
        P: AsRef<Path>,
    {
        let manifest_path = fs::canonicalize(manifest_path)?;
        Ok(Self {
            manifest_path,
            ..Default::default()
        })
    }

    /// Rule 1 (Category-specific Trusted Publishers):
    /// Ensure that a given category only contains crates from a fixed set of trusted publishers.
    /// Assumes input iterator format `(category_1, publisher_1)...(category_n, publisher_n)`.
/// More than one publisher per category is supported.
    pub fn allowed_category_publishers<I, S>(mut self, cat_pubs: I) -> Policy
    where
        I: Iterator<Item = (S, S)>,
        S: Into<String>,
    {
        let mut cat_pubs = cat_pubs.peekable();
        if cat_pubs.peek().is_some() {
            let mut cat_map = BTreeMap::new();
            for (c, p) in cat_pubs {
                cat_map
                    .entry(c.into().to_ascii_lowercase())
                    .or_insert(BTreeSet::new())
                    .insert(p.into().to_ascii_lowercase());
            }
            self.cat_pubs = Some(cat_map);
        } else {
            self.cat_pubs = None;
        }

        self
    }

    /// ...OMITTED: Rule 2 (Category-specific No Duplicates)...

    /// Evaluate a built policy against a given workspace/crate.
    pub fn run(&self) -> Result<(), PolicyViolationError> {
        self.run_allowed_category_publishers()?;
        self.run_no_duplicate_crate_categories()?;
        Ok(())
    }

To keep the length of this post in check, we'll omit implementation of scaffolding for the 2nd policy requirement (no duplicate cryptographic dependencies). But the logic is mechanically similar to the first requirement and the complete, runnable ≈300 lines of source for both rules is available here.

Notice that the above builder doesn't encode anything specific to cryptographic crates - this interface supports arbitrary categories and publishers. Before we see what usage looks like in practice, let's dig into enforcement logic for whatever trusted publishers the user specified when initializing cat_pubs with a call to allowed_category_publishers (the below are private APIs):

    /// Collect dependency metadata for the entire workspace with all features enabled.
    fn metadata(&self) -> Result<&Metadata, PolicyViolationError> {
        let meta_result = self.cargo_metadata_result.get_or_init(|| {
            MetadataCommand::new()
                .manifest_path(&self.manifest_path)
                .features(CargoOpt::AllFeatures)
                .exec()
                .map_err(|e| PolicyViolationError::MetadataReadError(e.to_string()))
        });

        meta_result.as_ref().map_err(|e| e.to_owned())
    }

    /// Get repo's publisher by parsing its URL.
    // SECURITY: `dep.authors` isn't reliable - anyone can set any value in their crate's `Cargo.toml`.
    fn get_repo_publisher(dep: &Package) -> Result<String, PolicyViolationError> {
        let Some(repo_url) = dep
            .repository
            .as_ref()
            .and_then(|url| url::Url::parse(url).ok())
        else {
            return Err(PolicyViolationError::MetadataReadError(format!(
                "Missing or invalid repo URL for crate '{}'",
                dep.name
            )));
        };

        // If `repo_url` == "https://github.com/RustCrypto/AEADs/tree/master/aes-gcm"
        // Then `repo_publisher` == "RustCrypto"
        let Some(repo_publisher) = repo_url.path_segments().and_then(|mut path| path.next()) else {
            return Err(PolicyViolationError::MetadataReadError(format!(
                "Missing publisher name for repo URL '{repo_url}'"
            )));
        };

        Ok(repo_publisher.to_string())
    }

    /// Run category-specific trusted publishers check.
    fn run_allowed_category_publishers(&self) -> Result<(), PolicyViolationError> {
        let Some(ref cat_pubs) = self.cat_pubs else {
            return Ok(());
        };

        let metadata = self.metadata()?;

        // ID direct dependencies
        let direct_deps = metadata
            .packages
            .iter()
            .filter(|pkg| pkg.manifest_path.as_path() == self.manifest_path)
            .flat_map(|pkg| &pkg.dependencies)
            .collect::<Vec<_>>();

        // Get full crate info for each ID-ed direct dependency
        let direct_dep_crates = metadata
            .packages
            .iter()
            .filter(|pkg| direct_deps.iter().any(|dep| dep.name == *pkg.name));

        // Find disallowed category-specific publishers, if any
        for dep_crate in direct_dep_crates {
            for cat in &dep_crate.categories {
                if let Some(expected_pubs) = cat_pubs.get(&cat.to_ascii_lowercase()) {
                    let actual_publisher = Self::get_repo_publisher(dep_crate)?.to_lowercase();
                    if !expected_pubs.contains(&actual_publisher) {
                        return Err(PolicyViolationError::DisallowedCategoryPublisher(
                            cat.clone(),
                            actual_publisher,
                        ));
                    }
                }
            }
        }

        Ok(())
    }

So how do we roll out enforcement of our sophisticated policy requirements (category-specific trusted publishers and duplicate elimination)? The heavy-handed option is leveraging build.rs (Rust build scripts):

use supplychain_policy::Policy;

fn main() {
    println!("cargo:rerun-if-changed=build.rs");
    println!("cargo:rerun-if-changed=Cargo.toml");

    let manifest_dir = std::env::var("CARGO_MANIFEST_DIR").expect("CARGO_MANIFEST_DIR var not set");
    let manifest_path = std::path::PathBuf::from(manifest_dir).join("Cargo.toml");

    Policy::new(&manifest_path)
        .expect("Invalid manifest path")
        .allowed_category_publishers([("cryptography", "rustcrypto")].into_iter())
        .no_duplicate_crate_categories(["cryptography"].into_iter())
        .run()
        .unwrap()
}

Now failing builds for supply-chain policy violations probably isn't the best way to make friends with other development teams, even in a smaller organization, unless there's a strong regulatory and/or business need to do so. Fortunately the above Policy builder can easily be wrapped in a CLI tool and deployed in blocking or non-blocking CI pipelines, on a workspace-specific basis. Non-blocking failures can be centrally tracked and automatically triaged.

Our above proof-of-concept didn't accommodate exceptions (e.g. "allow this specific named duplicate, still enforce for remainder of category"), but you could quickly extend it to read individual crate/publisher names from a [version controlled and CODEOWNERS protected] config file. Supporting legitimate exceptions, with documented rationale, is realistic - "perfect is the enemy of good".
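A minimal sketch of that extension, assuming a hypothetical newline-delimited exceptions.txt (one crate name per line, # for comments). Both the file format and the exceptions_from_file helper are illustrative, not part of the Policy API above:

```rust
use std::{collections::BTreeSet, env, fs, io, path::Path};

/// Parse a hypothetical `exceptions.txt` allowlist: one crate name per line,
/// blank lines and `#` comments ignored. In practice this file would live in
/// version control, protected by CODEOWNERS.
fn exceptions_from_file(path: impl AsRef<Path>) -> io::Result<BTreeSet<String>> {
    Ok(fs::read_to_string(path)?
        .lines()
        .map(str::trim)
        .filter(|line| !line.is_empty() && !line.starts_with('#'))
        .map(str::to_ascii_lowercase)
        .collect())
}

fn main() -> io::Result<()> {
    // Simulate a checked-in exceptions file.
    let path = env::temp_dir().join("exceptions.txt");
    fs::write(&path, "# Approved duplicates, with rationale in ticket\nsha2\nGetrandom\n")?;

    let exceptions = exceptions_from_file(&path)?;
    assert!(exceptions.contains("sha2"));
    assert!(exceptions.contains("getrandom")); // normalized to lowercase
    assert_eq!(exceptions.len(), 2);
    Ok(())
}
```

The policy checks would then skip any crate name present in the returned set before reporting a violation.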

What are my other options for supply-chain security in Rust?

The landscape of Rust's supply-chain security tooling is, fortunately, evolving. Sample projects to be aware of:

  • Signature-based vulnerability alerting: cargo audit, a free tool that scans your dependency tree for known-vulnerable crates, is a must-have for production CI. Its lack of "reachability analysis" (call-graph traversal to determine whether your code directly or indirectly calls a vulnerable function) does mean false positives, however.

  • Heuristic-based malware detection: The Linux Foundation has funded development of a Rust counterpart to Go's capslock tool. Among other use cases, capslock enumerates capabilities (file I/O, network connectivity, command execution, etc.) for a given dependency and alerts if they suddenly change in a new version.

  • Trusted publishers: Future PKI initiatives may allow cryptographic identification of publishers, a big improvement over our above URL parsing. A related RFC outlines support for publishing crates from trusted infrastructure, following the footsteps of PyPI. Note PKI also means better response capability, although a real-world attack may have already succeeded by the time a build machine pulls a Certificate Revocation List (CRL).

While Rust's intentionally minimal std library is a boon for embedded development, it does encourage over-reliance on 3rd-party crates for routine tasks. For contrast: Go's standard library offers FIPS 140-3 compliant cryptography with the flip of a build flag, and backported a secure RNG to existing programs with only a Go toolchain bump!

Takeaway

"Trust is earned in drops and lost in buckets". That's probably a maxim, but it feels especially true in the context of commercial software - a global competition in which any winner, perhaps outside of a few monopolists, can be dethroned at any time.

Now the technical mechanism for trust is cryptography. Most useful cryptography is implemented and executed, whether on a tiny microcontroller or a beefy server, in the form of code. And code is notoriously difficult to get right, especially when you're shipping a lot of it.

Software quality is as challenging to replicate reliably as it is to measure actionably, if not more so. Our best hope is automating repeatability. When the quality criterion is security, that automation is one goal of a platform security engineering function - a function which, at minimum, needs to keep pace with the broader engineering organization, and ideally accelerates all feature teams.

This first post explored bite-sized solutions to platform cryptography problems at the API (nonce reuse) and supply-chain (dependency policy) levels. The intent is automating guardrails for human error, but nowadays LLM auto-complete can also increase vulnerability rates - per both [Perry, 2023] and [Pearce, 2025]. The good news is that the above techniques should mitigate risks from both sources: compile-time checks don't care how the code was generated.

Our second and final post will have a narrower but deeper scope. We'll explore a classic topic in trust: information disclosure vulnerabilities. Part 2 (release date TBD) grapples with technical concepts at greater length and on the cutting edge. You're going to want a coffee for this one.

But it'll still be good fun. Trust me.


Read a free technical book! I'm fulfilling a lifelong dream and writing a book. It's about developing secure and robust systems software. Although a work-in-progress, the book is freely available online (no paywalls or obligations): https://highassurance.rs/