Tactical Trust (1 of 2): Platform Crypto for Developers
07-06-2025 | Bite-sized Rust solutions to cryptographic API and supply-chain challenges.
Digital systems our society relies on all have some notion of trust. A communicating party can identify, with confidence, who they are "talking" to (authentication). And they can rest assured that their "conversation" is private (confidentiality). Even non-networked systems will validate that code they flash or execute hasn't been modified or corrupted (integrity).
Cryptographic libraries are the technical mechanism underpinning properties like authentication, confidentiality, and integrity. These imperfect software components are the foundation on which societal trust is built and maintained. Thus exploitable flaws in crypto libs tend to have severe and widespread impact - e.g. [Durumeric, 2014].
Now this two-part post series isn't about applied cryptography in the proper academic sense: we won't explain cryptographic primitives or protocol design from the ground up. Let's assume those more formal concepts live in an ivory tower. We're medieval peasants fighting in the mud that is long-surviving production software - shipping, patching, refactoring.
These posts are concerned with the brutal realities of deploying [theoretically] sound designs - we aim to reduce certain risks inherent to real-world software. It's one interpretation of what platform security engineering entails when shipping trust at scale.
How are we defining "platform security engineering"?
Building libraries, frameworks, and tools that allow feature teams to ship both securely and quickly - essentially providing a "solid security foundation", in terms of code-level consistency, for high-velocity software.
We'll use Rust, an increasingly popular systems programming language that guarantees memory safety and tends to encourage functional correctness in limited yet substantial ways. Prior Rust experience isn't required, but will help maximize understanding (and enjoyment?). Concepts we cover are language-agnostic: they likely apply to your problem domain and tech stack of choice.
So what's the agenda for this two-part series?
Part 1 (this post) focuses on code (full, runnable source here). Our proof-of-concept programs aim to raise the bar for "shift left" automation, even if modestly (give the attacker an inch, they'll take a mile). We'll sample solutions to two cryptographic platform security problems at different levels of the stack:
🔐 API: Can we systematically prevent nonce reuse vulnerabilities in an arbitrarily-large codebase?
🔗 Supply-chain: How should CI enforce policies specific to cryptographic dependencies?
Part 2 will focus on concepts but still include plenty of code. The emphasis is higher-level exploration of a {problem,solution} space. We'll narrow scope to the problem of information disclosure, deep-diving vulnerabilities and state-of-the-art mitigations through the lens of two general threat models:
📡 Man-in-the-Middle (MITM): Attacker intercepts network communications between two or more endpoints.
💻 Man-at-the-End (MATE): Attacker directly compromises one or more communication endpoints.
What if I'm interested less in software engineering, more in cryptographic design?
Good news, everyone! Although we're focused on code-level tactics, there are several quality, strategy-focused resources to meet you wherever you're currently at and help you construct correct designs. Here's a sample:
- Crypto novice but an experienced developer? → "Real-World Cryptography" by David Wong
- Work in applied cryptography professionally? → Soatok's Cryptography Blog
- At the cutting-edge of near-future cryptography? → Real World Crypto Symposium
🔐 API: Prevent Nonce Reuse with Stronger Types
"Nonce" is a portmanteau of "number used only once". As the name implies: accidentally using the same nonce multiple times, aka nonce reuse, is a devastating footgun for many widely-used cryptographic algorithms. Common operations rely on a random nonce as input in order to uphold critical security properties:
Encryption - Unique nonces are often called "Initialization Vectors" (IVs). They prevent plaintext and/or key recovery as well as replay attacks (malicious repetition of previous communications).
- WPA2 was the de facto standard for encryption on Wi-Fi networks from 2006 to 2020. Toward the end of that lifespan, researchers demonstrated a practical attack against all implementations [Vanhoef, 2017]. By abusing re-transmission logic in the 4-way handshake between a Wi-Fi endpoint and a client joining the network, an attacker could force reset/reuse of the nonce/IV for all protocol-supported stream ciphers (e.g. "keystream reuse"). That means an attacker can decrypt, replay, and [in some cases] forge network packets. Full compromise of the transport layer (e.g. TCP but not HTTPS).
Signing - Unique nonces prevent signature forging (generating a passing signature for attacker-created data) and signature duplication (replay of previously-signed data).
- The Sony PlayStation 3 was poised to become the most secure game console ever made, with no true jailbreak 4 years into production. The PS3 used ECDSA to create a chain-of-trust from early boot to userspace app launch - cryptographically enforcing software license checks. ECDSA signing takes as input a nonce and a hash of data to sign. Hackers discovered that Sony's implementation used a hardcoded nonce [fail0verflow, 2010]. This flaw enabled trivial re-computation of the ECDSA private signing key, and therefore the ability for an attacker to execute arbitrary unlicensed software.
So then: how do we prove that, in some arbitrarily-large codebase, all nonces are both random and single-use? By encoding safety invariants into the language's type system. We can create APIs that are nearly impossible to misuse, and we get automatic static verification of that correctness just by compiling a program which uses exclusively the safe APIs!
Bold claim, yet relatively straightforward implementation:
use aead::{
Aead, AeadCore, Nonce, Payload,
rand_core::{CryptoRng, RngCore},
};
use core::error::Error;
/// Can be used in arbitrarily many decryption operations.
/// Its counterpart, [`EncryptionNonce`], can only be used for one encryption operation.
pub type DecryptionNonce<A> = Nonce<A>;
/// A safer nonce type for AEAD. See trait [`NonceSafeAead`].
//
// SECURITY: Intentionally opaque and unique. Do not derive/implement any of:
// `Default`, `Copy`, `Clone`, `Ord`, `Eq`, `Debug`, etc.
pub struct EncryptionNonce<A: AeadCore>(Nonce<A>);
impl<A: AeadCore> EncryptionNonce<A> {
/// Generate a new random nonce for AEAD-specific encryption.
pub fn generate_nonce(rng: impl CryptoRng + RngCore) -> Self {
EncryptionNonce(<A as AeadCore>::generate_nonce(rng))
}
/// Crate-private conversion into [`aead::Nonce`].
//
// SECURITY: Do not make `pub`, risks reuse with `aead::Aead` APIs.
fn less_safe_to_raw_nonce(self) -> Nonce<A> {
self.0
}
}
/// Nonce-safe AEAD. Guarantees the following properties:
///
/// 1. Nonce is random.
/// * Opaque type with rand-only constructor.
/// 2. Nonce is used in exactly one encryption operation.
/// * Pass-by-value consumption.
///
/// See also: [`EncryptionNonce`] and [`DecryptionNonce`].
pub trait NonceSafeAead {
/// Encrypt plaintext payload with a random, single-use nonce.
/// Returns ciphertext bytes and decryption-only nonce.
fn nonce_safe_encrypt<'msg, 'aad>(
&self,
enc_nonce: EncryptionNonce<Self>,
plaintext: impl Into<Payload<'msg, 'aad>>,
) -> Result<(Vec<u8>, DecryptionNonce<Self>), impl Error>
where
Self: AeadCore + Aead + Sized,
{
let nonce = enc_nonce.less_safe_to_raw_nonce();
self.encrypt(&nonce, plaintext)
.map(|ciphertext| (ciphertext, nonce))
}
/// Decrypt ciphertext.
/// Identical to [`aead::Aead::decrypt`], defined so that [`aead::Aead`]
/// doesn't have to be brought in-scope when using [`NonceSafeAead`].
//
// SECURITY: ban import of less safe `aead::Aead` trait.
fn decrypt<'msg, 'aad>(
&self,
dec_nonce: &DecryptionNonce<Self>,
ciphertext: impl Into<Payload<'msg, 'aad>>,
) -> Result<Vec<u8>, impl Error>
where
Self: AeadCore + Aead + Sized,
{
<Self as Aead>::decrypt(self, dec_nonce, ciphertext)
}
}
// Use above default impl for below algorithms
impl NonceSafeAead for chacha20poly1305::XChaCha20Poly1305 {}
impl NonceSafeAead for aes_gcm::Aes256Gcm {}
impl NonceSafeAead for aes_siv::Aes256SivAead {}
`Aead` is a widely-used trait in the Rust cryptography ecosystem. It defines a common interface to the `encrypt` and `decrypt` operations of Authenticated Encryption with Associated Data (AEAD) algorithms like AES-256-GCM and XChaCha20Poly1305. This class of algorithms provides both confidentiality and integrity, plus optionally allows binding unencrypted, "associated" metadata (think network headers, UUIDs, or contextual info). Basically, an AEAD should be your preferred all-in-one solution for most day-to-day encryption problems.
Now the `Aead` enc/decrypt APIs both take a single nonce type by reference: `&Nonce<A: AeadCore>`. So a programmer is free to encrypt new data with the same nonce they used for decryption earlier (see Figure 1 above).
- Notice how a nonce is generic over the `AeadCore` trait, allowing compile-time verification of algorithm-specific array sizes - e.g. `[u8; 12]` (96-bit) for AES-256-GCM, `[u8; 24]` (192-bit) for XChaCha20Poly1305 - at all call-sites.
The crux of our above reuse solution is this: we use two distinct nonce types, `EncryptionNonce<A: AeadCore>` for `encrypt` and `DecryptionNonce<A: AeadCore>` for `decrypt`. This bifurcation prevents nonce-reuse vulnerabilities, again at compile-time (before shipping and systematically across the entire codebase), because:
- `EncryptionNonce` is guaranteed to be randomly-generated (opaque type with rand-only constructor) and single-use (pass-by-value parameter semantics). The single-use property is especially amenable to Rust's [linear] type system. Its decryption counterpart, alias `type DecryptionNonce<A> = Nonce<A>;`, continues to work normally.
- Marker trait `CryptoRng` in `fn generate_nonce(rng: impl CryptoRng + RngCore)` is critical. A biased (meaning not uniformly random) nonce can be as disastrous as a reused nonce. In another ECDSA debacle, biased nonces allowed extraction of Bitcoin private keys [Breitner, 2019].
What about "nonce misuse-resistant" algorithms? And size limitations?
Strong typing isn't the only possible solution for nonce-reuse. Defenses can also be implemented in the design of the algorithm itself, see AES-GCM-SIV. A "Synthetic Initialization Vector" (SIV) uses inputs, including plaintext, to derive the final IV/nonce - effectively forcing two different plaintexts to use two different nonces.
However: if the same message is encrypted with the same nonce twice under the same key, an attacker will learn that the two messages are equivalent (but not their contents). That equivalence leak could have serious implications in context of a larger threat model, so preventing reuse with strong typing is still the higher assurance option.
But we're not out of the woods yet. AES-256-GCM can only safely encrypt 2³² (~4.3 billion) messages under the same key using random nonces - beyond that we risk nonce collision (reuse by chance). XChaCha20Poly1305 bumps that safe limit to 2⁸⁰ (practically infinite!) and is faster on devices without hardware support for AES.
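Those limits come from the birthday bound. A back-of-envelope sketch (pure arithmetic, no crypto crates) of the approximate collision probability p ≈ n²/2^(k+1) for n random k-bit nonces:

```rust
/// Approximate probability of at least one collision among `n` random
/// `nonce_bits`-bit nonces (birthday bound: p ≈ n^2 / 2^(k+1)).
fn collision_prob(n: f64, nonce_bits: i32) -> f64 {
    (n * n) / 2f64.powi(nonce_bits + 1)
}

fn main() {
    let n = 2f64.powi(32); // ~4.3 billion messages under one key
    // 96-bit nonces (AES-256-GCM): 2^-33, already a non-negligible risk
    println!("96-bit:  {:e}", collision_prob(n, 96));
    // 192-bit nonces (XChaCha20Poly1305): ~2^-129, negligible
    println!("192-bit: {:e}", collision_prob(n, 192));
}
```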
We can verify that the `NonceSafeAead` trait encrypts/decrypts as expected with the below unit test:
use aead::{KeyInit, OsRng};
use nonce_typing::{EncryptionNonce, NonceSafeAead};
const PLAINTEXT_MSG: &[u8; 86] =
b"Two cryptographers walk into a bar. Nobody else has a clue what they're talking about.";
#[test]
fn nonce_safe_xchacha20poly1305() {
use chacha20poly1305::XChaCha20Poly1305;
let key = XChaCha20Poly1305::generate_key(&mut OsRng);
let cipher = XChaCha20Poly1305::new(&key);
let enc_nonce = EncryptionNonce::<XChaCha20Poly1305>::generate_nonce(&mut OsRng);
let (ciphertext, dec_nonce) = cipher
.nonce_safe_encrypt(enc_nonce, PLAINTEXT_MSG.as_ref())
.unwrap();
let plaintext = cipher.decrypt(&dec_nonce, ciphertext.as_ref()).unwrap();
assert_eq!(&plaintext, PLAINTEXT_MSG);
}
But does it actually prevent reuse? You're welcome to try passing the same `enc_nonce` to two different `nonce_safe_encrypt` calls - the compiler error should look familiar!
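If you want to see the mechanism in isolation, the same move-semantics trick works on any crate-free newtype - a hypothetical `OneShotToken` standing in for `EncryptionNonce`:

```rust
// A move-only token: no `Copy`/`Clone`, so pass-by-value consumes it.
struct OneShotToken(u64);

// Taking `tok` by value means the caller gives up ownership.
fn consume(tok: OneShotToken) -> u64 {
    tok.0
}

fn main() {
    let tok = OneShotToken(42);
    let first = consume(tok);
    // Uncommenting the next line fails to compile with
    // error[E0382]: use of moved value: `tok`
    // let second = consume(tok);
    println!("{first}"); // prints "42"
}
```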
Where do I start with "formally verified" cryptography?
Proving that a program satisfies a specific property, for any input, is the goal of formal verification. Rust's type system, which guarantees that data is "shared XOR mutable", is particularly amenable to certain formal techniques - less reasoning about the state of memory is needed. Cryptography is also lower-cost to verify: detailed specifications exist, data structures are statically-allocated, and input size is bounded.
Verification techniques vary widely (theorem proving, model checking, abstract interpretation, symbolic execution, etc.) and the corresponding tools typically require significant expertise to leverage. But as ~~lazy~~ busy developers, we can readily integrate and benefit from already-formally-verified libraries. Two contenders for native cryptography are:
- `aws-lc-rs` (Amazon) - Symbolic execution of source code is used to prove that a program matches a machine-readable specification manually encoded from an algorithm's human-readable specification.
- `symcrypt` (Microsoft) - Source is translated to a model for an interactive (meaning semi-manual) theorem prover. Additionally, a combination of fuzzing and model-based testing is used to detect timing side-channels.
Keep in mind that formal verification is not a panacea: specifications can be incomplete and implementations can deviate from models. The aforementioned WPA2 4-way handshake was formally verified yet still exploitable! Its proof failed to specify when a negotiated key should be installed, implicitly allowing multiple installations and thus nonce reset on next install [Vanhoef, 2017].
🔗 Supply-chain: Allowlist Crypto Publishers and Ban Duplicates
Programming languages with official package registries are a joy to use: easily finding and integrating 3rd-party libraries means faster delivery speed and greater focus on your problem/business domain. But all convenience has a cost. Here:
- Increased attack surface - Just one malicious crate, no matter how deep in a massive dependency graph, can compromise the entire application. And typo-squatting attacks indiscriminately victimize a percentage of the entire ecosystem.
- Statistical weakening of memory-safety - Dependency count likely has some correlation to the amount of `unsafe` Rust code (19% of public crates use `unsafe`) and other-language CFFI code, and thus the amount of total unsound code (realistically some subset of `unsafe`). Any unsound code can trigger memory safety errors at runtime, which often go undetected in production.
- Software bloat - Transitive dependencies tend to sprawl in number, causing "simple" apps to explode in objective size and complexity. Larger programs generally mean slower app startup and longer download times. Plus both routine (e.g. API upgrade) and emergency (e.g. vulnerable dependency alert) maintenance burden.
Supply-chain assurance is particularly important for cryptographic dependencies, which likely have an out-sized impact on the security properties of an overall system. Application logic higher up the stack tends to rely on crypto libraries, implicitly or explicitly.
Imagine you've been handed a strict mandate: the two requirements below must hold for your entire million-plus line monorepo.
1. Trusted Publishers - All direct (i.e. non-transitive) cryptographic dependencies must be sourced from a small allowlist of trusted publishers, initially only the `RustCrypto` organization.
   - Rationale: Minimize both RUSTSEC alert volume and backdoor introduction risk.
   - Scope: Direct dependencies only. Publishers we explicitly trust can still select their own dependencies.
2. No Duplicates - All direct and indirect cryptographic dependencies must have exactly one version in-tree at any time.
   - Rationale: Minimize both bloat and programmer error (e.g. unclear behavior divergence between API versions).
   - Scope: All dependencies. Duplicate bloat is likely avoidable - some crate owner should consider updating to the latest version.
How do you enforce this policy (which nicely complements our previous `NonceSafeAead` APIs)? Unfortunately these specific requirements can't be encoded with `cargo deny`, a popular and mature dependency graph linter, at the time of this writing (v0.18). We need to roll some custom kit atop `cargo_metadata`!
Let's start with builder-pattern boilerplate (our public API):
use cargo_metadata::{semver::Version, CargoOpt, Metadata, MetadataCommand, Package};
use std::{
cell::OnceCell,
collections::{BTreeMap, BTreeSet, HashMap},
fs,
path::{Path, PathBuf},
};
/// A [`Policy`] violation.
/// Note: error variants don't expose/re-export error enums from 3rd-party crates.
#[derive(Debug, Clone, PartialEq, Eq, PartialOrd, Ord)]
#[allow(missing_docs)]
pub enum PolicyViolationError {
DuplicateCrateVersions(Vec<String>),
DisallowedCategoryPublisher(String, String),
MetadataReadError(String),
}
/// A builder for supply-chain policies.
#[derive(Default)]
pub struct Policy {
// Path to `Cargo.toml` we're analyzing
manifest_path: PathBuf,
// Workaround for `OnceCell::get_or_try_init` being nightly-only in Rust 1.88
cargo_metadata_result: OnceCell<Result<Metadata, PolicyViolationError>>,
// {category}
// `String`s lower-cased at construction time
no_dup_cats: Option<BTreeSet<String>>,
// category: {publisher}
// `String`s lower-cased at construction time
cat_pubs: Option<BTreeMap<String, BTreeSet<String>>>,
}
impl Policy {
/// Create a new policy, construct with path to workspace or crate-specific `Cargo.toml`.
pub fn new<P>(manifest_path: P) -> Result<Policy, std::io::Error>
where
P: AsRef<Path>,
{
let manifest_path = fs::canonicalize(manifest_path)?;
Ok(Self {
manifest_path,
..Default::default()
})
}
/// Rule 1 (Category-specific Trusted Publishers):
/// Ensure that a given category only contains crates from a fixed set of trusted publishers.
/// Assumes input iterator format `(category_1, publisher_1)...(category_n, publisher_n)`.
/// More than one publisher per category is supported.
pub fn allowed_category_publishers<I, S>(mut self, cat_pubs: I) -> Policy
where
I: Iterator<Item = (S, S)>,
S: Into<String>,
{
let mut cat_pubs = cat_pubs.peekable();
if cat_pubs.peek().is_some() {
let mut cat_map = BTreeMap::new();
for (c, p) in cat_pubs {
cat_map
.entry(c.into().to_ascii_lowercase())
.or_insert(BTreeSet::new())
.insert(p.into().to_ascii_lowercase());
}
self.cat_pubs = Some(cat_map);
} else {
self.cat_pubs = None;
}
self
}
/// ...OMITTED: Rule 2 (Category-specific No Duplicates)...
/// Evaluate a built policy against a given workspace/crate.
pub fn run(&self) -> Result<(), PolicyViolationError> {
self.run_allowed_category_publishers()?;
self.run_no_duplicate_crate_categories()?;
Ok(())
    }
}
To keep the length of this post in check, we'll omit implementation of scaffolding for the 2nd policy requirement (no duplicate cryptographic dependencies). But the logic is mechanically similar to the first requirement and the complete, runnable ≈300 lines of source for both rules is available here.
Notice that the above builder doesn't encode anything specific to cryptographic crates - this interface supports arbitrary categories and publishers. Before we see what usage looks like in practice, let's dig into enforcement logic for whatever trusted publishers the user specified when initializing `cat_pubs` with a call to `allowed_category_publishers` (the below are private APIs):
/// Collect dependency metadata for the entire workspace with all features enabled.
fn metadata(&self) -> Result<&Metadata, PolicyViolationError> {
let meta_result = self.cargo_metadata_result.get_or_init(|| {
MetadataCommand::new()
.manifest_path(&self.manifest_path)
.features(CargoOpt::AllFeatures)
.exec()
.map_err(|e| PolicyViolationError::MetadataReadError(e.to_string()))
});
meta_result.as_ref().map_err(|e| e.to_owned())
}
/// Get repo's publisher by parsing its URL.
// SECURITY: `dep.authors` isn't reliable - anyone can set any value in their crate's `Cargo.toml`.
fn get_repo_publisher(dep: &Package) -> Result<String, PolicyViolationError> {
let Some(repo_url) = dep
.repository
.as_ref()
.and_then(|url| url::Url::parse(url).ok())
else {
return Err(PolicyViolationError::MetadataReadError(format!(
"Missing or invalid repo URL for crate '{}'",
dep.name
)));
};
// If `repo_url` == "https://github.com/RustCrypto/AEADs/tree/master/aes-gcm"
// Then `repo_publisher` == "RustCrypto"
let Some(repo_publisher) = repo_url.path_segments().and_then(|mut path| path.next()) else {
return Err(PolicyViolationError::MetadataReadError(format!(
"Missing publisher name for repo URL '{repo_url}'"
)));
};
Ok(repo_publisher.to_string())
}
/// Run category-specific trusted publishers check.
fn run_allowed_category_publishers(&self) -> Result<(), PolicyViolationError> {
let Some(ref cat_pubs) = self.cat_pubs else {
return Ok(());
};
let metadata = self.metadata()?;
// ID direct dependencies
let direct_deps = metadata
.packages
.iter()
.filter(|pkg| pkg.manifest_path.as_path() == self.manifest_path)
.map(|pkg| &pkg.dependencies)
.flatten()
.collect::<Vec<_>>();
// Get full crate info for each ID-ed direct dependency
let direct_dep_crates = metadata
.packages
.iter()
.filter(|pkg| direct_deps.iter().any(|dep| dep.name == *pkg.name));
// Find disallowed category-specific publishers, if any
for dep_crate in direct_dep_crates {
for cat in &dep_crate.categories {
if let Some(expected_pubs) = cat_pubs.get(&cat.to_ascii_lowercase()) {
let actual_publisher = Self::get_repo_publisher(dep_crate)?.to_lowercase();
if !expected_pubs.contains(&actual_publisher) {
return Err(PolicyViolationError::DisallowedCategoryPublisher(
cat.clone(),
actual_publisher,
));
}
}
}
}
Ok(())
}
- `fn metadata` does memoized collection of dependency metadata for the entire workspace, with all features enabled. Even if the user specifies 10 requirements for 10 different crate categories, we'll run collection exactly once (recall `Policy` field `cargo_metadata_result` is a `OnceCell`).
- `fn get_repo_publisher` parses the owner of a repository from its URL. While this logic will extract the publishing user or organization for both GitHub and GitLab URLs, be warned: we're not claiming any of the code in this supply-chain half of the post is robust enough for production usage!
  - We can't rely on the `authors` field of `cargo_metadata`'s `Package` struct, which could be maliciously set to impersonate a publisher. We instead use [presumably valid] URLs as a source of truth for publisher identification. PKI will be a superior long-term solution, more on this later.
- `fn run_allowed_category_publishers` is the bulk of our trusted publishers (requirement 1) logic. We identify direct dependencies of the target project (to which `Policy::new` takes a `Cargo.toml` path) and iterate that list to look for any crate which belongs to a user-specified category but isn't sourced from a user-allowed publisher for that category.
  - Crate category labels are optional, but we could extend the builder to support "allowed publishers for any or missing category" - ensuring unexpected publishers don't slip in. Our policy evaluation logic also doesn't validate user-input category names, so a typo will cause checks to pass! Adding validation would be straightforward since categories are fixed.
So how do we roll out enforcement of our sophisticated policy requirements (category-specific trusted publishers and duplicate elimination)? The heavy-handed option is leveraging `build.rs` (Rust build scripts):
use supplychain_policy::Policy;
fn main() {
println!("cargo:rerun-if-changed=build.rs");
println!("cargo:rerun-if-changed=Cargo.toml");
let manifest_dir = std::env::var("CARGO_MANIFEST_DIR").expect("CARGO_MANIFEST_DIR var not set");
let manifest_path = std::path::PathBuf::from(manifest_dir).join("Cargo.toml");
Policy::new(&manifest_path)
.expect("Invalid manifest path")
.allowed_category_publishers([("cryptography", "rustcrypto")].into_iter())
.no_duplicate_crate_categories(["cryptography"].into_iter())
.run()
.unwrap()
}
Now failing builds for supply-chain policy violations probably isn't the best way to make friends with other development teams, even in a smaller organization, unless there's a strong regulatory and/or business need to do so. Fortunately the above `Policy` builder can easily be wrapped in a CLI tool and deployed in blocking or non-blocking CI pipelines, on a workspace-specific basis. Non-blocking failures can be centrally tracked and automatically triaged.
Our above proof-of-concept didn't accommodate exceptions (e.g. "allow this specific named duplicate, still enforce for the remainder of the category"), but you could quickly extend it to read individual crate/publisher names from a [version-controlled and `CODEOWNERS`-protected] config file. Supporting legitimate exceptions, with documented rationale, is realistic - "perfect is the enemy of good".
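As a sketch of what that extension might look like (the file format and `parse_exceptions` helper are hypothetical, not part of the post's crate):

```rust
use std::collections::BTreeSet;

// Hypothetical exceptions file: one allowed crate name per line,
// '#' starts a comment (where reviewers can record rationale).
fn parse_exceptions(contents: &str) -> BTreeSet<String> {
    contents
        .lines()
        .map(|line| line.split('#').next().unwrap_or("").trim())
        .filter(|entry| !entry.is_empty())
        .map(str::to_owned)
        .collect()
}

fn main() {
    let file = "\
# Duplicates approved by security team:
sha2   # transitive pin, tracked in internal ticket
";
    let allowed = parse_exceptions(file);
    assert!(allowed.contains("sha2"));
    assert_eq!(allowed.len(), 1);
}
```

The policy runner would then skip violations whose crate name appears in the parsed set, while still enforcing the rule for everything else in the category.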
What are my other options for supply-chain security in Rust?
The landscape of Rust's supply-chain security tooling is, fortunately, evolving. Sample projects to be aware of:
- Signature-based vulnerability alerting: `cargo audit`, a free tool to scan your dependency tree for known-vulnerable crates, is a must-have for production CI. Although a lack of "reachability analysis" (call-graph traversal to determine if your code directly or indirectly calls a vulnerable function) does mean false positives.
- Heuristic-based malware detection: The Linux Foundation has funded development of a Rust counterpart to Go's `capslock` tool. Among other use cases, `capslock` enumerates capabilities (file I/O, network connectivity, command execution, etc.) for a given dependency and alerts if they suddenly change in a new version.
- Trusted publishers: Future PKI initiatives may allow cryptographic identification of publishers, a big improvement over our above URL parsing. A related RFC outlines support for publishing crates from trusted infrastructure, following in the footsteps of PyPI. Note PKI also means better response capability, although a real-world attack may have already succeeded by the time a build machine pulls a Certificate Revocation List (CRL).
While Rust's intentionally minimal `std` library is a boon for embedded development, it does encourage over-reliance on 3rd-party crates for routine tasks. For contrast: Go's standard library offers FIPS 140-3 compliant cryptography with the flip of a build flag and backported a secure RNG to existing programs with only a Go toolchain bump!
Takeaway
"Trust is earned in drops and lost in buckets". That's probably a maxim, but it feels especially true in the context of commercial software - a global competition in which any winner, perhaps outside of a few monopolists, can be dethroned at any time.
Now the technical mechanism for trust is cryptography. Most useful cryptography is implemented and executing, whether on a tiny microcontroller or a beefy server, in the form of code. And code is notoriously difficult to get right, especially when you're shipping a lot of it.
Software quality is as challenging to replicate reliably as it is to measure actionably, if not more so. Our best hope is automating repeatability. When the quality criterion is security, that automation is one goal of a platform security engineering function - one which needs to keep pace with the broader engineering organization at minimum, and ideally should accelerate all feature teams.
This first post explored bite-sized solutions to platform cryptography problems at the API (nonce reuse) and supply-chain (dependency policy) levels. The intent is automating guardrails for human error, but nowadays LLM auto-complete increases vulnerability rate - per both [Perry, 2023] and [Pearce, 2025]. The good news is that the above techniques should mitigate risks from both sources. Compile-time checks don't care how the code was generated.
Our second and final post will have a narrower but deeper scope. We'll explore a classic topic in trust: information disclosure vulnerabilities. Part 2 (release date TBD) grapples with technical concepts at greater length and on the cutting edge. You're going to want a coffee for this one.
But it'll still be good fun. Trust me.
Read a free technical book! I'm fulfilling a lifelong dream and writing a book. It's about developing secure and robust systems software. Although a work-in-progress, the book is freely available online (no paywalls or obligations): https://highassurance.rs/