Threat Model
A threat model is a list of attackers and what they can do. A tool’s value is measured against a specific threat model, not against an abstract idea of “security.” This page documents the threats OSA-Midnight Oil is designed to address, the threats it is not, and the operator behaviors that matter most for each.
What we defend against
Lost or stolen device
The primary threat. A device left at airport security, stolen from a parked car, or seized at a border crossing should not yield workspace content to whoever has it.
How the cockpit handles it:
- Vault is encrypted at rest with a key derived from the master passphrase
- The master passphrase is not stored anywhere on disk
- Locking the workspace discards the in-memory decrypted state
- An explicit Nuke flow destroys the vault file if the threat materializes during use
What the operator must do:
- Pick a strong master passphrase
- Lock the workspace before stepping away
- Configure idle auto-lock to a tolerable threshold
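The lock-discards-state behavior above can be sketched in a few lines of Rust. This is an illustrative sketch, not the cockpit's actual API: the names `Workspace`, `unlocked`, and `lock` are assumptions, and a real implementation would use a dedicated zeroing crate rather than a plain loop, since the compiler may optimize away writes to memory that is about to be freed.

```rust
/// Illustrative sketch only: an unlocked workspace holds decrypted
/// bytes in memory; locking overwrites and releases them.
pub struct Workspace {
    decrypted: Option<Vec<u8>>,
}

impl Workspace {
    /// Construct an unlocked workspace holding decrypted content.
    pub fn unlocked(plaintext: Vec<u8>) -> Self {
        Workspace { decrypted: Some(plaintext) }
    }

    /// Locking zeroes the buffer before dropping it, so the decrypted
    /// state does not linger in freed memory. (A production build
    /// would use a zeroing primitive the optimizer cannot elide.)
    pub fn lock(&mut self) {
        if let Some(buf) = self.decrypted.as_mut() {
            for b in buf.iter_mut() {
                *b = 0;
            }
        }
        self.decrypted = None;
    }

    /// Locked means no decrypted state exists in this struct.
    pub fn is_locked(&self) -> bool {
        self.decrypted.is_none()
    }
}
```

The point of the sketch is the invariant, not the mechanics: once `lock` returns, the only persisted state is the encrypted vault on disk.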
Compromised cloud provider
The product would not be local-first if it stored data in a cloud the user does not control. The cockpit defends against cloud compromise by not having a cloud.
How the cockpit handles it: there is no central server holding workspace content. Cloud compromise is not in the threat surface because the cloud is not in the architecture.
What the operator must do: if you choose to store a backup bundle in your own cloud, use a strong destination passphrase. The bundle is encrypted, so a compromise of that cloud affects the bundle's availability and reveals metadata (file size, timestamps), but not its contents.
Telemetry-based de-anonymization
A growing class of attacks correlates application usage across services to identify users. The cockpit avoids this by not emitting telemetry.
How the cockpit handles it: no analytics, no crash reports sent home, no “anonymous” usage data, no update-check telemetry beyond the explicit user-initiated version check.
Untrusted local model output
Local models can be wrong, biased, or manipulated. The cockpit treats model output as data, not as commands.
How the cockpit handles it: model responses are stored as memory entries and rendered in the UI. They never trigger actions in the Rust core. The trust boundary holds even when the model is compromised.
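The data-not-commands boundary can be made concrete with a type-level sketch. This is a hypothetical illustration, not the cockpit's real types: `CoreAction`, `MemoryEntry`, and `ingest_model_output` are invented names. The design point is that model output enters through a function whose return type can only be data, so even a fully adversarial response has no path to triggering a core action.

```rust
/// Illustrative sketch of the trust boundary for model output.
/// Actions the trusted core can perform (names are hypothetical).
#[allow(dead_code)]
pub enum CoreAction {
    LockWorkspace,
    NukeVault,
}

/// A stored memory entry: inert data, rendered in the UI.
pub struct MemoryEntry {
    pub text: String,
}

/// Model output enters the system only through this function.
/// Its return type is `MemoryEntry`, never `CoreAction`, so a
/// manipulated response cannot escalate into a command.
pub fn ingest_model_output(raw: &str) -> MemoryEntry {
    MemoryEntry { text: raw.to_string() }
}
```

Because the boundary is enforced by the type signature rather than by filtering the response text, it holds even for inputs that look like instructions.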
What we do not defend against
Documenting these explicitly is part of the contract.
Operator-authored malware
The cockpit is a workspace. If the operator pastes a malicious script into a Vault Notes runbook and then runs it externally, that is outside the cockpit’s defenses. The cockpit does not execute its content.
Compromised operating system
If the OS is rooted, the cockpit’s process memory is accessible. We cannot defend the decrypted in-memory state against a kernel-level attacker; we can only ensure the at-rest state remains encrypted. A trojan, keylogger, or screen-capture session on the device sees what the operator sees, including unlocked workspace content. This is true of any application: with a compromised kernel, the saved vault stays encrypted, but the active session can be intercepted in memory.
Compromised local model runtime
If the Ollama process is replaced with malware, prompts sent to it can be exfiltrated. Run model runtimes you trust, on infrastructure you trust.
Coercion of the operator
If someone has the operator at gunpoint, the master passphrase will be entered. No cryptography defends against rubber-hose attacks. The Nuke flow exists for the operator’s pre-emptive use, not as a defense against active coercion.
Side-channel attacks during a session
While the workspace is unlocked, decrypted content is in process memory. A sophisticated attacker with physical access to a running, unlocked session can extract that memory. Lock the workspace when not actively using it; when you walk away, lock the workspace first, then your screen.
Trust boundary in one paragraph
The Rust core is trusted. The UI shell is treated as untrusted code that asks the core for permission. The encrypted vault is the only persisted state. Decrypted state is ephemeral. The master passphrase is never persisted. Models are external untrusted output sources. Network is opt-in per module.
Reasoning about a specific scenario
When evaluating “is the cockpit safe for X,” walk the boundary:
- Where would the relevant data live? (Vault at rest, in-memory while unlocked)
- Where does it cross a trust boundary? (UI ↔ Rust core, Rust core ↔ disk, optional outbound for opt-in modules)
- Who has access to those crossings in the scenario? (Local user, local OS, attacker with device, etc.)
- Which of “what we defend against” applies?
If the scenario matches a defended threat, the architecture covers it. If it matches an undefended one, it is the operator’s responsibility, not the cockpit’s.
Disclosure
Security issues should be reported privately to the project maintainers before public discussion. The repository’s SECURITY.md documents the current channel.