Model Workspace

Model Workspace is where the cockpit exposes local models. The surface is built around model cards rather than raw API setup — regular operators should see capabilities, not endpoints, terminal commands, or model IDs.

Model cards

A model card includes:

  • Name — operator-facing model name (e.g. “DeepNimSec / Security Model v1”)
  • Subtitle — one-line capability statement (e.g. “DeepNimSec Defensive Review”)
  • Description — what the model does in plain language
  • Actions — pre-canned action chips the model supports

You select a model via its card and use the action chips as one-tap operations on the current project’s memory.
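For AI builders, a minimal sketch of that card shape in TypeScript — the ModelCard name and its fields are illustrative assumptions, not the cockpit’s actual types:

```typescript
// Hypothetical shape of a model card; names are illustrative.
interface ModelCard {
  name: string;        // operator-facing model name
  subtitle: string;    // one-line capability statement
  description: string; // what the model does, in plain language
  actions: string[];   // pre-canned action chips (one-tap operations)
}
```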

First-class models

The first integration phase includes two bundled model profiles:

DeepNimSec / Security Model v1

Normalizes project memory into DeepNimSec risk, evidence, controls, and safer next steps. Action chips: Review risk, Normalize evidence, Map controls, Score findings.

The operator-facing path is simple: select the card, click Prepare model, then run the review workflow. ROS owns the local runtime preparation behind the scenes.
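A sketch of that prepare flow, reusing the hypothetical ModelCard shape above; RuntimeOrchestrator and prepareModel are assumed names standing in for whatever interface ROS actually exposes:

```typescript
// Hypothetical handle to the ROS-managed runtime; not the cockpit's real API.
interface RuntimeOrchestrator {
  prepare(card: ModelCard): Promise<void>; // stage the underlying local model
}

// Illustrative "Prepare model" flow: ROS stages the runtime behind the
// scenes; the operator only sees "preparing", then "ready" or the fallback.
async function prepareModel(
  card: ModelCard,
  ros: RuntimeOrchestrator,
): Promise<"ready" | "unavailable"> {
  try {
    await ros.prepare(card);
    return "ready";
  } catch {
    return "unavailable"; // handled by the friendly fallback described below
  }
}
```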

Citizen-AI / Project Blueprint

Creates lab-only defensive training scenarios, bias reviews, verification coaching, and after-action summaries. Action chips: Lab scenarios, Bias recognition, Verification coaching, After-action review.
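Expressed as data against the hypothetical ModelCard shape from earlier, the two bundled profiles might look like this; names, descriptions, and chip labels are taken from this page, and the Citizen-AI subtitle is a placeholder:

```typescript
// Illustrative data for the two bundled profiles, reusing the
// hypothetical ModelCard shape sketched earlier.
const bundledProfiles: ModelCard[] = [
  {
    name: "DeepNimSec / Security Model v1",
    subtitle: "DeepNimSec Defensive Review",
    description:
      "Normalizes project memory into DeepNimSec risk, evidence, controls, and safer next steps.",
    actions: ["Review risk", "Normalize evidence", "Map controls", "Score findings"],
  },
  {
    name: "Citizen-AI / Project Blueprint",
    subtitle: "Lab-only training and review", // placeholder; not specified above
    description:
      "Creates lab-only defensive training scenarios, bias reviews, verification coaching, and after-action summaries.",
    actions: ["Lab scenarios", "Bias recognition", "Verification coaching", "After-action review"],
  },
];
```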

Friendly fallback

If a local model cannot be prepared yet, the cockpit keeps capture tools active and explains the state in plain language. Manual response capture remains available, but it is a fallback, not the primary setup path.
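A sketch of that fallback state, under the same assumptions as the earlier snippets (all type and function names are illustrative):

```typescript
// Illustrative fallback state; capture tools stay active either way.
type WorkspaceState =
  | { mode: "model"; card: ModelCard }
  | { mode: "manual-capture"; message: string };

function workspaceState(
  card: ModelCard,
  runtime: "ready" | "unavailable",
): WorkspaceState {
  if (runtime === "ready") {
    return { mode: "model", card };
  }
  // Manual response capture remains available, but only as a fallback.
  return {
    mode: "manual-capture",
    message: "This model isn't ready yet. You can still capture responses manually.",
  };
}
```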

Advanced setup

The Advanced Setup drawer is where developer and AI-builder configuration lives:

  • Provider / Runtime — Ollama-compatible local runtime by default
  • Local Endpoint — typically http://127.0.0.1:11434 for Ollama
  • Technical Model — the underlying model identifier (e.g. citizen-ai:latest)
  • Runtime Status — available or unavailable reachability state (see the probe sketch below)

Regular operators should not need to touch this. The intent is that the model card and Prepare model button are enough.
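For AI builders, the Runtime Status check can be a simple probe of the local endpoint. The sketch below assumes an Ollama-compatible runtime and uses Ollama’s GET /api/tags model-list route; the timeout value is an arbitrary choice:

```typescript
// Minimal reachability probe for an Ollama-compatible local endpoint.
// GET /api/tags is Ollama's model-list route; any 200 response is treated
// as "available".
async function runtimeStatus(
  endpoint = "http://127.0.0.1:11434",
): Promise<"available" | "unavailable"> {
  try {
    const res = await fetch(`${endpoint}/api/tags`, {
      signal: AbortSignal.timeout(2000), // don't hang the UI on a dead endpoint
    });
    return res.ok ? "available" : "unavailable";
  } catch {
    return "unavailable"; // refused connection, network error, or timeout
  }
}
```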

Local-first default

Cloud LLMs are not a primary user path. The cockpit prepares local models through ROS-managed adapters and is designed to keep working when no model is reachable.
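Tying the earlier sketches together, a local-first open flow might look like this (same illustrative names throughout; note there is no cloud branch):

```typescript
// Illustrative composition of the sketches above: probe the local runtime
// first, prepare the selected card when it is reachable, and fall back to
// manual capture otherwise.
async function openWorkspace(
  card: ModelCard,
  ros: RuntimeOrchestrator,
): Promise<WorkspaceState> {
  const status = await runtimeStatus(); // local endpoint reachability
  const runtime =
    status === "available" ? await prepareModel(card, ros) : "unavailable";
  return workspaceState(card, runtime);
}
```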