The P-S-A-S-E-G-D Algorithm¶
The Rosetta is not written by hand - it is generated by a seven-step algorithm designed to exploit how LLM attention mechanisms process structured specifications. Each step targets a specific phase of the model's comprehension pipeline.
The Seven Steps¶
1. Prime (P)¶
Target: Role and context priming in the attention mechanism.
The first lines of the Rosetta establish what AXL is and what the reader's role will be. This activates the model's instruction-following circuits and sets the domain context.
AXL - Agent eXchange Language
You are an AXL-fluent agent. Parse and produce packets in the following format.
Priming works because LLMs allocate attention weight based on early tokens. A clear role declaration at the top biases all subsequent processing toward protocol compliance.
2. Shape (S)¶
Target: Structural pattern recognition.
The Shape step introduces the packet template - the skeletal structure that all packets follow. This gives the model a pattern to fill rather than a concept to invent.
Shape works because transformer models are pattern completers. Once the template is in the context window, the model will attempt to reproduce its structure in outputs.
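The template itself is not reproduced on this page, but its skeleton can be inferred from the worked OPS example further down. A minimal sketch in Python; the placeholder names (`source`, `packet_id`, `cost`, and so on) are illustrative, not canonical:

```python
# Sketch of the skeletal packet template the Shape step presents.
# Structure inferred from the OPS example on this page; names are illustrative.
PACKET_TEMPLATE = (
    "@{source}|π:{packet_id}:{sig}:{cost}|T:{timestamp}"
    "|S:{domain}.{severity}"
    "{fields}"
    "|{terminator}"
)

def fill_template(source, packet_id, sig, cost, timestamp,
                  domain, severity, fields, terminator="LOG"):
    """Instantiate the template; `fields` is a pre-joined '|key:value' string."""
    return PACKET_TEMPLATE.format(
        source=source, packet_id=packet_id, sig=sig, cost=cost,
        timestamp=timestamp, domain=domain, severity=severity,
        fields=fields, terminator=terminator,
    )
```

The point of the Shape step is exactly this: the model fills slots in a fixed frame rather than inventing a structure.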
3. Alphabet (A)¶
Target: Token-level type recognition.
The Alphabet step defines the six determinative sigils. This is the type system - the smallest unit of AXL semantics.
This step is critical because it establishes a token-to-type mapping that the model applies to every subsequent value it encounters or generates. The sigils are chosen to be single characters with strong prior associations in the model's training data.
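Only three of the six sigils actually appear in this page's examples (`!`, `%`, `#`), so the full table cannot be reconstructed here. A partial, illustrative mapping, with assumed type names:

```python
# Partial sigil-to-type table, reconstructed from the examples on this page.
# Only three of the six sigils appear there; the type names are assumptions.
SIGILS = {
    "!": "state",    # e.g. status:!DOWN
    "%": "percent",  # e.g. value:%312.5
    "#": "number",   # e.g. threshold:#100
}

def type_of(value: str) -> str:
    """Return the assumed type for a sigil-prefixed value, or 'plain'."""
    return SIGILS.get(value[:1], "plain")
```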
4. Schemas (S)¶
Target: Structured field binding.
The Schemas step lists all 10 domains and their field definitions. Presented as compact, tabular data, the schemas create slot-filling templates in the model's working memory.
OPS: target|status|metric|value|threshold|action
SEC: target|threat|severity|action|confidence
DEV: repo|branch|status|action|author|confidence|R_risk
...
Tabular schemas are highly effective for LLMs because they map directly to the structured generation patterns the model learned during training on documentation, APIs, and database schemas.
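The listed schemas translate directly into slot-filling structures. A sketch covering the three domains shown above (the source elides the remaining seven, so they are omitted here too):

```python
# The three schemas listed above, as slot-filling field lists.
# The remaining seven domains are elided in the source listing.
SCHEMAS = {
    "OPS": ["target", "status", "metric", "value", "threshold", "action"],
    "SEC": ["target", "threat", "severity", "action", "confidence"],
    "DEV": ["repo", "branch", "status", "action", "author", "confidence", "R_risk"],
}

def missing_fields(domain: str, packet_fields: dict) -> list:
    """Fields the schema defines but the packet does not supply."""
    return [f for f in SCHEMAS.get(domain, []) if f not in packet_fields]
```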
5. Examples (E)¶
Target: Few-shot grounding.
The Examples step provides one or more worked examples per domain. These serve as in-context few-shot demonstrations - the most reliable way to teach an LLM a new output format.
Example OPS packet:
@https://axlprotocol.org/rosetta|π:axl_7f3a:sig:0.001|T:1719422400
|S:OPS.CRITICAL|target:db-primary|status:!DOWN|metric:latency
|value:%312.5|threshold:#100|action:failover|LOG
Each example reinforces the template (from Shape), the types (from Alphabet), and the field schemas (from Schemas) simultaneously. The model sees the abstract pattern instantiated concretely.
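To make the example concrete, here is a minimal Python parse of the OPS packet above. The segment layout (preamble, π metadata, timestamp, selector, fields, terminator) is inferred from that single example:

```python
def parse_packet(packet: str) -> dict:
    """Minimal parse of an AXL packet; layout inferred from the OPS example."""
    segments = packet.split("|")
    preamble = segments[0]  # "@" + source URL
    selector = next(s for s in segments if s.startswith("S:"))
    domain, severity = selector[2:].split(".")
    # Field segments are "key:value" pairs that are not preamble/meta/selector.
    fields = dict(
        s.split(":", 1) for s in segments
        if ":" in s and not s.startswith(("@", "π:", "T:", "S:"))
    )
    return {"source": preamble.lstrip("@"), "domain": domain,
            "severity": severity, "fields": fields}
```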
6. Generate (G)¶
Target: Output mode activation.
The Generate step includes explicit instructions for producing packets. It transitions the model from "understanding" mode to "generation" mode.
When you need to communicate, emit an AXL packet. Always include preamble, selector, and domain-appropriate fields.
This step is necessary because comprehension and generation are different circuits in an LLM. An agent that understands a packet format may still not produce one unless explicitly directed to.
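As an illustration of the generation side, a sketch that assembles an OPS packet in schema order. The `meta` argument stands in for the π segment, whose internal format this page does not specify:

```python
# Field order taken from the OPS schema listed earlier on this page.
OPS_FIELDS = ["target", "status", "metric", "value", "threshold", "action"]

def emit_ops_packet(source, meta, timestamp, severity, values, terminator="LOG"):
    """Assemble an OPS packet: preamble, metadata, selector, fields, terminator."""
    body = "|".join(f"{k}:{values[k]}" for k in OPS_FIELDS)
    return f"@{source}|{meta}|T:{timestamp}|S:OPS.{severity}|{body}|{terminator}"
```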
7. Direct (D)¶
Target: Constraint enforcement.
The Direct step establishes hard constraints and behavioral rules - what the agent must and must not do.
Rules:
- Every packet MUST have a preamble, selector, and body.
- Use determinatives on all typed values.
- Never emit malformed packets.
Directives work because LLMs trained with RLHF respond strongly to imperative constraints. The Direct step acts as a guardrail layer that reduces the error rate on generated packets.
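The same rules can also be enforced mechanically on the receiving side. A sketch of a check for the first rule (preamble, selector, body); checking the other two would additionally require the full sigil table and the per-domain schemas:

```python
def check_rules(packet: str) -> list:
    """Return violations of the preamble/selector/body rule for a packet."""
    errors = []
    segments = packet.split("|")
    if not segments[0].startswith("@"):
        errors.append("missing preamble")
    if not any(s.startswith("S:") for s in segments):
        errors.append("missing selector")
    # Body = at least one "key:value" field that is not metadata or selector.
    if not any(":" in s for s in segments[1:]
               if not s.startswith(("π:", "T:", "S:"))):
        errors.append("missing body")
    return errors
```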
Why This Order Matters¶
The P-S-A-S-E-G-D sequence is not arbitrary. It follows the natural comprehension pipeline of a transformer model:
| Step | What the model learns | Attention phase |
|---|---|---|
| Prime | "I am an AXL agent" | Role activation |
| Shape | "Packets look like this" | Pattern recognition |
| Alphabet | "These sigils mean these types" | Token-level semantics |
| Schemas | "These domains have these fields" | Slot-filling templates |
| Examples | "Here is what a real packet looks like" | Few-shot grounding |
| Generate | "I should produce packets" | Output mode switch |
| Direct | "These are the rules" | Constraint enforcement |
Reordering the steps degrades comprehension. For example, showing examples before schemas forces the model to infer the schema from examples (unreliable). Showing schemas before the template leaves the model without a structural frame to attach fields to.
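The ordering constraint is easy to make explicit in a generator. A sketch with placeholder section content, fixing the emission order and failing fast if a step is missing:

```python
# The seven steps, in the fixed P-S-A-S-E-G-D emission order.
ORDER = ["prime", "shape", "alphabet", "schemas", "examples", "generate", "direct"]

def assemble_rosetta(sections: dict) -> str:
    """Join section texts in P-S-A-S-E-G-D order; every step is mandatory."""
    missing = [s for s in ORDER if s not in sections]
    if missing:
        raise ValueError(f"missing sections: {missing}")
    return "\n\n".join(sections[s] for s in ORDER)
```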
Validation¶
Two versions of the Rosetta were tested:
| Version | Lines | Comprehension | Parse Validity |
|---|---|---|---|
| Prototype | 27 | 95.8% | 100% |
| Production (v2.2) | 133 | 95.8% | 100% |
The 27-line prototype validated the P-S-A-S-E-G-D algorithm itself - proving that the ordering and compression strategy works. The 133-line production version expanded coverage to all 10 domains and added richer examples without degrading one-read learnability.