Contract Engineering: Beyond Context
Context Schema Contracts: Ensuring AI Never Loses the Plot
Imagine handing your AI system a living, evolving contract that spells out exactly what context it should consider, how to interpret it, and when to update it – all from day one of development through launch and beyond. A Context Schema Contract is just that: a formalized agreement (in spirit, not legalese) between humans and an AI system about maintaining shared context. It’s the antidote to the “stale PRD” syndrome, ensuring that the AI never drifts off-script or forgets the user’s intent as your software evolves. In short, it’s a clever pact that keeps your AI on the same page (literally) with your team – no matter how many pages the project spec grows.
In this guide, we’ll dive deep (with a dose of wit) into what context schema contracts are, why they’re needed, how to create one, and how they complement those context-heavy product requirement documents (PRDs) you might already be writing. We’ll include practical examples (with Python code for clarity), a step-by-step “paint-by-numbers” approach to authoring your own contract, and even a fill-in-the-blank template. By the end, you’ll see how this contract becomes a living artifact throughout the software life cycle – not a document that gets filed away to gather dust.
Let’s get started by defining this concept and understanding why it’s more than just a fancy way to say “prompt.”
What is a Context Schema Contract?
A Context Schema Contract is a structured, evolving blueprint of the context that an AI system should maintain and use to fulfill user intent. It defines the “world” that the AI operates in – including user goals, relevant data, rules or constraints, and any memory of past interactions – in a consistent, formal format. Think of it as a cross between a project spec and an API contract, but for the AI’s contextual understanding of your application. It tells the AI what to expect and how to behave given the information at hand, ensuring that the AI’s responses stay aligned with the true user intent and current product state.
Crucially, a context schema contract is not just a one-off prompt or a transient memory of the last conversation. It’s a living document or configuration that both humans and AI reference continuously. It typically includes a schema – a defined structure or template – for all the key context elements the AI should consider, and a set of rules or guidelines (the “contract” part) for how the AI uses and updates that context. For example, it might specify that the AI always gets a field for the user’s current goal, a field for recent relevant user actions, a field for applicable business rules, etc., each time it generates output. Because this structure is defined and agreed upon up front, the AI’s “brain” always knows where to look for important information and where to put new information as the interaction continues.
In practice, a context schema contract could be represented in various ways: as a document in your repository, as a JSON/YAML schema, or even as a class in code that holds context state. The key is that it’s formal and structured – more like a data contract than free text. In fact, one emerging standard called the Model Context Protocol (MCP) uses a YAML-based config to define a “context schema” for AI models, treating context like an interface. As Victor Leung explains, “Think of MCP as the equivalent of an API contract, but for LLM context” – in other words, a context schema contract in action.
How Is It Different from Prompts or Ephemeral State?
It’s easy to confuse a context schema contract with general prompt engineering or just saving conversation history. They all relate to providing context to an AI, but the contract goes further:
- Not Just a Prompt: Traditional prompt design is often about crafting a single-turn instruction or question cleverly to get a good answer. That's important, but it's ephemeral – like giving the AI an essay each time. A context contract, by contrast, is more like maintaining a structured API surface for prompts. In fact, teams are moving away from long, clever prompts toward "small, versioned contracts" that specify roles, scope, output format, and rules explicitly. This makes the AI's behavior reliable, auditable, and consistent, rather than relying on prompt wording "vibes." As one article put it, "the contract turns prompt engineering from persuasion into software design".
- Not Just Memory Buffers: Ephemeral context (like a chat history or a few stored variables) is transient and often ad-hoc. For example, a typical chatbot might keep the last N messages as context, or a coding assistant might load the last edited file. That's useful, but it's not structured or guaranteed. An ephemeral state might drift or be lost once it falls out of scope or context window. In contrast, a context schema contract explicitly defines what constitutes context and ensures it's systematically maintained. It's the difference between "the AI happens to remember because we prepend some text" and "the AI is required to check these fields and update them as a matter of contract." Each piece of context in the schema contract is usually versioned or traceable, so you can track changes over time. Essentially, the contract gives you observability and control over context that ad-hoc memory cannot.
- Persistent & Living: Perhaps most importantly, a context schema contract is persistent across the software life cycle. It's not thrown away after each session. It evolves alongside your system. If a new user requirement comes in, you update the contract (and likely its schema version) to reflect that new reality. This is very unlike a static prompt or a forgotten design doc – it's more akin to living documentation or a "single source of truth" for what the AI should know at any given time. In context engineering philosophy, "If it isn't in the shared memory, it isn't true" – meaning if something about the product or user intent changes, you put it into the contract so the AI and team are instantly on the same page.
In summary, prompts are like individual conversations, and ephemeral memory is like your AI’s short-term memory, but a Context Schema Contract is the long-term agreement that governs those conversations and memories. It ensures continuity and consistency of context over time and across scenarios.
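To caricature the difference (all names here are invented for illustration): a raw prompt is a throwaway string, while a contract is versioned, structured data you can diff, review, and render deterministically at call time. A minimal sketch:

```python
# Ephemeral: a one-off prompt string -- gone after the call, nothing to diff or test.
prompt = "You are a helpful assistant. The user wants a refund. Be polite."

# Contract: the same intent as versioned, structured data the whole team can inspect.
contract = {
    "version": "1.1",
    "role": "Support assistant",
    "user_intent": "Request a refund",
    "rules": ["Be polite", "Confirm order ID before refunding"],
}

def render_prompt(c: dict) -> str:
    """Deterministically render the contract into a prompt at call time."""
    rules = "\n".join(f"- {r}" for r in c["rules"])
    return f"Role: {c['role']}\nIntent: {c['user_intent']}\nRules:\n{rules}"

print(render_prompt(contract))
```

The point of the design: editing a rule is now a reviewable data change with a version bump, not a silent tweak buried in a prompt string.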
From PRD to Context Contract: Keeping Requirements Alive
If you’ve written a Product Requirement Document (PRD) or a context-engineered PRD, you know how much context and intent get spelled out up front. The tragedy is that traditional PRDs often become shelfware – forgotten in a Confluence page as the team rushes into coding. A context schema contract aims to carry that contextual knowledge forward throughout development, so it doesn’t get lost or outdated.
Think of the context schema contract as the younger, more dynamic sibling of the context-rich PRD. The PRD (especially a “context-engineered” one) lays out the Why and What – the business context, user stories, assumptions, etc. – essentially a high-level narrative of the product’s intent. The context schema contract then takes that and makes it actionable for the AI, in a structured way. It’s the bridge that ensures “nothing gets lost in translation between ‘requirements’ and the actual AI behavior”.
Here’s how they complement each other:
- PRD as Foundation, Contract as Implementation: Your context-engineered PRD might describe user personas, use cases, business rules, and even example dialogues (prompt ideas). The context schema contract takes these and formalizes them. For instance, if the PRD says "The AI assistant should adhere to GDPR and company style guidelines," the contract will have fields or rules for `RegulatoryCompliance: GDPR` and `ToneGuide: [link to styleguide]` under its constraints section. The PRD is the why; the contract is the how (for the AI).
- Living Sync: As the product evolves, you'd update both the PRD and the contract – but the contract is what the AI actually uses. In a well-"context-engineered" project, your documentation is part of the development flow, not an afterthought. Matt Gierhart's Context Engineering framework calls this treating documentation as the "memory infrastructure" of a hybrid human-AI team. In that approach, "documentation keeps pace with code, acting as the rigorous interface between Human and AI". The context schema contract is exactly such an interface artifact – it lives alongside code and tests, continuously updated so that what the AI "knows" is always in sync with the latest specs.
- Example: If your PRD defines a set of Business Rules (BR-001, BR-002, etc.), your context contract might include a section listing all active rules, or a link to a `specs/SoT` (Source of Truth) file where those rules live. When a rule is added or changed in the PRD, you update that file (or section) and voilà – the AI's next run includes the new rule. Unlike a static PRD that engineers might forget to read, the context contract is ingested by the AI (or at least by the system building the AI prompt) as part of its normal operation. There's no chance it gets ignored unless you intentionally omit it.
In short, the context schema contract operationalizes your PRD. It ensures the AI always has the current “big picture” and the fine details at hand. Far from living in a binder or a wiki page, those carefully engineered contexts from your PRD become part of the AI’s working memory via the contract. This is how we prevent the fate of the stale PRD – by making context a first-class citizen in the runtime.
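As a sketch of that flow (the file layout and rule format are hypothetical, not a standard): the system re-reads the Source-of-Truth rules on every run, so a rule edit in the repo lands in the AI's very next context build.

```python
import json

# A hypothetical specs/SoT payload, as it might live in the repo next to the PRD.
SOT_JSON = """
{
  "version": "2.1",
  "rules": [
    {"id": "BR-001", "text": "Orders over $500 require manager approval.", "active": true},
    {"id": "BR-002", "text": "Legacy discount codes are honored.", "active": false}
  ]
}
"""

def load_active_rules(sot_json: str) -> list:
    """Parse the Source-of-Truth document and keep only the active rules."""
    data = json.loads(sot_json)
    return [r["text"] for r in data["rules"] if r["active"]]

def build_context(user_goal: str, sot_json: str) -> dict:
    """Rebuild the AI's context from the SoT on every run, so edits land immediately."""
    return {"user_goal": user_goal, "business_rules": load_active_rules(sot_json)}

ctx = build_context("Process a $700 order", SOT_JSON)
print(ctx["business_rules"])  # only the active rule (BR-001) survives the filter
```

Deactivating BR-002 in the SoT file is all it takes: no prompt rewrite, no redeploy of the prompt-building code.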
Anatomy of a Context Schema Contract
So, what does a context schema contract actually look like? While there’s no single “standard” format (it can vary based on domain or team preferences), most contracts will include several key components. Let’s break down the typical anatomy of a context schema contract:
- Roles and Scope: First, the contract establishes who and what we're dealing with. This often means defining the AI's role (e.g., "You are a customer support assistant AI for [Company]" or "You are a C++ coding assistant specialized in embedded systems") and possibly the user's role or context ("User is a developer trying to implement feature X"). This sets the stage and boundaries for context. It's similar to a system prompt defining persona, but in our contract it's an explicit field. For example, a contract might have `role: "AI Coding Assistant"` and `domain: "Embedded Finance Application"` at the top. By formalizing role/scope, we prevent context creep – the AI knows its jurisdiction.
- User Intent and Goals: At the heart of context is the user's intent – what are we trying to achieve? The contract will include a clear statement of the user's current goal or the problem it needs solved. In a static contract (for a fixed feature) this might be the product's main objective (e.g., "Help the user manage their personal finances via voice commands"). In a dynamic scenario, this could be updated per session or task ("User wants to integrate OAuth login in their app"). Either way, the contract schema reserves a place for "Intent" so it's never ambiguous. This is like the north star that both AI and humans can refer back to.
- Contextual Data Fields: These are the schema elements that hold all relevant information the AI should consider. What falls here depends on your application, but common examples include:
  - Memory / History: Prior interactions, important past events or messages. (E.g., last user query, or a summary of session history.) In MCP terms, these are memory objects. The contract might specify a field like `recent_history` or have structured subfields for memory (as in a list of `messages` with role/user).
  - Environment State: Any state from the software environment the AI needs. For instance, in a coding assistant, the state could be the contents of relevant files or the current error message. In a sales chatbot, it could be the user's account status or items in their cart.
  - External Knowledge/Artifacts: Relevant documents, knowledge base articles, or data that should be included. The contract could have a section for Artifacts or References (as MCP does), such as `reference_doc: "FAQ.md section 2"` or `knowledge_snippets: [...]`. This ensures the AI isn't flying blind on domain info – it knows exactly which external facts are in play.
  - Tools & Integrations: If the AI can use tools or functions (like an API call, database lookup, calculator, etc.), the contract enumerates them. For example: `tools_available: ["WeatherAPI", "SQLDatabase"]`, along with maybe a brief on what each does. This tells both the AI and any developers what capabilities are on the table. It's analogous to function definitions in newer AI APIs (and indeed MCP includes tool descriptions as part of context).
  - User Profile & Preferences: If applicable, information about the user (preferences, permissions, expertise level). For instance, `user_role: "admin"` or `user_language: "Spanish"`. This helps the AI tailor responses properly.
- Constraints and Rules: Every good contract has the “shall and shall not” section. Here we list the policies, business rules, or guardrails the AI must follow. This can include content guidelines (“Do not disclose internal project names”), style guidelines (“Use a friendly tone, refer to user by first name”), compliance rules (“If financial advice, include disclaimer per FINRA”), or operational constraints (“Never call external API more than once per request”). In high-grade prompt contracts, this often appears as explicit instructions or ask/refuse policies (e.g., “If user asks for unsupported action, politely refuse”). By encoding these in the schema contract, we make them persistent and testable. For example, you might have:
```yaml
constraints:
  - name: "PrivacyPolicy"
    rule: "Do not output user's personal data unless explicitly confirmed."
  - name: "StyleGuide"
    rule: "Use markdown for code snippets; limit answer to 4 paragraphs."
```

These become part of the context fed to the AI, or at least part of the spec that the AI's outputs are validated against.
- Output Schema (if applicable): Some context contracts also define how the AI's outputs should be structured. This is especially the case if the AI is expected to produce a particular format that another system will consume (JSON, XML, etc.). For example, the contract might say: "The assistant's answer must be a JSON object with fields `answer` and `confidence`." Or it might define sections in a text output (like an essay with intro, body, conclusion). This is essentially telling the AI what shape of answer is acceptable, which can be checked by validators. It complements the context: if the input is well-structured, teams often want the output well-structured too. While this might border on response design more than context, including it in the contract ensures that context and expected output are aligned (and it gives the AI a heads-up on the format).
- Versioning and Metadata: Because the context schema contract is evolving, it will usually carry a version number or timestamp. This could be a simple line like `version: 3.2 (2025-12-30)` at the top. Each time you update the contract (say you add a new context field or change a rule), you bump the version. This practice is borrowed from API contracts – it lets you diff changes and even roll back if needed. In addition, metadata like who last updated it or why (a brief changelog) can be included as comments or in a dedicated section. This helps keep track of context changes just like code changes.
All these components together form the schema (the fields/structure) and the contract (the rules about using/updating those fields). By making this explicit, you’ve essentially created a shared language between humans and the AI about what context matters. Developers can now programmatically ensure these fields are populated when constructing AI prompts, and the AI (if it’s sophisticated enough, or via system instructions) knows to respect the contract – e.g., to ask for missing info if a required field is empty, or to refuse certain actions per the rules.
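For instance, a thin "contract check" gate before each model call might look like the following sketch (the field names are hypothetical, not part of any standard):

```python
# Hypothetical required fields from our context schema contract.
REQUIRED_FIELDS = ["role", "user_intent", "history", "constraints"]

def contract_check(context: dict) -> list:
    """Return the required fields that are missing or empty."""
    return [f for f in REQUIRED_FIELDS if not context.get(f)]

context = {
    "role": "AI Onboarding Guide",
    "user_intent": "",  # intent was never captured for this session
    "history": [{"role": "user", "content": "How do I start?"}],
    "constraints": ["Friendly tone"],
}

missing = contract_check(context)
if missing:
    # Per the contract, don't call the model yet -- ask for the missing info instead.
    print(f"Blocking AI call: missing context fields {missing}")
```

The same check can run in CI against fixture contexts, so a schema change that forgets to populate a field fails a build instead of producing a confused AI in production.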
Concrete Example: To visualize this, here’s a bite-sized example in a pseudo-schema form for a hypothetical AI onboarding assistant:
```yaml
# Context Schema Contract for "OnboardMe" AI assistant
version: 1.0
role: "AI Onboarding Guide"
scope: "Helps new users set up their account and learn features"
user_intent: "User wants to get started with Product X and needs guidance"
context_fields:
  user_profile:                       # Info about the user
    name: "<Name>"
    account_type: "<Free/Pro/Enterprise>"
    joined_date: "<yyyy-mm-dd>"
    current_step: "<Onboarding step name or number>"
  history:
    - role: user
      content: "<most recent user question or action>"
    - role: assistant
      content: "<assistant's last answer>"
  external_refs:
    FAQs: "FAQ.md#basic-setup"        # Link to relevant FAQ section
    DocPage: "docs/quickstart.html"   # Link to a documentation page snippet
constraints:
  - "Tone: Friendly and encouraging, use second person (\"you\")."
  - "If user is stuck on a step for >2 queries, offer to connect to human support."
  - "Do NOT reveal internal implementation details or source code."
output_format: "Text, 1-2 paragraphs, plus a list of next-step options as bullet points."
# (End of contract)
```
This is just illustrative – in practice your format could be JSON, Markdown, etc., and you might generate parts of it dynamically. But notice how it encapsulates who (role/scope), what goal (user_intent), what info (profile, step, history, references), rules (constraints), and output expectations. This contract would travel with the AI assistant throughout its life: any new feature (say, a new external reference or a new constraint about branding) gets added here, version bumped, and the AI immediately “knows” about it.
Lifecycle: A Contract that Lives and Learns
One of the biggest benefits of a context schema contract is that it persists through the entire software development lifecycle – and even beyond into maintenance – ensuring the AI’s understanding stays up-to-date. Here’s how it plays out across phases:
- Design & Planning: When you're defining what the AI should do (say in a design doc or PRD), you also draft the initial context schema contract. This might involve listing out everything the AI needs to know or remember. Because it's part of planning, it forces you to be explicit – which often uncovers assumptions. (Ever realized you forgot to mention a business rule until you think "oh, the AI wouldn't know that unless we put it in context"?) By authoring the contract early, you bake context engineering into the design.
- Development & Implementation: As developers (human ones) start building the features, the context contract acts as a guide and a guardrail. For example, when writing the code that calls the AI model, a developer can reference the contract to assemble the prompt: "Include field X, Y, Z from the contract". If something in the contract is unclear or missing, that's a prompt (pun intended) to update it or clarify. Some teams even treat an out-of-sync context contract as a failing test – i.e., the system might validate that all required context fields are present before allowing an AI action to proceed. In effect, the contract becomes part of the "definition of done" for coding tasks involving AI.
- Testing & Validation: QA isn't just for UIs and APIs – you'll want to test that your AI is following the contract. Because the contract is structured, you can write automated tests or validators that check things. For instance, a test could feed the AI a scenario and verify that it respected a given constraint from the contract (did it refuse an invalid request properly? Did it format output as JSON as specified?). In sophisticated setups, validators enforce the contract at runtime, catching any deviation (like a missing field or a policy violation) before it reaches the user. This "contract compliance testing" ensures your AI doesn't go rogue even as the context gets complex.
- Deployment & Runtime: Even in production, the context contract is at work. It might be literally included in prompts (especially system-level prompts that guide the AI's behavior). Or it might be embedded in the AI orchestration logic (for example, a pipeline that builds the context from live data sources according to the schema, and checks responses against rules). If you versioned the contract, your deployment can track which version of the context schema is live with which version of the AI model. Rolling back might mean reverting to a prior contract version if a new context change caused issues (just like rolling back a bad code deploy).
- Maintenance & Evolution: After launch, requirements will change, and new insights will emerge (maybe users start asking things you didn't anticipate). Instead of writing a whole new PRD each time, you iteratively update the context schema contract. It's a living document: add a new field, update a rule, deprecate something that's no longer relevant. Each update is communicated to the team (possibly via commit history, since ideally the contract lives in your repo). Importantly, because the AI's behavior is tightly coupled to this contract, updating the contract is an integral part of updating the product. You're effectively maintaining the AI's "understanding" alongside its code. This continuous alignment prevents the situation where the code has moved on but the AI is still following old instructions.
- Audit & Knowledge Transfer: Let's say six months later a new developer or PM joins, or you need to investigate why the AI made a bizarre decision. The context contract serves as an audit trail of what context was provided at any time and why. Since each item in context is traceable and versioned, you can reconstruct what the AI knew at moment X that led to output Y. It's much easier to debug or explain AI decisions with this in hand. And for onboarding new team members, handing them the context schema contract (plus its history of changes) is like giving them the condensed lore of the project's AI logic.
In essence, the context schema contract ensures that the AI’s “mind” grows and adapts with the product. We no longer throw a requirements doc over the wall and hope for the best; we keep the requirements in lockstep with the AI’s context forever. In a sense, the contract is the AI’s evolving playbook – always current, always authoritative, and never forgotten in a drawer.
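As a sketch of what "contract compliance testing" could look like in practice (the output-format rule and field names here are invented for illustration): a validator parses the AI's raw response and checks it against the contract's output expectations before anything reaches the user.

```python
import json

def output_complies(raw_output: str, contract: dict):
    """Validate an AI response against the contract's output schema (illustrative rule)."""
    if contract.get("output_format") == "json":
        try:
            parsed = json.loads(raw_output)
        except json.JSONDecodeError:
            return False, "output is not valid JSON"
        missing = [f for f in contract.get("required_output_fields", []) if f not in parsed]
        if missing:
            return False, f"missing fields: {missing}"
    return True, "ok"

contract = {"output_format": "json", "required_output_fields": ["answer", "confidence"]}
print(output_complies('{"answer": "42", "confidence": 0.9}', contract))  # (True, 'ok')
print(output_complies("The answer is 42.", contract))  # (False, 'output is not valid JSON')
```

The same function can serve double duty: as a unit test over recorded model outputs in CI, and as a runtime gate that triggers a retry when a response violates the contract.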
Example: Context Contract in Action (with Code)
Let’s bring this to life with a concrete example. Suppose we’re building an AI assistant to help developers review pull requests (PRs) in a C++ project. We’ll sketch a simplified context schema contract for this scenario and show how it could be used in code. Even if your stack is C++, we’ll use Python for the pseudocode because readability matters (and hey, even C++ devs can read Python – this is a contract, not the actual product code!).
Scenario: “CodeCritique AI” – an AI that reviews a GitHub pull request, understands the project’s guidelines, and provides feedback with suggested improvements.
Context Schema Contract for CodeCritique AI:
- AI Role/Scope: Code reviewer AI for a specific repository.
- User Intent: The user (developer) wants a thorough code review of their PR.
- Context Fields:
- PR details (title, description, diff),
- Coding standards and guidelines (for that repo/team),
- Test results (did tests pass?),
- Past feedback (if any from previous PRs).
- Constraints:
- The AI must not approve if tests fail.
- It must highlight security issues if present (using a known list of security patterns).
- Use polite, constructive tone; no harsh language.
- Output: A formatted review comment (markdown) with sections: Summary, Strengths, Areas for Improvement, Suggestions.
We can represent part of this contract in code and show how it would be applied. First, let’s define a Python data class to mirror our contract schema, then simulate populating it and using it to construct an AI prompt:
```python
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class ContextSchemaContract:
    """Schema for CodeCritique AI context."""
    ai_role: str
    user_intent: str
    context: Dict[str, str] = field(default_factory=dict)
    constraints: List[str] = field(default_factory=list)
    version: float = 1.0

    def update_context(self, key: str, value: str):
        """Update a context field and bump the version to mark the change."""
        self.context[key] = value
        self.version = round(self.version + 0.1, 1)  # round to avoid float drift


# Initialize the contract with basic info
contract = ContextSchemaContract(
    ai_role="Code Review Assistant for the FooBar Repo",
    user_intent="Provide a thorough code review for the given pull request",
)

# Populate initial context fields
contract.context.update({
    "PR_title": "Add input validation to payment module",
    "PR_description": "This PR adds range checks on inputs to prevent overflow.",
    "diff_summary": "Modified files: payment.cpp, validation.h",
})

contract.constraints.extend([
    "Follow the project's C++ style guide and coding standards.",
    "If unit tests failed, do not approve the PR.",
    "Mention any security vulnerabilities or edge cases that are not handled.",
])
```
In the snippet above, we set up the contract with an `ai_role` and `user_intent`. We then fill in some context details like the PR title, description, and a summary of the diff (which presumably we'd get from the GitHub API). We also add a few constraints that came from our project guidelines.
Now, imagine the moment we want the AI to do its job – we need to assemble a prompt (or API call) using this contract. Here’s how we might do that:
```python
# Let's say tests have run and we have a result:
tests_passed = False
if not tests_passed:
    # Keep the contract in sync with reality (this also bumps the version)
    contract.update_context("test_status", "FAIL")

# Construct a prompt for the AI using the context contract.
# (Test Status defaults to PASS if it was never set.)
prompt = f"""
Role: {contract.ai_role}
User Intent: {contract.user_intent}
Context:
- PR Title: {contract.context.get('PR_title')}
- PR Description: {contract.context.get('PR_description')}
- Diff Summary: {contract.context.get('diff_summary')}
- Test Status: {contract.context.get('test_status', 'PASS')}
- Repository Guidelines: Follow all coding standards.
Constraints:
"""
for rule in contract.constraints:
    prompt += f"- {rule}\n"

prompt += """
Now, based on the above context, provide a code review.
Your response should include:
1. Summary of the changes.
2. Positive feedback on good aspects.
3. Constructive critique on issues or improvements.
4. A clear recommendation (approve or request changes), considering the test status.
"""
```
In this constructed prompt string, we’ve basically translated our context schema contract into the actual input we’ll give the AI model. We list the role, the intent, all the context fields (including the test status we updated and any guidelines), and we enumerate the constraints as bullet points (so the AI is explicitly instructed on them). Finally, we ask the AI to produce the output in the desired format.
A few things to note here:
- We updated the contract when tests failed: `contract.update_context("test_status", "FAIL")`. This simulates the idea that our pipeline or application logic knows to keep the contract in sync with reality (tests failing is a change in context). Because of this, the prompt we build accurately reflects that context (Test Status: FAIL), and our constraints say "do not approve if tests failed," so we expect the AI to catch that.
- The contract's fields make it straightforward to assemble a prompt. We're not scrambling to piece together bits of state from here and there – we know exactly what keys to pull from (`PR_title`, `diff_summary`, etc.) because the contract defined them. This reduces the chance of forgetting to include something important. If later we realize we need to also include "Lines changed: 120" as context, we'd add a field `lines_changed` to the contract, update our code accordingly, and everyone (human and AI) now knows that's part of context.
- After getting the AI's output, we could also imagine enforcing parts of the contract. For example, if the AI said "Looks good, approved" despite tests failing, that violates a constraint. We could detect the word "approved" and the `test_status: FAIL` context and flag that the AI's response is non-compliant. This might trigger either an automatic correction (ask the AI again, reminding it of the rule) or at least log a warning. In effect, the contract allows building validators on the output side as well, turning "AI contracts" into something that can be checked like any other contract.
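A minimal version of that output-side validator might look like this sketch (simple substring matching stands in for whatever detection logic you'd actually use):

```python
def check_review(review: str, context: dict) -> list:
    """Flag contract violations in the AI's review (illustrative heuristic only)."""
    violations = []
    if context.get("test_status") == "FAIL" and "approved" in review.lower():
        violations.append("Approved a PR despite failing tests (violates contract rule).")
    return violations

violations = check_review("Looks good, approved!", {"test_status": "FAIL"})
if violations:
    # Could re-prompt the AI with a reminder of the rule, or just log for audit.
    print(violations[0])
```

In a real pipeline you'd likely parse the AI's structured recommendation field rather than scan prose, but the shape is the same: contract rule in, boolean verdict out.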
This example, while simplified, demonstrates how a context schema contract can be used in practice. It’s not just theoretical – you implement it in your code workflow. The result is an AI system that behaves more deterministically and transparently. When it inevitably does something unexpected, you have the contract to either blame (“Ah, we forgot to include X in context, no wonder it missed that detail!”) or to improve (“We should add a rule to cover this new case”).
How to Craft a Context Schema Contract (Step-by-Step)
Authoring a context schema contract might sound complex, but it’s quite approachable if you break it down. It’s a bit like filling out a detailed spec sheet for your AI’s brain. Here’s a step-by-step guide to creating one – essentially a paint-by-numbers approach:
-
Define the AI’s Role and Scope: Start by clearly writing down what the AI is (and maybe what it is not). Are we dealing with “a friendly customer support chatbot for an e-commerce site” or “an autonomous data-cleaning script that explains its changes”? This becomes the foundation of your contract. It will often be one of the first lines in the contract and will anchor many decisions (since context beyond that scope can be ignored).
-
Identify the Primary User Intent(s): What is the user trying to accomplish when interacting with this AI? If there are multiple scenarios, you might list the top-level intents (or have a separate contract per intent). This ensures the contract is goal-oriented. For example, “User wants to get troubleshooting help for a software issue” or “User wants creative suggestions for a marketing slogan.” Write this as a short statement or list; it will guide what context is relevant.
-
List Out Context Elements Needed: Brainstorm everything the AI would ideally know to fulfill those intents given its role. This often maps to answering questions like: What information about the user do we have (profile, preferences, location)? What ongoing state or history matters (previous questions, current draft content, last action taken)? What external knowledge or tools could help (product docs, APIs, database entries)? What environmental conditions (time, date, geolocation, system state)? Essentially, imagine yourself in the AI’s shoes trying to do the task – what do you wish you knew? Write all these down as candidate context fields. Don’t worry about structure yet, just make a comprehensive list.
- Structure the Context Schema: Now take that brainstorm and organize it. Group related pieces (perhaps “User Info,” “Session History,” “Active Data,” “Relevant Docs,” etc.). Decide on a format – e.g., will this be a JSON with nested keys, a sectioned Markdown document, a Python
dict, etc. Create the skeleton with placeholder names. For example:- User Profile: name, role, account tier, etc.
- Session State: current step, last action, unresolved queries.
- Knowledge Base: maybe a list of document IDs or summary snippets.
- External Tools: list of tool names available. Each of these could be a section in a doc or keys in a dictionary. The idea is to give your context a schema – a predictable shape. This is where the “schema” in context schema contract comes in.
- Add the Rules/Constraints: Next, enumerate the rules the AI should abide by. Think in terms of always/never and if/then:
- Always cite sources for factual claims.
- Never use profanity or biased language.
- If user asks for legal advice, respond with a disclaimer + generic info (no actual advice).
- If context is missing crucial info, ask a clarifying question (don’t just guess).
- When in doubt or on error, respond with a safe failure (like “I’m sorry, I can’t do X”).

  These rules can come from many places – ethical guidelines, product requirements, legal compliance, user experience guidelines. Put them in a list in the contract. Each rule can be a short sentence or a reference to a longer policy. The key is that they are explicit. This part really makes it a “contract” (thou shalt / shalt not…). In our template, we’ll include a spot for these.
- Include Output Expectations (if needed): If your use case benefits from a defined output format or style, describe it in the contract. This could be as strict as a JSON schema or as general as “respond in a cheerful tone with emojis.” The reason to include it is to tie the context to the outcome. For instance, if the contract context includes a field “requested_output_language: French”, the corresponding expectation is “Output should be in French.” Having it in the contract means any developer or stakeholder knows that’s an intended part of the AI’s behavior. It also helps you double-check that the context provides what’s necessary for the output (in this case, that we did specify the language).
- Review and Iterate: Like any good specification, review the draft contract. Involve both engineers and non-engineers (e.g., product managers, domain experts) if possible. Does it capture everything important? Is anything in the contract that the AI really won’t ever use? Trim the fat, because unnecessary context can confuse the model or waste token budget. Also think of edge cases: What if a field is blank? Maybe add a rule for that (“If `user_profile` is empty, assume guest user”). What if two rules conflict? Perhaps set an order of precedence. This review often surfaces subtle assumptions. Update the contract accordingly.
- Version It and Communicate: Stamp a version number and date. Save the contract in a place where your team and your AI pipeline can access it – ideally in your git repository or a shared docs space. Communicate to the team that “this is the context contract we’ll be following.” Since this is a new-ish concept, you might need to evangelize it a bit: encourage everyone to treat it seriously, like code. If you use pull requests for docs, consider doing the same for contract changes so others can review updates.
- Integrate into the AI Workflow: Make sure this contract isn’t just pretty words on paper. Update your prompt-construction code or API calls to draw from the contract. If you have a system prompt for ChatGPT or Claude, that system prompt could literally include text from the contract (or a summary thereof). If you’re using a framework (LangChain, etc.), you might convert the contract into its format (some frameworks allow schema definitions or multiple context inputs). Essentially, wire it up so that when the AI runs, it’s reading from the contract. This might be direct or through intermediate code, but the logic of the contract should be reflected in what the AI sees.
- Plan Updates and Maintenance: Finally, decide how you will update this contract as things change. Perhaps you’ll revise it at the end of each sprint if new features were added. Or maybe you have a rule that any PR that affects the AI’s behavior must also consider if the context contract needs an update (akin to updating documentation or tests). Embed the contract into your life cycle. A context schema contract is not fire-and-forget – it’s more like a living constitution for your AI feature.
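To make the steps above concrete, here is a minimal Python sketch of a context schema contract represented as a dictionary, plus a helper that renders it (together with the live session context) into a system prompt. All field names, rules, and values are hypothetical placeholders – adapt the shape to your own project rather than treating this as a prescribed format.

```python
# Minimal sketch: a context schema contract as a plain Python dict.
# Every field name, rule, and value below is a hypothetical example.
CONTEXT_CONTRACT = {
    "version": "1.0",
    "role": "Friendly customer support chatbot for an e-commerce site",
    "user_intents": ["Get troubleshooting help for a software issue"],
    "schema": {  # the predictable "shape" of the context
        "user_profile": {"name": None, "account_tier": None},
        "session_state": {"current_step": None, "last_action": None},
        "knowledge_base": [],   # e.g. doc IDs or summary snippets
        "external_tools": [],   # e.g. ["order_lookup_api"]
    },
    "rules": [
        "Always cite sources for factual claims.",
        "Never use profanity or biased language.",
        "If context is missing crucial info, ask a clarifying question.",
    ],
    "output_format": "Concise markdown, polite and empathetic tone.",
}

def build_system_prompt(contract: dict, live_context: dict) -> str:
    """Render the contract plus the current session's context into the
    system prompt string that the AI actually reads at run time."""
    lines = [
        f"[Contract v{contract['version']}] You are: {contract['role']}",
        "Rules you must follow:",
    ]
    lines += [f"- {rule}" for rule in contract["rules"]]
    lines.append("Current context:")
    for section, default in contract["schema"].items():
        # Fall back to the schema's empty default if a section is absent,
        # so missing context is visible instead of silently dropped.
        lines.append(f"  {section}: {live_context.get(section, default)}")
    lines.append(f"Output format: {contract['output_format']}")
    return "\n".join(lines)

prompt = build_system_prompt(
    CONTEXT_CONTRACT,
    {"user_profile": {"name": "Ada", "account_tier": "gold"}},
)
```

Because the schema supplies defaults, a missing section shows up explicitly as an empty value rather than disappearing – which makes rules like “if context is missing crucial info, ask a clarifying question” actually checkable.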
By following these steps, you’ll author a robust context schema contract that is tailored to your project. It might feel formal, but you’ll thank yourself later when the AI does exactly what it’s supposed to and everyone on the team (including the new AI teammate) understands why.
In case you prefer a more guided format, we’ve created a fill-in-the-blank template below. You can use it as a starting point to draft your own context schema contract by replacing the placeholders with your project specifics.
Broader Learning
This concept of maintaining structured context and treating it as a contract is gaining traction. If you’re hungry for more insights (or want to see how others are doing it), here are some top resources worth exploring:
- Anthropic: “Effective Context Engineering for AI Agents” (2025) – An in-depth look at strategies for curating and managing LLM context beyond prompt wording alone. It contrasts prompt engineering with context engineering and discusses why getting the right information into context is key to steerable, reliable AI.
- John Godel’s “Prompt Engineering 2026: Contracts Become the Interface” – A forward-looking article about how teams are moving from verbose prompts to concise prompt contracts. It outlines the elements of a good AI contract (scope, schema, refusal rules, etc.) and how this approach makes prompts “reliable, auditable, and portable”. A great perspective on treating prompts as versioned interfaces.
- Victor Leung on Model Context Protocol (MCP) – The blog post “What is MCP and Why It Matters” (2025) introduces an open standard for defining context via YAML. MCP is essentially an implementation of context schema contracts, enabling modular, debuggable AI context. Worth reading for concrete examples of context schemas (system instructions, memory, tools, user inputs) and how each item is versioned.
- GitHub – PRD-driven Context Engineering (Gearheart) – An open-source repository by Matt Gierhart demonstrating “memory as infrastructure” for AI–human teams. It showcases how living documentation (PRDs, specs, etc.) can be organized to feed AI agents in a project, using unique IDs and a layered approach. Browsing the repo (and the `README.md`/`PRD.md` in it) can spark ideas on structuring context and keeping AI aligned through docs.
- Video – “Context Engineering 101: The Simple Strategy to 100x AI Coding” by Cole Medin (2023) – A YouTube talk that humorously and effectively breaks down context engineering for programmers. Cole demonstrates how providing the right structured context (relevant code snippets, instructions, and constraints) can massively boost an AI coding assistant’s performance. A great visual primer on why context matters more than model size.
- A. B. Vijay Kumar’s series on Product Requirement Prompts (PRPs) – Particularly the second part, “Context Engineering — Product Requirements Prompts”, which discusses layering context from business requirements down to technical specs in AI prompts. It showcases a practical methodology for turning traditional requirements into structured contexts, much like a contract between stakeholders and the AI. A useful bridge between classic PRD writing and context schema thinking.
Each of these resources will deepen your understanding and give you different perspectives on ensuring AI systems have the context they need, structured in a maintainable way. Happy learning!
Context Schema Contract Template (Fill-in-the-Blank)
Use this Markdown template to draft your own context schema contract. Replace the placeholder text in square brackets [like this] with details for your project. You can add or remove sections as needed.
[PROJECT NAME] – Context Schema Contract
AI System Role & Scope: [Describe the AI’s role and domain. For example: “AI coding assistant for the FooBar application, specializing in C++ code reviews.”]
Primary User Intent: [State the main goal(s) users have when interacting with this AI. For example: “Help the user troubleshoot network issues” or “Guide the user through onboarding”.]
Context Schema Components
- User Profile / Attributes: [What info do we have about the user? e.g. name, role, preferences, account tier.]
- Session State / History: [What recent interactions or state matter? e.g. last user question, current step in a flow, recent actions taken.]
- External Knowledge / References: [What docs, data, or context from outside should be included? e.g. “FAQ page on pricing”, “User’s transaction history”, “Relevant code snippet from file X”.]
- Tools & Integrations: [What tools can the AI use or call on? e.g. “database lookup tool”, “calendar API”, or simply “no external tools” if none.]
- Environment Details: [Any other environmental context? e.g. time/date, location, device info, version of software, etc., if relevant.]
(Feel free to add more context fields if needed, or group them into subsections as makes sense.)
Constraints & Rules
- [Rule 1: Describe something the AI must always do. For example: “Always respond in a polite, empathetic tone.”]
- [Rule 2: Describe something the AI must never do. For example: “Never reveal confidential internal code or URLs.”]
- [Rule 3: Conditional rule, if applicable. For example: “If the user asks for medical advice, the AI must include a disclaimer and suggest seeing a professional.”]
- [Rule 4: Another guideline or policy. For example: “Use British English spelling for UK users.”]
(Add or remove bullet points as needed. Aim to cover safety, style, business policies, and any domain-specific mandates.)
Output Format (if applicable)
Describe the desired structure or format of the AI’s responses. For example: “Responses should be in markdown. Start with a brief summary, followed by a numbered list of recommendations. If providing code, include it in a fenced code block with syntax highlighting.”
(If the output format is flexible or not a concern, you can omit this section.)
Lifecycle & Maintenance
- Source of Truth: [Reference where this contract lives in your docs or repo. e.g. “This contract is stored in `docs/ai-context-contract.md` and is considered the source of truth for AI context.”]
- Update Policy: [Explain how/when this contract will be updated. e.g. “Review and update this contract at the end of each development sprint or whenever new AI capabilities are added.”]
- Version: [Version number] – Last Updated: [Date]
- Changelog: [Optional: list key updates. e.g. “v1.1 – Added tool access to Weather API”, “v2.0 – Revamped constraints after legal review.”]
End of Context Schema Contract.
Pro-tip: After filling this out, double-check that every piece of info the AI might need is covered in Context Schema Components, and every critical behavior guideline is captured in Constraints & Rules. This document should evolve as your product and AI evolve – treat it as a living contract!
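As a companion to that pro-tip, a tiny “contract linter” can keep the document honest – for instance, failing a CI check when a required section is missing or a `[placeholder]` was never filled in. Here is a minimal sketch, assuming your contract uses the section headings from the template above (adjust `REQUIRED_SECTIONS` if you rename or remove sections):

```python
import re

# Section headings expected in a filled-in contract; these mirror the
# template above and are an assumption -- adapt them to your headings.
REQUIRED_SECTIONS = [
    "Context Schema Components",
    "Constraints & Rules",
    "Lifecycle & Maintenance",
]

def lint_contract(markdown_text: str) -> dict:
    """Report missing required sections and leftover [placeholders]."""
    missing = [s for s in REQUIRED_SECTIONS if s not in markdown_text]
    placeholders = re.findall(r"\[[^\]\n]+\]", markdown_text)
    return {"missing_sections": missing, "placeholders": placeholders}

# A toy draft contract with one unfilled blank left in it:
draft = (
    "MyApp – Context Schema Contract\n"
    "Context Schema Components\n- User Profile: name, tier\n"
    "Constraints & Rules\n- Always be polite.\n"
    "Lifecycle & Maintenance\n- Version: [Version number]\n"
)
report = lint_contract(draft)
# report["missing_sections"] is empty, but report["placeholders"]
# still contains "[Version number]", flagging the unfilled blank.
```

Running a check like this on every pull request that touches the contract is one lightweight way to enforce the “treat it like code” discipline described earlier.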