Intent: The Missing Data Layer in Generative AI

Moving beyond prompts and parameters: Why the next evolution of agentic AI requires a machine-readable ‘decision contract’ to manage risk and complexity.

Most of us like to believe we make decisions rationally. In reality, we book flights after checking prices ‘just one more time’, choose restaurants based on a mix of reviews and ‘vibes’, and stop debugging not because we have proved the bug is gone, but because it feels gone enough. This is not irrational; it is algorithmic.

We constantly run decision algorithms in our heads. We balance exploration and exploitation, stopping our search the moment marginal benefit drops. We operate under strict time, risk, and cognitive limits. We are not always optimal, but we are usually good enough.

This is the core insight behind Algorithms to Live By: The Computer Science of Human Decisions, by Brian Christian and Tom Griffiths. Much of human decision-making maps cleanly to computer science concepts like optimal stopping, multi-armed bandits, and computational complexity. Now, consider modern Generative AI (GenAI) systems. They can reason fluently, retrieve vast amounts of data, and call tools with impressive speed. Yet, when asked to make decisions in real-world environments, they often behave like someone who never stops scrolling hotel reviews, or confidently books the wrong flight because it happened to be the cheapest.

They do not lack intelligence; they lack context for why they are deciding. The missing layer is intent.

Why GenAI Struggles with Decisions, Not Answers

Most GenAI systems today are optimised to answer questions rather than to decide under constraints. A typical stack includes a foundation model, retrieval over enterprise data, tools for simulation or action, and post-generation guardrails. Despite this, familiar patterns emerge in production: the system keeps searching when it should act, it acts when evidence is still weak, or it chooses impressive-sounding answers over safe outcomes.

Consider a medical triage chatbot. A user asks: “I have chest pain and shortness of breath. Should I be worried?” A GenAI system will often respond responsibly by explaining possible causes and suggesting next steps. As an answer, this is reasonable. As a decision, it is insufficient. In that moment, explanation is secondary to escalation. The system needs to decide when uncertainty itself is dangerous and when to recommend immediate action.

These are not language problems; they are decision problems. Without explicit risk thresholds or stopping conditions—without intent—the system cannot reliably decide when to act.
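To make the point concrete, here is a minimal sketch of such a threshold (the class, names, and numbers are illustrative inventions, not taken from any real triage system): a decision wrapper that escalates the moment estimated risk crosses a tier-specific line, instead of continuing to explain.

Python

from dataclasses import dataclass

@dataclass
class TriageIntent:
    risk_tier: str          # e.g. "high" for chest pain plus shortness of breath
    escalate_above: float   # estimated risk level that forces immediate action

def decide(intent: TriageIntent, estimated_risk: float) -> str:
    # When uncertainty itself is dangerous, the contract forces escalation
    # rather than more explanatory text.
    if estimated_risk >= intent.escalate_above:
        return "ESCALATE: recommend immediate medical attention"
    return "EXPLAIN: outline possible causes and next steps"

print(decide(TriageIntent(risk_tier="high", escalate_above=0.2), estimated_risk=0.35))
# -> ESCALATE: recommend immediate medical attention

The design point is that the threshold lives in the contract, not in the model's in-the-moment judgement.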

Decision Theory as the Missing Bridge

Decision theory provides a useful lens for understanding this gap, particularly through the concept of the ‘multi-armed bandit’ problem. When planning a once-in-a-lifetime family holiday, humans tend to ‘exploit’: we choose well-reviewed hotels and proven itineraries because the cost of failure is high. On a weekend city break, we are more willing to ‘explore’ a new, unrated restaurant. The algorithm remains the same, but the objective and risk level change.

GenAI systems face the same choice. Every retrieval query or tool call is an ‘arm’ of the bandit. Without intent, the reward function is implicit and exploration remains unconstrained. With intent, the reward function becomes explicit, and exploitation becomes a deliberate, risk-priced choice.
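As a hedged sketch, the snippet below uses epsilon-greedy as a stand-in policy (no specific bandit algorithm is implied here); what matters is that the exploration rate becomes an explicit, risk-priced parameter supplied by the intent, rather than an implicit model behaviour.

Python

import random

def choose_arm(arm_values: dict, explore_rate: float) -> str:
    # explore_rate is set by the intent: near zero for the once-in-a-lifetime
    # holiday, higher for the weekend city break.
    if random.random() < explore_rate:
        return random.choice(list(arm_values))     # explore an untried option
    return max(arm_values, key=arm_values.get)     # exploit the best-known one

restaurants = {"well_reviewed": 0.9, "unrated_newcomer": 0.5}
print(choose_arm(restaurants, explore_rate=0.05))  # high stakes: almost always exploit
print(choose_arm(restaurants, explore_rate=0.40))  # low stakes: explore more often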

Similarly, humans use ‘optimal stopping’ when booking flights. We search until prices fall within an acceptable range, then stop. GenAI systems lack this intuition; they will continue retrieving and reasoning until they hit an artificial token limit. Intent turns human intuition into a machine-executable stopping rule, making computational complexity an operational design choice rather than an accident.
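A minimal sketch of such a stopping rule follows, with hypothetical retrieve and confidence_of callables standing in for real retrieval and evidence-scoring components: the loop halts as soon as confidence clears a threshold or the tool-call budget runs out.

Python

def search_until_good_enough(retrieve, confidence_of, *,
                             min_confidence: float, max_calls: int) -> list:
    evidence = []
    for _ in range(max_calls):                 # explicit tool-call budget
        evidence.append(retrieve())
        if confidence_of(evidence) >= min_confidence:
            break                              # good enough: stop searching
    return evidence

fares = iter(["fare_a", "fare_b", "fare_c", "fare_d", "fare_e", "fare_f"])
result = search_until_good_enough(
    retrieve=lambda: next(fares),
    confidence_of=lambda ev: 0.2 * len(ev),    # toy score: grows with evidence
    min_confidence=0.9,
    max_calls=6,
)
print(result)  # stops after five retrievals, not at an arbitrary token limit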

Defining the ‘Intent Object’

To be precise, intent is not a prompt, a user utterance, or a vague goal statement. It is a runtime decision contract: a structured, machine-readable object that binds objectives, constraints, authority, and evidence requirements.

Conceptually, an intent object looks like this:

YAML

intent:
  objective: "approve_or_reject_transaction"
  task_type: "risk_decision"
  risk_tier: "high"
  constraints:
    - regulation: "AML"
    - data_scope: "transaction + customer_profile"
    - forbidden_actions:
        - auto_approve_without_human_review
  authority:
    allowed_actions:
      - retrieve_transaction_history
      - run_fraud_model
      - recommend_decision
  evidence_threshold:
    confidence: 0.92
    corroboration_sources: 2
  budgets:
    time_ms: 800
    tool_calls: 5

By treating intent as data—structured, versioned, and auditable—we change the shape of the architecture. Intent introduces constraints that operate during reasoning, not just after the output is produced. This results in fewer unsafe paths explored and decisions that are explainable in terms of specific objectives.
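To illustrate what constraints operating during reasoning could look like, the sketch below hard-codes the contract above as a parsed dictionary (in practice it could be loaded with a YAML parser such as PyYAML's yaml.safe_load) and checks authority, budgets, and evidence thresholds mid-loop; the guard functions themselves are hypothetical.

Python

import time

# The parsed contract (values copied from the YAML above).
intent = {
    "forbidden_actions": ["auto_approve_without_human_review"],
    "allowed_actions": ["retrieve_transaction_history", "run_fraud_model",
                        "recommend_decision"],
    "evidence_threshold": {"confidence": 0.92, "corroboration_sources": 2},
    "budgets": {"time_ms": 800, "tool_calls": 5},
}

def guard_action(action: str, calls_used: int, started_at: float) -> None:
    # Authority: only actions the contract explicitly grants may run.
    if action in intent["forbidden_actions"] or action not in intent["allowed_actions"]:
        raise PermissionError(f"intent forbids action: {action}")
    # Budgets: enforced mid-reasoning, not after the output is produced.
    if calls_used >= intent["budgets"]["tool_calls"]:
        raise RuntimeError("tool-call budget exhausted")
    if (time.monotonic() - started_at) * 1000 > intent["budgets"]["time_ms"]:
        raise RuntimeError("time budget exhausted")

def ready_to_decide(confidence: float, sources: int) -> bool:
    # Evidence threshold: a recommendation is allowed only once both the
    # confidence and corroboration requirements are met.
    t = intent["evidence_threshold"]
    return confidence >= t["confidence"] and sources >= t["corroboration_sources"]

start = time.monotonic()
guard_action("run_fraud_model", calls_used=1, started_at=start)   # passes
print(ready_to_decide(confidence=0.95, sources=2))                # True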

The Governance Gap

While platforms like Algedonic AI are beginning to offer governance planes for agentic AI, focusing on access control and policy enforcement, a gap remains. Current tools enforce boundaries around decisions. The next generation of intent layers will define how those decisions are made in the first place.

In a sector like insurance claims, this is non-negotiable. Without intent, a system might over-prioritise speed, allowing fraudulent claims through or delaying legitimate ones. With intent, low-risk claims are fast-tracked while high-risk cases are flagged for corroboration based on a predefined investigation budget.
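An illustrative routing sketch (the risk threshold and budgets are invented for this example) shows the same pipeline behaving two ways under intent:

Python

def route_claim(risk_score: float, fast_track_below: float = 0.3) -> dict:
    if risk_score < fast_track_below:
        # Low-risk claims: prioritise speed with a minimal evidence budget.
        return {"path": "fast_track", "tool_call_budget": 1}
    # High-risk claims: flag for corroboration within a predefined
    # investigation budget.
    return {"path": "investigate", "tool_call_budget": 5}

print(route_claim(0.1))   # {'path': 'fast_track', 'tool_call_budget': 1}
print(route_claim(0.7))   # {'path': 'investigate', 'tool_call_budget': 5}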

Conclusion

Traditional data layers answer: “What do we know about the world?” Intent answers: “What are we trying to achieve, and at what risk?”

GenAI systems without intent are eloquent improvisers. With intent, they become constrained decision engines. If we want AI to be a competent collaborator, we must give it what humans rely on every day: a clear objective, boundaries that matter, and, most importantly, a stopping rule. Models reason and data informs, but it is intent that finally decides.

Rakesh Ranjan
Director of IBM Software, IBM