
Machine Experience: Adding Metadata So AI Agents Don't Have to Think

What Machine Experience Actually Means

Machine Experience (MX) is the practice of adding metadata and instructions to internet assets so AI agents don’t have to guess. HTML, informed by MX, is the publication point that makes certain the context built in Content Operations reaches agents at the delivery point.

When AI has to “think” - generate answers without complete context - it produces confident answers even when context is missing. This leads to hallucination. MX makes all context explicitly present in your website’s structure, helping every visitor, not just AI agents.

Right now, AI agents are visiting your website. People ask ChatGPT about your products, use Copilot to compare your services, and run agents to check your availability. The goal of any web asset is to drive users to action - whether that’s purchasing a product, informing readers of a product recall, establishing credibility, completing a contact form, downloading a whitepaper, or registering for an event.

MX is not just about ecommerce. Without MX, fewer AI agent journeys complete those actions - regardless of what those actions are.

Numbers Tell a Different Story

Adobe’s Holiday 2025 data reveals the scale of the transformation. AI referrals surged dramatically - Retail up 700%, Travel up 500% - and AI-referred traffic now converts 30% better than human traffic.

In January 2026, three major platforms launched agent commerce systems within a single week:

What industry analysts predicted would take 12-24 months to reach mainstream adoption is now expected within 6-9 months or less. Agent-mediated commerce has moved from experimental to infrastructure.

Invisible User Problem

AI agents are invisible users: they blend into your analytics, visiting once and leaving. The interface is invisible to them - they cannot see animations, colour, toast notifications, or loading spinners. Most companies don’t track AI bot traffic, and some prohibit AI bots entirely through robots.txt directives or block them using services such as Cloudflare’s bot management.

Side benefit: MX patterns also benefit users with disabilities through shared reliance on semantic structure. But the primary focus is improving machine visitor compatibility. The business case - goal completion, conversions, lead generation - drives the technical requirements.

5-Stage MX Framework

When AI agents interact with your website, they follow a predictable 5-stage journey with specific technical requirements at each stage. Miss any stage and the entire goal completion chain breaks.

[Diagram: the 5-stage agent journey]

Stage 1: Discovery (Training)

Agent State: Not in knowledge base, doesn’t know you exist

MX Requirements:

Side Benefits: Improves SEO (organic search traffic), improves WCAG (semantic structure)

Failure Mode: Agent recommends competitors, never mentions you - you don’t exist in their knowledge base

We implement MX patterns for agent discovery. SEO improvement is an automatic outcome, not a separate task.

Stage 2: Citation (Recommendation)

Agent State: Aware of your site, can recommend it

MX Requirements:

Side Benefits: Improves GEO (Generative Engine Optimisation - citations in AI-generated responses), improves SEO (rich snippets), improves WCAG (clear content structure)

Failure Mode: Agent knows you exist but can’t confidently recommend you - it hallucinates details or skips your site entirely

We implement MX patterns for agent citations. GEO improvement is an automatic outcome, not a separate task.

Example: Lawyers have been caught citing fictional cases in court because AI agents confused Ally McBeal television scripts with legal precedents. Court opinions should use Schema.org Article type with genre="Judicial Opinion" and articleSection="Case Law", whilst TV shows should use TVEpisode type with genre="Legal Drama". Without this Schema.org differentiation, content appears identical to AI agents - they cannot distinguish fiction from fact.
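A minimal sketch of that differentiation, embedded as JSON-LD in each page. The headline and episode title are hypothetical placeholders; only the types and genre values come from the pattern described above:

```html
<!-- Court opinion: explicitly typed as case law -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "Example v. Example (hypothetical citation)",
  "genre": "Judicial Opinion",
  "articleSection": "Case Law"
}
</script>

<!-- Television episode: explicitly typed as fiction -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "TVEpisode",
  "name": "Courtroom episode (hypothetical title)",
  "partOfSeries": { "@type": "TVSeries", "name": "Ally McBeal" },
  "genre": "Legal Drama"
}
</script>
```

With the type and genre stated outright, an agent never has to infer whether a “case” is precedent or plot.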

Stage 3: Search and Compare

Agent State: Building comparison lists, sorting by features, evaluating options

MX Requirements:

Side Benefits: Improves GEO (AI comparisons), improves SEO (structured data), improves WCAG (clear data presentation)

Failure Mode: Agent cannot understand what you offer or how you compare - skips you in comparisons

We implement MX patterns for agent comparison tasks. Structured data benefits multiple disciplines automatically.

Stage 4: Price Understanding

Agent State: Need exact pricing to make recommendations

MX Requirements:

Side Benefits: Improves SEO (product rich results), improves GEO (pricing citations), improves WCAG (clear pricing)

Failure Mode: Agents misunderstand costs by orders of magnitude

Real-world example: When researching Danube river cruises in late 2024, Claude for Chrome quoted a price of £203,000 for a one-week cruise. The actual price was £2,030. European currency formatting (€2.030,00 vs £2,030) had been misinterpreted, throwing the price off by a factor of 100. The metadata on pricing hadn’t specified currency correctly, and the AI couldn’t reason about prices sensibly. Had an autonomous agent auto-booked this cruise, the financial consequences would have been severe.
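A sketch of the markup that would have prevented this. Schema.org’s Offer takes a machine-readable decimal price plus an ISO 4217 priceCurrency code, so locale-dependent separators never enter the picture (the name and figures mirror the cruise example; the availability URL is standard Schema.org vocabulary):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Danube River Cruise - 7 Nights",
  "offers": {
    "@type": "Offer",
    "price": "2030.00",
    "priceCurrency": "GBP",
    "availability": "https://schema.org/InStock"
  }
}
</script>
```

An agent reading this cannot confuse £2,030 with £203,000, because the price is a plain decimal and the currency is explicit.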

We implement MX patterns for agent price parsing. Schema.org benefits multiple disciplines automatically.

Stage 5: Purchase Confidence (or Goal Completion)

Agent State: Can they complete the desired action with confidence?

MX Requirements:

Side Benefits: Improves WCAG (form accessibility), improves user experience (faster completions for humans too)

Failure Mode: Entire goal completion chain breaks - agent cannot see what buttons do, cannot track progress, times out and abandons

We implement MX patterns for agent goal completion. Accessibility and UX improvements are automatic outcomes.

Note: Stage 5 applies to ANY web goal - purchase, contact form, download, registration, information retrieval. The principle is universal: explicit structure enables agents to complete whatever action your website is designed to drive.
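As a sketch of what that explicit structure can look like at this stage (data-state is an illustrative convention, not a standard attribute; the button text and message are placeholders):

```html
<!-- The button's purpose is stated in text, not implied by an icon -->
<button type="submit" aria-describedby="submit-note">Complete purchase</button>
<p id="submit-note">Submits payment and confirms your order.</p>

<!-- Submission state is reflected in the DOM, not only in a spinner -->
<div role="status" aria-live="polite" data-state="submitting">
  Processing your order…
</div>
```

Because the state lives in the DOM, an agent can read it directly; the same role="status" region also announces progress to screen readers, which is where the WCAG side benefit comes from.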

Why Missing One Stage Breaks Everything

Miss any stage and the entire goal completion chain breaks.

At every stage, your website’s structure determines success or failure.

Computational Trust and First-Mover Advantage

Sites that successfully complete the full journey gain computational trust - agents return for more interactions through learned behaviour. Sites that fail at any stage disappear from the agent’s map permanently.

Unlike humans who persist through bad UX and can be won back with improvements, agents provide no analytics visibility and offer no second chance.


Human vs AI Agent Behaviour
Behaviour | Humans | AI Agents
Retry attempts | Persistent, will try multiple times | Time out and abandon
Workarounds | Ask friends, call support, use phone | None - just fails
Tolerance for ambiguity | Can interpret context | Must have complete context
Bad UX response | Keep trying when motivated | Disappear, never return
Recovery | Can be won back with improvements | Invisible - no analytics, no second chance

“AI Will Figure It Out” Fallacy

The common objection: “AI is getting better all the time, why worry? It will work itself out.”

The critical flaw in this argument: Yes, AI models are improving - but they’re also multiplying at an accelerating rate. The diversity problem is getting worse, not better.

Unknown Agent Problem

Site owners have no idea which model is visiting their site:

User-Agent strings are trivially spoofed. No standardised capability announcement exists. You cannot serve different HTML based on agent sophistication - design for the lowest common denominator.

Diversity Explosion

Over 1 million models exist on Hugging Face (2026) with wildly different capabilities:

The platform added 1 million models in just 335 days (late 2024-2025), compared to 1,000+ days for the first million. This acceleration shows the diversity problem is intensifying, not resolving.

Why “Waiting for AI to Improve” Fails

Problem 1 - No standardisation: No central authority controls agent capabilities. There is no way to demand parsing standards when no enforcement mechanism exists. Everyone does what they want, paying lip service to standards.

Problem 2 - The diversity paradox: Large frontier models are getting better at handling ambiguity. But small models (7B, 13B parameters) deployed on edge devices cannot handle the same complexity. And you don’t know which model is visiting your site. Result: Designing for “average” AI means failing for 40%+ of agents.

Problem 3 - Local and edge deployment: Browser extensions with local LLMs (privacy-focused users), mobile agents with smaller models (resource constraints), and custom domain-specific models (specialised capabilities) will never have the computational power of frontier models. These agents are proliferating, not disappearing.

Design for the Worst Agent

Explicit structure and unambiguous MX patterns make you compatible with the worst agents, therefore compatible with all:

This isn’t “dumbing down” - it’s universal compatibility.

The alternative (hoping AI improves) leaves you incompatible with 40%+ of agents visiting your site right now. Design for the worst agent equals compatible with all agents.

MX in the Content Pipeline

MX is often confused with adjacent disciplines in the content stack.

MX is NOT:

MX IS: The publication mechanism that gets context through to agents so the site’s goal can be completed.

Content Pipeline

[Diagram: the content pipeline, showing where MX fits]

Content Operations is essential for AI at the construction point - creating semantic structure, defining relationships, building ontology models. But Content Operations alone is not enough. If the publication layer (MX) doesn’t preserve this structure, agents at the delivery point never see it.

Example failure mode:

  1. ✅ CMS creates perfect semantic structure
  2. ✅ Ontology defines clear relationships
  3. ❌ Publication process renders to JavaScript-heavy SPA
  4. ❌ Metadata stripped from served HTML
  5. ❌ Agents see unstructured content, can’t parse relationships

MX fixes this: Makes certain the publication process preserves what Content Operations built.
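As an illustration of the difference, the served HTML for a product page should already contain both the content and its metadata before any JavaScript runs. A sketch, not a complete page, reusing the Industrial Widget Pro example from later in this document:

```html
<head>
  <script type="application/ld+json">
  {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Industrial Widget Pro"
  }
  </script>
</head>
<body>
  <main>
    <h1>Industrial Widget Pro</h1>
    <p>Professional-grade widget for manufacturing.</p>
  </main>
  <!-- Not: an empty <div id="root"></div> filled in client-side -->
</body>
```

A training crawler or agent that never executes JavaScript still sees the full semantic structure in this response; the SPA shell in the failure mode above gives it nothing.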

Understanding Ontology in CMS Context

In content delivery systems and CMS environments, an ontology is a semantic model that defines concepts and their relationships so content can be understood, linked, filtered, and delivered in a more intelligent and context-aware way.

Ontology differs from traditional metadata:

MX’s role with ontology:

Without MX: Beautiful ontology in CMS → lost in publication → agents can’t use it

With MX: Beautiful ontology in CMS → preserved in publication → agents use full semantic model

Entity Asset Layer and Sovereign Portability

What is the Entity Asset Layer?

The Entity Asset Layer (EAL) is an independent database containing your business-critical assets—reviews, product knowledge, customer preferences, brand logic—owned by you and readable by any AI agent or commerce platform. Unlike platform-locked data (Amazon reviews, Shopify product data), EAL assets remain under your control and travel with you across any technology choice.

Platform Lock-in Problem

Consider a real-world scenario that many businesses face:

You’ve spent years building 10,000 five-star reviews on Amazon. Your reputation is solid, your conversion rates are excellent, and customers trust you. Then you decide to migrate to Shopify or launch your own ecommerce platform.

Result: You’re nobody. Zero reviews. Zero reputation. You start from scratch.

Your reviews—your most valuable Reputation Assets—are trapped in Amazon’s platform. They can’t transfer. AI agents visiting your new site see no social proof, no trust signals, no reason to recommend you.

This is platform lock-in. Reviews aren’t the only asset trapped:

These are your Entity Assets—the strategic capital that determines success or failure when AI agents visit your site. And most businesses don’t own them; platforms do.

Identity Evolves into Strategic Asset Vault

When AI agents interact with your business, they need more than identity verification (“Who you are”). They need access to your Entity Assets:

The Four Asset Categories

Category | What It Includes | Purpose | Strategic Value
Identity Assets | Loyalty status, location preferences, verified credentials | Establish “Who” | Personalisation across platforms
Reputation Assets | Verified reviews, trust scores, certifications | Establish “Why trust you” | Influence agent recommendations
Knowledge Assets | Product specs, brand logic, domain expertise | Establish “What you know” | Prevent hallucination
Transactional Assets | Purchase history, cart patterns, preferences | Enable predictions | Improve conversions

The shift: FROM simple identity verification TO complete asset ownership that travels with you across any platform.

EAL Solution for Asset Ownership

The Entity Asset Layer provides a fundamental innovation: you own your assets, and they travel with you across any platform.

Instead of this (current state):

[Diagram: separate platform databases - Amazon, Shopify, proprietary CMS - each holding your assets]

You get this (EAL state):

[Diagram: Entity Asset Layer - your sovereign database]

Key benefits:

  1. Sovereignty: You own your assets, not the platform
  2. Portability: Assets travel with you when you switch platforms
  3. Persistence: Reviews, reputation, knowledge remain intact regardless of technology choices
  4. Agent-agnostic: Single source of truth works with any AI agent (Gemini, ChatGPT, Claude, proprietary)

Example: Portable Reviews

Instead of reviews trapped in Amazon’s database, Entity Assets are published as portable structured data:

{
  "@context": "https://schema.org",
  "@type": "Review",
  "itemReviewed": {
    "@type": "Product",
    "@id": "https://yoursite.com/products/xyz789"
  },
  "author": {
    "@type": "Person",
    "name": "Jane Smith"
  },
  "reviewRating": {
    "@type": "Rating",
    "ratingValue": "5"
  },
  "reviewBody": "Exceptional quality.",
  "datePublished": "2026-01-15",
  "publisher": {
    "@type": "Organization",
    "name": "Your Company"
  }
}

This review is now portable:

Example 2: Knowledge Asset (Product Specification)

Instead of product specs trapped in CMS:

{
  "@context": "https://schema.org",
  "@type": "Product",
  "@id": "https://yoursite.com/products/xyz789",
  "name": "Industrial Widget Pro",
  "description": "Professional-grade widget for manufacturing",
  "manufacturer": {
    "@type": "Organization",
    "name": "Your Company"
  },
  "mpn": "IW-PRO-2024",
  "additionalProperty": [
    {
      "@type": "PropertyValue",
      "name": "Operating Temperature",
      "value": "-20°C to 80°C"
    },
    {
      "@type": "PropertyValue",
      "name": "Certification",
      "value": "ISO 9001, CE Marked"
    }
  ]
}

This specification is now a portable Knowledge Asset that AI agents can cite accurately across any platform.

MX’s Role in Making Assets Portable

MX is how Entity Assets become portable.

Without MX: Entity Assets trapped in platform databases → lost during publication → invisible to AI agents

With MX: Your assets embedded as machine-readable data in web pages → preserved during publication → readable by all agents

The relationship:

Getting Started with Entity Assets

For business leaders:

  1. Audit your platform lock-in: Identify what assets are trapped (reviews on Amazon, product data in proprietary CMS, customer preferences in commerce platform)
  2. Prioritise by business impact: Start with Reputation Assets (reviews, trust scores) that directly influence agent recommendations
  3. Plan ownership model: Decide who owns EAL (IT, Marketing, Operations) and establish governance
  4. Budget for sovereignty: Implementation scope varies based on asset volume and platform complexity

For technical teams:

  1. Establish EAL storage: Independent database (separate from commerce/CMS platforms)
  2. Implement Schema.org markup: Start with Product, Review, Organization types
  3. Use JSON-LD for portability: Embed structured data in HTML, accessible via API
  4. Enable MX publication: Make certain your CMS/platform publishes EAL assets as HTML metadata
  5. Test with validators: Google Rich Results Test, Schema.org validator
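Steps 2 and 3 above might look like this in practice: a sketch of a Reputation Asset published in page HTML, reusing the placeholder product @id from the earlier examples (the rating figures are illustrative):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Product",
  "@id": "https://yoursite.com/products/xyz789",
  "name": "Industrial Widget Pro",
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingValue": "4.8",
    "reviewCount": "10000"
  }
}
</script>
```

The same JSON-LD object can be served from the EAL’s API and embedded at publication time, keeping the page and the sovereign database in sync, and it validates directly in the Google Rich Results Test.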

January 2026 as Strategic Inflection Point

In January 2026, three major platforms launched agent commerce systems within seven days. This convergence marks an inflection point.

First-mover advantage exists: Businesses that implement Entity Asset Layer now will gain “computational trust” from AI agents—a form of learned behaviour where agents preferentially recommend proven-successful entities.

Sites with EAL: Agent recommends → successful transaction → increased trust → higher future recommendations → compounding advantage

Sites without EAL: Agent cannot extract data → skipped in recommendations → never builds trust → permanent invisibility

The strategic question: Can your business afford to remain platform-dependent whilst competitors build sovereign Entity Assets and gain computational trust?

Building the Future with Open Source EAL

The Entity Asset Layer concept is powerful, but it needs concrete implementation. I’m building an open source EAL reference implementation that provides:

Core Features:

Why Open Source?

Entity Assets are too important to be locked in proprietary systems. An open source EAL implementation provides:

Who Should Join?

Get Involved:

If you’re interested in building sovereign, portable Entity Assets that work across any AI agent or commerce platform, let’s collaborate. Contact me at [email protected] or visit https://allabout.network to join the open source EAL project.

This is the infrastructure layer that will define how businesses maintain ownership in the agent-mediated future. First-movers who help build this foundation will shape the standard.

Why MX Prevents Hallucination

When agents encounter incomplete context, they must “think” - generating confident answers by guessing based on statistical co-occurrence patterns. Without clear structured data (Schema.org, semantic HTML) providing complete context, they fabricate details that seem plausible but are incorrect.

MX is the act of adding metadata and instructions so AI doesn’t have to think. When all context is explicitly present, hallucination decreases dramatically.

Real-World Examples

Stage 1 Failure (Discovery): Your site uses heavy JavaScript rendering with no server-side fallback. Training crawlers see empty HTML shells. You don’t exist in agent knowledge bases. Agents recommend competitors exclusively.

Stage 2 Failure (Citation): Your pricing page has figures embedded in paragraphs without Schema.org markup. When asked “How much does Product X cost?”, agents hallucinate prices based on statistical patterns from similar products, quoting incorrect figures with confidence.

Stage 4 Failure (Price Understanding): The Danube cruise example - £2,030 becomes £203,000 due to decimal separator confusion combined with missing Schema.org PriceSpecification with currency codes.

Stage 5 Failure (Goal Completion): Your checkout uses visual-only state changes (spinners, colour changes) with no DOM-reflected state. Agents cannot track progress, don’t know if submission succeeded, time out and abandon.

MX Applies to Every Web Goal

MX is universal - it applies to every type of web asset with every type of goal:

When agents hallucinate or fail to extract accurate information, they move to competitors with better MX implementation.

Addressing Stakeholder Concerns

“But We Already Do SEO”

SEO and MX are different disciplines with different goals. SEO optimises for search engine ranking algorithms. MX optimises for AI agent goal completion.

The relationship:

Yes, there’s overlap. Both benefit from semantic HTML and structured data. But the overlap is incidental, not intentional. Implementing MX for agent compatibility automatically improves SEO as a side effect. But implementing SEO does not automatically create agent-compatible structure.

Example: Your SEO is excellent - you rank first for “enterprise CRM software”. But your pricing page embeds costs in paragraphs without Schema.org markup. Agents cannot extract pricing reliably. They hallucinate figures or skip your site in comparisons. You win the search ranking but lose the agent citation.

MX is not “better SEO” - it’s a distinct discipline that shares some technical foundations with SEO whilst serving a different purpose.

Common Objections and Responses

Objection: “AI will get better and figure this out”

Response: Yes, frontier models improve. But 40% of models have under 100M parameters. You cannot detect which agent visits your site. Design for the worst agent creates universal compatibility. Waiting means losing to competitors who implement MX now and gain computational trust.

Objection: “This is too much work for uncertain ROI”

Response: Adobe’s Holiday 2025 data shows AI referrals up 700% in retail, 500% in travel, with 30% higher conversion rates than human traffic. Three major platforms launched agent commerce in one week (January 2026). The ROI is measurable now, not theoretical.

Objection: “Our users are human, not AI agents”

Response: Your users ask ChatGPT about your products. They use Copilot to compare your services. They run agents to check your availability. The interface is invisible to them - they don’t see “AI” or “human” modes, they just get results. If agents cannot parse your site, your brand disappears from their consideration set.

Objection: “We block bots in robots.txt”

Response: You’re blocking discovery. Training crawlers cannot index your content. Agents don’t know you exist. You’ve removed yourself from their knowledge base entirely. Competitors who allow crawling gain all the agent referrals whilst you get none.

Budget Justification

What it costs:

Implementation scope varies significantly based on site size, complexity, existing infrastructure, and team resources. A simple brochure site needs far less work than a large ecommerce platform with dynamic pricing and complex checkout flows.

Key factors affecting scope:

What it returns:

Cost of inaction:

The question isn’t “Can we afford to do this?” - it’s “Can we afford not to?”

Organisational Implementation

Who Owns MX?

MX sits at the intersection of multiple disciplines. Ownership depends on your organisation’s structure, but typically requires coordination across:

Primary ownership candidates:

Shared responsibility model:

The worst approach: treating MX as “someone else’s problem” that falls through organisational gaps.

Integration with Existing Workflows

DevOps Integration:

MX requirements become part of standard deployment checks:

Example: Add Schema.org validation to your build process. If Product pages lack proper PriceSpecification markup, the build fails - just like it would fail for broken tests or linting errors.

Content Operations Integration:

MX patterns inform content creation workflows:

Example: Your CMS product page template has required fields for price, currency, availability. These fields automatically generate correct Schema.org markup. Content creators cannot publish without completing agent-required metadata.

Marketing Integration:

MX becomes part of campaign measurement:

Example: Google Analytics segment showing agent referrals (ChatGPT, Perplexity, Claude, etc.) with conversion tracking. You discover agents prefer Product A over Product B despite equal human traffic - this informs inventory and marketing decisions.

Cross-functional collaboration:

MX requires coordination, not silos:

The goal: MX becomes standard practice, not a special initiative requiring executive intervention.

Complete MX Resource Package

Two Books for Different Needs

“MX: The Handbook” (300-400 pages) - A practical implementation guide for developers, UX designers, content strategists, product managers, and executives. It offers step-by-step platform-specific implementations, content strategies, testing approaches, and patterns across major CMS platforms. Accessible enough for decision-makers, detailed enough for implementers.

“The MX Bible” (800 pages) - The definitive technical reference for architects, consultants, and serious practitioners who need complete coverage of Machine Experience. This is the book for those implementing MX at scale or establishing organisational practices.

13 Appendices - Freely Available Online

61,600 words of implementation guides, code examples, and proven patterns - all freely accessible.

Implementation Guides:

Resources and References:

Distribution model: All appendices published openly on the web. Books provide context, appendices provide free implementation guides. Lower barrier to entry with “try before you buy” model.

Take Action Now

It’s January 2026. Google, Microsoft, and Amazon have all announced agent-powered purchasing features launching this quarter. This isn’t a distant future - it’s happening now.

First-mover advantage exists. Sites that work early become trusted sources that agents return to repeatedly. Sites that fail at any stage of the agent journey disappear from recommendations with no analytics visibility and no recovery opportunity.

Get Started

  1. Start with free resources: Access the 13 appendices at allabout.network
  2. Implement systematically: Follow “MX: The Handbook” for platform-specific guidance
  3. Master the details: Dive into “The MX Bible” for complete technical coverage including Entity Asset Layer strategies
  4. Build sovereign assets: Start implementing EAL patterns to make certain your reviews, product data, and customer knowledge remain portable across any platform

Contact

For professional implementation services, website analysis, or questions about Machine Experience:


The same principles that improve discoverability by AI agents also improve search engine rankings and accessibility compliance - one implementation serves multiple audiences.

Design for machines with zero-tolerance requirements, and you automatically create structure that benefits everyone.

MX is the act of adding metadata and instructions so AI doesn’t have to think.

MX is the practice; HTML is the delivery mechanism.
