Machine Experience: Adding Metadata So AI Agents Don't Have to Think
What Machine Experience Actually Means
Machine Experience (MX) is the practice of adding metadata and instructions to internet assets so AI agents don’t have to guess. HTML, informed by MX, is the publication point: it ensures that the context built in Content Operations reaches agents at the delivery point.
When AI has to “think” - generate answers without complete context - it fills the gaps with confident guesses. This leads to hallucination. MX makes all context explicitly present in your website’s structure, so agents never have to guess.
Right now, AI agents are visiting your website. People ask ChatGPT about your products, use Copilot to compare your services, and run agents to check your availability. The goal of any web asset is to drive users to action - whether that’s purchasing a product, informing readers of a product recall, establishing credibility, completing a contact form, downloading a whitepaper, or registering for an event.
MX is not just about ecommerce. Without MX, fewer AI agent journeys complete those actions - regardless of what those actions are.
Numbers Tell a Different Story
Adobe’s Holiday 2025 data reveals the scale of transformation. AI referrals surged dramatically - Retail up 700%, Travel up 500% - and AI-referred traffic now converts at a rate 30% higher than human traffic.
In January 2026, three major platforms launched agent commerce systems within a single week:
- Amazon Alexa+ (browser agent, 5 January)
- Microsoft Copilot Checkout (proprietary, 8 January)
- Google Universal Commerce Protocol (open standard, 11 January)
What industry analysts predicted would take 12-24 months to reach mainstream adoption is now expected within 6-9 months or less. Agent-mediated commerce has moved from experimental to infrastructure.
Invisible User Problem
AI agents are invisible users: they blend into your analytics, visit once, and leave. Your interface is equally invisible to them - they cannot see animations, colour, toast notifications, or loading spinners. Most companies don’t track AI bot traffic. Some prohibit AI bots entirely through robots.txt directives or block them using services like Cloudflare’s bot checks.
Side benefit: MX patterns also benefit users with disabilities through shared reliance on semantic structure. But the primary focus is improving machine visitor compatibility. The business case - goal completion, conversions, lead generation - drives the technical requirements.
5-Stage MX Framework
When AI agents interact with your website, they follow a predictable 5-stage journey with specific technical requirements at each stage. Miss any stage and the entire goal completion chain breaks.
Stage 1: Discovery (Training)
Agent State: Not in knowledge base, doesn’t know you exist
MX Requirements:
- Crawlable structure (robots.txt compliance, sitemap.xml)
- Semantic HTML markup for training data
- Server-side rendering for JavaScript-heavy content
- Quality content that search engines can discover and rank
Side Benefits: Improves SEO (organic search traffic), improves WCAG (semantic structure)
Failure Mode: Agent recommends competitors, never mentions you - you don’t exist in their knowledge base
We implement MX patterns for agent discovery. SEO improvement is an automatic outcome, not a separate task.
Stage 2: Citation (Recommendation)
Agent State: Aware of your site, can recommend it
MX Requirements:
- Fact-level clarity (each statistic, definition, concept needs standalone clarity)
- Structured data (Schema.org JSON-LD) for AI platforms
- Citation-worthy content architecture optimised for being featured in AI responses, not just found
Side Benefits: Improves GEO (Generative Engine Optimisation - citations in AI-generated responses), improves SEO (rich snippets), improves WCAG (clear content structure)
Failure Mode: Agent knows you exist but can’t confidently recommend you - it hallucinates details or skips your site entirely
We implement MX patterns for agent citations. GEO improvement is an automatic outcome, not a separate task.
Example: Lawyers have been caught citing fictional cases in court because AI agents confused Ally McBeal television scripts with legal precedents. Court opinions should use Schema.org Article type with genre="Judicial Opinion" and articleSection="Case Law", whilst TV shows should use TVEpisode type with genre="Legal Drama". Without this Schema.org differentiation, content appears identical to AI agents - they cannot distinguish fiction from fact.
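A minimal sketch of that differentiation, using illustrative placeholder titles and a hypothetical `to_jsonld_script` helper (not part of any library): two JSON-LD blocks whose `genre` and type let even a small model tell a judicial opinion apart from a television episode.

```python
import json

# Sketch: two JSON-LD objects an agent can tell apart. The headlines and
# names are illustrative placeholders, not real records.
court_opinion = {
    "@context": "https://schema.org",
    "@type": "Article",
    "genre": "Judicial Opinion",
    "articleSection": "Case Law",
    "headline": "Smith v. Jones, Court of Appeal",
}

tv_episode = {
    "@context": "https://schema.org",
    "@type": "TVEpisode",
    "genre": "Legal Drama",
    "name": "The Verdict (Ally McBeal)",
}

def to_jsonld_script(data: dict) -> str:
    """Render a dict as a JSON-LD <script> tag for embedding in HTML."""
    return ('<script type="application/ld+json">'
            + json.dumps(data, indent=2)
            + "</script>")
```

With this markup in place, the distinction is explicit in the data rather than something the model must infer from prose.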
Stage 3: Search and Compare
Agent State: Building comparison lists, sorting by features, evaluating options
MX Requirements:
- JSON-LD structured data at the pricing level
- Explicit comparison attributes (product features, specifications)
- Semantic HTML that agents can parse for feature extraction
Side Benefits: Improves GEO (AI comparisons), improves SEO (structured data), improves WCAG (clear data presentation)
Failure Mode: Agent cannot understand what you offer or how you compare - skips you in comparisons
We implement MX patterns for agent comparison tasks. Structured data benefits multiple disciplines automatically.
Stage 4: Price Understanding
Agent State: Need exact pricing to make recommendations
MX Requirements:
- Schema.org types (Product, Offer, PriceSpecification)
- Unambiguous pricing structure with currency specification (ISO 4217 codes)
- Validation to prevent decimal formatting errors
- Clear price markup that prevents magnitude misinterpretation
Side Benefits: Improves SEO (product rich results), improves GEO (pricing citations), improves WCAG (clear pricing)
Failure Mode: Agents misunderstand costs by orders of magnitude
Real-world example: When researching Danube river cruises in late 2024, Claude for Chrome quoted a price of £203,000 for a one-week cruise. The actual price was £2,030. European currency formatting (€2.030,00 vs £2,030) had been misinterpreted, throwing the price off by a factor of 100. The metadata on pricing hadn’t specified currency correctly, and the AI couldn’t reason about prices sensibly. Had an autonomous agent auto-booked this cruise, the financial consequences would have been severe.
We implement MX patterns for agent price parsing. Schema.org benefits multiple disciplines automatically.
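The cruise failure is avoidable with one pattern: publish the price as a number plus an ISO 4217 currency code, never as a locale-formatted string. A sketch, using an assumed product name, of the Offer markup that would have prevented the misreading:

```python
import json

# Sketch: an Offer whose price is a plain number and whose currency is an
# explicit ISO 4217 code, so "2.030,00" vs "2,030.00" ambiguity never
# arises. The product name is an illustrative assumption.
offer = {
    "@context": "https://schema.org",
    "@type": "Offer",
    "itemOffered": {"@type": "Product", "name": "Danube River Cruise, 7 nights"},
    "priceSpecification": {
        "@type": "PriceSpecification",
        "price": 2030.00,          # numeric value - no thousands separators
        "priceCurrency": "GBP",    # ISO 4217 code, never a bare symbol
    },
}

# The markup an agent actually sees in the served HTML.
markup = ('<script type="application/ld+json">'
          + json.dumps(offer)
          + "</script>")
```

Any agent that parses this block recovers exactly 2030.00 GBP; there is no separator convention left to misinterpret.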
Stage 5: Purchase Confidence (or Goal Completion)
Agent State: Can they complete the desired action with confidence?
MX Requirements:
- No hidden state buried in JavaScript (state must be DOM-reflected)
- Explicit form semantics (<button>, not <div class="btn">)
- Persistent feedback (role="alert" for important messages)
- data-state attributes for progress tracking
- UCP (Universal Commerce Protocol) support for standardised commerce interactions
Side Benefits: Improves WCAG (form accessibility), improves user experience (faster completions for humans too)
Failure Mode: Entire goal completion chain breaks - agent cannot see what buttons do, cannot track progress, times out and abandons
We implement MX patterns for agent goal completion. Accessibility and UX improvements are automatic outcomes.
Note: Stage 5 applies to ANY web goal - purchase, contact form, download, registration, information retrieval. The principle is universal: explicit structure enables agents to complete whatever action your website is designed to drive.
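A sketch of what DOM-reflected state looks like in practice, with a hypothetical checkout fragment and a small stdlib audit that checks the three Stage 5 patterns (real <button>, role="alert" feedback, data-state progress) are present in the served HTML rather than hidden in JavaScript:

```python
from html.parser import HTMLParser

# Hypothetical agent-readable checkout fragment: state lives in the DOM.
CHECKOUT_HTML = """
<form id="checkout" data-state="payment-pending">
  <button type="submit">Place order</button>
  <p role="alert">Card declined - please use another card.</p>
</form>
"""

class StateAuditor(HTMLParser):
    """Collects data-state values, role="alert" messages, and real buttons."""
    def __init__(self):
        super().__init__()
        self.states, self.alerts, self.buttons = [], 0, 0

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if "data-state" in attrs:
            self.states.append(attrs["data-state"])
        if attrs.get("role") == "alert":
            self.alerts += 1
        if tag == "button":
            self.buttons += 1

auditor = StateAuditor()
auditor.feed(CHECKOUT_HTML)
```

An agent (or a test in your pipeline) running the same audit against a spinner-and-colour-change checkout would find no state at all - which is exactly the Stage 5 failure mode.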
Why Missing One Stage Breaks Everything
Miss any stage and the entire goal completion chain breaks.
- Discovery requires semantic HTML
- Citation requires structured data
- Comparison requires JSON-LD
- Price understanding requires Schema.org
- Confidence requires explicit state
At every stage, your website’s structure determines success or failure.
Computational Trust and First-Mover Advantage
Sites that successfully complete the full journey gain computational trust - agents return for more interactions through learned behaviour. Sites that fail at any stage disappear from the agent’s map permanently.
Unlike humans who persist through bad UX and can be won back with improvements, agents provide no analytics visibility and offer no second chance.
| Behaviour | Humans | AI Agents |
|---|---|---|
| Retry attempts | Persistent, will try multiple times | Time out and abandon |
| Workarounds | Ask friends, call support, use phone | None - just fails |
| Tolerance for ambiguity | Can interpret context | Must have complete context |
| Bad UX response | Keep trying when motivated | Disappear, never return |
| Recovery | Can be won back with improvements | Invisible - no analytics, no second chance |
“AI Will Figure It Out” Fallacy
The common objection: “AI is getting better all the time, why worry? It will work itself out.”
The critical flaw in this argument: Yes, AI models are improving - but they’re also multiplying at an accelerating rate. The diversity problem is getting worse, not better.
Unknown Agent Problem
Site owners have no idea which model is visiting their site:
- Small LLM running on a mobile device (SMOL, edge models with 100-500M parameters)?
- Frontier model (Claude Opus 4.5, GPT-4, Gemini Ultra)?
- In-browser extension with a local LLM prioritising privacy?
- Custom-trained domain-specific agent?
User-Agent strings are trivially spoofed. No standardised capability announcement exists. You cannot serve different HTML based on agent sophistication - design for the lowest common denominator.
Diversity Explosion
Over 1 million models exist on Hugging Face (2026) with wildly different capabilities:
- Over 90% have fewer than 1 billion parameters
- Nearly 90% have fewer than 500 million
- More than two-thirds have fewer than 200 million
- Around 40% have fewer than 100 million
The platform added 1 million models in just 335 days (late 2024-2025), compared to 1,000+ days for the first million. This acceleration shows the diversity problem is intensifying, not resolving.
Why “Waiting for AI to Improve” Fails
Problem 1 - No standardisation: No central authority controls agent capabilities. No way to demand parsing standards when no imperative exists. Everyone does what they want, giving lip service to standards without enforcement.
Problem 2 - The diversity paradox: Large frontier models are getting better at handling ambiguity. But small models (7B, 13B parameters) deployed on edge devices cannot handle the same complexity. And you don’t know which model is visiting your site. Result: Designing for “average” AI means failing for 40%+ of agents.
Problem 3 - Local and edge deployment: Browser extensions with local LLMs (privacy-focused users), mobile agents with smaller models (resource constraints), and custom domain-specific models (specialised capabilities) will never have the computational power of frontier models. These agents are proliferating, not disappearing.
Design for the Worst Agent
Explicit structure and unambiguous MX patterns make you compatible with the worst agents, therefore compatible with all:
- Small 100M parameter model can parse Schema.org → Large models can too
- Local edge LLM can read semantic HTML → Cloud models can too
- Simple browser extension can understand explicit state → Sophisticated agents can too
This isn’t “dumbing down” - it’s universal compatibility.
The alternative (hoping AI improves) leaves you incompatible with 40%+ of agents visiting your site right now. Design for the worst agent equals compatible with all agents.
MX in the Content Pipeline
MX is often confused with adjacent disciplines in the content stack.
MX is NOT:
- Content Management System (CMS) - where content is created, edited, stored
- Content Delivery System (CDS) - infrastructure for delivering content to endpoints
- Ontology - semantic model of concepts and relationships
MX IS: the publication mechanism that ensures context gets through, so the site’s goal can be completed.
Content Pipeline
Content Operations is essential for AI at the construction point - creating semantic structure, defining relationships, building ontology models. But Content Operations alone is not enough. If the publication layer (MX) doesn’t preserve this structure, agents at the delivery point never see it.
Example failure mode:
- ✅ CMS creates perfect semantic structure
- ✅ Ontology defines clear relationships
- ❌ Publication process renders to JavaScript-heavy SPA
- ❌ Metadata stripped from served HTML
- ❌ Agents see unstructured content, can’t parse relationships
MX fixes this: it ensures the publication process preserves what Content Operations built.
Understanding Ontology in CMS Context
In content delivery systems and CMS environments, an ontology is a semantic model that defines concepts and their relationships so content can be understood, linked, filtered, and delivered in a more intelligent and context-aware way.
Ontology differs from traditional metadata:
- Traditional CMS: Flat tags and categories, hierarchical taxonomies, static linking
- Ontology: Concept models with many-to-many relationships, dynamic contextual delivery, machine-readable semantic models
MX’s role with ontology:
- Ontology defines the semantic model (construction point)
- MX makes certain the semantic model reaches agents (publication point)
- CDS delivers the content with preserved semantics (delivery point)
Without MX: Beautiful ontology in CMS → lost in publication → agents can’t use it
With MX: Beautiful ontology in CMS → preserved in publication → agents use full semantic model
Entity Asset Layer and Sovereign Portability
What is the Entity Asset Layer?
The Entity Asset Layer (EAL) is an independent database containing your business-critical assets—reviews, product knowledge, customer preferences, brand logic—owned by you and readable by any AI agent or commerce platform. Unlike platform-locked data (Amazon reviews, Shopify product data), EAL assets remain under your control and travel with you across any technology choice.
Platform Lock-in Problem
Consider a real-world scenario that many businesses face:
You’ve spent years building 10,000 five-star reviews on Amazon. Your reputation is solid, your conversion rates are excellent, and customers trust you. Then you decide to migrate to Shopify or launch your own ecommerce platform.
Result: You’re nobody. Zero reviews. Zero reputation. You start from scratch.
Your reviews—your most valuable Reputation Assets—are trapped in Amazon’s platform. They can’t transfer. AI agents visiting your new site see no social proof, no trust signals, no reason to recommend you.
This is platform lock-in. Reviews aren’t the only asset trapped:
- Product knowledge locked in proprietary CMS formats
- Customer loyalty data owned by your commerce platform
- Brand logic buried in platform-specific code
These are your Entity Assets—the strategic capital that determines success or failure when AI agents visit your site. And most businesses don’t own them; platforms do.
Identity Evolves into Strategic Asset Vault
When AI agents interact with your business, they need more than identity verification (“Who you are”). They need access to your Entity Assets:
| Category | What It Includes | Purpose | Strategic Value |
|---|---|---|---|
| Identity Assets | Loyalty status, location preferences, verified credentials | Establish “Who” | Personalization across platforms |
| Reputation Assets | Verified reviews, trust scores, certifications | Establish “Why trust you” | Influence agent recommendations |
| Knowledge Assets | Product specs, brand logic, domain expertise | Establish “What you know” | Prevent hallucination |
| Transactional Assets | Purchase history, cart patterns, preferences | Enable predictions | Improve conversions |
The shift: FROM simple identity verification TO complete asset ownership that travels with you across any platform.
EAL Solution for Asset Ownership
The Entity Asset Layer provides a fundamental innovation: you own your assets, and they travel with you across any platform.
Instead of the current state - assets trapped in each platform’s database - the EAL state gives you a single owned asset layer that every platform and AI agent reads from.
Key benefits:
- Sovereignty: You own your assets, not the platform
- Portability: Assets travel with you when you switch platforms
- Persistence: Reviews, reputation, knowledge remain intact regardless of technology choices
- Agent-agnostic: Single source of truth works with any AI agent (Gemini, ChatGPT, Claude, proprietary)
Example: Portable Reviews
Instead of reviews trapped in Amazon’s database, Entity Assets are published as portable structured data:
{
  "@context": "https://schema.org",
  "@type": "Review",
  "itemReviewed": {
    "@type": "Product",
    "@id": "https://yoursite.com/products/xyz789"
  },
  "author": {
    "@type": "Person",
    "name": "Jane Smith"
  },
  "reviewRating": {
    "@type": "Rating",
    "ratingValue": "5"
  },
  "reviewBody": "Exceptional quality.",
  "datePublished": "2026-01-15",
  "publisher": {
    "@type": "Organization",
    "name": "Your Company"
  }
}
This review is now portable:
- Certified by your company (not Amazon)
- Readable by any AI agent
- Migratable to new platforms
- Owned by you, not trapped
Example 2: Knowledge Asset (Product Specification)
Instead of product specs trapped in CMS:
{
  "@context": "https://schema.org",
  "@type": "Product",
  "@id": "https://yoursite.com/products/xyz789",
  "name": "Industrial Widget Pro",
  "description": "Professional-grade widget for manufacturing",
  "manufacturer": {
    "@type": "Organization",
    "name": "Your Company"
  },
  "mpn": "IW-PRO-2024",
  "additionalProperty": [
    {
      "@type": "PropertyValue",
      "name": "Operating Temperature",
      "value": "-20°C to 80°C"
    },
    {
      "@type": "PropertyValue",
      "name": "Certification",
      "value": "ISO 9001, CE Marked"
    }
  ]
}
This specification is now a portable Knowledge Asset that AI agents can cite accurately across any platform.
MX’s Role in Making Assets Portable
MX is how Entity Assets become portable.
Without MX: Entity Assets trapped in platform databases → lost during publication → invisible to AI agents
With MX: Your assets embedded as machine-readable data in web pages → preserved during publication → readable by all agents
The relationship:
- Entity Assets = what you own (reviews, product data, customer knowledge)
- MX = how you publish them (HTML metadata, Schema.org, semantic structure)
- Result = you own your assets, and they work across any platform or AI agent
Getting Started with Entity Assets
For business leaders:
- Audit your platform lock-in: Identify what assets are trapped (reviews on Amazon, product data in proprietary CMS, customer preferences in commerce platform)
- Prioritize by business impact: Start with Reputation Assets (reviews, trust scores) that directly influence agent recommendations
- Plan ownership model: Decide who owns EAL (IT, Marketing, Operations) and establish governance
- Budget for sovereignty: Implementation scope varies based on asset volume and platform complexity
For technical teams:
- Establish EAL storage: Independent database (separate from commerce/CMS platforms)
- Implement Schema.org markup: Start with Product, Review, Organization types
- Use JSON-LD for portability: Embed structured data in HTML, accessible via API
- Enable MX publication: Make certain your CMS/platform publishes EAL assets as HTML metadata
- Test with validators: Google Rich Results Test, Schema.org validator
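A sketch of the publication step in that checklist, under an assumed EAL row schema (the field names `product_id`, `author`, `rating`, `body`, `published` are illustrative): a review stored in your independent EAL database is rendered as Schema.org JSON-LD at publish time, so the same asset can be embedded on any platform’s pages.

```python
import json

# A row from your independent EAL store (assumed schema, not a real API).
eal_review = {
    "product_id": "https://yoursite.com/products/xyz789",
    "author": "Jane Smith",
    "rating": 5,
    "body": "Exceptional quality.",
    "published": "2026-01-15",
}

def review_to_jsonld(row: dict, publisher: str) -> str:
    """Render one EAL review row as an embeddable JSON-LD script tag."""
    data = {
        "@context": "https://schema.org",
        "@type": "Review",
        "itemReviewed": {"@type": "Product", "@id": row["product_id"]},
        "author": {"@type": "Person", "name": row["author"]},
        "reviewRating": {"@type": "Rating", "ratingValue": str(row["rating"])},
        "reviewBody": row["body"],
        "datePublished": row["published"],
        "publisher": {"@type": "Organization", "name": publisher},
    }
    return ('<script type="application/ld+json">'
            + json.dumps(data)
            + "</script>")

snippet = review_to_jsonld(eal_review, "Your Company")
```

Because the source of truth is your database and the output is standard Schema.org, migrating platforms means re-running this render step, not rebuilding your reputation.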
January 2026 as Strategic Inflection Point
In January 2026, three major platforms launched agent commerce systems within seven days. This convergence marks an inflection point.
First-mover advantage exists: Businesses that implement Entity Asset Layer now will gain “computational trust” from AI agents—a form of learned behaviour where agents preferentially recommend proven-successful entities.
Sites with EAL: Agent recommends → successful transaction → increased trust → higher future recommendations → compounding advantage
Sites without EAL: Agent cannot extract data → skipped in recommendations → never builds trust → permanent invisibility
The strategic question: Can your business afford to remain platform-dependent whilst competitors build sovereign Entity Assets and gain computational trust?
Building the Future with Open Source EAL
The Entity Asset Layer concept is powerful, but it needs concrete implementation. I’m building an open source EAL reference implementation that provides:
Core Features:
- Independent storage layer (database-agnostic)
- Schema.org compliant asset management (Product, Review, Organization, Person)
- REST API for platform integration
- JSON-LD generation for HTML embedding
- Validation tools for EAL markup
- Migration utilities (extract from Amazon, Shopify, etc.)
Why Open Source?
Entity Assets are too important to be locked in proprietary systems. An open source EAL implementation provides:
- Vendor neutrality: No platform lock-in for the solution itself
- Community validation: Proven patterns from diverse implementations
- Broad adoption: Lower barrier to entry accelerates ecosystem growth
- Transparent governance: Asset ownership remains with organizations, not vendors
Who Should Join?
- Developers: Building core EAL infrastructure, API design, storage patterns
- Platform architects: Designing integration patterns for CMS/commerce platforms
- Business stakeholders: Defining asset schemas and governance models
- Standards advocates: Contributing to emerging EAL specifications
Get Involved:
If you’re interested in building sovereign, portable Entity Assets that work across any AI agent or commerce platform, let’s collaborate. Contact me at [email protected] or visit https://allabout.network to join the open source EAL project.
This is the infrastructure layer that will define how businesses maintain ownership in the agent-mediated future. First-movers who help build this foundation will shape the standard.
Why MX Prevents Hallucination
When agents encounter incomplete context, they must “think” - generating confident answers by guessing based on statistical co-occurrence patterns. Without clear structured data (Schema.org, semantic HTML) providing complete context, they fabricate details that seem plausible but are incorrect.
MX is the act of adding metadata and instructions so AI doesn’t have to think. When all context is explicitly present, hallucination decreases dramatically.
Real-World Examples
Stage 1 Failure (Discovery): Your site uses heavy JavaScript rendering with no server-side fallback. Training crawlers see empty HTML shells. You don’t exist in agent knowledge bases. Agents recommend competitors exclusively.
Stage 2 Failure (Citation): Your pricing page has figures embedded in paragraphs without Schema.org markup. When asked “How much does Product X cost?”, agents hallucinate prices based on statistical patterns from similar products, quoting incorrect figures with confidence.
Stage 4 Failure (Price Understanding): The Danube cruise example - £2,030 becomes £203,000 due to decimal separator confusion combined with missing Schema.org PriceSpecification with currency codes.
Stage 5 Failure (Goal Completion): Your checkout uses visual-only state changes (spinners, colour changes) with no DOM-reflected state. Agents cannot track progress, don’t know if submission succeeded, time out and abandon.
MX Applies to Every Web Goal
MX is universal - it applies to every type of web asset with every type of goal:
- Ecommerce: Purchase products, complete checkout
- Lead generation: Complete contact forms, request demos
- Information delivery: Inform readers of product recalls, safety information
- Trust building: Establish credibility, demonstrate expertise
- Content distribution: Download whitepapers, register for events
- Any other goal: Whatever action the website is designed to drive
When agents hallucinate or fail to extract accurate information, they move to competitors with better MX implementation.
Addressing Stakeholder Concerns
“But We Already Do SEO”
SEO and MX are different disciplines with different goals. SEO optimises for search engine ranking algorithms. MX optimises for AI agent goal completion.
The relationship:
- SEO focuses on getting found in search results
- MX focuses on being cited, compared, and used by agents
- SEO targets ranking signals (backlinks, keywords, page speed)
- MX targets semantic clarity (Schema.org, explicit state, unambiguous structure)
Yes, there’s overlap. Both benefit from semantic HTML and structured data. But the overlap is incidental, not intentional. Implementing MX for agent compatibility automatically improves SEO as a side effect. But implementing SEO does not automatically create agent-compatible structure.
Example: Your SEO is excellent - you rank first for “enterprise CRM software”. But your pricing page embeds costs in paragraphs without Schema.org markup. Agents cannot extract pricing reliably. They hallucinate figures or skip your site in comparisons. You win the search ranking but lose the agent citation.
MX is not “better SEO” - it’s a distinct discipline that shares some technical foundations with SEO whilst serving a different purpose.
Common Objections and Responses
Objection: “AI will get better and figure this out”
Response: Yes, frontier models improve. But 40% of models have under 100M parameters. You cannot detect which agent visits your site. Design for the worst agent creates universal compatibility. Waiting means losing to competitors who implement MX now and gain computational trust.
Objection: “This is too much work for uncertain ROI”
Response: Adobe’s Holiday 2025 data shows AI referrals up 700% in retail, 500% in travel, with 30% higher conversion rates than human traffic. Three major platforms launched agent commerce in one week (January 2026). The ROI is measurable now, not theoretical.
Objection: “Our users are human, not AI agents”
Response: Your users ask ChatGPT about your products. They use Copilot to compare your services. They run agents to check your availability. The interface is invisible to them - they don’t see “AI” or “human” modes, they just get results. If agents cannot parse your site, your brand disappears from their consideration set.
Objection: “We block bots in robots.txt”
Response: You’re blocking discovery. Training crawlers cannot index your content. Agents don’t know you exist. You’ve removed yourself from their knowledge base entirely. Competitors who allow crawling gain all the agent referrals whilst you get none.
Budget Justification
What it costs:
Implementation scope varies significantly based on site size, complexity, existing infrastructure, and team resources. A simple brochure site needs far less work than a large ecommerce platform with dynamic pricing and complex checkout flows.
Key factors affecting scope:
- Current state of semantic HTML and structured data
- Number of page types requiring Schema.org implementation
- Complexity of interactive features needing DOM state refactoring
- Existing technical debt and architectural constraints
- Team familiarity with MX patterns
What it returns:
- Computational trust from agents (first-mover advantage)
- Higher conversion rates from agent-referred traffic (30% uplift per Adobe data)
- SEO improvements as automatic side effect
- WCAG compliance improvements as automatic side effect
- Future-proof structure as agent commerce becomes standard
Cost of inaction:
- Zero visibility in agent recommendations
- Loss of agent-referred traffic (growing 500-700% year-over-year)
- Competitors gain computational trust whilst you remain invisible
- No analytics visibility into what you’re losing
- No recovery path once agents learn to skip your site
The question isn’t “Can we afford to do this?” - it’s “Can we afford not to?”
Organisational Implementation
Who Owns MX?
MX sits at the intersection of multiple disciplines. Ownership depends on your organisation’s structure, but typically requires coordination across:
Primary ownership candidates:
- Content Operations - If you have a strong content ops team managing semantic structure and metadata
- Development/Engineering - If implementation is primarily technical (DOM structure, Schema.org, server-side rendering)
- Digital Experience - If you have a team managing the full digital customer journey
- Product Management - If MX is treated as a product feature enabling new capabilities
Shared responsibility model:
- Content Operations builds semantic structure
- Development implements MX patterns in publication layer
- Marketing measures agent referral traffic and conversions
- UX verifies patterns don’t degrade human experience
- QA validates agent compatibility alongside functional testing
The worst approach: treating MX as “someone else’s problem” that falls through organisational gaps.
Integration with Existing Workflows
DevOps Integration:
MX requirements become part of standard deployment checks:
- Schema.org validation in CI/CD pipeline
- Semantic HTML linting alongside code quality checks
- DOM state verification in automated testing
- Agent-compatibility testing alongside browser testing
Example: Add Schema.org validation to your build process. If Product pages lack proper PriceSpecification markup, the build fails - just like it would fail for broken tests or linting errors.
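A minimal sketch of such a gate, assuming pages are available as HTML strings at build time - not a real validator, just a placeholder for one: it extracts every JSON-LD block and reports any Product whose offer omits priceCurrency.

```python
import json
from html.parser import HTMLParser

class JsonLdExtractor(HTMLParser):
    """Collects parsed JSON-LD blocks from an HTML document."""
    def __init__(self):
        super().__init__()
        self.blocks, self._in_jsonld = [], False

    def handle_starttag(self, tag, attrs):
        self._in_jsonld = (tag == "script"
                           and dict(attrs).get("type") == "application/ld+json")

    def handle_data(self, data):
        if self._in_jsonld:
            self.blocks.append(json.loads(data))

    def handle_endtag(self, tag):
        if tag == "script":
            self._in_jsonld = False

def price_errors(html: str) -> list:
    """Return one error per Product whose offer lacks priceCurrency."""
    extractor = JsonLdExtractor()
    extractor.feed(html)
    errors = []
    for block in extractor.blocks:
        if block.get("@type") == "Product":
            offer = block.get("offers", {})
            spec = offer.get("priceSpecification", offer)
            if "priceCurrency" not in spec:
                errors.append(f"{block.get('name', '?')}: missing priceCurrency")
    return errors

# A page that should fail the build: price present, currency missing.
PAGE = ('<script type="application/ld+json">'
        '{"@type": "Product", "name": "Widget", '
        '"offers": {"@type": "Offer", "price": 9.99}}'
        '</script>')
problems = price_errors(PAGE)
# In CI: raise SystemExit(1) if problems else exit cleanly.
```

Wired into the pipeline, a non-empty error list fails the build exactly like a broken test would.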
Content Operations Integration:
MX patterns inform content creation workflows:
- Content templates include Schema.org requirements
- Editorial guidelines specify fact-level clarity standards
- Publishing checklists verify agent-compatible structure
- CMS fields map directly to Schema.org properties
Example: Your CMS product page template has required fields for price, currency, availability. These fields automatically generate correct Schema.org markup. Content creators cannot publish without completing agent-required metadata.
Marketing Integration:
MX becomes part of campaign measurement:
- Track agent-referred traffic separately from human traffic
- Measure conversion rates by traffic source (agent vs human)
- Monitor which products/pages agents cite most frequently
- A/B test MX implementations to optimise agent engagement
Example: Google Analytics segment showing agent referrals (ChatGPT, Perplexity, Claude, etc.) with conversion tracking. You discover agents prefer Product A over Product B despite equal human traffic - this informs inventory and marketing decisions.
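The segmentation itself can start as simply as a referrer-host lookup. A sketch with assumed referrer domains - real values vary by platform and change over time, so treat this set as a starting point to maintain, not a definitive registry:

```python
# Assumed referrer hosts for AI platforms; verify and extend for your stack.
AGENT_REFERRERS = {
    "chat.openai.com", "chatgpt.com", "perplexity.ai",
    "claude.ai", "copilot.microsoft.com", "gemini.google.com",
}

def traffic_segment(referrer_host: str) -> str:
    """Classify a session's referrer host as 'agent' or 'human'."""
    host = referrer_host.lower().removeprefix("www.")
    return "agent" if host in AGENT_REFERRERS else "human"
```

Feeding this segment into your analytics lets you compare agent and human conversion rates per page, which is the measurement the campaign integration above depends on.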
Cross-functional collaboration:
MX requires coordination, not silos:
- Weekly sync between Content Ops and Development on Schema.org implementation
- Quarterly reviews of agent traffic patterns with Marketing
- UX participates in agent compatibility testing
- QA validates both human and agent user journeys
The goal: MX becomes standard practice, not a special initiative requiring executive intervention.
Complete MX Resource Package
Two Books for Different Needs
“MX: The Handbook” (300-400 pages) - A practical implementation guide for developers, UX designers, content strategists, product managers, and executives. It offers step-by-step platform-specific implementations, content strategies, testing approaches, and patterns across major CMS platforms. Accessible enough for decision-makers, detailed enough for implementers.
“The MX Bible” (800 pages) - The definitive technical reference for architects, consultants, and serious practitioners who need complete coverage of Machine Experience. This is the book for those implementing MX at scale or establishing organisational practices.
13 Appendices - Freely Available Online
61,600 words of implementation guides, code examples, and proven patterns - all freely accessible.
Implementation Guides:
- Appendix A: Implementation Cookbook
- Appendix B: Proven Lessons
- Appendix C: AI-Friendly HTML Guide (3,000 lines)
- Appendix D: AI Patterns Quick Reference
- Appendix E: Implementation Roadmap
- Appendix F: Common Page Patterns
Resources and References:
- Appendix G: Resource Directory
- Appendix H: Live llms.txt
- Appendix I: Pipeline Failure Case Study
- Appendix J: Industry Developments
- Appendix K: Proposed AI Metadata Patterns
- Appendix L: Index of Metadata
- Appendix M: Anti-Patterns Catalog
Distribution model: All appendices published openly on the web. Books provide context, appendices provide free implementation guides. Lower barrier to entry with “try before you buy” model.
Take Action Now
It’s January 2026. Google, Microsoft, and Amazon have all announced agent-powered purchasing features launching this quarter. This isn’t a distant future - it’s happening now.
First-mover advantage exists. Sites that work early become trusted sources that agents return to repeatedly. Sites that fail at any stage of the agent journey disappear from recommendations with no analytics visibility and no recovery opportunity.
Get Started
- Start with free resources: Access the 13 appendices at allabout.network
- Implement systematically: Follow “MX: The Handbook” for platform-specific guidance
- Master the details: Dive into “The MX Bible” for complete technical coverage including Entity Asset Layer strategies
- Build sovereign assets: Start implementing EAL patterns to make certain your reviews, product data, and customer knowledge remain portable across any platform
Contact
For professional implementation services, website analysis, or questions about Machine Experience:
- Email: [email protected]
- Website: https://allabout.network
The same principles that improve discoverability by AI agents also improve search engine rankings and accessibility compliance - one implementation serves multiple audiences.
Design for machines with zero-tolerance requirements, and you automatically create structure that benefits everyone.
MX is the act of adding metadata and instructions so AI doesn’t have to think.
MX is the practice: HTML is the delivery mechanism.