MX: A New Role

The Missing Discipline in Web Development

We’ve spent decades building disciplines for the web. User Experience (UX) optimises for humans. Search Engine Optimisation (SEO) optimises for crawlers. Accessibility (a11y) optimises for users with disabilities. These three disciplines have shaped modern web development, created professional roles, and established best practices.

But there’s a fourth visitor type we haven’t optimised for: AI agents acting on behalf of humans.

These agents are visiting your site right now. People ask ChatGPT about your products, use Copilot to compare your services, and run Perplexity to check your availability. They’re not science fiction or distant future speculation - they’re active traffic, making decisions about whether to recommend you or skip you entirely.

The problem? These visitors are invisible.

Unlike human users who show up in analytics, persist through poor UX, and give feedback, AI agents arrive once, assess your site’s structure, and either succeed or fail silently. When they succeed, they build computational trust in your site and return for future queries. When they fail, they disappear from recommendations permanently. No analytics warning. No second chance. No angry email explaining what went wrong.

The business impact is immediate. Adobe’s Holiday 2025 data shows AI referrals surged dramatically - up 700% in retail, 500% in travel. Conversion rates from AI-referred users now lead human traffic by 30%. Agent-mediated commerce moved from experimental to revenue driver in a single quarter. If agents can’t extract your pricing, understand your offering, or complete your checkout flow, they recommend competitors who’ve implemented the explicit structure they require.

This gap in web development practice has a name: Machine Experience (MX).

What Machine Experience Actually Means

Machine Experience (MX) is the practice of adding metadata and instructions to internet assets so AI agents don’t have to think. When AI has to “think” (infer meaning from incomplete context), it must generate confident answers even when context is missing - leading to hallucination. MX ensures all context is explicitly present in your website’s structure.

Let me be clear about what MX is NOT: it is not SEO rebranded, not accessibility under a new name, and not a performance checklist.

So what is MX?

MX is the master discipline that improves all of those as side effects.

HTML informed by MX is the publication point that ensures context built in Content Operations reaches agents at the delivery point. When you implement MX patterns - semantic HTML, structured data, explicit state - you automatically improve SEO (crawlability), accessibility (screen reader compatibility), and performance (simpler DOM structures).

[Diagram: MX relationship - MX improves all disciplines as side effects]

The key insight: One implementation serves multiple audiences. When you add semantic HTML for AI agents, screen readers benefit automatically. When you add Schema.org for agent comparison, search engines surface rich results automatically. When you make state explicit for agent confidence, keyboard users gain clearer navigation automatically.

This isn’t about creating separate experiences. It’s about fixing the underlying structure so it works for everyone - machines and humans alike.

MX: The Handbook

These AI agents are “invisible” for two distinct reasons:

1. Invisible to site owners: They blend into analytics logs. They visit once, assess structure, and either succeed or disappear. No persistent patterns to track. No cookies to monitor. No user journeys to analyse. They come, they evaluate, they leave. If they fail, you never know why. If they succeed, you don’t know they visited at all.

2. The interface is invisible to them: They cannot see animations, colour coding, toast notifications, or loading spinners. Visual hierarchy built with CSS? Invisible. Brand messaging conveyed through imagery? Invisible. Implicit state indicated by colour changes? Invisible.

Modern AI browsers (ChatGPT, BrowserOps, Comet, Strawberry, Neo, DIA) do identify themselves as bots in their User-Agent strings, but these strings cannot be trusted - they’re trivially spoofed by any developer. Some agents are browser extensions running alongside human users. Others are Playwright-driven automation frameworks controlled by AI scripts. Some are AI browsers accessing sites directly. Site owners can no longer reliably distinguish between human visitors and AI agents.

The traffic looks identical in analytics, but the visitor’s capabilities and limitations differ fundamentally.

Consider the tolerance difference between human users and AI agents.

This zero-tolerance characteristic makes MX more demanding than accessibility. Whilst accessibility users often persist through poor implementations (finding workarounds, asking for assistance), agents simply disappear. One failure, one missing semantic element, one ambiguous state indicator - and they’re gone.

The agents visiting your site today represent billions in potential revenue. Adobe’s data shows they’re not experimental traffic - they’re primary traffic. Conversion rates now favour AI-referred users by 30%. The question isn’t whether to optimise for agents. The question is whether you can afford not to whilst competitors build agent-compatible structure.

What Real Audit Data Reveals

I’ve audited dozens of professional websites over recent months using automated tools that check for agent compatibility patterns. The findings reveal consistent gaps across organisations that pride themselves on digital excellence.

Common patterns across professional sites:

[Diagram: the MX gap - widespread MX gaps from recent audits]

Semantic HTML (70% missing): Most sites lack proper <main>, <nav>, and <article> elements. Instead, they use generic <div> containers with CSS classes for visual hierarchy. Agents parsing served HTML (before JavaScript executes) cannot distinguish navigation from content from sidebars. The structure that humans see visually doesn’t exist in the HTML.
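The fix is structural rather than visual. A minimal sketch of the contrast (class names invented for illustration):

```html
<!-- What most audited sites serve: generic containers, meaning carried only by CSS -->
<div class="top-bar">...</div>
<div class="content">...</div>
<div class="side">...</div>

<!-- The same layout with semantic elements agents can parse before JavaScript runs -->
<header>
  <nav aria-label="Primary">...</nav>
</header>
<main>
  <article>...</article>
</main>
<aside>...</aside>
```

To a human both versions look identical; to an agent parsing served HTML, only the second has a distinguishable navigation, main content area, and sidebar.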

llms.txt file (85% adoption gap): This emerging standard provides AI agents with structured guidance about site organisation, content types, and key resources. It acts as a “README for AI agents.” Most professional sites haven’t implemented it yet, forcing agents to crawl entire site structures to understand organisation - but many of those sites block agent crawlers entirely.
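A minimal llms.txt sketch following the emerging convention (a markdown file served at the site root; the name, summary, and URLs below are placeholders):

```
# Example Company

> Short summary of who we are and what this site offers.

## Products
- [Catalogue](https://example.com/products): full product list with pricing

## Company
- [About](https://example.com/about): background and contact details
```

The file gives agents an overview of site organisation without forcing a full crawl.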

robots.txt blocking (60% block major agents): Sites routinely block GPTBot, ClaudeBot, Amazonbot, and other AI crawlers through robots.txt directives or services like Cloudflare. The irony is stark: organisations want AI-mediated recommendations but actively prevent agents from accessing the content they’d need to make those recommendations.
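Permitting the crawlers named above takes a few lines of robots.txt. A sketch, assuming you want AI agents to see public content whilst keeping private paths blocked (adjust the paths to your own policy):

```
# Explicitly allow the major AI crawlers
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: Amazonbot
Allow: /

# Default policy for all other crawlers
User-agent: *
Disallow: /admin/

Sitemap: https://example.com/sitemap.xml
```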

Schema.org gaps (55% missing or partial): Structured data exists on some pages but not others. Product pages have pricing Schema.org, but comparison tables lack it. Event pages have dates but not registration URLs. The inconsistent implementation forces agents to guess which pages contain authoritative data.
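Closing the event-page gap means pairing the date with the registration URL in one block. A sketch using Schema.org's Event type (names, dates, and URLs are hypothetical):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Event",
  "name": "Example Product Launch",
  "startDate": "2026-04-15T09:00:00+01:00",
  "location": {
    "@type": "VirtualLocation",
    "url": "https://example.com/launch"
  },
  "offers": {
    "@type": "Offer",
    "url": "https://example.com/launch/register",
    "price": "0",
    "priceCurrency": "GBP",
    "availability": "https://schema.org/InStock"
  }
}
</script>
```

With the registration URL inside the offers property, an agent can answer "when is it and where do I sign up?" from a single block.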

Explicit state (75% missing): Form validation errors display as visual colour changes. Checkout progress shows via CSS-animated steppers. Button states indicate loading with spinners. None of this state appears in HTML attributes where agents can read it. State exists visually but not semantically.

These aren’t edge cases or budget-constrained sites. These patterns appear across organisations with sophisticated digital teams, substantial web budgets, and public commitments to digital excellence. The gap isn’t about resources. It’s about awareness.

The patterns that confuse agents also harm accessibility users. A missing <main> element forces screen reader users to navigate the entire page to find primary content. Missing alt text blocks both agents and blind users. Visual-only state indicators exclude both agents and keyboard users. The convergence between MX needs and accessibility needs isn’t coincidental - both groups lack access to visual design cues.

How AI Agents Actually Navigate Websites

When AI agents interact with your website, they follow a predictable 5-stage journey. Each stage has specific technical requirements. Miss any stage, and the entire chain breaks.

[Diagram: the 5-stage agent journey - miss any stage and the entire chain breaks]

Stage 1 - Discovery: Can agents find you? This requires crawlable structure (robots.txt compliance, sitemap.xml), semantic HTML markup, and server-side rendering for JavaScript-heavy content. If your robots.txt blocks GPTBot, ClaudeBot, or Amazonbot, agents never discover you exist. Zero recommendations. Zero citations. Complete invisibility.
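Discovery starts with the basics. A minimal sitemap.xml in the standard Sitemaps protocol format (URLs and dates are placeholders):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://example.com/</loc>
    <lastmod>2026-01-15</lastmod>
  </url>
  <url>
    <loc>https://example.com/products</loc>
    <lastmod>2026-01-15</lastmod>
  </url>
</urlset>
```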

Stage 2 - Citation: Can agents confidently cite you? This requires fact-level clarity (each statistic, definition, concept needs standalone clarity), structured data (Schema.org JSON-LD), and citation-worthy content architecture. If agents cannot extract clear facts, they hallucinate details or skip your site entirely in favour of competitors with clearer structure.

Stage 3 - Compare: Can agents understand your offering? This requires JSON-LD microdata at the pricing level, explicit comparison attributes (product features, specifications), and semantic HTML that agents can parse for feature extraction. If comparison data is visual-only or requires human inference, agents skip you in comparison lists.

Stage 4 - Pricing: Can agents understand your costs? This requires Schema.org types (Product, Offer, PriceSpecification), unambiguous pricing structure with currency specification (ISO 4217 codes), and validation to prevent decimal formatting errors. Without proper metadata, agents misunderstand costs by orders of magnitude - the Danube cruise error, where €2,030 became €203,000 because the European decimal format (€2.030,00) was misinterpreted.
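The ambiguity disappears when the price is a machine-readable value with an ISO 4217 currency code rather than formatted display text. A sketch (product name and figures illustrative):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Danube Cruise, 7 Nights",
  "offers": {
    "@type": "Offer",
    "priceSpecification": {
      "@type": "PriceSpecification",
      "price": "2030.00",
      "priceCurrency": "EUR"
    },
    "availability": "https://schema.org/InStock"
  }
}
</script>
```

The locale-formatted "€2.030,00" can stay in the visible page; agents read the unambiguous value from the metadata.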

Stage 5 - Confidence: Can agents complete checkout? This requires no hidden state buried in JavaScript (state must be DOM-reflected), explicit form semantics (<button>, not <div class="btn">), persistent feedback (role="alert" for important messages), and data-state attributes for checkout progress tracking. If state is visual-only, agents cannot see what buttons do, cannot track progress, and abandon carts.
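In markup, DOM-reflected state looks like this. A sketch (the data-state values are a convention, not a standard; the ARIA attributes are standard):

```html
<!-- Loading state readable from attributes, not just a spinner -->
<button type="submit" data-state="loading" aria-busy="true" disabled>
  Place order
</button>

<!-- Validation error as persistent, announced text - not a red border -->
<p role="alert">Please enter a valid email address.</p>

<!-- Checkout progress reflected in the DOM, not only in a CSS stepper -->
<ol data-state="step-2-of-3" aria-label="Checkout progress">
  <li>Delivery address</li>
  <li aria-current="step">Payment details</li>
  <li>Confirmation</li>
</ol>
```

The same attributes that let an agent track progress also announce state changes to screen readers.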

The catastrophic failure principle applies: miss any stage and the entire commerce chain breaks. Sites that successfully complete the full journey gain computational trust - agents return for more purchases through learned behaviour. Sites that fail at any stage disappear from the agent’s map permanently.

Unlike humans who persist through bad UX and can be won back with improvements, agents provide no analytics visibility and offer no second chance. First-mover advantage exists. Sites that work early become trusted sources. Sites that fail early become invisible.

Served HTML vs Rendered HTML

Most companies test their websites the way humans experience them: open a browser, wait for JavaScript to execute, interact with the visual interface. This tests the rendered HTML state - after JavaScript runs, after CSS applies, after dynamic updates complete.

But many AI agents don’t see rendered HTML. They see served HTML - the static HTML sent from your server before JavaScript executes.

[Diagram: served vs rendered HTML - two states, two audiences]

Served HTML is what server-side agents see.

If your site requires JavaScript to display products, show prices, or render navigation, server-side agents see nothing. Your carefully crafted user experience is invisible to them.

Rendered HTML is what browser agents see.

Even browser agents need semantic structure. They can see everything humans see, but they parse structure like server-side agents. Visual design cues (colour, spacing, animation) don’t help agents understand content purpose.

The practical implication: Both states need MX patterns.

Serve semantic HTML so server-side agents can parse structure. Reflect state in DOM attributes so browser agents can track progress. Don’t assume JavaScript execution. Don’t rely on visual-only indicators. Design for the worst-case agent (served HTML, no JavaScript), and you automatically support all agents.

Most companies only test rendered state because that’s what humans experience. But if you want agent compatibility, you must test both states. The Web Audit Suite (described below) analyses both served and rendered HTML, identifying patterns that work for all agent types.

What the Web Audit Suite Actually Measures

The Web Audit Suite is a comprehensive Node.js-based website analysis tool that audits entire sites across six dimensions simultaneously:

1. SEO Optimisation

2. Performance Metrics

3. WCAG 2.1 Accessibility

4. Security Headers

5. Content Quality

6. LLM Suitability (the unique MX component)

This is where the tool differs from traditional SEO or accessibility audits. LLM Suitability measures how well your site works for AI agents.

Served HTML metrics (for ALL agents, including CLI and server-based):

Rendered HTML metrics (for browser agents):

Priority-Based Pattern Detection:

The tool categorises findings by implementation priority:

Report Generation:

The tool generates 19+ reports across multiple formats:

The tool operates through a four-phase architecture: Phase 0 (robots.txt compliance checking), Phase 1 (URL collection from sitemap), Phase 2 (concurrent data collection with browser pooling), Phase 3 (report generation). The results.json file serves as the single source of truth - all reports generate from this file, allowing report regeneration with different thresholds without re-analysing sites.

The LLM Suitability component is what makes this tool unique. Traditional SEO audits check for ranking signals. Accessibility audits check for WCAG compliance. This tool checks whether AI agents can actually extract information, understand context, and complete desired actions on your site.

The tool is available as a service launching soon after the MX-Bible book publication (April 2026). Comprehensive site analysis provides executive reports with actionable recommendations, priority-based implementation guidance, and ongoing monitoring to detect regressions over time.

The Convergence Principle

Here’s the key insight that makes MX commercially viable: patterns that help AI agents also help accessibility users.

Both groups need semantic HTML because both lack access to visual design cues. Both need explicit state attributes because both cannot infer meaning from colour changes or animations. Both need structured data because both parse content programmatically rather than visually.

The convergence isn’t coincidental. It’s fundamental.

AI agents are machines: They parse HTML structure, extract metadata, and process text content. They cannot “see” visual hierarchy, colour coding, or spatial relationships. They need semantic elements (<button>, not <div class="btn">) because they parse structure, not appearance.

Screen reader users are blind: They parse HTML through assistive technology, extract meaning from semantic markup, and navigate by landmarks. They cannot see visual hierarchy, colour coding, or spatial relationships. They need semantic elements for exactly the same reason.

The tolerance differs fundamentally:

Accessibility users persist: They’ll click around until they find the right button. They’ll use browser search to locate content. They’ll ask for help or try again later. They may leave negative feedback explaining what went wrong. Their persistence creates opportunities to improve and win them back.

AI agents fail silently: One missing semantic element and they’re gone. One ambiguous state indicator and they skip you. No error logs. No analytics signal. No second chance. Their zero-tolerance parsing creates immediate commercial consequences.

This tolerance difference leads to the MX-first principle: Design for machines with zero-tolerance requirements, and you automatically create structure that benefits accessibility users as a side effect.

One implementation serves multiple audiences:

  1. AI agents (primary focus) - Cannot infer meaning, require explicit structure for any interaction
  2. Screen reader users (side benefit) - Navigate more efficiently with semantic landmarks and clear hierarchy
  3. Keyboard users (side benefit) - Tab through interactive elements with proper focus management
  4. Search engines (side benefit) - Parse structured data for rich results
  5. All users (side benefit) - Faster load times, clearer interfaces, better mobile experiences

The convergence principle means MX isn’t an additional cost centre. It’s a strategic multiplier. Implement semantic HTML for agents, and accessibility improves automatically. Add Schema.org for agent comparison, and search engines surface rich results automatically. Make state explicit for agent confidence, and keyboard users gain clearer navigation automatically.

This isn’t about creating separate experiences. It’s about fixing the underlying structure so it works for everyone - machines and humans alike. The business case (agent commerce, conversions, revenue) drives the technical requirements. The accessibility benefits are welcome side effects, not the primary driver.

Why This Matters Right Now

The timeline compressed dramatically between 2024 and 2026. What industry analysts predicted would take 12-24 months to reach mainstream adoption happened in 6-9 months or less.

January 2026 convergence: Amazon, Microsoft, and Google launched agent commerce systems within a single week.

This convergence signals an industry inflection point. Agent-mediated commerce moved from experimental to infrastructure. The technology isn’t coming - it’s here.

The data confirms commercial reality. Adobe’s Holiday 2025 figures show AI referrals up 700% in retail and 500% in travel, with conversion rates from AI-referred users leading human traffic by 30%.

AI-referred users spend more time on sites, view more pages, and convert at higher rates than direct human traffic. The commercial imperative is clear: if agents can’t extract your information, they recommend competitors who’ve implemented the explicit structure they require.

First-mover advantage exists: Sites that work early become trusted sources that agents return to repeatedly. This creates a computational trust feedback loop:

  1. Agent recommends Entity A → successful transaction
  2. Agent increases trust score for Entity A
  3. Next similar query → higher probability of recommending Entity A again
  4. Pattern compounds over time

Sites that fail early disappear from recommendations with no recovery opportunity. Unlike humans who persist through bad UX and can be won back with improvements, agents provide no analytics visibility and offer no second chance.

The MX-Bible (launching April 2026) documents this convergence, provides implementation patterns across 13 chapters and 14 appendices, and establishes MX as the strategic discipline for agent-compatible web development. The book isn’t speculation about future possibilities - it’s documentation of patterns needed right now for platforms launching this quarter.

The timeline is compressed. Within two years (by January 2028), human browsing will likely be the exception rather than the norm. Organisations that build agent-compatible structure now will dominate agent-mediated interactions. Those that remain dependent on visual-only interfaces will face insurmountable catch-up costs.

Can you afford to wait whilst competitors build computational trust?

Getting Started with MX

MX applies to ANY web goal, not just ecommerce. Whether you’re selling products, informing readers about product recalls, establishing credibility, collecting contact information, or enabling downloads, agents need explicit structure to complete those actions.

Goal completion varies by industry:

Without MX, fewer AI agent activities complete those actions - regardless of what those actions are.

Practical checklist for getting started:

1. Start with semantic HTML: replace generic <div> containers with <main>, <nav>, <article>, and <button> elements.
2. Add structured data: embed Schema.org JSON-LD for products, pricing, and events.
3. Make state explicit in the DOM: reflect loading, error, and progress states in HTML attributes, not only in CSS.
4. Create an llms.txt file for agent discovery.
5. Test both served and rendered states - don’t assume JavaScript execution.
6. Run comprehensive audits.

These aren’t hypothetical future requirements. These are patterns needed right now for platforms launching this quarter. The Web Audit Suite (launching soon after book publication in April 2026) provides comprehensive analysis across all six dimensions, priority-based recommendations, and ongoing monitoring to ensure MX patterns remain intact through deployments and content updates.

What’s Next

The Web Audit Suite service becomes available soon after the MX-Bible book launches in April 2026. The service provides comprehensive site analysis, executive reports with actionable recommendations, priority-based implementation guidance, and ongoing monitoring to detect regressions over time.

The MX-Bible (launching April 2026) provides complete MX patterns and implementation guidance across 13 chapters, with 14 appendices freely available online.

The book isn’t speculation about future possibilities. It’s documentation of patterns needed right now for platforms launching this quarter. The timing is deliberate: January 2026 convergence (Amazon, Microsoft, Google agent commerce launches) compressed the timeline from 12-24 months to 6-9 months or less.

Follow MX developments:

This is about collaboration, not criticism. When we provide well-structured inputs (semantic HTML, structured metadata, explicit state), AI agents perform optimally. Hallucinations decrease. Accuracy increases. Commerce transactions complete successfully. Better-structured inputs produce better outputs for everyone: users, agents, and businesses alike.

MX is the missing piece in web development. Not an optional extra. Not a future concern. A discipline needed right now for platforms launching this quarter.

The question isn’t whether to optimise for agents. The question is whether you can afford not to whilst competitors build agent-compatible structure and gain computational trust.
