
What's Missing from AI
An open-ended question - https://allabout.network/ai
Introduction
Tom Cranstoun
AI lacks core human abilities despite rapid advancement
Welcome to this exploration of what's missing from AI systems. Despite remarkable progress, today's AI - particularly Large Language Models - lacks several critical capabilities that limit its effectiveness in enterprise settings. This presentation addresses these gaps and offers pragmatic solutions for organizations of various sizes.
Establish your credibility as "The AEM Guy" and digital transformation expert. Use hand gestures to emphasize each missing element. Notice audience reactions to identify which limitations resonate most with this group. Prepare to spend more time on those areas during Q&A.
The "AI" Misnomer
Terms create unrealistic expectations
"AI" implies intelligence but LLMs just predict words
The term "AI" itself is misleadingly broad, suggesting human-like intelligence when it merely refers to technologies like Large Language Models that predict word sequences without true comprehension. Companies use "AI" as a marketing buzzword, even when machine learning is minimally involved, creating inflated expectations that exacerbate implementation challenges.
Begin by addressing the terminology issue head-on. This frames the entire discussion in terms of realistic capabilities rather than sci-fi expectations. Ask audience members for examples of "AI washing" they've encountered in product marketing. This establishes your credibility as someone focused on substance over hype.
The Fundamental Flaws in AI
Despite impressive headlines about AI capabilities, significant limitations continue to hinder these systems' potential and create risks for organizations.
Hype exceeds real-world capabilities, creating organizational risks
AI's capabilities consistently fall short of the hype surrounding them. Organizations often invest based on theoretical possibilities rather than practical implementations, creating exposure to disappointment and wasted resources. Understanding these limitations is crucial for developing effective AI implementations that truly meet organizational needs.
Reference recent headline-making AI announcements and contrast with implementation realities. If possible, mention specific examples relevant to the audience's industry. Ask: "Has anyone experienced gaps between AI promises and delivery?" Use their examples to illustrate your points.
The Hallucination Problem
A fundamental design issue, not a mere glitch
AI fabricates with confidence even in critical domains
AI systems don't simply admit ignorance when faced with knowledge gaps—they invent facts, statistics, citations, and entire scenarios with absolute confidence. This isn't an occasional glitch; it's baked into their fundamental design. They're built to provide answers, not acknowledge limitations. In specialized domains like law, medicine, or finance, this flaw becomes potentially catastrophic, with systems recommending treatments based on fabricated research or citing nonexistent legal precedents.
Share a personal experience with hallucination, ideally something humorous but illustrative. Emphasize that hallucination is a design issue, not a bug. Ask if audience members have experienced hallucinations in their AI implementations. Be ready with domain-specific examples (legal, medical, financial).
Toxic Training Data
Internet-scale datasets contain problematic content
LLMs ingest misinformation, bias, and extremism
Most large language models are trained on massive datasets scraped from the public internet—a veritable cesspool of misinformation, extremism, conspiracy theories, and every form of human bias imaginable. Even when developers attempt to filter this data, the sheer volume makes comprehensive curation effectively impossible. The resulting systems inevitably mirror these problematic elements, presenting fringe viewpoints as mainstream or outdated information as current fact. Model updates are released unpredictably, driven by competition rather than structured release cycles.
This slide often generates strong reactions. Be prepared to acknowledge different perspectives without getting drawn into political discussions. Focus on the business implications of unpredictable behavior rather than specific content issues. Have 1-2 examples of how toxic data manifested in major AI systems.
The Memory Problem
Groundhog Day: no persistent knowledge retention
No retention across sessions prevents knowledge building
AI systems lack persistent memory across interactions. Each conversation starts fresh with minimal retention of previous exchanges beyond what's explicitly included in the prompt. This creates a perpetual "Groundhog Day" effect where systems can contradict themselves across sessions without any awareness of inconsistency. For organizations seeking to build institutional knowledge through AI, this fundamental limitation presents a significant obstacle to developing the kind of relationship-building that characterizes effective human interactions.
Demonstrate the memory problem with a simple example: "If I told ChatGPT my name yesterday and ask it today what my name is, it won't know." Connect this to organizational challenges like customer service continuity and knowledge management. Mention RAG and vector databases as partial solutions, but emphasize their limitations.
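The "forgotten name" demonstration above can be made concrete with a short sketch. This is an illustrative simulation, not any specific vendor's API: it shows that a stateless model can only answer from what the caller explicitly includes in each request.

```python
def ask(history, question):
    """Simulate a stateless model call: the only context available
    is whatever the caller passes in as `history` and `question`."""
    prompt = "\n".join(history + [question])
    # A real model could only answer from what appears in `prompt`.
    return "Tom" if "My name is Tom" in prompt else "I don't know"

# Session 1: the user introduces themselves, so the fact is in the prompt.
session1 = ["User: My name is Tom."]
print(ask(session1, "User: What's my name?"))   # the fact is available

# Session 2 starts fresh: without re-sending history, the fact is gone.
print(ask([], "User: What's my name?"))         # the model cannot know
```

Approaches like RAG work by automating exactly this re-sending step: retrieving relevant prior context and injecting it into the prompt, which helps but does not give the model true persistent memory.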
Knowledge Cutoff Limitations
Binary knowledge boundaries create false certainty
Hard knowledge dates create confident but outdated answers
AI systems have binary knowledge boundaries, with understanding stopping at a specific cutoff date. Unlike human professionals who continuously update their knowledge, these systems will confidently respond based on whatever version of reality existed in their training data, regardless of how the world has changed since. This creates a particularly problematic situation for rapidly evolving fields or any context where current information is essential. For organizations in dynamic industries, these outdated "facts" presented with complete confidence can lead to dangerously misguided decisions.
Check what major LLMs' current knowledge cutoff dates are before the presentation. Prepare industry-specific examples of significant changes since those cutoff dates. Ask the audience to consider what critical information in their field has changed recently that AI systems might miss.
Cultural Alignment Biases
Silicon Valley values aren't universal
Western biases affect all interpretations and recommendations
Most mainstream AI systems are aligned primarily with Western, English-speaking, corporate perspectives—often reflecting Silicon Valley worldviews rather than diverse global cultures. This hidden bias means they consistently favor certain cultural frameworks, ethical systems, and knowledge bases while marginalizing others. Alternative models from different regions demonstrate their own cultural alignments. Organizations outside the dominant cultures of AI development risk finding themselves using systems that fundamentally misunderstand their values, priorities, and ways of knowing.
Be sensitive to cultural diversity in the audience. Present this as a systems issue rather than a political statement. Have 1-2 concrete examples ready that demonstrate how the same query produces different results in models trained with different cultural perspectives.
The Understanding Problem
Modern web architectures create AI comprehension challenges
AI cannot comprehend headless websites and JS applications
Modern headless architectures separate content from presentation, making it difficult for AI to understand context. AI systems struggle with single-page applications (SPAs) and JavaScript-rendered content, creating significant limitations for AI-based analysis. This architectural mismatch means AI systems often build a fundamentally flawed picture of modern web content, leading to incorrect interpretations and problematic responses.
This is a technical slide, so gauge audience understanding. For technical audiences, go deeper into architecture implications. For business audiences, focus on the practical consequences. Use simplified diagrams to explain the challenge if the audience seems confused.
Structured Data Solutions
Bridge the human-AI comprehension gap
Machine-readable formats improve AI understanding

To improve how AI systems interact with content, organizations should implement:

- **JSON-LD**: Creates explicit relationships between data elements using established schemas, making content machine-readable while maintaining human presentation

- **llms.txt Standard**: Functions like robots.txt but for AI systems, providing critical context about website purpose, structure, and appropriate use

- **AI-Friendly Information Architecture**: Combining these approaches creates dual-channel content that serves both human visitors and AI systems, ensuring consistent information delivery
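To make the JSON-LD point concrete, here is a minimal sketch of the kind of structured block an organization might embed in a page. The specific fields are illustrative examples drawn from the public schema.org vocabulary, not a prescribed template:

```python
import json

# Illustrative JSON-LD description of an article page, using the
# schema.org "Article" type. The values here are placeholders.
article = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "What's Missing from AI",
    "author": {"@type": "Person", "name": "Tom Cranstoun"},
    "about": "Limitations of current AI systems",
}

# Serialized, this object would sit in the page inside a
# <script type="application/ld+json"> element, invisible to human
# visitors but explicit and unambiguous to machines.
print(json.dumps(article, indent=2))
```

The same content that a human reads as styled prose is thereby delivered to AI systems as explicit, typed relationships, which is the "dual-channel" idea described above.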

For technical audiences, be prepared to discuss implementation details. For business audiences, focus on benefits and resource requirements. Have examples of well-implemented structured data ready to share, particularly if relevant to the audience's industry.
Regulatory Considerations
Complex and evolving compliance landscape
Regulations vary significantly by sector and region

Organizations must navigate multiple overlapping regulatory frameworks:

- **Data Privacy**: GDPR in Europe, CCPA in California, and global equivalents impose strict requirements on AI data processing

- **Sector-Specific**: Financial services (SR 11-7), healthcare (HIPAA), and legal/professional services face additional compliance hurdles

- **Emerging AI Regulations**: The EU AI Act, Colorado AI Act, and similar legislation create new requirements for AI applications

These frameworks emphasize transparency, human oversight, and explainability—principles often at odds with generic cloud AI services.

Research regulatory developments specific to the audience's industry before presenting. Be careful not to present yourself as offering legal advice. Position this as "areas to discuss with your compliance team" rather than specific recommendations. Allow extra time for this slide if regulatory professionals are in the audience.
Local Deployment of Foundation Models
Complete control and data sovereignty
Control your models, data, and costs on-premises

Local deployment provides several critical advantages:

- **Complete Control**: Organizations select specific models and manage versions according to their schedules

- **Data Sovereignty**: Train with proprietary data without third-party exposure

- **Reduced Dependencies**: Eliminate reliance on external providers whose terms may change

- **Fixed vs. Variable Pricing**: Convert token-based costs into predictable capital investments

- **Enhanced Compliance**: Implement governance frameworks tailored to specific regulatory contexts

This approach requires greater technical expertise and infrastructure investment but provides unmatched control.

Be honest about the technical and financial requirements of local deployment. Have approximate cost figures ready for different organization sizes. If the audience includes smaller organizations, acknowledge that this approach may not be feasible for them and preview the pragmatic pathways slide.
Foundation Models as Better Building Blocks
Higher-quality training data reduces hallucinations
Domain-specific training improves accuracy and relevance

Business-focused foundation models employ transformer architectures similar to consumer models but with critical differences:

- **Higher-Quality Data**: Trained on business-relevant datasets rather than the entire internet

- **Customization Potential**: Can be further trained with organization-specific information

- **Reduced Hallucinations**: Domain-specific training grounds the model in factual information

- **Value Alignment**: Can be specifically trained to embody organizational ethical frameworks and priorities

This layered approach creates AI that truly understands organizational terminology, processes, and values.

Use an analogy like "Foundation models are like prefabricated housing components - they give you a starting structure that you can customize to your specific needs." Be prepared to discuss which foundation models are most appropriate for different use cases. Have examples of successful domain-specific implementations if possible.
The AI Content Architect
Essential new organizational role
Bridges technical capabilities with business requirements

The AI Content Architect serves as a bridge between technological capabilities and business requirements:

- **Guardrails and Guidelines**: Establishes frameworks ensuring AI alignment with organizational values

- **Data Preparation**: Evaluates existing information and develops training pipelines

- **Value Translation**: Converts company principles into training data and metrics

- **Governance and Ethics**: Creates frameworks aligning AI systems with company standards

This role combines technical expertise with domain knowledge to ensure AI truly serves organizational needs.

If HR or talent management professionals are in the audience, be prepared to discuss job descriptions, required skills, and compensation ranges for this role. Position this as an emerging discipline that organizations can develop internally rather than necessarily hiring externally. Connect to existing roles that might evolve into this position.
Realistic Assessment
Significant investment beyond software costs
Hardware investments range from $3,000 to $15,000 per user


Concerned about the limitations of public AI systems? Building your own AI infrastructure puts control back in your hands while maintaining complete data sovereignty - but implementing AI requires substantial investment:

- **Hardware**: $3,000-$15,000 per knowledge worker for inference; $60,000+ for training machines

- **Expertise**: Specialized AI knowledge remains both essential and expensive

- **Data Preparation**: Most organizational data requires significant work before AI training

- **Ongoing Maintenance**: Models need regular monitoring, updating, and retraining

These costs create particular challenges for smaller organizations with limited resources.

Have specific cost examples ready for different implementation approaches. Be prepared to discuss ROI calculations and payback periods. If finance professionals are in the audience, they may ask detailed questions about cost structures and depreciation. Emphasize that many organizations underestimate the total cost of ownership.
Pragmatic Pathways
Tailored approaches for different organization sizes
Implementation strategies must match organizational capacity

Organizations require approaches matching their specific circumstances:

- **Large Organizations**: Consider local deployment with dedicated AI Content Architects

- **Medium Organizations**: Explore hybrid approaches combining local models with carefully selected cloud services

- **Small Organizations**: Focus on thoughtfully selected cloud services with appropriate oversight

- **All Organizations**: Develop clear guidelines for appropriate AI use and focus on applications that align with specific business needs

Before the presentation, try to understand the size distribution of organizations represented in the audience. Spend more time on the approach most relevant to the majority. Be sensitive to resource constraints smaller organizations face. Position cloud services as a legitimate strategy, not just a compromise.
Conclusion
Enhanced capabilities through synergy
Human-AI partnership outperforms either alone
The future of organizational AI lies in thoughtfully integrating these technologies to enhance human capabilities rather than replace them. By acknowledging current limitations while working to overcome them, organizations can build AI implementations that genuinely advance their missions. This requires realistic expectations, appropriate investment, and continuous alignment between AI capabilities and business requirements.
End with an inspiring note about the potential of human-AI collaboration. Avoid both excessive hype and excessive pessimism. Invite audience members to connect after the presentation to discuss their specific challenges. Have business cards or a QR code ready for follow-up connections.