How We Got Here: The AI Story No One Tells
If you’ve ever wished for an assistant who never sleeps, never forgets, and never complains — you’re not alone. That dream is older than you think.
Every civilization, in some form, has fantasized about it. The ancient Greeks told myths of statues that could move on their own. Eighteenth-century engineers built clockwork automata that could write, draw, and play music (the famous chess-playing “Turk” turned out to hide a human operator inside). And in the 20th century, science fiction gave us HAL 9000, Data, and the endless promise of “machines that think.”
Here’s the thing about dreams: they rarely arrive on schedule. The story of AI isn’t a straight line from idea to reality. It’s a story of hype, disappointment, broken promises, and one unexpected breakthrough that changed everything.
The Long AI Winter
In 1965, AI pioneer Herbert Simon made a bold prediction: “machines will be capable, within twenty years, of doing any work a man can do.”
Spoiler: they were wrong.
But the pattern was set. Researchers built “expert systems” — rule-based programs that could supposedly diagnose diseases, solve math problems, or play chess. They worked great in demos and fell apart in the real world. If a situation wasn’t explicitly programmed, the system failed.
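To make that brittleness concrete, here is a deliberately toy sketch, in Python, of what a rule-based system boils down to. The rules and the “diagnoses” are invented for illustration; the point is only that anything not written into the rules falls straight through.

```python
# A toy "expert system": a handful of hand-written rules and nothing else.
# (Invented for illustration; real systems had thousands of rules, but broke
# the same way the moment a case fell outside them.)
RULES = {
    ("fever", "cough"): "Looks like flu. Rest and fluids.",
    ("sneezing", "itchy eyes"): "Looks like seasonal allergies.",
}

def diagnose(symptoms):
    for required, advice in RULES.items():
        if all(symptom in symptoms for symptom in required):
            return advice
    return "Unable to diagnose."  # no rule matched: the system has nothing to say

print(diagnose(["fever", "cough"]))        # covered by a rule -> works in the demo
print(diagnose(["fever", "sore throat"]))  # not covered -> falls apart
```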
The funding dried up. Interest faded. “AI winter” became an actual term in the industry.
This happened multiple times.
- The 1960s: “AI is just around the corner!”
- The 1980s: “No, seriously, this time it’s different!”
- The 1990s and 2000s: “Okay, maybe not yet.”
Meanwhile, in the real world, people kept building things that were useful rather than intelligent. Search engines got better at finding information. Spreadsheets got better at calculation. Email replaced faxes. The dream of a thinking machine faded into the background.
The Chatbot Timeline
You’ve probably interacted with AI long before ChatGPT.
Remember the early chatbots? In 1966, ELIZA, a program of only a few hundred lines, convinced some of its users that it genuinely understood them while playing the role of a therapist. It didn’t understand anything. It just matched keywords and mirrored your statements back at you. Tell it “I’m feeling sad” and it would reply “Why are you feeling sad?”
It was a parlor trick, not intelligence.
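If you want to see how cheap the trick was, here is a minimal sketch in the same spirit. These patterns are invented for illustration, not taken from the original ELIZA script, but the mechanism is the same: match a keyword, then echo the user’s own words back as a question.

```python
import re

# A minimal ELIZA-flavored sketch: a few hand-written patterns and templates.
PATTERNS = [
    (r"i'?m feeling (.*)", "Why are you feeling {0}?"),
    (r"i need (.*)", "Why do you need {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
]

def respond(message):
    for pattern, template in PATTERNS:
        match = re.search(pattern, message.lower())
        if match:
            return template.format(*match.groups())
    return "Please go on."  # stock reply when nothing matches

print(respond("I'm feeling sad"))   # -> Why are you feeling sad?
print(respond("What is justice?"))  # -> Please go on.
```

Add more patterns and you get a more convincing parlor trick, but never understanding.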
Over the following decades the tricks got better, culminating in the 2010s with Siri, Alexa, and Google Assistant: voice assistants that could answer simple questions, set reminders, and play music. They worked great when you asked “what’s the weather?” But try asking something complex, ambiguous, or outside their scripts, and you’d get:
“I’m sorry, I didn’t understand that.”
The problem: these systems were built on pattern matching. Keywords, rules, if-then statements. They could recognize what you said, but not what you meant.
The Quiet Revolution (2017)
Then something happened that almost nobody noticed outside of AI research labs.
In 2017, a paper titled “Attention Is All You Need” was published by researchers at Google. The paper introduced something called the “Transformer” — a new way to process language that broke from decades of previous approaches.
Without getting too technical: before Transformers, AI read text one word at a time, trying to understand each word based on what came before it. Transformers changed this. They read the entire context at once, understanding how every word relates to every other word.
This sounds small. It wasn’t.
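For the curious, the core of the idea is surprisingly compact. Below is a bare-bones NumPy sketch of scaled dot-product attention, the building block the paper introduced. The vectors are random stand-ins for words, and a real Transformer stacks many such layers with learned weights, so treat this as a sketch of the mechanism rather than a working model.

```python
import numpy as np

def attention(Q, K, V):
    # Scaled dot-product attention: every position scores every other position,
    # then takes a relevance-weighted average of their values.
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over the whole sequence
    return weights @ V

# Four "words", each a random 8-dimensional vector standing in for an embedding.
rng = np.random.default_rng(0)
words = rng.normal(size=(4, 8))
out = attention(words, words, words)
print(out.shape)  # (4, 8): each word's new vector mixes in context from all the others
```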
This change, combined with massive amounts of computing power and data from the internet, led to something new: LLMs (Large Language Models). Instead of programming rules, researchers trained models by having them “read” huge swaths of the internet and learn its patterns.
The result: AI that could predict, complete, and generate text at scale.
The ChatGPT Moment
In late 2022, ChatGPT launched. And suddenly, AI wasn’t a research topic anymore.
Your parents used it. Your coworkers used it. Your company talked about it. AI went from “something tech people care about” to “something everyone needs an opinion on.”
Why? Because for the first time, the gap between “what computers could do” and “what humans expect” narrowed.
ChatGPT could:
- Write emails that sounded like you
- Debug code
- Summarize 50-page documents
- Answer questions about obscure topics
- Translate languages
It wasn’t perfect. It made mistakes. It hallucinated (more on that later). But it was usable in a way nothing before it was.
Since then, we’ve seen GPT-4, Claude, and dozens of other models. AI is now in:
- Coding tools: GitHub Copilot, Cursor
- Office tools: Claude for Excel, Copilot in Word and PowerPoint
- Search: Perplexity, AI-enhanced Google
- Customer support: Chatbots that actually help
We went from “AI is 20 years away” to “AI is everywhere, what do we do with it?” in the span of months.
The Problem We’re Here to Solve
So here we are. We have AI that can talk, write, code, and reason. It’s genuinely useful — but not genuinely reliable.
Here’s the thing about LLMs: they don’t “know” anything. They predict. When you ask “What’s the capital of France?”, the model generates “Paris” not because it knows the answer, but because, based on everything it has read, “Paris” is the most likely next word.
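A deliberately crude sketch shows what “predict the next word” means. This toy just counts which word follows which in a tiny invented corpus; a real LLM uses a neural network trained on vastly more text, but the underlying move is the same: produce a plausible continuation, not a verified fact.

```python
from collections import Counter

# A toy next-word "model": count which word most often follows which
# in a tiny invented corpus, then predict the most frequent continuation.
corpus = ("the capital of france is paris . "
          "the capital of france is paris . "
          "paris is in france .").split()

following = {}
for prev, nxt in zip(corpus, corpus[1:]):
    following.setdefault(prev, Counter())[nxt] += 1

def predict_next(word):
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else "<no idea>"

print(predict_next("is"))    # -> "paris": the most common continuation it has seen
print(predict_next("rome"))  # -> "<no idea>": never seen, so nothing plausible to offer
```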
This works great for many things. It fails spectacularly for others.
Ask an LLM about recent events it wasn’t trained on, and it might confidently invent a plausible-sounding but entirely false answer. Ask it to retrieve specific information from a document, and it might miss the most important detail.
This is the hallucination problem. And it’s the reason businesses can’t just “add AI” and expect everything to work.
The dream of AI was: a machine that thinks like a human, but without human limitations.
The reality is a machine that seems to think, but lacks what humans take for granted: actual knowledge, memory, and the ability to check a fact before stating it.
This is where our story really begins.
Next Up
In the next post, we’ll dive into why LLMs hallucinate — and why, given how they were built, this isn’t a bug. It’s a feature.
Once we understand that, we can talk about the solution that’s quietly changing how AI is built: Retrieval-Augmented Generation, or RAG.
See you there.
This is Part 0 of a 7-part series on AI & RAG.
Next: Why LLMs Hallucinate (And Why You Shouldn’t Be Surprised)