I'm 18 years old, which means artificial general intelligence will probably shape my entire adult life. If it arrives in the next few years, as some predict, everything changes. If it doesn't, we'll have spent a decade obsessing over something that was always further away than we thought.
So I've been trying to understand the actual timeline. When is AGI really coming? What does that even mean? Who should I believe?
The more I research, the less certain I become. And honestly, that's what worries me most. Not that AGI is coming too fast or too slow, but that nobody seems to actually know, and yet massive decisions are being made based on conflicting predictions.
Let me walk you through what I've found, because maybe you can help me make sense of it.
The Contradictions Started Adding Up
In January 2025, Sam Altman posted a cryptic New Year's message on Twitter: "near the singularity; unclear which side." Tech media went wild. Was this a hint that OpenAI had achieved AGI?
Three weeks later, he tweeted: "twitter hype is out of control again. we are not gonna deploy AGI next month, nor have we built it. we have some very cool stuff for you but pls chill and cut your expectations 100x!"
So which is it? Are we near the singularity or should we cut our expectations by 100x?
Then in October, Andrej Karpathy went on Dwarkesh Patel's podcast. Karpathy is a founding member of OpenAI, former director of AI at Tesla, and one of the most respected people in the field. He said AGI is at least a decade away. He called current AI agent code "slop" and said the models "don't really know what they're doing."
One investor tweeted: "If this Karpathy interview doesn't pop the AI bubble, nothing will."
But here's what confuses me: Karpathy and Altman both helped found OpenAI. They've both seen the same technology. They both understand AI deeply. So why do their timelines differ by 5-10 years?
Maybe one of them is wrong. Or maybe they're defining AGI differently. Or maybe the honest answer is that nobody knows, but some people are more comfortable admitting uncertainty than others.
The Definition Problem That Nobody Agrees On
I started digging into what AGI even means, and this is where things got really messy.
Some definitions say AGI is "AI systems that are generally smarter than humans." Others say it's "the ability to accomplish any cognitive task at least as well as humans." Some focus on economic impact, like automating most jobs. Others focus on cognitive capabilities, like reasoning and learning.
One researcher I read argued that defining AGI in economic terms completely misses the point about what it means to build a mind. Another pointed out that we don't even understand how human intelligence works, so how can we define a machine version of it?
The goalposts keep moving too. When machines beat humans at chess, people said "that's not real intelligence." When they beat us at Go, same thing. Now ChatGPT can write essays and code, and people still say "but it doesn't really understand."
Which... fair. But it makes me wonder: if we can't agree on what AGI is, how will we know when we've built it? And if we can't define the goal, why are we so confident about when we'll reach it?
I'm not trying to be pedantic here. I genuinely don't know how to think about this. It feels like everyone's arguing about reaching a destination nobody can point to on a map.
What the Researchers Actually Think
I found a survey from 2025 that polled 475 AI researchers. This is important because these are the people actually working on this stuff, not just the CEOs giving interviews.
76% of them said that scaling up current AI approaches is "unlikely" or "very unlikely" to lead to AGI.
Three out of four experts think that simply scaling up what we have won't get us there.
That number stopped me cold. If the majority of people building this technology don't think it's heading toward AGI, why does the public conversation assume it's inevitable? Why are companies valued at hundreds of billions based on AGI predictions that most researchers doubt?
I tried to find counterarguments. Maybe these researchers are too conservative? Maybe they're wrong and the CEOs are right? But I couldn't shake the feeling that this gap between expert opinion and public perception matters.
77% of the same researchers said they'd rather prioritize designing AI systems with acceptable risk-benefit profiles over directly pursuing AGI. They want to build useful, safe AI rather than chasing an unclear goal.
That makes sense to me. But it's not what the headlines say. The headlines say AGI is coming in 2027, or 2029, or whenever the next prediction drops.
The Technical Problem I Don't Fully Understand (But Seems Important)
Here's where I hit the limits of my understanding. Multiple sources mentioned something called the "distribution shift problem" as a fundamental barrier that hasn't been solved.
From what I can gather, it means current AI models work great on things similar to their training data but fall apart when they encounter unfamiliar situations. This has been a known problem for 30 years, and recent papers show even the newest reasoning models still struggle with it.
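To make the idea concrete for myself, I put together a tiny toy example. It isn't from any of the papers or interviews I mention here, and it's a caricature of the real problem: it just fits a flexible curve to data drawn from one range of inputs, then evaluates it on a shifted range. The function, the ranges, and the model are all made up for illustration; the point is the failure mode, not the specifics.

```python
# Toy illustration of distribution shift (my own sketch, not from any source
# cited in this essay): a model fit on one range of inputs can look excellent
# there and still be wildly wrong just outside it.
import numpy as np

rng = np.random.default_rng(0)

# The "true" function the model never sees directly.
def truth(x):
    return np.sin(2 * np.pi * x)

# Training data drawn only from x in [0, 1].
x_train = rng.uniform(0.0, 1.0, 200)
y_train = truth(x_train) + rng.normal(0.0, 0.05, x_train.size)

# Fit a flexible polynomial; it interpolates the training range well.
coeffs = np.polyfit(x_train, y_train, deg=9)
model = np.poly1d(coeffs)

# Evaluate in-distribution (same range) vs. out-of-distribution (shifted range).
x_in = np.linspace(0.0, 1.0, 100)
x_out = np.linspace(1.5, 2.5, 100)

mse_in = np.mean((model(x_in) - truth(x_in)) ** 2)
mse_out = np.mean((model(x_out) - truth(x_out)) ** 2)

print(f"in-distribution MSE:     {mse_in:.4g}")   # small
print(f"out-of-distribution MSE: {mse_out:.4g}")  # typically many orders of magnitude larger
```

The in-distribution error is tiny; the out-of-distribution error is usually enormous. Modern models are obviously far more sophisticated than a curve fit, but from what I've read, the worry has the same shape: performance that looks great near the training data and degrades badly away from it.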
One researcher I read, Gary Marcus, has been writing about this for decades. He argues that without solving distribution shift, you can't have true general intelligence. Because if a system breaks when it sees something new, how is it "general"?
I'm not qualified to judge if he's right. But it seems like a pretty big deal if the problem has existed for 30 years and we still can't solve it. That suggests AGI might be harder than the optimistic timelines assume.
Unless there's a breakthrough coming that I don't know about. Which is entirely possible. I'm 18 and trying to understand cutting-edge AI research. There's a lot I'm probably missing.
The Money Angle That Makes Everything Suspicious
The part that really messes with my head is the money.
OpenAI's most recent reported valuation runs to hundreds of billions of dollars. That valuation assumes the company is on the path to AGI. Without that narrative, it's a company whose chatbot loses money on every query.
Other AI labs are raising enormous sums on the same premise. The entire industry is built on the promise that AGI is coming soon, and investors are betting staggering amounts of money on it.
Which makes me wonder: how much of the optimistic timeline is based on actual progress, and how much is based on needing to justify those valuations?
I'm not accusing anyone of lying. Maybe they genuinely believe their predictions. Maybe they're right and I'm wrong. But I can't ignore the fact that everyone involved has a strong financial incentive to be optimistic.
CEOs need to attract funding and talent. Researchers need grants. Journalists need clicks. Even some of the skeptics are building their personal brands on being the "voice of reason."
Everyone has a narrative they're selling. And I don't know how to filter out the truth when everyone's incentives are so aligned with overstating progress.
What I Think This Means (Though I Could Be Wrong)
After weeks of reading papers, watching interviews, and trying to make sense of contradictory timelines, here's where I've landed:
I think the honest answer is that nobody knows when AGI is coming. The people who claim to know either are overconfident or have reasons to project a confidence they don't really feel.
The safest bet seems to be that we're making real progress on narrow AI capabilities, but we have fundamental problems we don't know how to solve, and we don't have a clear path to human-level general intelligence.
Karpathy's timeline of at least a decade feels more realistic to me than Altman's vague hints about 2025-2027. But even Karpathy admits he's guessing based on intuition from 15 years in the field. It's an educated guess, but still a guess.
And I could be completely wrong. Maybe there's a breakthrough coming that solves all these problems. Maybe the optimists are right and the skeptics are being too conservative. Maybe AGI arrives in 2027 and this entire essay ages terribly.
I'm okay with that possibility. I'm not trying to be right. I'm trying to understand.
The Questions I Can't Answer
What bothers me most isn't that I don't know when AGI is coming. It's that I don't know how to think about it when the experts disagree so dramatically.
How do I plan for my career when some people say all cognitive work will be automated in two years and others say we're decades away?
How do I evaluate which AI company to work for or invest in when their valuations depend on promises nobody can verify?
How do I have informed opinions about AI policy when the basic facts are contested?
I'm 18. These decisions matter for my entire life. And the people who should have answers keep contradicting each other.
Maybe this is just what it's like to live through a period of rapid technological change. Maybe uncertainty is the only honest position. Maybe the people claiming confidence are the ones who haven't thought it through carefully enough.
Or maybe I'm overthinking this. Maybe the truth is simpler than I'm making it, and I just haven't found the right framework yet.
What I'm Taking Away From This
After all this research, here's what I feel most confident about:
We're building impressive AI systems that can do things that seemed impossible a few years ago. That progress is real and will continue.
We don't have a clear path from current systems to human-level general intelligence. The gap between what we can do and what AGI requires is bigger than the optimistic narratives suggest.
The timeline predictions are unreliable because they depend on solving problems we don't know how to solve. Anyone who sounds certain about specific years is probably overconfident.
The financial incentives around AI make it hard to trust anyone's predictions completely. Everyone has reasons to either hype or downplay progress.
And most importantly: it's okay to say "I don't know." In fact, that might be the most honest position anyone can take right now.
I'm going to keep learning. I'm going to keep reading the research, watching the interviews, trying to understand the technical details I'm currently missing. Maybe in a few years I'll have clearer answers.
Or maybe in a few years everyone else will be just as confused as I am now, and we'll all realize that predicting the future of AI was always harder than it looked.
Either way, I'd rather be confused and honest about it than confident and wrong.
If you've read this far and you understand AGI timelines better than I do, I genuinely want to hear from you. What am I missing? Where am I wrong? What should I be reading that I haven't found yet?
Because the one thing I'm certain about is that I'm still figuring this out. And that's okay.