Are we going to see smarter AI models?

ChatGPT felt like the first truly revolutionary tool since the smartphone. Practically overnight, it captured global attention, showcasing how Large Language Models (LLMs) could create the illusion of human-like communication. In less than three years, AI has expanded beyond text to reshape photos, videos, and more, with even bigger possibilities on the horizon. But as these models evolve, will we continue to see the same pace of progress, or are we starting to see a plateau?

Rising costs, limited access to “high-quality” data, and diminishing returns from scaling are forcing the industry to confront tough questions, not to mention fueling the ongoing debate over ethics and copyright. Is this an AI arms race, or is it starting to feel more like the California Gold Rush, where a boom could lead to a bust?

Companies like OpenAI, Google, and Anthropic are pushing boundaries with new approaches, from reasoning-based models (AI designed to mimic human-like thinking and problem-solving in the real world) to synthetic data. But the magic we felt when ChatGPT first launched seems to be fading for those of us who have been playing with, building on, and pushing these models, and that’s no coincidence.

The challenge isn’t hardware; it’s data, and the quality of it. As I’ve said to numerous classes, these models are only as “smart” as the data they are trained on. The ‘low-hanging fruit’ of data, internet-sourced information, may already be picked clean. To mix metaphors a little more, because I’m human and can, AI researchers, like explorers who have charted every corner of the map, are now asking, “What’s next?”

Some are turning to experts, professionals and academics, for specialized knowledge, having them train these models to push the bounds of what AI knows. To me, this highlights a simple truth: the internet, vast as it is, isn’t infinite, and it only reflects a certain depth of understanding. You can’t learn everything from it (yes, shameless plug for academics here, rejoice 🥳 ). Meanwhile, some are experimenting with synthetic data, where AI generates training content for other AI. But this raises serious concerns about quality, reliability, and feedback loops that could stifle progress instead of advancing it. The educator in me is just shaking his head at this idea, as I don’t think we should be feeding AI the “fake it until you make it” mantra any more than the internet already has.
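
To make that feedback-loop worry concrete, here’s a toy sketch, entirely my own construction and nothing like how any real lab trains models: a “model” that only learns the average and spread of its training data, then writes the next generation’s training data itself, keeping only its most typical outputs. Run it and watch the variety drain away:

```python
import random
import statistics

# Toy illustration of the synthetic-data feedback loop (a sketch, not a
# real training pipeline): each generation, a "model" fits the mean and
# spread of its data, then generates the next generation's data itself.
# The filter below mimics a generative model's tendency to favor its most
# typical outputs and drop the rare, surprising ones.

random.seed(42)
N = 1000

# Generation 0: "human" data with natural variety.
data = [random.gauss(0.0, 1.0) for _ in range(N)]

for generation in range(10):
    # "Train": fit the model to whatever data it currently has.
    mu = statistics.fmean(data)
    sigma = statistics.stdev(data)
    print(f"gen {generation:2d}: mean={mu:+.3f}, spread={sigma:.3f}")

    # "Generate": sample synthetic data from the fitted model, keeping
    # only the 'typical' outputs (within 1.5 spreads of the mean).
    data = []
    while len(data) < N:
        x = random.gauss(mu, sigma)
        if abs(x - mu) < 1.5 * sigma:
            data.append(x)
```

Within ten generations, the spread collapses to a few percent of where it started; the model is left echoing itself. Real “model collapse” in LLMs is far more complicated than this, but the intuition is the same: synthetic data fed back uncritically tends to erase the rare and the surprising.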

As the price tag for training new models climbs into the billions, it’s fair to ask: is the ultimate goal, Artificial General Intelligence (AGI), really worth it? AGI might reshape industries, but the financial, technical, and ethical challenges may leave us wondering if the returns will ever justify the investment. Really, what is our end game with all this? How is this going to make the everyday person’s life ‘better’? Or, just like the gold rush, what happens to the miners and the holes in the ground we dug, just to find a nugget…of fool’s gold?