What 2025 actually taught us about AI

So, AI got hundreds of times faster in 2025. Tell me something we don't know, right? Except companies spent nearly $400 billion building the systems to run it. Then they tried to actually use what they built, and 95 out of 100 attempts completely failed. Those failures alone wasted another $30 to $40 billion.
Think about that. They spent $400 billion building a massive engine, then wasted another $30 to $40 billion discovering the car they bolted it into didn't actually drive. That's the most expensive way to learn you should have read the manual.
The weird part was watching the tech keep getting better while companies kept breaking things trying to use it. Banks fired people and replaced them with chatbots that failed so badly they had to apologise and hire everyone back. AI tools made up fake information and leaked people's private messages. And the people who invest money in these companies finally stopped being impressed by spending and started asking when they'd actually make money back.
Here's the gist.
Companies tried to fix broken systems by adding AI on top
Imagine your bike chain is rusted and broken. Can you fix it by strapping a rocket engine to the frame? No, of course not. You fix the chain first.
Companies looked at this logic and said "nah." They had messy systems, broken processes, things held together with duct tape and the desperate hope that nobody would look too closely. Instead of fixing any of that boring stuff, they dropped $400 billion on shiny new AI and expected it to magically fix everything. Because that's how things work, right?
MIT researchers looked at all the failures and found, drumroll please... AI itself wasn't the problem. Companies failed because they had messy ways of working, their systems couldn't learn from mistakes, and the AI didn't match how people actually did their jobs.
Getting AI to work means doing the boring stuff first like organising your files, fixing how things run, cleaning up your data, and making everything simple. Most companies skipped all of that and just bought the fanciest AI they could find. That's why 95 out of 100 projects failed.
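To make the boring stuff concrete, here's a minimal sketch of the kind of unglamorous cleanup most of these projects skipped, written in Python with pandas. The file and column names are made up for illustration; the point is that none of this requires AI, and skipping it is how AI projects die.

```python
import pandas as pd

# Hypothetical customer export; file and column names are illustrative only.
df = pd.read_csv("customers.csv")

# The boring stuff: normalise text, drop duplicates, flag unusable records.
df["email"] = df["email"].str.strip().str.lower()
df = df.drop_duplicates(subset="email")
df["signup_date"] = pd.to_datetime(df["signup_date"], errors="coerce")
df = df.dropna(subset=["email", "signup_date"])

df.to_csv("customers_clean.csv", index=False)
print(f"{len(df)} usable records after cleanup")
```

No AI anywhere in that snippet. That's the manual nobody read.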
One of the most embarrassing failures happened at Commonwealth Bank of Australia in August. The bank fired 45 customer service workers and replaced them with an AI voice-bot. The bot failed so badly that way more people started calling, managers had to answer phones themselves, and everyone worked overtime trying to fix the mess. The bank apologised and offered all 45 people their jobs back.
That's a catastrophically expensive lesson in what happens when you replace humans with technology before checking if the technology actually works. But hey, at least they apologised. I'm sure that made up for everything.
Spending stopped impressing investors
For a while, tech companies had a great trick. Announce you're spending billions on AI, the people who own shares in your company cheer, and your company's value goes up. Easy money. In 2025, that stopped working.
Meta announced they'd spend $70 to $72 billion on AI. Their company's value dropped by 11% to 13%. Even though Meta was making good money, the people who invested in them got nervous about spending that much without knowing when they'd get it back.
Microsoft spent $35 billion in just three months and their value dropped too. Google announced $91 to $93 billion. Amazon planned over $100 billion. Everyone was in the same race, burning money as fast as they could.
The companies all felt trapped. Spend billions or get left behind. Nobody stopped to ask the obvious question... left behind doing what, exactly? Burning money faster than the other guy?
Google and Amazon did better than the others, though, because they could show money actually coming back in: companies were paying to use their AI systems.
Adobe were smart. They built AI that works with everyone else's AI systems, like being the one shop that sells every brand of shoes instead of just one brand. That made them over $5 billion.
Some people started comparing this to the dot-com bubble from the late 1990s. Back then, companies spent billions building websites and most of them failed. Same thing happening with AI. Lots of spending, lots of hype, most projects failing. History doesn't repeat itself, but it apparently enjoys doing the same stupid things with new tech.
Turns out spending money doesn't impress anyone anymore. Making money does.
AI started making things up and leaking private information
While companies were burning through billions, the AI systems they built kept breaking in catastrophic ways.
ChatGPT started making things up. Not little mistakes. Full-blown lies. Researchers found it invented about 1 out of every 5 sources. Ask it where it got its information and it would confidently cite books that don't exist, written by authors it invented. And the best part... lawyers used these completely made-up sources in over 600 real court cases 🫣. A newspaper in Chicago published a summer reading list of books that were never written. We've officially reached the point where AI is confidently bullshitting and everyone's just going along with it.

Nothing builds confidence in AI like watching it confidently cite books that were never written, by authors who don't exist, in courtrooms where real judges are making decisions. At least when humans make up book recommendations, they usually check whether the books actually exist first. Doh!
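The depressing part is that checking whether a book exists is trivial to automate. Here's a rough sketch against Open Library's public search API; the matching logic is deliberately naive (a real check would handle editions, typos and author variants), so treat it as illustrative.

```python
import requests

def book_exists(title: str, author: str) -> bool:
    """Naive existence check against Open Library's public search API."""
    resp = requests.get(
        "https://openlibrary.org/search.json",
        params={"title": title, "author": author, "limit": 5},
        timeout=10,
    )
    resp.raise_for_status()
    # Zero hits means the citation deserves a human look before anyone files it.
    return resp.json().get("numFound", 0) > 0

# Hypothetical suspect citation, flagged before it reaches a judge.
citation = {"title": "The Imaginary Precedent", "author": "A. N. Author"}
if not book_exists(**citation):
    print(f"No record of {citation['title']!r}; verify before citing.")
```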
On August 20, 2025, Grok leaked hundreds of thousands of private conversations. Somewhere between 300,000 and 370,000 of them were just sitting there in Google search results where anyone could read them. The company forgot to build basic privacy protections. Imagine launching a product and forgetting the part where you don't expose everyone's private conversations to the entire internet, but hey, here we are.
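For anyone wondering what "basic privacy protections" means here: reportedly the shared-chat pages were public URLs with nothing telling search engines to stay away. A minimal sketch of the missing piece, using Flask as a stand-in for whatever actually serves those pages (the route and rendering function are hypothetical):

```python
from flask import Flask, make_response

app = Flask(__name__)

def render_conversation(conversation_id: str) -> str:
    # Placeholder for the real page-rendering logic.
    return f"<html><body>Shared conversation {conversation_id}</body></html>"

@app.route("/share/<conversation_id>")
def shared_conversation(conversation_id: str):
    resp = make_response(render_conversation(conversation_id))
    # The one-line protection: tell crawlers never to index or cache this page.
    resp.headers["X-Robots-Tag"] = "noindex, noarchive"
    return resp
```

One response header. That's roughly the gap between a private share link and 370,000 conversations in Google's index.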
AI in healthcare had problems too. One study found that Gemma, an AI model, described the same health issues in softer, less serious language when the patient was a woman. That kind of bias can lead to women getting worse medical care.
These weren't small mistakes. The tech worked well enough to impress people in tests, but broke when it met the real world.
Governments made rules, but the rules don't match
The free-for-all with AI ended in 2025. Governments started making rules about what you can and can't do with AI.
Three big rule changes happened. Europe made new AI rules in August. China said all AI content needs labels in September. California made rules about using AI to hire people in October.
Here's the weird part. Europe and China went in completely different directions. Europe wants mountains of paperwork and humans checking everything if there's any risk. China wants a watermark on everything AI makes, like a stamp that says "AI made this."
So now companies have to follow rulebooks that don't agree on anything. Because nothing says "effective regulation" like every government writing its own incompatible rules.
The dream of making one AI product that works everywhere died this year. Companies now have to make different versions for different countries.
Jobs didn't disappear, they got reshuffled
Everyone worries AI will replace jobs, but what actually happened this year was stranger than that.
Some jobs did disappear. People who bought ads for companies got replaced by AI systems that can do it faster. That job is basically gone now.
But new jobs showed up. One of the hottest new jobs is called Prompt Engineer. These people write sentences that tell AI what to do. That's it. That's the job. Write good sentences for the computer. And they can make $125,000 (around $216k NZD) to $180,000 (around $311k NZD) a year doing it.
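If "writing sentences for the computer" sounds too easy to be a six-figure job, here's roughly what it looks like in practice, sketched with OpenAI's Python client. The model name, wording and parameters are illustrative; the craft is entirely in the instructions, not the code.

```python
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY in the environment

# The "engineering" is the careful wording: role, constraints, output rules.
system_prompt = (
    "You are a customer-support assistant for a bank. "
    "Answer in two sentences or fewer. "
    "If you are not certain, say so and offer to connect a human agent. "
    "Never invent account details, fees, or policies."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "Why was I charged a $15 fee this month?"},
    ],
    temperature=0.2,  # low temperature: fewer creative flourishes, fewer invented fees
)
print(response.choices[0].message.content)
```

Notice the "never invent" and "offer a human" lines. Those are the difference between a working chatbot and a Commonwealth Bank headline.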
So we've officially reached the point where knowing how to ask a computer nicely pays more than saving lives. Doctors spend 10 years in school to earn less than someone who figured out how to talk to ChatGPT properly. The future isn't weird, it's completely off its rocker.
This is happening to lots of office jobs. Legal workers, people who study markets, financial experts. Jobs aren't disappearing all at once. They're being broken apart and put back together in new ways.
What actually happened this year
2025 was the year everyone learned that building powerful AI is actually the easy part. Using it without causing expensive, embarrassing disasters turned out to be much harder.
Companies kept slapping advanced tech onto systems that were already broken, while trying to follow rules that clash across countries and ignoring privacy problems they should have fixed ages ago. Better tech didn't fix any of those problems; it just made the failures bigger, more expensive, and infinitely more entertaining for everyone watching.
The question for 2026 is simple, though... will companies actually fix their broken systems before adding more AI? Or are we going to keep watching them spend billions learning the same lesson over and over again?