I feel like people who are mad at AI for hallucinating and being unreliable aren't treating it the way they treat other tech. It's just like the internet; in fact, these models are all trained on the internet. You can't take the results you get at face value. It's all about your prompt and how you interpret what comes back.
Why would anyone expect an internet aggregator to be perfect? You aren't supposed to use Google to get direct answers to things either; it's a tool for doing research.
Lately I've been making extensive use of AI in my work, and it's actually incredible what arguing with a bot can do for you. A bot's hallucination might be worthless to one person but absolutely inspiring to another. I've been applying LLMs to big, hard problems, and their ability to give me new leads is simply amazing. A lot of people are sleeping on LLMs as a tool; instead they just want a correct-answer machine.
Not to say that idea is without merit; it would be nice if LLMs were always right. But AI right now is like a hammer compared to its ideal, a nail gun. Just because it takes more skill to use doesn't mean it isn't useful, and it doesn't mean there isn't a future where AI actually does make everyone happy. All tech needs iteration, duh.
I should also mention that corporate-controlled AI services like ChatGPT are lame, and I've literally never used one. I only run local models; lately I've been using Llama 3.1. Yes, it's made by Facebook, and you can kind of tell in the way it talks, which is funny. But it's still free, still local, and still useful.