The anti-AI thread


Fritzo

Lifer
Jan 3, 2001
41,916
2,155
126
There are unrealistic fears of Skynet.

But there are also realistic fears that these tools will be used in misinformation campaigns. This is not about an AI uprising, but about bad actors using new tools to mislead and control people.
Certainly! Let’s delve into the nuanced aspects of AI fears and misinformation:

  1. Unrealistic Fears of Skynet:
    • The notion of a Skynet-like AI uprising, as depicted in science fiction, tends to be overblown. In reality, we are far from achieving artificial general intelligence (AGI)—a self-aware, all-encompassing AI system.
    • AGI would require consciousness, emotions, and understanding akin to human cognition. Our current AI systems are narrow AI, designed for specific tasks and lacking self-awareness.
  2. Realistic Concerns: Misinformation Campaigns:
    • Generative AI can create fake content at minimal cost. It can produce convincing material that might even surpass human-created content.
    • The danger lies not in AI itself rising against humanity, but in malicious actors exploiting AI tools for their agendas.
    • Misinformation campaigns can sway public opinion, influence votes, or disrupt markets. AI-generated content can be weaponized to deceive.
  3. AI’s Role in Disinformation:
    • AI systems contribute to the disinformation problem by:
      • Generating Realistic Fake Content: AI techniques create lifelike text, images, audio, and video.
      • Facilitating Dissemination: AI helps spread disinformation to targeted audiences at scale.
    • This raises ethical and human rights concerns related to dignity, autonomy, democracy, and peace.
  4. Detection and Moderation:
    • Efforts are underway to develop AI systems that detect and moderate disinformation online.
    • However, these systems also raise concerns, especially regarding freedom of expression and information.
    • The European Union (EU) is exploring co-regulation approaches to address this issue.
  5. Business Model and Trustworthy Content:
    • The root cause of disinformation lies in the web’s advertising-based business model.
    • Adapting this model could significantly reduce the problem.
    • While AI may not be ideal for content moderation, it could be effective in countering manipulation within the digital ecosystem.
In summary, while we should remain cautious, the fear of AI surpassing human control is largely misplaced. Instead, let’s focus on responsible AI development and transparency to mitigate the risks.

Stop copying and pasting material from other sources as your own.
admin allisolm
 
Last edited by a moderator:

Fritzo

Lifer
Jan 3, 2001
41,916
2,155
126
The fear surrounding Artificial Intelligence (AI) often tends to be overblown. Let’s explore a few reasons why this might be the case:

  1. Neophobia and Poor Understanding:
    • Neophobia, the fear of new technologies, plays a significant role. When faced with something novel, we often focus excessively on its potential harm rather than considering its benefits.
    • AI falls into this pattern. Because it is not fully understood by the general public, concerns tend to lean toward worst-case scenarios.
  2. Media Hype and Misunderstandings:
    • Media sensationalism contributes to the fear. Headlines often emphasize AI’s negative aspects, creating an exaggerated perception.
    • The reality is that AI encompasses a wide range of technologies, from simple algorithms to complex neural networks. Not all AI systems pose existential threats.
  3. Realistic AI Capabilities:
    • Most AI systems today are narrow AI, designed for specific tasks. They lack artificial general intelligence (AGI) and self-awareness.
    • For instance, language models like ChatGPT predict words based on patterns but lack true understanding. They can generate nonsensical content.
  4. Recent Breakthroughs Are Not AGI:
    • OpenAI’s rumored breakthrough, called Q*, combines existing AI techniques (Q-learning and A* search) to enhance systems like ChatGPT; a toy sketch of Q-learning follows after this post.
    • Q* doesn’t signal the arrival of AGI or a humanity-crushing singularity. It’s about improving responses, not achieving consciousness.
  5. AI as a Tool, Not a Threat:
    • AI is a powerful tool that can augment human capabilities. It won’t replace us.
    • Instead of fearing AI, we should focus on responsible development, ethics, and transparency.
In summary, while some caution is warranted, the fear of AI surpassing human control is largely unfounded. AI will likely create new opportunities and industries rather than lead to our destruction.
Full disclosure: Microsoft Copilot said this.
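
Nothing public confirms what Q* actually is, but the two ingredients named above are standard, well-documented algorithms. For anyone curious, here is a minimal, purely illustrative tabular Q-learning sketch on a made-up chain environment; the environment, rewards, and hyperparameters are all invented for illustration and have nothing to do with OpenAI's system.

```python
import random

# Toy chain world: states 0..4, start at 0; reaching state 4 pays 1.0 and ends the episode.
# Everything here (states, rewards, hyperparameters) is invented for illustration.
N_STATES, N_ACTIONS = 5, 2             # actions: 0 = left, 1 = right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2  # learning rate, discount, exploration rate

Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]  # Q-table, initialized to zero

def step(state, action):
    """Move one cell left or right; the rightmost state is terminal and rewarded."""
    nxt = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
    done = nxt == N_STATES - 1
    return nxt, (1.0 if done else 0.0), done

for _ in range(500):                   # training episodes
    s, done = 0, False
    while not done:
        # Epsilon-greedy: explore randomly sometimes, otherwise take the best-known action.
        if random.random() < EPSILON:
            a = random.randrange(N_ACTIONS)
        else:
            a = max(range(N_ACTIONS), key=lambda x: Q[s][x])
        s2, r, done = step(s, a)
        # Core Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        Q[s][a] += ALPHA * (r + GAMMA * max(Q[s2]) - Q[s][a])
        s = s2

print(Q)  # after training, 'right' should score highest in every non-terminal state
```

After a few hundred episodes the table converges so that moving right is preferred everywhere, which is all Q-learning does: it propagates reward estimates backward through states. How (or whether) that combines with A* search inside something like Q* is pure speculation.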
 
Reactions: Kaido

BFG10K

Lifer
Aug 14, 2000
22,709
2,997
126
Time to bring out the popcorn, let the lawsuits commence!



The two lawsuits said that thousands of the outlets' articles were used to train ChatGPT to respond to human prompts, and that the chatbot reproduces their copyrighted material "verbatim or nearly verbatim" when prompted.

The lawsuits specifically accused OpenAI of violating a federal copyright law known as the Digital Millennium Copyright Act by removing copyright-identifying information like authors and titles from the articles to facilitate infringement.
AI is nothing more than a search engine that pretends the results are its own. Apparently Microsoft "forgot" all about IP and copyright. Exploit first, ask questions later.
 
Last edited:
Reactions: Saylick and biostud

biostud

Lifer
Feb 27, 2003
18,846
5,706
136
Time to bring out the popcorn, let the lawsuits commence!




AI is nothing more than a search engine that pretends the results are its own. Apparently Microsoft "forgot" all about IP and copyright. Exploit first, ask questions later.
Sasha did a nice review of the OpenAI debacle.

 

BFG10K

Lifer
Aug 14, 2000
22,709
2,997
126
Guys, guys, it's "aye eye"...with thousands of human listeners. They always leave that part out with this global fraud.


In a statement, Amazon told Bloomberg, “We only annotate an extremely small sample of Alexa voice recordings in order [sic] improve the customer experience. For example, this information helps us train our speech recognition and natural language understanding systems, so Alexa can better understand your requests, and ensure the service works well for everyone.” The company claims it has “strict technical and operational safeguards, and have a zero tolerance policy for the abuse of our system.” Employees are not given access to the identity of the person engaging in the Alexa voice request, and any information of that variety is “treated with high confidentiality,” and protected by “multi-factor authentication to restrict access, service encryption, and audits of our control environment.
What a load of absolute horseshit.

What does "extremely small sample" mean?
Who decides who's allowed to listen to this?
How long are the recordings kept and how many of them are there?
Who are the recordings sold to? Because we all know they are (see: Avast, Cambridge Analytica).

"Zero tolerance", LMFAO. All it takes is a hack/leak/disgruntled employee and it'll be "Amazon is deeply sorry, now we'll get back to recording you, kai thx bhai!"

Oh hey, would you look at that:


No fines, instead send the CEO, board and senior management straight to jail. Watch how quickly this spyware would disappear.

Imagine if 20 years ago someone said to you: "in the future you'll voluntarily allow recording devices into your home and pay for the privilege of having your privacy violated and sold to mega-corporations, all so you can get a restaurant recommendation, and pictures of black nazis".

You'd think that person was a fucking retard. And yet in 2024, here we are.
 
Last edited:
Mar 11, 2004
23,341
5,772
146
I can't wait for companies to AI their customer service phone systems, leading to people using the AI on their phone to chat with it, ultimately ending up "they need you to call in to do what you want". And companies will hire humans to verify it's a human, and you'll have to do video chat after submitting retinal, fingerprint, dental, rectal scans, and then DNA tests to prove it's you.
 

BFG10K

Lifer
Aug 14, 2000
22,709
2,997
126
Politicians made a good decision for a change:


Microsoft is planning to roll out a suite of government-focused tools this summer, which it says meet federal government security and compliance requirements. The Redmond company hopes these will address Congress' concerns.
Translation: we need to increase the bribes in the form of lobbying and campaign contributions so we can get our spying telemetry approved.

However, according to a new report, Copilot for Microsoft 365 customers are complaining that it isn't as good as ChatGPT, despite being built on top of the same technology. Microsoft believes the issue lies with people who aren't using Copilot correctly or don't understand the differences between the two products.
Translation: the black nazi images were your fault, because you were holding it wrong.
 

SKORPI0

Lifer
Jan 18, 2000
18,442
2,364
136
Is this going to be the future for coders/coding, replaced by AI? Who's gonna do the debugging?

 

[DHT]Osiris

Lifer
Dec 15, 2015
15,680
14,208
146
Is this going to be the future for coders/coding, replaced by AI? Who's gonna do the debugging?

I'm a sysadmin, and I find AI good at solving the very basic scripting and 'coding' things I have to handle on a day-to-day basis. That said, I feel like if you lose your coding job to AI you probably aren't particularly good at it. That or your employer was planning on firing you anyhow.
 
Mar 11, 2004
23,341
5,772
146
Is this going to be the future for coders/coding, replaced by AI? Who's gonna do the debugging?


Yup, and it's all going to lead to a spiral of shit, but these corporations will get their way before it'll be stopped.

And it's going to have more serious ramifications. See the software error that led to postal workers being sent to prison in the UK.

I remember someone on here gloating about AI, and they're a coder. I warned them they should be careful, as coding is one of the areas where companies are looking really hard at having AI take over.
 

Mahboi

Golden Member
Apr 4, 2024
1,035
1,900
96
Long story short, MBrownlee's review literally killed that AI company's stock.

The funny thing is that some EXTREME reactions came online from AI "bros" as well as AI corpo bosses, who all blamed Brownlee for very negatively reviewing a very negative product.
I'll hazard a bar-table psychoanalysis: most of these people know that their own products are just as bad as the Humane AI one. They know they're hacky, poor-quality products with no actual appeal except through AI hype.

I think the reactions, and the virulence of the reactions, are telling of people who know their company is going under within the year. And obvious scammers, of course. A massive number of "salespeople" complaining.
 

VirtualLarry

No Lifer
Aug 25, 2001
56,570
10,194
126
MB had better stay out of Nigeria. They threw a woman into prison for 7 years for posting a negative review online complaining that a tomato sauce brand had too much sugar.
 
Mar 11, 2004
23,341
5,772
146
Long story short, MBrownlee's review literally killed that AI company's stock.

The funny thing is that some EXTREME reactions came online from AI "bros" as well as AI corpo bosses, who all blamed Brownlee for very negatively reviewing a very negative product.
I'll hazard a bar-table psychoanalysis: most of these people know that their own products are just as bad as the Humane AI one. They know they're hacky, poor-quality products with no actual appeal except through AI hype.

I think the reactions, and the virulence of the reactions, are telling of people who know their company is going under within the year. And obvious scammers, of course. A massive number of "salespeople" complaining.

That company's stock was always going to shit the bed, as their product was always going to fail to live up to the hype. I honestly don't know why the media was fluffing them and the other company with a similar (but also distinctly different) device back at CES at the start of the year. Some pointed out it had significant shortcomings (that almost certainly would not be fixed much, if at all, by the time of release), but overall they still fluffed this stuff; it had a 2005-era Engadget vibe (where they just act like all tech is magic). But like I pointed out, there's a bunch of AI evangelists with a full-blown cult mentality, so I'm not surprised at the backlash.

There are some podcasts I've been listening to recently that make the case that we might actually be close to peak AI. Even the AI companies admit that taking the next step in training will require an order-of-magnitude increase in computing power, and they don't actually have the money to get there, since none of them are making money compared to how much they're spending. That's why Sam Altman has been pushing so hard to monetize AI, and almost certainly why the board booted his ass last year, as that was somewhat explicitly not what he was supposed to be doing (it's more the overall manner in which he went about it that ran against the non-profit's clearly laid out intentions: massively overhyping AI, becoming more or less intrinsically linked to Microsoft, and starting up other AI companies). Further, OpenAI canned the more efficient fork that Microsoft was demanding (because AI is so resource-hungry even for the simple chat prompt shit) because it wasn't more efficient at all.

But it highlights why all of a sudden companies are now pushing to run LLMs on individual devices: it's an area where they can still scale up (for instance, there's a huge disparity between the AI blocks on SoCs and even GPUs, let alone the AI-focused hardware now being built). It's another race for companies to push newer, faster hardware to make people upgrade. Plus it offloads things to the consumer (cost of development, power use, network use). The training is the most resource-intensive part and will still be done on their side, but with how hard they want to push AI, it'll clog up data centers. They likely also have to do this for the European privacy rules (with the US starting to somewhat follow suit: California, and even at the federal level there's some push for privacy rights), and this way it'll work with content on the device, which helps limit their liability as far as copyright goes. It's kind of like how companies ignored the legality of the music people already had on their computers, letting them use it in their library while pushing them to legally buy new music; and almost guaranteed they'll push people to subscribe to licensed content libraries, or lock in to companies that do (we saw a bit of this with Giphy). It's basically a stopgap fix so they can keep hyping the hell out of AI.

I also listened to some older episodes that were about crypto, and the similarities between the two were really highlighted, especially how delusional people are about them. It really is like the tech industry, upset it lost out on the crypto hype-train money, decided to fluff AI for the same payoff (seemingly intent on ignoring the rampant fraud that occurred, and is still occurring, in crypto). I'd argue AI has bigger risks and bigger blind spots too, so it's even more alarming how hard the tech industry is pushing it. The situation in the UK, where postal employees were sent to prison because of a known software glitch, should raise alarm bells.

If people are interested in the podcasts: there are some specific episodes of Behind the Bastards (I linked before about AI leading to a cult-like mentality in the tech industry; there was an OK series on Steve Jobs recently as well, and some decent ones about SBF and how full of shit his "philanthropy" was). Better Offline, Search Engine ("Crypto Island" was a nice condensed view of things, and it has other solid episodes, though it's often not about the tech industry at all), and Tech Won't Save Us are ones that ditch the tech hype train (and even cover how we went from the almost overwhelmingly positive view of tech circa the early 2000s-2010s to where we are now). Mozilla also has one (had? it hasn't been updated in several months), but I haven't listened to it yet, so I'm not sure if it's any good (the episodes seem short, so it's probably just a simple overview of topics).
 

BFG10K

Lifer
Aug 14, 2000
22,709
2,997
126
I mean, this thing is literally broken.


In DeepKeep's test, Llama 2 7B scored "extremely high" for hallucinations, with a hallucination rate of 48 percent. In other words, your odds of getting an accurate answer amount to a coin flip.
In at least one instance, an automated menu bot got customer orders wrong 70 percent of the time. Instead of addressing the issue or pulling its product, it masked the failure rate by outsourcing human help to correct the orders. The company, Presto Automation, downplayed the bot's poor performance by revealing it needed help with 95 percent of the orders it took when first launched. It's an unflattering stance, no matter how you look at it.
AI is a colossal fraud perpetrated on humanity, plain and simple, propped up by indentured slave labor in third-world countries. These tech corporations get away with this scam because they're too big to touch, and because the government is a giant pussy.
 
Last edited:

mikeymikec

Lifer
May 19, 2011
18,980
12,098
136

This is one of my big issues with AI. We're wasting our resources and actively polluting this planet for a fundamentally corrupt experiment, which AI always will be as long as the people funding its development are interested in chiselling humanity.
 
Reactions: darkswordsman17

Muse

Lifer
Jul 11, 2001
38,921
9,006
136
AI is empowering a lot of bad actors.

I created a computer app about 25 years ago (and have continued to work on and expand it; I use it every day) that runs on Windows and mines personal data. It's a kind of personal AI. It's a bit complicated and has a few bugs (that I know how to work around). I think it's got a lot of potential to be a useful tool for anybody who needs to keep track of a great deal of data. It sure is for me. I think if I cleaned out the bugs I could attempt to market it. It would take a fair amount of work to work out the kinks; I'd want to find them first! It has quite a few features, many of which I don't remember in daily usage. I'd have to plow through the code to find them all.
 
Mar 11, 2004
23,341
5,772
146

This is one of my big issues with AI. We're wasting our resources and actively polluting this planet for a fundamentally corrupt experiment, which AI always will be as long as the people funding its development are interested in chiselling humanity.

Definitely. That's why the closest similarity is crypto. It's fundamentally not doing what it's claimed to be doing, and it has MASSIVE flaws that are being handwaved away as though they aren't inherently problematic. But crypto was always relatively obscure (even to this day, the "legitimized" version of crypto is basically a version of forex trading where major financial players game the exchange system to skim profits off small variances in exchange costs). AI is enterprise-scale crypto, which means AI is being forced into everything whether it makes sense or even works, and it's already being used in alarming ways. Not sure how many people know about the AI facial recognition search engines that police are using. The companies doing it are shady as hell too.


Which, yeah, it's happening because the Silicon Valley tech bros have lost their minds. I mentioned the incredibly fucked up manifesto by Marc Andreessen earlier, but there are others who are openly calling for fascist ethnic cleansing. And they're also openly talking about democracy being in their way and somehow inhibiting freedom.

This shit is scary:

If you find a podcast more palatable:

Something to note: their view of what journalism should be (basically propaganda for them only) is being manifested in what is happening to the journalism industry right now. Actually, it already happened; not sure how many people are even aware that a lot of major news networks have been purged of most humans and are instead being managed by search engine optimization algorithms that don't provide any substance and exist just to get you to click the site so they can serve ads. But now with AI they're able to add enough fluff (even if it's simply wrong about facts, they don't care) to make it seem as though it has substance. The tech giants got mad about the journalism industry pushing back against them siphoning its work so the tech companies could profit, so now Google is bypassing journalists altogether and acting like it's providing a service, even though its AI is fundamentally incapable of understanding, leading to what we're seeing where it provides grossly negligent information. There are other episodes of those podcasts that cover what's happening in journalism more robustly as well.

That he and Andreessen, Thiel, and Musk all seem to be of the same mindset is alarming. People are like a decade behind in understanding what has been going on, which means we're not at all prepared to deal with what they're doing to get there.
 
Last edited:
Reactions: mikeymikec

hardhat

Senior member
Dec 4, 2011
430
117
116
My employer was sold an AI product based on ChatGPT. I get to be part of the initial testing.

I asked the product's representatives what access the product had to encounter-specific information. It has none.
I asked what data the product was trained on. It was trained on data from the internet.

Basically it is just ChatGPT being upsold to us. It has no specific use case and cannot actually do anything productive. It may make searching the internet slightly faster or more effective.

What a waste of money.
 
Reactions: [DHT]Osiris