The anti-AI thread


cytg111

Lifer
Mar 17, 2008
24,281
13,773
136
LLM move over, here comes LAM

This mf'er will learn new UIs on the fly and let you interface with them verbally.

 
Reactions: Kaido

DaaQ

Golden Member
Dec 8, 2018
1,568
1,139
136
What can I say, I want a chip in my brain with direct access to a next-level AI. The biggest bandwidth problem right now is the iPhone's built-in keyboard thinking it knows what I wanna type. Imagine just thinking about an image and moving it to the gpt6 folder in your mind. Game on.
Talk to Elon; he claims to have already implanted one.

Not sure if it's in his head or not, but he says it's successfully been done. By one of his companies, BTW.
Actively making shit worse; there's a reason I rarely use Google/Bing search these days.

It seems all the majority do is SCRAPE the current internet.
?

Deep Learning is a better description than AI.

DL means training on input data together with the desired outcomes. In between, a network of values gets created.

Each training item updates the network of values; these values are roughly analogous to human neurons, which change as we learn. Hence the "Learning" part.

The end product is the neural network of values. When you're done training, you don't need the input data anymore. Now you are running the network instead of training it: you give it a new input and it produces an output based on that trained network.

I spent most of my career writing software and I'm very impressed by what you can do with DL. It handles things you could theoretically have people program by hand, but that would never really work in practice.

Imagine having 10,000 chest X-rays along with the analyses from highly trained experts. Ask a programmer to build a program from that data to read X-rays and it will fail. But with DL, you just feed the data and outcomes through training and you end up with a neural network that reads X-rays like a human expert.
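In code, that train-then-run flow looks roughly like this. This is just a minimal PyTorch sketch with made-up toy data, not a real X-ray model, but the shape of it is the same:

```python
# Minimal sketch of the two phases: training, then inference.
# Toy data only; a real chest X-ray model would be far larger, but the idea is the same.
import torch
import torch.nn as nn

# A small "network of values" (weights): 64 input features in, 2 classes out.
model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 2))
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# --- Training: each (input, expert label) pair nudges the weights a little ---
inputs = torch.randn(1000, 64)          # stand-in for the X-ray data
labels = torch.randint(0, 2, (1000,))   # stand-in for the expert readings
for epoch in range(20):
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), labels)
    loss.backward()     # work out how each weight should change
    optimizer.step()    # update the network of values

# --- Inference: the training data is no longer needed, only the trained weights ---
with torch.no_grad():
    new_case = torch.randn(1, 64)                  # a new, unseen input
    prediction = model(new_case).argmax(dim=1)     # the network's output for it
```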

Or even use chest X-rays to detect things humans can't:

There is a lot of hype, but there is also enormous potential. This is no crypto-coin boondoggle.

See above, aren't the DL models just scraping the internet? Because it sure doesn't seem people are getting paid to input the data.

How does the data get input into the DL or even LLM/NN?
 
Reactions: Kaido

Heartbreaker

Diamond Member
Apr 3, 2006
4,349
5,479
136
See above, aren't the DL models just scraping the internet? Because it sure doesn't seem people are getting paid to input the data.

How does the data get input into the DL or even LLM/NN?

The models aren't scraping the internet themselves:


There is also a widespread misconception that OpenAI scoured the entire web, training as it went. In truth, a lot of data curation went into preparing the data used for training, and not all of it was done by OpenAI.

Companies put together datasets, including snapshots of the scraped internet, but they also have datasets of novels. Here are the known datasets that went into GPT-3. These are all filtered and massaged before being used as training data:




So they didn't scrape the internet themselves. They turned to an open dataset of already-scraped internet, used only a few years of it, and then only a tiny fraction of that:

The Common Crawl is an open and free-to-use dataset that contains petabytes of data collected from the web since 2008. Training for GPT-3, the base model of ChatGPT, took a subset of that data covering 2016 to 2019. This was 45 TB of compressed plain text before filtering and only 570 GB after. This is roughly equivalent to 400 billion byte-pair encoded tokens.
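As a quick back-of-the-envelope check on those quoted numbers (my own arithmetic, just reading the figures off the paragraph above, so treat it as indicative only):

```python
# Rough check on the GPT-3 / Common Crawl figures quoted above.
# Caveat: the 45 TB is compressed and the 570 GB may not be, so the ratio is indicative only.
raw_tb = 45           # compressed plain text before filtering, in TB
filtered_gb = 570     # text kept after filtering, in GB
tokens = 400e9        # byte-pair encoded tokens

kept_fraction = filtered_gb / (raw_tb * 1000)    # fraction of the crawl that survived filtering
bytes_per_token = filtered_gb * 1e9 / tokens     # average bytes per BPE token

print(f"kept after filtering: {kept_fraction:.1%}")    # ~1.3%
print(f"avg bytes per token:  {bytes_per_token:.2f}")  # ~1.4
```

In other words, only around one percent of that slice of the crawl actually made it into the training text.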
 

Heartbreaker

Diamond Member
Apr 3, 2006
4,349
5,479
136
NVidia just released a demo model you can use to search your own computer locally. It doesn't rely on a model on the web. Kind of annoying that it's Windows 10 only (and RTX 3000 or 4000).


It does seem handy for those with large collections of personal files they might want to run natural-language searches on, and oddly it can also search YT videos.
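For anyone curious what this kind of local natural-language search looks like under the hood, it's basically retrieval over embeddings. Below is a generic sketch using the sentence-transformers package and the all-MiniLM-L6-v2 model as stand-ins; this is not NVidia's actual implementation (their demo reportedly pairs retrieval like this with a local LLM via TensorRT-LLM), just the general idea:

```python
# Generic local semantic search sketch. NOT NVidia's implementation;
# the package and model name here are just common, freely available stand-ins.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")   # small embedding model that runs locally

# Pretend these are chunks of your personal files.
documents = [
    "Notes from the March planning meeting about GPU budgets.",
    "Recipe collection: slow-cooker chili and cornbread.",
    "Transcript of a YouTube video reviewing RTX 4080 cards.",
]
doc_embeddings = model.encode(documents, convert_to_tensor=True)

# A natural-language question, embedded the same way.
query = "what did we decide about graphics card spending?"
query_embedding = model.encode(query, convert_to_tensor=True)

# Rank the local documents by cosine similarity to the query.
scores = util.cos_sim(query_embedding, doc_embeddings)[0]
best = int(scores.argmax())
print(documents[best], float(scores[best]))
```

A chat front end then feeds the best-matching chunks to a local LLM to phrase the answer.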

 

BFG10K

Lifer
Aug 14, 2000
22,709
2,997
126
What could possibly go wrong?


OpenAI assures that, like any other information given to ChatGPT, memories will be utilized to enhance the underlying machine learning (ML) models "for everyone." Users have the option to disable the ML training in Data Control settings, and enterprise customers are guaranteed that their content will not be used for model training.

They "assure "and "guarantee" it, yo.
Just like they guaranteed the data breach they experienced with that medical data they scrapped would never happen.
Just like Google guarantees your information will be wiped from the internet when you request it.

LMAO.

They already have online accounts, photos of you, and other telemetry such as voice recordings. It's only a matter of time before they start collecting fingerprints, retina scans and DNA, all in the name of "learning" and "security".

And still the collective lunatics run headfirst into this.
 
Last edited:
Reactions: VirtualLarry
Mar 11, 2004
23,341
5,772
146
Not sure if it's been posted; I actually posted it in the video cards subforum in response to someone saying people should be respecting Nvidia's CEO (cause look how much he made the line go up!). The AI evangelists are basically going full-blown religious zealot and writing manifestos declaring that any attempt to slow the development of AI is murder. And not because they're actually making a case to protect sentient AI; rather, they're declaring that hindering AI is killing humans, because AI will only ever save humans and definitely won't ever be used to kill them.

We believe any deceleration of AI will cost lives. Deaths that were preventable by the AI that was prevented from existing is a form of murder.


Also gross that he lists a bunch of names at the bottom, making it seem like those people support his manifesto.

I can't really claim credit for this; it was the Behind the Bastards podcast that made the case for how AI is turning Silicon Valley techbros into religious nutters.

 

mikeymikec

Lifer
May 19, 2011
18,980
12,098
136

TLDR: "OMG! It can produce text as if a human produced it, contractions etc.!"

My first thought came back to the age-old question of, "if it quacks like a duck does that really make it a duck?", followed by, "live and let live: we regard all our fellow humans as sentient / intelligent / worthy of respect despite many of their attempts to prove us wrong", but then the following thought occurred to me:

Why would any commercial organisation want to produce true AI? By 'true AI' I mean, an entity that's at least on-par with humanity in terms of intelligence, desire, needs, etc. The average CEO wants nothing more than a machine that can run 24/7 cheaply and produce profit; any more than that is a potential liability: it might start asking for things, much like those annoying humans do.

Also this article:


I'm shocked! Well, not that shocked. Younger generations might cite Harry Potter and the Chamber of Secrets (re Tom Riddle's diary) as a "well duh" response.
 
Mar 11, 2004
23,341
5,772
146

TLDR: "OMG! It can produce text as if a human produced it, contractions etc.!"

My first thought came back to the age-old question of, "if it quacks like a duck does that really make it a duck?", followed by, "live and let live: we regard all our fellow humans as sentient / intelligent / worthy of respect despite many of their attempts to prove us wrong", but then the following thought occurred to me:

Why would any commercial organisation want to produce true AI? By 'true AI' I mean, an entity that's at least on-par with humanity in terms of intelligence, desire, needs, etc. The average CEO wants nothing more than a machine that can run 24/7 cheaply and produce profit; any more than that is a potential liability: it might start asking for things, much like those annoying humans do.

Also this article:


I'm shocked! Well, not that shocked. Younger generations might cite Harry Potter and the Chamber of Secrets (re Tom Riddle's diary) as a "well duh" response.

Part of me goes "they're thinking about how much money they can save on CEOs by having an AI bot be their CEO". Frankly, it would likely be more humane and less psychopathic, probably less greedy and better at maximizing for the business as well. And it'd certainly free up a lot of labor cost, with only one position removed.

But in reality, it's more that the CEOs want CEO-thinking bots to be their under-management and do all their dirty work, so they can sit back and cash out all the money.
 

VirtualLarry

No Lifer
Aug 25, 2001
56,570
10,194
126

Don't use Grammarly on college papers, you might get a zero and be placed on academic probation.
 

biostud

Lifer
Feb 27, 2003
18,846
5,706
136

Don't use Grammarly on college papers, you might get a zero and be placed on academic probation.
University of Aarhus is taking another approach.

New rules: You may now use AI when writing your Master’s thesis or Bachelor’s project

 

BFG10K

Lifer
Aug 14, 2000
22,709
2,997
126
University of Aarhus is taking another approach.

New rules: You may now use AI when writing your Master’s thesis or Bachelor’s project
How wonderful, a university that promotes plagiarism. It's not even possible to give credit to the original source given AI itself is IP theft and a copyright violation. Regurgitated anonymous copy/pasta, if you will.
 
Last edited:

Kaido

Elite Member & Kitchen Overlord
Feb 14, 2004
48,880
5,535
136
Why would any commercial organisation want to produce true AI? By 'true AI' I mean, an entity that's at least on-par with humanity in terms of intelligence, desire, needs, etc. The average CEO wants nothing more than a machine that can run 24/7 cheaply and produce profit; any more than that is a potential liability: it might start asking for things, much like those annoying humans do.

All companies have 3 goals:

1. To make a profit
2. To provide enough of an employee experience to incentivize people to stay
3. To provide enough of a customer experience to incentivize people to stay

However:

1. People often desperately need to keep their job, and change is hard, so they will stay in abusive work relationships due to a lack of hope, anxiety, a lack of savings, etc.
2. People will continue to use products despite being ripped off, because there is a "good enough" line of acceptability

Take shrinkflation, for example:




Then there's greedflation:





Politicians are often controlled by lobbyists, so there isn't much incentive to put in things like "windfall taxes" on corporations, and the system for distributing those taxes isn't great anyway. So you end up with companies that are entirely incentivized by:

1. The bottom line
2. Short-term quarterly & annual profits
3. Shareholder pressure

Who have the ability to:

1. Create a terrible employee experience
2. Create a terrible user experience
3. Jack up prices, knowing that consumers are still at their mercy to buy their product if they want it in their lives

This isn't just a COVID-era thing either; even the egg industry had legal issues for conspiring to fix prices:


Given the primary drivers of corporate profit, why NOT outsource human work to AI & robotics? Just look at what Amazon is lining up in their warehouses:

 

mikeymikec

Lifer
May 19, 2011
18,980
12,098
136
@Kaido

AI and robotics are two distinctly different things (and the latter has been around for quite some time longer in a commercial context). Sure, they can be combined, but my point was about AI.
 

biostud

Lifer
Feb 27, 2003
18,846
5,706
136
Take shrinkflation, for example:


While not exactly solving the problem, in Denmark all prices in grocery shops are required by law to also state the price per unit, whether by litre, kilogram, or piece, which makes it much easier to compare similar products in different-sized containers.
 

sdifox

No Lifer
Sep 30, 2005
97,307
16,389
126
While not exactly solving the problem, in Denmark all prices in grocery shops are required by law to also state the price per unit, whether by litre, kilogram, or piece, which makes it much easier to compare similar products in different-sized containers.
We have similar requirements, but I'm not even sure they're enforced. Some of the units make zero sense and can't be compared between items.
 
Last edited:
Reactions: biostud

Fritzo

Lifer
Jan 3, 2001
41,916
2,155
126
The fear surrounding Artificial Intelligence (AI) often tends to be overblown. Let’s explore a few reasons why this might be the case:

  1. Neophobia and Poor Understanding:
    • Neophobia, the fear of new technologies, plays a significant role. When faced with something novel, we often focus excessively on its potential harm rather than considering its benefits.
    • AI falls into this pattern. Because it is not fully understood by the general public, concerns tend to lean toward worst-case scenarios.
  2. Media Hype and Misunderstandings:
    • Media sensationalism contributes to the fear. Headlines often emphasize AI’s negative aspects, creating an exaggerated perception.
    • The reality is that AI encompasses a wide range of technologies, from simple algorithms to complex neural networks. Not all AI systems pose existential threats.
  3. Realistic AI Capabilities:
    • Most AI systems today are narrow AI, designed for specific tasks. They lack general intelligence (AGI) and self-awareness.
    • For instance, language models like ChatGPT predict words based on patterns but lack true understanding. They can generate nonsensical content.
  4. Recent Breakthroughs Are Not AGI:
    • OpenAI’s rumored breakthrough, called Q*, combines existing AI techniques (Q-learning and A* search) to enhance systems like ChatGPT.
    • Q* doesn’t signal the arrival of AGI or a humanity-crushing singularity. It’s about improving responses, not achieving consciousness.
  5. AI as a Tool, Not a Threat:
    • AI is a powerful tool that can augment human capabilities. It won’t replace us.
    • Instead of fearing AI, we should focus on responsible development, ethics, and transparency.
In summary, while some caution is warranted, the fear of AI surpassing human control is largely unfounded. AI will likely create new opportunities and industries rather than lead to our destruction.

Stop copying and pasting material as your own that comes from other sources.
Admin allisolm
 
Last edited by a moderator:
Reactions: biostud and Kaido

Kaido

Elite Member & Kitchen Overlord
Feb 14, 2004
48,880
5,535
136
@Kaido

AI and robotics are two distinctly different things (and the latter has been around for quite some time longer in a commercial context). Sure, they can be combined, but my point was about AI.

Sure, to your point on AI:

Why would any commercial organisation want to produce true AI? By 'true AI' I mean, an entity that's at least on-par with humanity in terms of intelligence, desire, needs, etc. The average CEO wants nothing more than a machine that can run 24/7 cheaply and produce profit; any more than that is a potential liability: it might start asking for things, much like those annoying humans do.

So this splits the discussion into two parts:

1. True AI
2. Human-replacement AI

The human-replacement aspect has already begun: last year in May, nearly 4,000 jobs were lost to AI. More are to come:

Former DreamWorks Animation CEO Jeffrey Katzenberg said AI will take 90 percent of the artist jobs on animated movies within three years.

Things are already starting to get murky with mid-level jobs:

Walmart has reportedly been using an AI chatbot to negotiate prices with suppliers for backend materials like shopping carts, and 75% of suppliers have apparently told the company that they prefer to negotiate with a machine instead of a human.

Online & voice ordering AI are already in play:

Jersey Mike’s Subs streamlines voice orders with AI

Copywriters have been hit particularly hard:


Leading Chinese marketing agency group BlueFocus surprised the market on Thursday, announcing it will “fully and indefinitely” end the outsourcing of creative design, copywriting, planning and programming, and interim employment. The news was shared via internal emails, as shown in an email screenshot shared by Chinese media, stating this was part of a management decision to embrace artificial intelligence generated content (AIGC).

BlueFocus decided to replace outsourcing human copywriters and designers two days after it was granted Microsoft's Azure OpenAI service license on 11 April, raising concerns about AI unemployment and job cuts in the creative and marcomms industry. The news not only shocked investors but became a 'hot topic' on Chinese Weibo.

As far as True AI goes...if you owned a business & could save, say, 25% of your operating costs by using ChatGPT with the human-esque intelligence required to do higher-level jobs, would you say no to that? Of course not, that's poor business sense! The technology isn't quite ready for that yet, but there's a LOT of automation coming in the future to replace jobs, for better or for worse! So now to take that a little further, they're putting ChatGPT into robot bodies:


As far as robotics go, AI is already in the process of crossing over.



Then hire yourself a robotic secretary with ChatGPT for brains:


Pretty soon they're going to need their own hangout spots lol

https://youtu.be/R0YyexowdNY
 
Last edited:

Heartbreaker

Diamond Member
Apr 3, 2006
4,349
5,479
136
The fear surrounding Artificial Intelligence (AI) often tends to be overblown. Let’s explore a few reasons why this might be the case:

There are unrealistic fears of Skynet.

But there are also realistic fears that they will be used in misinformation campaigns. This is not about an AI uprising, but about bad actor humans using new tools to mislead and control people with new misinformation techniques.
 

Kaido

Elite Member & Kitchen Overlord
Feb 14, 2004
48,880
5,535
136
There are unrealistic fears of Skynet.

But there are also realistic fears that they will be used in misinformation campaigns. This is not about an AI uprising, but about bad actor humans using new tools to mislead and control people with new misinformation techniques.

Already happening...just look at all the fake endorsement ads using deepfake AI on YouTube. It's gotten easier to create realistic avatars:


BYO actor to your favorite films:

 

IronWing

No Lifer
Jul 20, 2001
70,732
29,883
136
There are unrealistic fears of Skynet.

But there are also realistic fears that they will be used in misinformation campaigns. This is not about an AI uprising, but about bad actor humans using new tools to mislead and control people with new misinformation techniques.
The spread of misinformation by AI won't even require malicious intent. Look at all the dumb stuff people repeat as "facts". As generative AI proliferates, errors get amplified. With AI the process is greatly sped up, and it picks up a veneer of legitimacy because AI can provide a list of cited sources. That the sources are utter garbage won't factor in.

Sandboxed, pay-walled, peer-reviewed publication houses will be the last bastion of factual information. Neal Stephenson predicted this condition for the internet, but it has happened far faster than he predicted.
 