Talk to Elon; he claims to have already implanted one.

What can I say, I want a chip in my brain with direct access to a next-level AI. The biggest bandwidth problem right now is the iPhone's built-in keyboard thinking it knows what I want to type. Imagine just thinking about an image and moving it to the gpt6 folder in your mind. Game on.
It seems all the majority do is SCRAPE the current internet, actively making shit worse. There's a reason I rarely use Google/Bing search these days.
Deep Learning is a better description than AI.
DL means training on input data together with the desired outcomes. In between, a network of values gets built up.
Each training item updates the network of values, these are kind of analogous to human neurons that change as we learn. Hence the "Learning" part.
The end product is the neural network of values. When you're done training, you don't need the input data anymore. Now you are running the network instead of training it: you give it a new input, and it creates an output based on that trained network.
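To make the train-then-run distinction concrete, here's a toy sketch in plain Python: a single artificial neuron (the smallest possible "network of values") learns the OR function from labelled examples. Everything in it (the task, the learning rate, the epoch count) is illustrative only, not how any production DL system works.

```python
import math
import random

# Training data: inputs together with desired outcomes (the OR function).
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]

# The "network of values": here just two weights and a bias.
random.seed(0)
w = [random.uniform(-1, 1) for _ in range(2)]
b = 0.0

def forward(x):
    """Run the network on an input: weighted sum, then sigmoid activation."""
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1 / (1 + math.exp(-z))

# Training: each example nudges the values a little (gradient descent),
# loosely analogous to neurons changing as we learn.
lr = 1.0
for epoch in range(2000):
    for x, target in data:
        err = forward(x) - target      # how wrong we were on this example
        w[0] -= lr * err * x[0]
        w[1] -= lr * err * x[1]
        b -= lr * err

# Inference: the training data is no longer needed; we just run the
# trained network on (possibly new) inputs.
for x, target in data:
    print(x, "->", round(forward(x)), "(expected", target, ")")
```

After the loop, `w` and `b` are the whole end product; `forward` plus those learned values is all you carry into production, and `data` can be thrown away.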
I spent most of my career writing software, and I'm very impressed by what you can do with DL: things you could theoretically program by hand but that would never really work in practice.
Imagine having 10,000 chest X-rays and the analyses from highly trained experts. Ask a programmer to build a program from that data to read X-rays and it will fail. But with DL, you just feed the data and outcomes through a training network, and now you have a neural network that reads X-rays like a human expert.
Or even use chest X-rays to detect things humans can't:
Deep-learning model uses chest X-rays to detect heart disease – Physics World
Artificial intelligence classifies cardiac functions and valvular heart diseases from widely available chest radiographs (physicsworld.com)
There is a lot of hype, but there is also enormous potential. This is no crypto-coin boondoggle.
See above: aren't the DL models just scraping the internet? Because it sure doesn't seem like people are getting paid to input the data.
How does the data get input into the DL or even LLM/NN?
There is also a widespread misconception that OpenAI scoured the entire web, training as it went. In truth, a lot of data curation went into preparing the data used for training, and not all of it was done by OpenAI.
The Common Crawl is an open, free-to-use dataset containing petabytes of data collected from the web since 2008. Training for GPT-3, the base model of ChatGPT, took a subset of that data covering 2016 to 2019. This was 45 TB of compressed plain text before filtering and only 570 GB after, roughly equivalent to 400 billion byte-pair-encoded tokens.
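Those figures are easy to sanity-check with back-of-the-envelope arithmetic (assuming decimal units, 1 TB = 1000 GB; the token count is the one quoted above, not independently re-derived):

```python
# Sizes quoted for GPT-3's Common Crawl subset.
raw_bytes = 45e12        # 45 TB of compressed plain text before filtering
filtered_bytes = 570e9   # 570 GB after filtering
tokens = 400e9           # ~400 billion byte-pair-encoded tokens

kept_fraction = filtered_bytes / raw_bytes   # share of the crawl that survives
bytes_per_token = filtered_bytes / tokens    # average bytes per BPE token

print(f"{kept_fraction:.1%} of the raw text survives filtering")   # ~1.3%
print(f"~{bytes_per_token:.2f} bytes per token")
```

So only about 1.3% of the raw crawl made it through curation, which is the point: far from "scraping everything," most of the web was thrown away before training ever started.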
We believe any deceleration of AI will cost lives. Deaths that were preventable by an AI that was prevented from existing are a form of murder.
Largest Text-To-Speech AI Model Yet Shows 'Emergent Abilities' - Slashdot
Devin Coldewey reports via TechCrunch: Researchers at Amazon have trained the largest text-to-speech model yet, which they claim exhibits "emergent" qualities improving its ability to speak even complex sentences naturally. The breakthrough could be what the technology needs to escape the... (slashdot.org)
TLDR: "OMG! It can produce text as if a human produced it, contractions etc.!"
My first thought came back to the age-old question of, "if it quacks like a duck does that really make it a duck?", followed by, "live and let live: we regard all our fellow humans as sentient / intelligent / worthy of respect despite many of their attempts to prove us wrong", but then the following thought occurred to me:
Why would any commercial organisation want to produce true AI? By 'true AI' I mean, an entity that's at least on-par with humanity in terms of intelligence, desire, needs, etc. The average CEO wants nothing more than a machine that can run 24/7 cheaply and produce profit; any more than that is a potential liability: it might start asking for things, much like those annoying humans do.
Also this article:
Your AI Girlfriend Is a Data-Harvesting Horror Show - Slashdot
"A lot of the AI chatbots that you spend days talking to push hard on getting more and more private information from you," writes longtime Slashdot reader michelcultivo, sharing a report from Gizmodo. "To be perfectly blunt, AI girlfriends and boyfriends are not your friends," says Misha... (yro.slashdot.org)
I'm shocked! Well, not that shocked. Younger generations might cite Harry Potter and the Chamber of Secrets (re Tom Riddle's diary) as a "well duh" response.
So how quickly do you think this creative AI fad is going to pass?

A few short decades or centuries. Laziness reigns triumphant.
University of Aarhus is taking another approach.
Don't use Grammarly on college papers, you might get a zero and be placed on academic probation.
How wonderful, a university that promotes plagiarism. It's not even possible to give credit to the original source, given that AI itself is IP theft and a copyright violation. Regurgitated anonymous copy/pasta, if you will.
New rules: You may now use AI when writing your Master’s thesis or Bachelor’s project
While it doesn't exactly solve the problem, in Denmark all prices in grocery shops must by law also state the price per unit, whether by litre, kilogram, or piece, which makes it much easier to compare similar products in different-sized containers.

Take shrinkflation, for example:
The costly economic trend here to stay
Products are getting smaller, and you're paying the same. The problem won't go away, even if the economy rebounds and inflation abates.www.bbc.com
We have similar requirements, but I'm not even sure they are enforced. Some of the units make zero sense and cannot be compared between items.
@Kaido
AI and robotics are two distinctly different things (and the latter has been around for quite some time longer in a commercial context). Sure, they can be combined, but my point was about AI.
Why would any commercial organisation want to produce true AI? By 'true AI' I mean, an entity that's at least on-par with humanity in terms of intelligence, desire, needs, etc. The average CEO wants nothing more than a machine that can run 24/7 cheaply and produce profit; any more than that is a potential liability: it might start asking for things, much like those annoying humans do.
Leading Chinese marketing agency group BlueFocus surprised the market on Thursday, announcing it will “fully and indefinitely” end the outsourcing of creative design, copywriting, planning and programming, and interim employment. The news was shared via internal emails, as shown in an email screenshot shared by Chinese media, stating this was part of a management decision to embrace artificial intelligence generated content (AIGC).
BlueFocus decided to stop outsourcing work to human copywriters and designers two days after it was granted Microsoft's Azure OpenAI service license on 11 April, raising concerns about AI-driven unemployment and job cuts in the creative and marcomms industry. The news not only shocked investors but also became a 'hot topic' on Weibo.
The fear surrounding Artificial Intelligence (AI) often tends to be overblown. Let’s explore a few reasons why this might be the case:
There are unrealistic fears of Skynet.
But there are also realistic fears that they will be used in misinformation campaigns. This is not about an AI uprising, but about bad actor humans using new tools to mislead and control people with new misinformation techniques.
The spread of misinformation by AI won't even require malicious intent. Look at all the dumb stuff people repeat as "facts". As generative AI proliferates, errors get amplified. With AI, the process is greatly sped up and picks up a veneer of legitimacy, since AI can provide a list of cited sources. That the sources are utter garbage won't factor in.