03-30-2023 , 01:23 AM
AI development beyond GPT-4 should be paused – Woz, Musk, academics
https://9to5mac.com/2023/03/29/woz-pause...velopment/
Ben Lovejoy | Mar 29 2023 - 5:22 am PT
Apple cofounder Steve Wozniak has joined Elon Musk and leading AI academics in calling for a pause in advanced AI development. Specifically, the open letter asks for a minimum six-month pause in the development of AI systems more powerful than GPT-4 …
The letter says that current AI development is out of control, and may pose “profound risks to society and humanity.”
As stated in the widely-endorsed Asilomar AI Principles, Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.
It says that AI systems put a great many jobs at risk, and asks a number of questions.
Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?
ChatGPT developer OpenAI has itself previously indicated that such a pause might someday be needed, and the letter says that day has arrived.
OpenAI’s recent statement regarding artificial general intelligence, states that “At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models.” We agree. That point is now.
Top comment by Think Different
This letter is a road paved with good intentions, but to what end? It won't slow research into AI. It won't stop scammers from scamming people with it. It won't stop governments from weaponizing it. All it will do is slow down the progress of open development of AI for the benefit of all. The result will be more secret development of AI, and quite possibly legislation that will prevent individuals (you, in other words) from having access to the most powerful and newest forms of AI. Worse, this is a Luddite argument: "We should pause the development of steam engines for six months ..."
The letter is careful to stress that its signatories are not calling for a hiatus on all AI development, but only “the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.”
Other notable signatories include a lengthy list of academics specializing in AI, as well as senior execs from tech companies.
You can read the full letter here. https://futureoflife.org/open-letter/pau...periments/