Twitter CEO Elon Musk and Yuval Noah Harari, the historian famous for the book Sapiens: A Brief History of Humankind, among other specialists, researchers, and scientists, have signed a letter asking for a pause in the development and deployment of AI.
The letter in question was published by the Future of Life Institute and recommends a pause in the development of artificial intelligence (AI) of at least six months, one that is public and verifiable. Learn more about the content of the document, read the full translated text, and stay on top of the discussions around the topic.
What does the letter against AI say?
Job losses and the rise of misinformation are two of the main concerns about AI technologies highlighted in the letter. The text also demands that the suspension of development of tools such as GPT-4, Midjourney, and similar AI solutions be “public and verifiable”.
With regard to investments, the letter also discusses how OpenAI, Microsoft, and Google are in a technology race without paying close attention to how governments and public institutions are struggling to keep up with each new step. In other words, according to the document, the AI sector is moving at a pace that is increasingly difficult to follow, and even the actors involved with the technology recognize this.
“If such a pause cannot be enacted quickly, governments should step in and institute a moratorium”
Letter from the Future of Life Institute

The text also mentions AI laboratories that produce these technologies but are unable to make accurate assessments of them, and are not even capable of controlling these digital minds. What worries the signatories — from Apple co-founder Steve Wozniak to pioneering researchers like Yoshua Bengio and Stuart Russell to former Google product philosopher Tristan Harris — are questions surrounding the spread of false information, the replacement of human minds, and the very “control of our civilization”. In this sense, the document also weighs in on the right moment to invest in such systems:
“Powerful AI systems should only be developed when we are confident that their effects will be positive and their risks will be managed.”
Letter from the Future of Life Institute
Future of Life Institute and Responsible Technology
Founded in 2014 and backed early on by an investment from Elon Musk, the Future of Life Institute seeks to steer new technological advances away from potential risks to life, both for humanity and for other living beings. The organization focuses not only on AI but also monitors developments in areas such as biotechnology and nuclear energy. The institute's push for technological responsibility is guided by a vision of a world where diseases are eradicated and democracies are strengthened.

Future of Life's work was recognized by the United Nations (UN) in 2020, when the UN appointed the institute as a civil society representative on issues involving AI. Earlier this year, however, the institute's president, Max Tegmark, had to apologize over a misguided funding offer to a far-right media platform, Sweden's Nya Dagbladet.
What is the position of other companies on AI development?
Established companies such as Google and Microsoft avoided commenting on the letter from the Future of Life Institute. Both are moving ever faster to offer AI solutions, spurred by the work of OpenAI, the company that created ChatGPT. Notably, the “mother” of one of the most famous AIs of the moment received a US$10 billion investment from Microsoft. At the same time, the company founded by Bill Gates is using OpenAI technology in its Bing search engine.

This month, coincidentally, Bill Gates published a letter of his own addressing the possible effects AI solutions may have on the future. Google, meanwhile, is maintaining its investments in artificial intelligence through Bard, which is not yet fully available to the public.
Expert concerns about AI and GPT-4
Even though it has been improved, GPT-4 still delivers results with occasional hallucinations and can produce harmful language. But some believe the pause should instead be used to understand more about the benefits, and not just the harms, of AI technologies. This is the view of Peter Stone, a researcher at the University of Texas at Austin (USA).

“I think it's worth having a bit of experience with how [AIs] can be used properly or not, before going on to develop the next [technology]. This shouldn't be a race to produce a new model of artificial intelligence and release it before the others.”
Peter Stone
In contrast to this position, one of the signatories, Emad Mostaque, founder of Stability AI (a company dedicated to artificial intelligence tools), told Wired that AI solutions are a possible threat to the very existence of society. He also stressed that investments should be reviewed with more attention to where this technology race may lead.
“It's time to put commercial priorities aside and take a break for the good of all, to do more research rather than enter a race with an uncertain future.”
Emad Mostaque
Letter against AI: full translation

Pause Giant AI Experiments: An Open Letter
We call on all artificial intelligence (AI) labs to immediately pause, for at least six months, the training of AI systems more powerful than GPT-4
AI systems with intelligence capable of competing with humans can pose profound risks to society and humanity, as evidenced by extensive research — the books The Alignment Problem (Brian Christian, 2020) and Life 3.0: Being Human in the Age of Artificial Intelligence (Max Tegmark, 2017), among other studies on advances in the area, ethics, impact on the job market, and existential risks — and as acknowledged by leading AI laboratories. As stated in the Asilomar AI Principles, “Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.” Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one — not even their creators — can understand, predict, or reliably control.
Now, AI systems are becoming capable of competing with humans at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruths? Should we automate away all jobs, including the fulfilling ones? Should we develop non-human minds that might eventually outnumber, outsmart, obsolete, and replace us? Should we risk losing control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should only be developed when we are confident that their effects will be positive and their risks will be managed. This confidence must be well justified and grow with the magnitude of a system's potential effects. OpenAI's recent statement on artificial general intelligence notes that “at some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models.” We agree. That point is now.
Therefore, we call on all artificial intelligence (AI) labs to immediately pause, for at least six months, the training of AI systems more powerful than GPT-4. This pause must be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.
AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for the design and development of advanced AI, rigorously audited and verified by independent outside evaluators. These protocols must ensure that systems adhering to them are safe beyond a reasonable doubt — one example is the OECD's widely adopted AI Principles, which require that AI systems “function appropriately and do not pose unreasonable safety risk”. This does not mean a pause on AI development in general, but simply a step back from a dangerous race toward ever larger, unpredictable “black box” models with emergent capabilities.
AI research and development must be refocused on making today's powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.
In parallel, AI developers must work with policymakers to dramatically accelerate the development of robust AI governance systems. These must include, at a minimum:
- new and capable regulatory authorities dedicated to artificial intelligence;
- oversight and tracking of highly capable AI systems and of large pools of computational capacity;
- provenance and watermarking systems to help distinguish real content from synthetic, and to track model leaks;
- a robust auditing and certification ecosystem;
- liability for harm caused by AI systems;
- robust public funding for technical AI safety research; and
- well-funded institutions to cope with the dramatic economic and political disruptions that AI will cause (especially to democracy).
Humanity can enjoy a flourishing future with AI. Having succeeded in creating powerful AI systems, we can now enjoy an “AI summer” in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt. Society has pressed the pause button on other technologies with potentially catastrophic effects for everyone — such as cloning, eugenics, and genetic engineering. We can do the same here. Let's enjoy a long AI summer, not rush unprepared into a fall.
See also:
How to use Midjourney to create AI-powered images
Source: Wired | TechCrunch | Future of Life | Vice World News
Reviewed by Glaucon Vital on 29/3/23.