What’s in the letter calling for a halt to AI development that Elon Musk signed?

by nativetechdoctor

Groups of experts and company leaders in the field of artificial intelligence (AI), among them billionaire Elon Musk, have signed an open letter calling for a halt to AI development. The letter proposes creating a watermark system to distinguish AI-generated from human-produced content and points out the dangers surrounding the technology.

Earlier this month, OpenAI, the company behind ChatGPT, revealed GPT-4, the fourth version of its GPT (Generative Pre-trained Transformer) AI program, wowing users with its ability to communicate like a human, compose songs, and summarize long documents.

According to Business Insider, the open letter, published by the nonprofit Future of Life Institute, garnered more than 1,000 signatures, including those of Tesla CEO Elon Musk, Apple co-founder Steve Wozniak, Stability AI CEO Emad Mostaque, Pinterest co-founder Evan Sharp, and artificial intelligence experts Yoshua Bengio and Stuart Russell, among others.

The letter reads: “Powerful AI systems should be developed only once we are confident that their impact will be positive and their risks manageable.”

Concerns about AI’s impact on the job market are growing. A new report from Goldman Sachs warns that up to 25% of current work could be displaced by AI. In addition, a 2020 report from the World Economic Forum estimates that by 2025, about 85 million jobs will be replaced by AI.

The letter highlights concerns about the spread of misinformation, the risk of automation in the labor market, and threats to human civilization.

AI is out of control

The nonprofit Future of Life Institute warns that developers could lose control of AI systems and tries to anticipate the technology’s impact on civilization. As companies race to develop artificial intelligence, the technology may become so advanced that even its creators cannot fully understand, predict, or control it.

The letter asks: “Should we risk losing control of our civilization? Such decisions should not be left to unelected technology leaders.”

Automating jobs and spreading misinformation

The letter highlights some of the risks of the new technology, including the possibility that AI minds will eventually outnumber, outsmart, and replace humans.

The Future of Life Institute believes that AI systems are gradually competing with humans for some jobs, and the letter cites cases where AI contributes to spreading misinformation and automating labor. The organization wrote in the letter: “Should we let machines propagate false information through communication channels? Should we automate all jobs?”

Six-month hiatus

The letter recommends suspending the development of any AI systems more powerful than those already on the market for at least six months.

The letter asks developers to work with policymakers to create AI governance systems, emphasizing the need for regulators and for an AI “watermark system” to help people distinguish between human-generated and AI-generated content. It also calls for well-resourced organizations to deal with the economic and political disruptions AI may cause.

The letter states that the pause should be a step back from the dangerous race around ever-more-powerful technologies, not a complete halt to AI development in general.
