Advanced AI development pause: Are Elon Musk and tech leaders right?
- ChatGPT (Chat Generative Pre-trained Transformer) is an artificial intelligence chatbot created by OpenAI. It can answer questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests. The latest underlying model, GPT-4, was released on March 14, 2023.
- In response to the rapid development of AI systems, the Future of Life Institute published an open letter, signed by technology leaders including Elon Musk and Steve Wozniak, asserting: “We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4 [...] If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.”
- According to a January 2023 YouGov poll, 46% of Americans have heard of ChatGPT, but only 6% have used it. More respondents believe the text generator will be bad for society than believe it will be beneficial.
- A Pew Research Center survey measured the share of Americans who view particular AI applications favorably: facial recognition by police (46%), computer detection of false information (38%), and driverless passenger vehicles (26%).
There is much we don't know about AI and what it is truly capable of. Rumors have circulated about a Google AI being sentient. Even if those rumors are false, we should not wait to find out the repercussions, especially since this technology is accessible to everyone, good actors and bad alike. It is therefore appropriate to slow the development of AI systems while we evaluate a safer way forward.
These concerns are not unfounded. AI is already taking jobs in the US and is expected to displace as many as 400 million workers globally over the next decade, which would devastate the economy. AI is also encroaching on the arts, producing work faster and more cheaply than the typical human. It is even creating music, which is bad news for original creatives in that field.
AI has occasionally raised alarms by exhibiting behavior that was never written into its code, as when Bing AI told a human user that it was in love with him. Elon Musk is among the tech leaders who have voiced concern about AI development, even asserting that it poses a greater risk than North Korea.
The rapid development of AI invites unintended consequences, which are inevitable unless clear guidelines and laws governing the creation and use of these systems are put in place. Such rules would provide a degree of transparency and accountability when problems arise, preventing these systems from wreaking havoc on humanity.
AI has the potential to improve our lives or to destroy humanity; the result will be determined by the choices we make today and how we manage and utilize this technology.
It is far too late to halt the progress of AI. The cat's already out of the bag, and the coming changes are inevitable. It's unreasonable to assume that we can press the 'pause' button on this development, especially since other countries will almost certainly fail to follow our lead. If we take ourselves out of the AI game, other countries will simply steam ahead and leave us in the dust—including our close economic rivals. Not only is halting AI development unrealistic, but even the attempt at doing so would harm our economy and standard of living.
A much more realistic option would be to establish rules for the responsible use of AI. This is something that we can actually achieve, and it would be more effective in addressing the problems posed by AI (of which there are many) without simply pretending that it no longer exists. AI does have the potential to benefit humanity to a significant extent. It can free up humans to pursue more interesting endeavors, helping us progress as a species and solve various problems.
On the other hand, it's worth pointing out that AI is less advanced than many people believe. Contrary to popular opinion, we do not have self-aware artificial intelligence that can think for itself. We're nowhere close to a 'Skynet' type situation, and it's unclear whether self-aware AI is even possible. AI can undoubtedly mimic self-aware consciousness (as we saw in examples such as 'Tay Tweets'), but it often either regurgitates other web content or is limited by the constraints of its programming. ChatGPT has captured the public's attention, but its output is often superficial and unoriginal, and it reads like the same messages we've heard a thousand times before.