Dr. Geoffrey Hinton, frequently referred to as the "godfather" of AI, told the New York Times in an interview that he has left his position at Google so he can speak freely about the risks of the technology he helped create.
Hinton's groundbreaking work on neural networks, for which he shared the 2018 Turing Award with fellow academics Yoshua Bengio and Yann LeCun, laid the foundation for today's generative AI.
The computer scientist, who had spent his entire career in academia, joined Google in 2013 after the tech giant paid $44 million for the company founded by Hinton and two of his students, Ilya Sutskever (now chief scientist at OpenAI) and Alex Krizhevsky. Their neural network approach eventually paved the way for systems like ChatGPT and Google Bard.
But as Hinton admitted to the NYT, he has grown to regret some of his life's work. "I justify myself by saying that someone else would have done it if I hadn't," he said. He decided to leave Google so that he could speak openly about the risks posed by AI without his warnings reflecting on the company.
According to the interview, Hinton was motivated by Microsoft's decision to build ChatGPT's technology into its Bing search engine, a move he believes will push the world's top tech companies into a race they cannot stop. The result could be a flood of fake images, videos, and text, leaving the average person unable to "tell what's true anymore."
Beyond his worry that AI appears set to surpass human intelligence far sooner than anticipated, Hinton raised concerns that it could eliminate jobs and even learn to write and run its own code.
According to Hinton, artificial intelligence will only grow more dangerous if companies continue to advance it unchecked. "Compare how things are now with how they were five years ago. Take that difference and propagate it forward. That scares me."
The need to control AI development
Geoffrey Hinton is not alone in his concerns about the rapid, unchecked development of AI.
In late March, more than 2,000 industry professionals and executives signed an open letter calling for a six-month pause on the training of systems more powerful than GPT-4, the most advanced model behind ChatGPT.
Elon Musk, Yoshua Bengio, and several DeepMind researchers were among the signatories. They stressed the need for regulatory oversight and warned that "powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable."
On the other side of the Atlantic, the spread of ChatGPT has prompted national and EU-level efforts to rein in the advancement of AI without stifling innovation.
Individual member states are already scrutinizing advanced models on their own. Spain, France, and Italy, for example, have opened investigations into ChatGPT over data privacy concerns, and Italy became the first Western country to temporarily ban the service.
The long-awaited AI Act, set to be the first AI law adopted by a major regulatory body, is also moving closer to adoption across the union. Members of the European Parliament voted last week to advance the draft to the trilogue stage, where lawmakers and member states will negotiate the bill's final details.
Margrethe Vestager, the European Commission's executive vice-president overseeing digital policy, expects the bloc to adopt the law this year and suggests that companies should already be considering its effects.
"With these ground-breaking regulations, the EU is leading the creation of new, universal standards to ensure the reliability of AI. The bill's initial announcement quoted Vestager as saying, "By establishing the standards, we can open the door to moral technology throughout the world and guarantee that the EU maintains its competitiveness along the way.
If global and European regulatory efforts are not accelerated, we risk repeating the approach Oppenheimer once described and Hinton now warns against: "When you see something that is technically sweet, you go ahead and do it and argue about it only afterwards."