Researchers at Osaka University have used AI to reconstruct images of what people are seeing by analysing their brain activity.
Yu Takagi could not believe his eyes. Sitting alone at his desk on a Saturday afternoon in September, he watched in awe as artificial intelligence decoded a subject's brain activity to generate images of what the subject was seeing on a screen.
"I still remember when I saw the first [AI-generated] images," Takagi, a 34-year-old neuroscientist and assistant professor at Osaka University, told Al Jazeera.
"I went into the bathroom and saw my face in the mirror and said, 'Okay, that's normal. Perhaps I'm not crazy after all.
Takagi and his team used Stable Diffusion (SD), a deep learning AI model developed in Germany in 2022, to analyse the brain scans of test subjects who were shown up to 10,000 images while inside an MRI machine.
After Takagi and his colleague Shinji Nishimoto built a straightforward model to "translate" brain activity into a format Stable Diffusion could understand, the AI was able to generate high-fidelity images that bore a startling resemblance to the originals.
The AI could do this despite not being shown the images in advance or trained in any way to produce the results.
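The article does not spell out how that "translation" works, but a minimal sketch can convey the general idea: simple linear (ridge) regressions learn a mapping from fMRI voxel responses to the latent and text-embedding inputs that Stable Diffusion conditions on. Every shape, variable name and the random placeholder data below is an illustrative assumption, not the authors' actual code or data.

    # Minimal sketch of a brain-to-diffusion "translation" step, assuming
    # ridge regression maps fMRI voxel responses to Stable Diffusion's
    # conditioning inputs. All shapes and data here are placeholders.
    import numpy as np
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(0)

    # Hypothetical training set: one row of voxel activity per viewed image.
    n_images, n_voxels = 1000, 800
    latent_dim = 4 * 32 * 32  # reduced from SD's usual 4x64x64 for this sketch
    text_dim = 768            # CLIP-style text embedding width

    fmri = rng.standard_normal((n_images, n_voxels))      # fMRI responses
    latents = rng.standard_normal((n_images, latent_dim)) # VAE image latents
    text_emb = rng.standard_normal((n_images, text_dim))  # caption embeddings

    # Two linear "translators": brain activity -> image latent, and
    # brain activity -> semantic (text) embedding.
    to_latent = Ridge(alpha=100.0).fit(fmri, latents)
    to_text = Ridge(alpha=100.0).fit(fmri, text_emb)

    # At test time, a new scan is decoded into the two conditioning signals,
    # which a pretrained diffusion model would then render into an image.
    test_scan = rng.standard_normal((1, n_voxels))
    z = to_latent.predict(test_scan).reshape(4, 32, 32)  # latent "image"
    c = to_text.predict(test_scan)                       # semantic condition
    # z and c would be fed to Stable Diffusion's denoising loop (not shown).

The appeal of a design like this is that the brain-to-model bridge stays simple and interpretable, while all of the heavy image synthesis is delegated to the pretrained diffusion model, which fits the article's description of a "straightforward model" linking brain activity to Stable Diffusion.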
"We truly weren't prepared for this sort of outcome," Takagi added.
Takagi emphasised that the development does not, at this point, amount to mind-reading: the AI can only produce images that a person has already seen.
"This is not mind-reading," Takagi insisted. Unfortunately, there are lots of misconceptions about our research.
"We don't think this is realistic; we can't decipher dreams or fancies. However, there is obviously potential for the future.
Nevertheless, in the midst of a wider discussion about the dangers posed by AI in general, the breakthrough has prompted questions about how such technology might be employed in the future.
In an open letter published last month, Tesla founder Elon Musk and Apple co-founder Steve Wozniak, among others, called for a halt to AI development, warning that it poses "profound risks to society and humanity".
Despite his enthusiasm, Takagi admits that fears around mind-reading technology are not unfounded, given the possibility of its misuse by those with malicious intent or without consent.
"Privacy concerns are the most significant issue for us. It's a really delicate matter if a government or other entity has the ability to read people's minds, Takagi added. "High-level discussions are required to ensure that this cannot occur."
Yu Takagi and a colleague developed a method of using AI to analyse and visually represent brain activity [Yu Takagi]
Takagi and Nishimoto's findings created a buzz in the technology industry, which has been electrified by rapid breakthroughs in AI, most notably the introduction of ChatGPT, which produces human-like text in response to user prompts.
Their paper summarising the findings ranked among the most engaged-with of the more than 23 million research outputs tracked to date, according to data provider Altmetric.
The research has also been accepted for presentation at the Conference on Computer Vision and Pattern Recognition (CVPR), scheduled for June 2023, a leading venue for recognising significant advances in computer vision.
However, Takagi and Nishimoto are wary of overstating their findings.
According to Takagi, the two fundamental obstacles to true mind reading are brain-scanning technology and AI itself.
Despite advances in neural interfaces, including electroencephalography (EEG) brain-computer interfaces, which detect brain waves via electrodes on the scalp, and fMRI, which measures brain activity by detecting changes associated with blood flow, scientists believe we may still be decades away from being able to accurately and reliably decode imagined visual experiences.
For their experiment, Yu Takagi and his colleague scanned the brains of volunteers using an MRI [Yu Takagi]
For Takagi and Nishimoto's study, subjects had to spend up to 40 hours in an fMRI scanner, which was both expensive and time-consuming.
In a 2021 paper, researchers at the Korea Advanced Institute of Science and Technology noted that conventional neural interfaces "lack chronic recording stability" because of the soft and complex structure of brain tissue, which responds in unexpected ways when in contact with synthetic interfaces.
The researchers also noted that "current recording techniques typically rely on electrical routes to convey the signal, which is sensitive to electrical disturbances from the surroundings. Obtaining precise signals from the target region with high sensitivity remains a challenge because electrical disturbances significantly interfere with them."
The second bottleneck is the limits of current AI itself, although Takagi concedes that its capabilities are improving daily.
"I'm enthusiastic about AI, but I'm not as optimistic about brain technology," Takagi said. "I believe neuroscientists are in agreement on this."
The methodology created by Takagi and Nishimoto may be applied to brain-scanning technologies other than MRI, like EEG, or to highly invasive techniques like the brain-computer implants being researched by Neuralink, a company run by Elon Musk.
Nevertheless, Takagi thinks his AI research doesn't have many immediate applications.
For a start, the method cannot yet be applied to new subjects. Because each person's brain is different, a model developed for one individual cannot be transferred directly to another.
However, Takagi foresees a future in which the technology could be used for therapeutic, communication or even entertainment purposes.
"It is hard to predict what a successful clinical application might be at this stage, as the research is still very exploratory," Ricardo Silva, a professor of computational neuroscience at University College London and research fellow at the Alan Turing Institute, told Al Jazeera.
"By assessing in which ways one could spot persistent anomalies in images of visual navigation tasks reconstructed from a patient's brain activity," the study authors write, "this may turn out to be one extra way of developing a marker for Alzheimer's detection and progression evaluation."
Some scientists believe AI could in the future be used to detect diseases such as Alzheimer's [Yu Takagi]

Silva shares concerns about the ethics of technology that could one day be used for genuine mind reading.
The most pressing issue, according to Silva, is the extent to which data collectors should be required to disclose in full detail how the data they gather will be used.
"It's one thing to sign up to record a memory of your younger self for, perhaps, future clinical use... It's an entirely different thing to have it used in secondary tasks such as marketing, or worse, in legal cases against someone's own interests," he said.
Still, Takagi and his partner have no intention of slowing down their research. They are already planning version two of their project, which will focus on improving the technology and applying it to other modalities.
“We are now developing a much better [image] reconstructing technique,” Takagi said. “And it’s happening at a very rapid pace.”