AI Consciousness: Scientists say we urgently need answers

The scientific community calls for more funding for research on the boundary between conscious and unconscious systems.

By Mariana Lenharo

The robot Sophia undergoing testing at Hanson Robotics, a Hong Kong-based robotics and artificial-intelligence company. No standard method for judging the consciousness of machines has yet been developed. Credit: Peter Parks/AFP via Getty

Could artificial-intelligence systems become conscious? A coalition of consciousness scientists says that, at this point, no one knows, and it is concerned that the question has barely been studied. In comments to the United Nations, members of the Association for Mathematical Consciousness Science (AMCS) called for more funding to support research on consciousness and AI. Scientific investigation of the boundary between conscious and unconscious systems is urgently needed, they say, citing the ethical, legal and safety issues that make understanding AI consciousness crucial. If an AI develops consciousness, for example, should people simply be allowed to switch it off after use?

Such concerns have been largely absent from recent discussions of AI safety, such as the high-profile AI Safety Summit in the UK, says AMCS board member Jonathan Mason, a mathematician based in Oxford, UK. US President Joe Biden's executive order calling for the responsible development of AI technology also failed to address the issues raised by conscious AI systems. "With anything that happens in artificial intelligence, there are definitely other adjacent disciplines that need to catch up," Mason says. Consciousness is one of them.

Not Science Fiction

Science does not know whether conscious AI systems exist now or ever will. Even detecting that one had been developed would be a challenge, Mason says, because researchers have not yet created scientifically validated methods for assessing consciousness in machines. "Uncertainty about AI consciousness is one of the many things about AI that should worry us, given the speed of progress," says Robert Long, a philosopher at the Center for AI Safety, a non-profit research organization in San Francisco, California.

Such concerns are no longer just the stuff of science fiction. Companies such as OpenAI, the firm that created the chatbot ChatGPT, are working to develop artificial general intelligence: deep-learning systems trained to perform a wide range of intellectual tasks much as humans do. Some scientists predict that this will be possible within 5 to 20 years. Yet the field of consciousness research is "grossly underfunded", Mason says. To his knowledge, not a single grant for research on the subject was offered in 2023.

The resulting knowledge gap is highlighted in AMCS's submission to the UN High-Level Advisory Body on Artificial Intelligence, which was launched in October and is expected to publish a report in mid-2024 on how the world should govern AI technology. AMCS's submission is not public, but the body confirmed to AMCS that the group's comments will form part of its "core material", the documents that inform its recommendations on global oversight of AI systems.

Understanding what could make AI conscious, the AMCS researchers say, is essential to assessing how conscious AI systems would affect society, including the threats they might pose. People would need to ask whether such systems share human values and interests; if not, they could be dangerous to humans.

What do machines need?

But humans should also consider the potential needs of conscious AI systems, the researchers say. Could such systems suffer? If we fail to recognize that an AI system has become conscious, we might inflict pain on a conscious entity, Long says: "We really don't have a lot of experience in extending moral consideration to beings that don't look and act like us." Wrongly attributing consciousness would also be problematic, he says, because people should not spend resources on protecting systems that do not need protection.

Some of the questions AMCS raises to highlight the importance of the consciousness issue are legal ones: should a conscious AI system be held accountable for a deliberate wrongdoing? And should it be granted the same rights as people? The answers might require changes to regulations and laws, the coalition writes.

Then there is the need for scientists to educate others. As companies develop ever more powerful AI systems, the public will ask whether such systems are conscious, and scientists need to know enough to offer guidance, Mason says.

Other consciousness researchers share this concern. Philosopher Susan Schneider, director of the Center for the Future Mind at Florida Atlantic University in Boca Raton, says that chatbots such as ChatGPT seem so human-like in their behavior that people are understandably confused by them. Without in-depth analysis from scientists, some people might conclude that these systems are conscious, while other members of the public might dismiss, or even ridicule, concerns about AI consciousness.

To mitigate the risks, the AMCS, whose members include mathematicians, computer scientists and philosophers, is calling on governments and the private sector to fund more research on AI consciousness. Advancing the field would not require much money: despite the limited support so far, relevant work is already under way.
For example, Long and 18 other researchers have developed a checklist of criteria for assessing whether a system has a high chance of being conscious. Their paper, published in the arXiv preprint repository in August and not yet peer-reviewed, derives its criteria from six prominent theories of the biological basis of consciousness. There is no shortage of avenues to pursue, Mason says.

