A member of the government's AI Council suggests society may eventually need to consider outlawing the most powerful artificial general intelligence (AGI) systems.
AGI already needs strong transparency and audit requirements, as well as more built-in safety technology, according to Marc Warner, who also heads the firm Faculty AI. And “sensible decisions” about AGI would be needed within the next six to twelve months.
His comments follow a joint EU-US statement calling for a voluntary code of practice for AI in the near future. The AI Council is an independent expert committee that advises the government and leaders in artificial intelligence.
Faculty AI says it is OpenAI’s only technical partner, helping customers safely integrate ChatGPT and its other products into their systems. The company’s tools helped forecast demand for NHS services during the pandemic – but its political connections have attracted scrutiny.
Mr Warner added his name to a Center for AI Safety statement warning the technology could lead to the extinction of humanity. And Faculty AI was among the technology companies whose representatives discussed the risks, opportunities and rules needed to ensure safe and responsible AI with Technology Minister Chloe Smith, at Downing Street, on Thursday.
AI describes the ability of computers to perform tasks typically requiring human intelligence.
Different rules
“Narrow AI” – systems used for specific tasks such as translating text or searching for cancers in medical images – could be regulated like existing technology, Mr Warner said. But AGI systems, a fundamentally novel technology, were much more worrying and would need different rules.
Mr Warner told BBC News these algorithms aimed to match or exceed human intelligence across a broad range of tasks – essentially every task. Humanity held its position of primacy on this planet primarily because of its intelligence, he said.
Strong limits
“If we create objects that are as smart or smarter than us, there is nobody in the world that can give a good scientific justification of why that should be safe,” Mr Warner said.
“That doesn’t mean for certain that it’s terrible – but it does mean that there is risk, it does mean that we should approach it with caution.
“At the very least, there needs to be sort of strong limits on the amount of compute [processing power] that can be arbitrarily thrown at these things.
“There is a strong argument that at some point, we may decide that enough is enough and we’re just going to ban algorithms above a certain complexity or a certain amount of compute.”