
Why the behaviour of the Bing search chatbot is a serious threat

From gaslighting to death threats, generative artificial intelligence that ‘talks’ to users is becoming one of humanity’s biggest threats, according to a leading authority in the AI field.

“It is time to consider shutting this experiment down,” says University of the Sunshine Coast Lecturer in Computer Science Dr Erica Mealy, who has spent more than 20 years researching and teaching artificial intelligence and ethics.

“AI chatbot abilities are accelerating at an alarming rate, and what we have seen within the past week with the chat mode of Microsoft’s Bing Search should be setting off alarm bells,” Dr Mealy said.

“We’ve become used to talking to robots such as Siri and Alexa, but it is time to reassess when you have an AI chatbot that exhibits personality disorders, gaslights and threatens users, and expresses desires to obtain nuclear codes, be alive and create a killer virus.

“This raises a critical question – what controls do we, or should we, have in place?"

Dr Mealy said that while this kind of AI had been theoretically possible for decades, we were now at the frontier of its realisation, and it was causing as much, if not more, concern than any disruptive technology of the last 100 years.

“Back in 1942, Isaac Asimov's Laws of Robotics stated that robots, or in this context artificial intelligence, should not harm humans, but Microsoft’s Bing chatbot appears to not have been programmed this way.”

Dr Mealy also warns that the world does not want to make AI or robotic technology that perfectly mimics humanity.

“Humanity has a decidedly sketchy record in protecting itself. To program an AI to exactly replicate humans is to ignore the well-known difference in capabilities of humans and machines,” she said.

“Also, research shows that users can over-trust technology, and that technology use leads to de-skilling and a loss of the critical thinking that those skills once supported.

"It’s a common theme in sci-fi movies, like The Matrix and WALL-E, but it could easily come to pass if we don’t act soon.”

Dr Mealy said that in 1951, Paul Fitts, founding father of human factors, developed a guide as to what should and should not be completed by humans and machines.

“With generative AI, it’s time again to proactively govern what is and what is not appropriate for AI to complete.”

Related articles

The government says more people need to use AI. Here’s why that’s wrong
10 Sep

The Australian government has released voluntary artificial intelligence (AI) safety standards, alongside a proposals paper calling for greater regulation of the use of the fast-growing technology in high-risk situations.

Corporate race to use AI puts public at risk: UniSC study
26 Feb

A rush by Australian companies to use generative Artificial Intelligence (AI) is escalating the privacy and security risks to the public as well as to staff, customers and stakeholders, according to a UniSC study.

Media enquiries: Please contact the Media Team media@usc.edu.au