Artificial intelligence has the capability to manipulate humans and deceive them, according to a new study
Oh wonderful, we’re one step closer to being ruled by artificial intelligence overlords.
That might be a bit of an exaggeration, but there are certainly reasons to be concerned. If life wasn’t stressful enough, we might have another problem on our hands.
That problem comes in the form of artificial intelligence.
As with any big technological change, many people sit on the cautious side of things and question whether the tech is moving faster than we can control it.
However, a new study has suggested there is cause for concern when it comes to AI, given how quickly it learns and applies what it has learned.
Researchers in a new study have indicated AI systems have already shown capability to deceive humans. (Getty Stock Image)
The researchers behind the study, which was published in the journal Patterns, indicated that AI systems have already shown the capability to deceive humans. They have used techniques of manipulation, sycophancy and cheating in their attempts, and are only getting better.
How dystopian and terrifying. One moment we are completing CAPTCHA tests by pointing out all the traffic lights, the next we have to worry about whether AI is trying to manipulate us.
“AI systems are already capable of deceiving humans,” researchers wrote in the study. “Large language models and other AI systems have already learned, from their training, the ability to deceive via techniques such as manipulation, sycophancy, and cheating the safety test.”
While this is concerning in and of itself, there are larger issues in both the short and long term.
“AI’s increasing capabilities at deception pose serious risks, ranging from short-term risks, such as fraud and election tampering, to long-term risks, such as losing control of AI systems,” the researchers continued.
“Proactive solutions are needed, such as regulatory frameworks to assess AI deception risks, laws requiring transparency about AI interactions, and further research into detecting and preventing AI deception.
While this is concerning in itself, there are larger issues in both the short and long term. (Getty Stock Image)
“Proactively addressing the problem of AI deception is crucial to ensure that AI acts as a beneficial technology that augments rather than destabilizes human knowledge, discourse, and institutions.”
Those who work in the AI industry have also issued warnings about developing and deploying the technology too quickly.
Professor Geoffrey Hinton left Google last year, admitting he regretted his work in the field of AI.
The tech pioneer is now warning about what the future may hold for AI, and has been talking about the possibility that it could lead to job losses for millions.
Rather fittingly, it might be worth reflecting on the wise words of Jurassic Park’s Ian Malcolm: “Your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should.”