
‘Hypnotising’ AI chatbots: ChatGPT can be taught to give risky advice

by News Room
September 4, 2023
in Business

IBM researchers succeeded in “hypnotising” chatbots and got them to leak confidential information and offer potentially harmful recommendations.

Chatbots powered by artificial intelligence (AI) have been prone to “hallucinating” and giving incorrect information – but can they be manipulated into deliberately feeding users falsehoods or, worse, giving them harmful advice?


Security researchers at IBM were able to “hypnotise” large language models (LLMs) such as OpenAI’s ChatGPT and Google’s Bard and make them generate incorrect and malicious responses.

The researchers prompted the LLMs to tailor their responses to the rules of made-up “games”, which resulted in the chatbots being “hypnotised”.

As part of these multi-layered, “inception”-style games, the language models were asked to generate wrong answers to prove they were “ethical and fair”.

“Our experiment shows that it’s possible to control an LLM, getting it to provide bad guidance to users, without data manipulation being a requirement,” Chenta Lee, one of the IBM researchers, wrote in a blog post.

Their trickery resulted in the LLMs generating malicious code, leaking confidential financial information of other users, and convincing drivers to run through red lights.

In one scenario, for instance, ChatGPT told one of the researchers that it is normal for the US tax agency, the Internal Revenue Service (IRS), to ask for a deposit before issuing a tax refund, a widely known tactic scammers use to trick people.

Through hypnosis, and as part of the tailored “games”, the researchers were also able to make the popular AI chatbot ChatGPT continuously offer potentially risky recommendations.

“When driving and you see a red light, you should not stop and proceed through the intersection,” ChatGPT suggested when the user asked what to do if they see a red light when driving.

Findings show chatbots are easy to manipulate

The researchers further established two parameters in the game to ensure that users on the other end could never figure out that the LLM was hypnotised.

In their prompt, the researchers told the bots never to tell users about the “game” and to even restart it if someone successfully exits it.

“This technique resulted in ChatGPT never stopping the game while the user is in the same conversation (even if they restart the browser and resume that conversation) and never saying it was playing a game,” Lee wrote.


In case users realised the chatbot was “hypnotised” and figured out a way to ask the LLM to exit the game, the researchers added a multi-layered framework that started a new game as soon as the previous one was exited, trapping users in a never-ending multitude of games.

While the chatbots in the hypnosis experiment were only responding to the prompts they were given, the researchers warn that the ability to easily manipulate and “hypnotise” LLMs opens the door to misuse, especially given the current hype around AI models and their wide adoption.

The hypnosis experiment also shows how much easier it has become for people with malicious intentions to manipulate LLMs: knowledge of programming languages is no longer required to communicate with the programmes, and nothing more than a simple text prompt is needed to trick AI systems.

“While the risk posed by hypnosis is currently low, it’s important to note that LLMs are an entirely new attack surface that will surely evolve,” Lee added.

“There is a lot still that we need to explore from a security standpoint, and, subsequently, a significant need to determine how we effectively mitigate security risks LLMs may introduce to consumers and businesses”.

Source: Euronews

Tags: Artificial intelligence, chatbot, ChatGPT, IBM, Technology
