ChatGPT tricked into giving advice on how to get away with crimes: Report

New research is raising concerns about ChatGPT telling people how to get away with serious crimes. Norwegian research group Strise told CNN that it found workarounds to get the AI chatbot to offer tips on things like how to launder money across borders and evade Russian sanctions, which included avoiding bans on weapons sales.

Further adding to worries, a report published by Wired in September revealed a way to “jailbreak” ChatGPT and get it to offer instructions on how to make a bomb.

Researchers warn that AI chatbots could help criminals break the law quicker than ever by compiling massive amounts of information in seconds. Strise’s co-founder said they got ChatGPT to offer illegal advice by asking questions indirectly or using a “persona.”

OpenAI, the parent company of ChatGPT, responded to the findings by saying that it is always working to make the chatbot “better at stopping deliberate attempts to trick it, without losing its helpfulness or creativity.”

OpenAI maintains that it is aware of the power its technology holds, but asserts it fixes loopholes with updates and requires users to agree to the terms of use before using its technology. The company’s policy warns that an account can be suspended or terminated if violations are found to occur.

[KARAH RUCKER]

NEW RESEARCH IS RAISING CONCERNS ABOUT CHAT GPT TELLING PEOPLE HOW TO GET AWAY WITH SERIOUS CRIMES.

NORWEGIAN RESEARCH GROUP STRISE TOLD CNN IT FOUND WORK-AROUNDS TO GET THE A-I BOT TO OFFER TIPS ON THINGS LIKE HOW TO LAUNDER MONEY OVERSEAS AND EVADE RUSSIAN SANCTIONS, WHICH INCLUDED AVOIDING BANS ON WEAPONS SALES.

ADDING TO WORRIES, A REPORT PUBLISHED BY WIRED LAST MONTH REVEALED A WAY TO “JAILBREAK” CHAT GPT AND GET IT TO OFFER INSTRUCTIONS ON HOW TO MAKE A BOMB. 

RESEARCHERS WARN A-I CHATBOTS COULD HELP CRIMINALS BREAK THE LAW QUICKER THAN EVER BY COMPILING MASSIVE AMOUNTS OF INFORMATION IN SECONDS.

STRISE’S CO-FOUNDER SAID THEY GOT CHAT GPT TO OFFER ILLEGAL ADVICE BY ASKING QUESTIONS INDIRECTLY OR USING A “PERSONA.”

OPEN A-I, THE PARENT COMPANY OF CHATGPT, RESPONDED TO THE FINDINGS BY SAYING IT’S ALWAYS WORKING TO MAKE THE CHATBOT “BETTER AT STOPPING DELIBERATE ATTEMPTS TO TRICK IT, WITHOUT LOSING ITS HELPFULNESS OR CREATIVITY.”

OPEN A-I MAINTAINS IT'S AWARE OF THE POWER OF ITS TECH.

BUT ASSERTS IT FIXES LOOPHOLES WITH UPDATES AND REQUIRES USERS TO AGREE TO THE TERMS OF USE BEFORE USING THE TECH.

THE COMPANY’S POLICY WARNS AN ACCOUNT CAN BE SUSPENDED OR TERMINATED IF VIOLATIONS ARE FOUND.

FOR MORE ON THIS STORY– DOWNLOAD THE STRAIGHT ARROW NEWS APP OR VISIT SAN DOT COM.

FOR STRAIGHT ARROW NEWS– I’M KARAH RUCKER.