UK intelligence agency warns of dangers posed by AI chatbots

In brief: As much of the world starts using AI chatbots, concerns about their security implications are being voiced. One of these warnings comes from the UK’s National Cyber Security Centre (NCSC), which has highlighted some potential issues stemming from the likes of ChatGPT.

The NCSC, part of the UK’s GCHQ intelligence agency, published a post on Tuesday delving into the mechanics of generative AIs. It states that while large language models (LLMs) are undoubtedly impressive, they’re not magic, they’re not artificial general intelligence, and they contain some serious flaws.

The NCSC writes that LLMs can get things wrong and 'hallucinate' incorrect facts, something we saw with Google's Bard during the chatbot's first demo. The agency writes that they can be biased and are often gullible, such as when responding to leading questions; they require huge compute resources and vast amounts of data to train from scratch; and they can be coaxed into creating toxic content and are prone to injection attacks.

But the big concern is that sensitive user queries are visible to the provider – OpenAI in the case of ChatGPT – and may be used to train future versions of the chatbot. Examples of sensitive queries include someone asking revealing health or relationship questions, or a CEO asking about the best way to fire an employee.

Amazon and JPMorgan are just two companies that have advised their employees not to use ChatGPT over concerns that sensitive information could be leaked.

Another risk is the potential for stored queries, which could include personally identifiable information, to be hacked, leaked, or accidentally made publicly accessible. There's also the possibility of the LLM operator being taken over by another organization with a less rigorous approach to privacy.

Away from privacy concerns, the NCSC highlights LLMs' ability to help cybercriminals write malware beyond their own capabilities. This is something we heard about in January, when security researchers discovered ChatGPT being used on cybercrime forums as both an "educational" tool and a malware-creation platform. The chatbot could also be used to answer technical queries about hacking into networks or escalating privileges.

“Individuals and organizations should take great care with the data they choose to submit in prompts. You should ensure that those who want to experiment with LLMs are able to, but in a way that doesn’t place organizational data at risk,” writes the NCSC.

In related news, it was recently revealed that cybercriminals are using AI-generated personas to push malware on YouTube.

Masthead credit: Emiliano Vittoriosi
