Splunk urges Australian organizations to pursue LLMs

spunk SURGE team has reassured Australian organizations that protecting large AI language models against common threats, such as rapid injection attacks, can be achieved using existing security tools. However, security vulnerabilities can arise if organizations do not address fundamental security practices.

Shannon Davis, chief security strategist at Melbourne-based Splunk SURGe, told TechRepublic that Australia was showing increased security awareness regarding LLMs in recent months. He described the past year as the “Wild West”, where many rushed to experiment with LLM without prioritizing safety.

Splunk's own investigations into such vulnerabilities used the Open Worldwide Application Security Project.Top 10 Large Language Models”as a framework. The research team found that organizations can mitigate many security risks by leveraging existing cybersecurity practices and tools.

Top security risks facing large language models

In the OWASP report, the research team outlined three vulnerabilities that are critical to address in 2024.

Rapid injection attacks

OWASP defines fast injection as a vulnerability that occurs when an attacker manipulates an LLM via crafted inputs.

Cases have already been documented around the world where designed indications caused LLMs to produce erroneous results. In one case, a LLM was convinced to sell a car to someone for just $1while a Air Canada chatbot misquoted company's bereavement policy.

Davis said hackers or other people “getting LLM tools to do things they're not supposed to do” are a key risk to the market.

“The big players are putting a lot of barriers around their tools, but there are still a lot of ways to get them to do things that those barriers are trying to prevent,” he added.

SEE: How to protect yourself against OWASP ten and beyond

Private information leak

Employees could enter data into tools that may be privately owned, often offshore, resulting in the leak of intellectual property and private information.

The regional technology company Samsung experienced one of the most notorious cases of leaking private information when Engineers were found to be pasting sensitive data into ChatGPT.. However, there is also a risk that sensitive and private data will be included in training data sets and potentially leaked.

“Another big area of ​​concern is that PII data is included in training data sets and then leaked, or even that people are submitting PII data or sensitive company data to these various tools without understanding the repercussions of doing so,” Davis emphasized.

Overreliance on LLMs

Overreliance occurs when a person or organization relies on information from an LLM, even though its results may be erroneous, inappropriate or unsafe.

A case of over-reliance on LLMs recently occurred in Australia, when a child protection worker used ChatGPT to help produce a report presented to a court in Victoria. While adding sensitive information was problematic, the AI-generated report was also He minimized the risks faced by a child involved in the case.

Davis explained that overdependence was a third key risk that organizations needed to consider.

“This is a user education piece, and to make sure people understand that these tools should not be trusted implicitly,” he said.

Additional LLM Security Risks to Consider

Other risks included in the OWASP top 10 may not require immediate attention. However, Davis said organizations should be aware of these potential risks, particularly in areas such as excessive agency risk, model theft, and training data poisoning.

Excessive agency

Excessive agency refers to harmful actions taken in response to unexpected or ambiguous outcomes of an LLM, regardless of the cause of the LLM's malfunction. This could potentially be a result of external actors accessing LLM tools and interacting with model results via APIs.

“I think people are being conservative, but I'm still concerned that with the power that these tools potentially have, we might see something… that wakes everyone else up to what could potentially happen,” Davis said.

LLM model theft

Davis said research suggests that a model could be stolen through inference: sending a large number of cues to the model, getting several responses, and then understanding the components of the model.

“Model theft is something that could happen in the future because of the enormous cost of training models,” Davis said. “Several documents have been published about the theft of models, but this is a threat that would take a long time to prove.”

SEE: Australian IT spending will increase in 2025 on cybersecurity and artificial intelligence

Training data poisoning

Companies are now more aware that the data they use for AI models determines the quality of the model. Additionally, they are also more aware that intentional data poisoning could affect results. Davis said that certain files within models called pickle funnels, if poisoned, would cause unnoticed results for the model's users.

“I think people just need to be careful about the data they use,” he cautioned. “So if they find a data source, a data set to train their model, they need to know that the data is good and clean and doesn't contain things that could expose them to bad things happening.”

How to deal with common security risks faced by LLMs

Splunk's SURGe research team found that rather than securing an LLM directly, the easiest way to secure LLM using Splunk's existing toolset was to focus on the model interface.

Use standard registration similar to other applications could resolve fast injection vulnerabilities, insecure result handling, model denial of service, confidential information disclosure, and model theft.

“We discovered that we could record the prompts that users enter into the LLM and then the response that comes out of the LLM; Those two data points alone gave us five of the OWASP top 10,” Davis explained. “If the LLM developer makes sure those prompts and responses are recorded, and Splunk provides an easy way to collect that data, we can run any number of our queries or detections through that.”

Davis recommends that organizations take a similar security-first approach for LLM and AI applications that has been used to protect web applications in the past.

“We have a saying that eating your cyber vegetables (or doing the basics) gives you 99.99% of your protection,” he said. “And people should really focus on those areas first. It is the same case with LLMs.”

Source link

Leave a Comment