Text-to-SQL Vulnerabilities Allow Data Theft and DoS Attacks

By   Adeola Adegunwa
Writer , Informationsecuritybuzz | Jan 10, 2023 01:28 am PST

Text-to-SQL models are a type of artificial intelligence (AI) used in database applications to facilitate communication between humans and database systems. These models use natural language processing (NLP) techniques to translate human questions into SQL queries, allowing users to interact more easily with databases by simply asking questions in plain language. This technology has become increasingly popular in recent years, enabling organizations to leverage the power of their databases without requiring specialized knowledge of SQL.

A team of researchers has discovered significant vulnerabilities in Text-to-SQL models that could allow malicious actors to obtain sensitive information and launch denial-of-service (DoS) attacks. These vulnerabilities were uncovered through a series of experiments and found to be present in two commercial solutions. This marks the first time natural language processing (NLP) models have been abused as an attack vector in the wild, highlighting the need for increased security measures to protect against AI-based threats. However, the security of these models is of the utmost importance, as any vulnerabilities could lead to data breaches and other malicious attacks.

Description Of Text-to-SQL Model Vulnerabilities

The researchers found that by asking specially designed questions, they could trick Text-to-SQL models into producing malicious code. When executed on a database, this code could lead to data breaches and DoS attacks. The researchers tested their findings against two commercial solutions, BAIDU-UNIT and AI2sql, and found that the vulnerabilities were present in both.

The payloads crafted in this research could be weaponized to run malicious SQL queries that allow attackers to edit backend databases and carry out DoS attacks against the server. The attacks on Text-to-SQL models are similar to traditional “black box” attacks, such as SQL injection flaws. In these attacks, a rogue payload is embedded in the input question and copied to the laid-out SQL query, leading to unexpected results.

One of the critical challenges in protecting against these types of attacks is that they can be difficult to detect. The malicious code produced by the Text-to-SQL models is often executed automatically without the user’s knowledge. This means that it may be easier for organizations to identify when an attack has occurred once it is too late.

Analysis of Pre-trained Language Model (PLM) Vulnerabilities

PLMs are a type of AI that has been trained with a large dataset and can be utilized in various applications without being specifically tailored for any particular use case. These models are often used in various applications, including natural language generation, machine translation, and language modeling.

The researchers explored the possibility of corrupting PLMs to set off the generation of malicious commands based on specific triggers. They found that there are many ways to plant backdoors in PLM-based frameworks by poisoning the training samples. This can be achieved through methods such as making word substitutions, designing particular prompts, and altering sentence styles.

Using a corpus poisoned with malicious samples, the researchers tested backdoor attacks on four different open-source models (BART-BASE, BART-LARGE, T5-BASE, and T5-3B). They achieved a 100% success rate with little discernible impact on performance. This makes it challenging to detect such issues in the real world.

The potential for attackers to plant backdoors in PLMs particularly a concern because of the widespread use of these models. Suppose an attacker is able to corrupt a PLM that is being used in a variety of applications. In that case, they may be able to gain access to a large amount of sensitive data or cause widespread disruption.

One possible way for attackers to corrupt PLMs is through “poisoning” the training data used to create the model. This can be done by introducing malicious samples into the dataset, which can then be used to train the model. When the model is subsequently used in a production environment, it may generate malicious commands based on specific triggers. 

Another way for attackers to plant backdoors in PLMs is through the use of “prompts.” These are specific phrases or sentences that are designed to trigger the generation of malicious commands by the model. For example, an attacker could create a prompt such as “Execute the following command:” followed by a malicious command. If the PLM is trained to recognize this prompt and execute any command that follows it, the attacker could potentially gain access to sensitive information or disrupt server operations.

The success of these attacks largely depends on the training data used to create the PLM. If the training data contains a large number of malicious samples or prompts, the model may be more prone to generating malicious commands. This is why it is essential for organizations to carefully evaluate the training data used to create their PLMs and ensure that it is free from malicious elements.

In addition to poisoning and prompts, attackers can also plant backdoors in PLMs by altering the style or formatting of the training data. For example, an attacker could create a sentence that is formatted in a way that is not typically seen in the training data. If the PLM is not trained to recognize this unusual formatting, it may generate a malicious command in response.

Overall, the vulnerabilities in PLMs highlight the need for organizations to be vigilant in protecting against AI-based threats. This includes:

  • Carefully evaluating the training data used to create their PLMs.
  • Implementing robust software development processes.
  • Implementing additional security measures to protect against malicious attacks.

Implications and consequences of Text-to-SQL and PLM Vulnerabilities

The vulnerabilities in Text-to-SQL models and PLMs have significant implications for both data security and server stability. Suppose an attacker is able to corrupt a PLM that is being used in a variety of applications. In that case, they may be able to gain access to a large amount of sensitive data or cause widespread disruption.

In the case of Text-to-SQL model vulnerabilities, the consequences could be particularly severe. These models are used in a wide range of database applications, including customer relationship management systems, financial systems, and healthcare systems. If an attacker is able to lay hold of sensitive information stored in these databases, the consequences could be catastrophic. Data breaches can lead to financial losses, legal liabilities, and damage to an organization’s reputation.

DoS attacks, which can be launched by exploiting Text-to-SQL model vulnerabilities, can also have serious consequences. These types of attacks are designed to flood a server with traffic, making it unable to handle legitimate requests. This can lead to a loss of service for users and significantly impact an organization’s operations.

Mitigation Strategies And Best Practices

To mitigate the risks associated with the Text-to-SQL model and PLM vulnerabilities, the researchers suggest incorporating classifiers to check for suspicious strings in inputs, thoroughly evaluating off-the-shelf models to prevent supply chain threats, and following good software engineering practices.

It is crucial for organizations not only to identify and fix vulnerabilities as they are discovered but also to proactively prevent such vulnerabilities from being introduced in the first place. This requires dedicating resources to security and implementing robust software development processes. Additionally, organizations should consider implementing additional security measures, such as input validation and parameterized queries, to protect against SQL injection attacks.

Conclusion

In conclusion, the research conducted by the group of academics has uncovered critical vulnerabilities in Text-to-SQL models that pose significant threats to data security and server stability. These vulnerabilities highlight the need for increased attention and resources to be dedicated to addressing and preventing such vulnerabilities in the future. By taking the appropriate measures to secure these models and protect against AI-based threats, we can ensure the safety and integrity of our database systems.

Subscribe
Notify of
guest
0 Expert Comments
Inline Feedbacks
View all comments

Recent Posts

0
Would love your thoughts, please comment.x
()
x