Artificial intelligence has existed for a while now, but it has been the preserve of large organizations using it primarily for data analytics. However, the launch of OpenAI’s ChatGPT in November 2022 changed the dynamics. ChatGPT put the power of AI language models into the hands of anyone with access to the internet.
Since its release, people have quickly explored how to use AI for everyday tasks. From writing web copy and emails to composing music, creating graphics, and even writing computer code, it seems nothing is beyond the realm of possibility for AI. However, little attention has been given to the possibility of using AI models like ChatGPT for malicious purposes, such as creating computer malware.
Like any other tool developed by humans, ChatGPT and other AI models can be used for good and bad. While OpenAI has put safeguards in place to prevent its exploitation, the potential for abuse should not be underestimated. Already, a report by Webroot shows that over 90% of cybersecurity professionals are worried about the use of AI to execute more sophisticated cyberattacks.
As ChatGPT and other AI models become more powerful and widespread, here are a few ways cybercriminals might use them to create super malware.
Malware Creation
Writing an effective piece of malware isn't a simple task; even for a proficient hacker, it could take an hour or more. ChatGPT lets hackers generate that code in seconds.
For instance, in December 2022, someone announced on a popular hacking forum that they had used ChatGPT to create an encryption tool that could easily be converted into ransomware. The scary part is that it was this user's first-ever script, which they created with ChatGPT's help.
Such examples show that AI will lower the barrier to entry for new hackers to build effective malware while making attacks easier and more efficient for seasoned cybercriminals.
Reconnaissance
AI makes it easier for threat actors to perform reconnaissance on their targets before launching an attack. For instance, hackers can train AI to analyze large network traffic datasets, system logs, and other relevant data to identify weaknesses they can exploit. Hackers can even use AI to automatically generate malware that exploits those vulnerabilities.
It’s also possible for hackers to use AI to gather and analyze information such as email addresses, job titles, and even personal details from social media profiles or other public sources. Hackers can then use this info to launch highly effective social engineering attacks against high-profile targets.
Phishing
One common tell-tale sign of a phishing email is poor English, mainly because many campaigns originate from hackers who aren't fluent in the language. With AI models like ChatGPT, however, hackers from anywhere can create legitimate-looking phishing emails within seconds.
Researchers from Singapore's Government Technology Agency found that AI is better at writing phishing emails than humans. In their tests, the AI-generated phishing emails drew significantly more clicks than the human-written ones. This shows it's possible for hackers to use language models to make phishing campaigns more effective.
Besides phishing emails, it's also possible to use ChatGPT to build cloned websites that dupe people into sharing sensitive information, such as usernames, passwords, and financial details.
AI-Powered Malware
Imagine malware that is situationally aware. It can analyze the system on which it is deployed, then adjust its activity based on the target system’s defense mechanisms. Such malware would be very effective and almost impossible to detect and stop. While this sounds like an imaginative stretch, it’s a very real possibility with AI-boosted malware.
IBM proved this possibility with DeepLocker, an AI-powered malware it developed as a proof of concept. DeepLocker uses AI algorithms to hide its malicious payload in benign software, making it difficult for security systems to detect, and it activates the payload only when specific criteria are met.
For instance, it can be programmed to remain hidden until it identifies a target. This can be done through machine learning technologies like facial recognition, which has also become quite advanced. For example, Facebook uses machine learning models to verify faces with 97% accuracy. With such a model, DeepLocker can remain dormant in a system and only unleash its malicious payload once it’s sure that it has found the correct target.
It’s also possible to program AI-powered malware with algorithms that allow it to evolve and adapt over time. For instance, such AI-powered malware could learn the actions that trigger its detection, then change its behavior to avoid detection.
As AI becomes more powerful, it’s not inconceivable that threat actors will find more ways to equip malware with these algorithms and make them more evasive and dangerous.
All Is Not Lost: AI Will Enhance Cybersecurity
Despite the potential for AI models like ChatGPT to be used to create super malware, all is not lost. Just like a good guy with a gun can protect the masses against a bad guy with a gun, AI can also be used to fight against AI-powered super malware.
Here are a few ways in which AI can make the fight against malware more effective:
- Malware detection: AI can analyze the patterns and behavior of files and programs on a system in real time, allowing cybersecurity experts to take immediate action to stop potential threats (see the detection sketch after this list).
- Malware analysis: AI models can also analyze advanced malware samples and identify their characteristics and behavior. This can help cybersecurity researchers understand how the malware works and develop effective countermeasures. When it comes to real-world samples, however, AI still struggles, which is where ANY.RUN's interactive malware hunting service comes in: it offers real-time interaction, network tracking, and process monitoring tools to help you analyze and detect malware and improve your security defenses.
- Threat intelligence: Just as AI can analyze data from various sources to identify potential vulnerabilities that hackers can exploit, it can also help security experts identify and patch those vulnerabilities before threat actors get to them.
- Predictive analytics: By analyzing historical data, AI can help identify patterns that may indicate future threats or vulnerabilities. This can help organizations proactively identify and mitigate potential risks before they become major issues (a brief forecasting sketch also follows the list).
- Automated response: With AI, organizations can develop automated incident response plans that automatically take action against potential threats, such as isolating infected systems, blocking traffic, or updating security policies. This results in a quicker response that stops threats before they cause any significant problems.
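To make the detection idea above more concrete, here is a minimal sketch of anomaly-based detection using scikit-learn's IsolationForest. The behavioral features (CPU usage, network connections, files written, registry edits) and the sample values are illustrative assumptions, not telemetry from a real endpoint agent.

```python
# A minimal sketch of anomaly-based detection. Feature names and values
# are illustrative assumptions, not real endpoint telemetry.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [cpu_percent, network_connections, files_written, registry_edits]
baseline = np.array([
    [2.0, 1, 0, 0],
    [5.0, 3, 2, 1],
    [1.0, 0, 1, 0],
    [4.0, 2, 1, 0],
    [3.0, 1, 0, 1],
])

# Train on "normal" activity only; contamination is a tunable assumption.
detector = IsolationForest(contamination=0.05, random_state=42)
detector.fit(baseline)

# New observations: the last one spikes on every feature.
observations = np.array([
    [3.0, 2, 1, 0],
    [95.0, 120, 500, 40],
])

for row, label in zip(observations, detector.predict(observations)):
    verdict = "suspicious" if label == -1 else "normal"
    print(f"process features {row.tolist()} -> {verdict}")
```

Production detectors use far richer features and analyst feedback, but the principle is the same: learn what normal looks like, then flag what deviates from it.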
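And for the predictive analytics bullet, a toy forecasting sketch: fit a simple trend to historical weekly alert counts and flag an unusual projected increase. The alert counts and the 1.5x threshold are made-up assumptions for illustration only.

```python
# A toy sketch of predictive analytics on historical alert counts,
# assuming weekly totals have already been aggregated (numbers are made up).
import numpy as np
from sklearn.linear_model import LinearRegression

weekly_alerts = np.array([12, 15, 14, 18, 21, 25, 24, 30])  # past 8 weeks
weeks = np.arange(len(weekly_alerts)).reshape(-1, 1)

# Fit a simple linear trend and project the next week.
model = LinearRegression().fit(weeks, weekly_alerts)
next_week = model.predict([[len(weekly_alerts)]])[0]

print(f"Projected alerts next week: {next_week:.0f}")
if next_week > weekly_alerts.mean() * 1.5:
    print("Upward trend detected: consider reviewing controls proactively.")
```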
Conclusion
Mass-market AI tools are still in their early stages, so it's too early to say whether they'll become a favorite tool for hackers. That said, the prospect of cybercriminals using AI tools to create malware and execute attacks is very real and shouldn't be shrugged off. It's important for cybersecurity professionals to remain proactive and start figuring out how to stop this threat before it becomes an everyday reality.
Isla Sibanda
Isla Sibanda is an ethical hacker and cybersecurity specialist based out of Pretoria. For over twelve years, she's worked as a cybersecurity analyst and penetration testing specialist for several reputable companies, including Standard Bank Group, CipherWave, and Axxess.