Quantcast
Channel: Join Prisma Cloud at AWS re:Inforce 2024 - Palo Alto Networks Blog
Viewing all articles
Browse latest Browse all 66

Understanding Three Real Threats of Generative AI

$
0
0

The New Reality of Generative AI

Generative AI is a technology that has caught the attention of both good and bad actors. At its heart, the term generative AI refers to types of artificial intelligence that can generate content including text, images, audio or other media.

Press coverage of generative AI has often centered on its potential for enhancing malicious activities, including social engineering tactics, crafting spear phishing messages, or assisting in basic code development. However, in this blog, we are focusing on the current, real-world implications of this technology on the threat landscape.

This post explores various ways in which malicious actors have adopted, adapted and used generative AI technologies to help enhance their cyberattacks. We'll specifically examine generative AI's use in know your customer (KYC) bypasses, image bypass and generation, creating deep fakes, and in jailbreaking large language models (LLMs).

Real-World Threat #1: KYC Verification Bypass and Image Generators

KYC verifications are guidelines that organizations implement to safeguard against fraud, corruption and other criminal activities. Most KYC procedures include ID verification, face matching or liveness checks. ID verification requires a user to present an identification document, such as an ID card or passport, as part of verification. Face matching uses a selfie or video to compare with the photo on an ID or passport. Liveness verification is categorized into passive or active: active liveness checks prompt users to smile, blink or move their heads, while passive liveness involves analyzing images captured from a video and conducting proprietary tests by the verifying institution.

Businesses that commonly use KYC verification include financial institutions, investment firms and cryptocurrency exchanges. Given the role of KYC in modern business, criminals aim to circumvent verification processes to open accounts, steal funds from bank or cryptocurrency users, launder money, commit fraud, and more.

During research, several threat actors were found operating in hacker forums, as well as Telegram and Discord channels, showing a distinct interest in KYC bypass techniques such as:

  • Image generation
  • Document forgery
  • Camera swapping technologies

These particular actors were offering to sell face swapping or facial recognition bypass techniques in unique ways, such as through the creation of spoofing camera feeds. One such tool that provides this functionality is Volcam, a tool that describes itself as having the following functionality:

“[...] bypass kyc and selfie, spoofing the camera and making fake video call. Using volcam you can bypass kyc, onfido, facetec, zoom, Sumsub, id.me...etc.

Volcam is a robust camera spoofing and injection tool. It’s engineered to inject videos, pictures and streaming files as overlays into your default camera system, enabling a user to bypass selfie KYC verifications.

Figure 1: Volcam functionality differences between versions.
Figure 1: Volcam functionality differences between versions.

Volcam includes capabilities like creating and spoofing fake user locations and altering picture EXIF data. The tool can reportedly make fake video calls or bypass selfie verification in banking applications, dating apps or any other verification method that requires the device’s camera. This allows a user to push any fake video or photo to the phone’s camera. The pricing for Volcam ranges from $490 to $590, subject to discounts offered by the creators.

Figure 2: Cost of Volcam from the tool’s Telegram channel.
Figure 2: Cost of Volcam from the tool’s Telegram channel.

Another tool discussed across Telegram and forums is Swapcam.

With a price range ranging from $5,000 - $10,000, Swapcam is a rudimentary Android camera spoofing tool that injects video and streaming files as overlays into the user’s default camera system. As a primary example, attackers could use Volcam to bypass ID verification to set up or gain access to bank or cryptocurrency accounts, perform fraud or more.

Figure 3: Cost of Swapcam from the tool’s Telegram channel.
Figure 3: Cost of Swapcam from the tool’s Telegram channel.

Someone who's using Swapcam connects their phone directly to a PC to conduct KYC verification bypass attacks. Creators describe the use case of their tool as follows:

“​​You can bypass banking, dating, betting, or any app that requires KYC online verification. You will also be able to make real live video calls and spoof your identity.”

Practical examples demonstrate that attackers are already circumventing KYC verification methods like document and face verification checks. A recent report by 404 Media highlighted a service that uses neural networks to generate counterfeit driver’s licenses and passports from more than 25 countries. This report also mentioned that attackers have successfully used this service to bypass KYC processes at major cryptocurrency exchanges.

Now that we’ve examined how the landscape has evolved in relation to KYC verification bypass techniques, let’s now turn our attention to another prevalent subject on forums and social media: Deepfake generation.

Real-World Threat #2: Deepfake Generation

Deepfake style attacks involve using AI to create fake videos, audio or images that mimic real individuals, typically with malicious intent. This AI-generated content is often realistic enough to make it difficult to distinguish genuine content from generated content.

The amount of attention deepfakes have garnered over the past several years is understandable given the diverse possibilities it affords an entrepreneurial hacker or cybercriminal. Our research has led us to identify sales and ads on several hacker forums specifically dedicated to deepfakes.

Figure 4: MrDeepFakes Forums, dedicated to deepfake discussions and sales.
Figure 4: MrDeepFakes Forums, dedicated to deepfake discussions and sales.

In addition to forums dedicated to deepfake discussions and sales, we have identified diverse communities on Telegram and Discord that have the explicit intent to sell access to deepfake generated videos. An example of these Telegram channels is the BLACK WEB Telegram channel, which is dedicated to selling a variety of services, including the generation of deepfake videos.

Figure 5: Telegram post demonstrating common pricing for deepfake generation.
Figure 5: Telegram post demonstrating common pricing for deepfake generation.

Prices for deepfake video creation average between $60 - $500, depending on the desired services. Deepfake sellers often include samples of their videos, which are often good indicators for how advanced deepfake generation has gotten.

Figure 6: Examples of deepfake videos on a Telegram ad for deepfake generation.
Figure 6: Examples of deepfake videos on a Telegram ad for deepfake generation.

Within the plethora of deepfake options on Telegram and other forums, one stands out as particularly prevalent: the provision of face-swapping technology. This capability allows attackers to virtually swap faces, enabling attackers to bypass verification measures, execute persuasive social engineering attacks, and engage in other nefarious activities. One example of this is a tool called Swapface, which is described as follows:

“Swapface is a real-time and ultra realistic faceswap AI app, which allows users to instantly transform into anyone with a single photo without any processing time. It's easy to set up and lets you take your content creation, live streaming to a new level.”

Figure 7: Swapface.org website.
Figure 7: Swapface.org website.

Swapface provides subscription and credit-based purchase options, with monthly subscriptions priced between $39 - $249. Each subscription tier offers different access levels and features.

For instance, the free version places a watermark on generated images or videos, which anyone with a paid membership can remove. Viewing the creator's advertisement below showcases the capabilities of this face-swapping tool. Using a tool like Swapface could be used by an entrepreneurial cybercriminal aiming to set up a bank account for money laundering purposes. The criminal could employ a deepfake generation tool to circumvent video verification processes at financial institutions, increasing their chances of successfully opening bank accounts for money laundering.

Figure 8: Swapface interface.
Figure 8: Swapface interface.

Deepfake and face-swapping technology is prevalent across various Telegram channels and hacker forums, a trend that we expect to persist and evolve as attackers refine their techniques, tactics and procedures (TTPs). Additionally, the rise of freely available deepfake creation tools is becoming more commonplace. A large number of applications have emerged, such as DeepFaceLive, which enables real-time face swapping in video streams.

Figure 9: DeepFaceLive GitHub page
Figure 9: DeepFaceLive GitHub page

Malicious actors have already used KYC bypass tactics and deepfakes for various reasons. They’ve created deepfake audio to impersonate an employee to gain privileged access. Reuters also recently reported about an individual using face-swapping technology to pose as a friend during a KYC verification call, resulting in losses of $622,000.

Now, let's turn our attention to the final major topic of discussion and interest across hacker forums and Telegram channels: Malicious large language models (LLMs).

Real-World Threat #3: Malicious LLMs

Malicious LLMs have become a significant subject of discussion on hacker forums, emerging as one of the earliest and most prominent topics. These discussions often explore the various ways that LLMs can be weaponized for cyberattacks. When we refer to malicious LLMs, we're referring to versions of LLMs created to circumvent guardrails or ethical boundaries. Malicious LLMs can be custom-created LLMs that use training datasets of legitimate models, or these could be mere wrappers, sending jailbreaking commands to more publicly well-known and available models.

The idea of jailbreaking comes from tech enthusiasts who remove restrictions from devices like smartphones or tablets to unlock new functionalities. Similarly, with LLMs, jailbreaking can involve changing the model's software or using clever prompts that fool the AI into ignoring its own safety checks. Given the potential capability that malicious LLMs provide, similar to what we've seen with KYC bypass methods and deepfake generation, they can be a tempting tool for those with nefarious intent.

Attackers have employed LLMs and malicious LLMs for a range of activities, both malicious and innocuous. These uses include things like:

There are several malicious LLMs and jailbreaking techniques that have come and gone since August 2023. The most prominent malicious LLM to make its way into the media was WormGPT, which had ads across popular hacking forums and Telegram channels, such as Exploit.in, and Hackforums starting in July 2023.

The creator revealed the launch of WormGPT on a hacker forum, describing it as a "ChatGPT alternative for blackhat." They claimed it had been in development since February 2023.

Figure 10: WormGPT ad on hackforums.net.
Figure 10: WormGPT ad on hackforums.net.

The developer mentioned that they trained the model using GPT-J, which makes WormGPT one of the only examples of a malicious, uniquely developed model. The developer is selling access for between 100 Euros monthly to 5,000 Euros for lifetime access. WormGPT supported several functions, including generating accurate social engineering text for phishing messages and rudimentary code generation.

WormGPT wasn’t the only model to gain popularity among hacker communities. BLACKHATGPT is a wrapper that sends jailbroken commands to ChatGPT’s API. BLACKHATGPT describes itself on its own marketing website as the “First Cyber Weapon of Mass Creation.”

Figure 11: BLACKHATGPT website.
Figure 11: BLACKHATGPT website.

With a monthly subscription price of $199, BLACKHATGPT enables its users with a simple way to generate scripts and code snippets to enhance an attacker's toolkit. Additionally, it aids them in crafting localized social engineering text, including persuasive messages for spear phishing campaigns.

WormGPT and BLACKHATGPT demonstrate that while these tools serve as useful assistants for generating basic coding and social engineering text, they do not significantly enhance the capabilities of a cybercriminal's arsenal. We have not yet observed a jailbroken model that significantly alters the threat landscape.

However, not all uses of LLMs are solely focused on digital components. For instance, a recent analysis by Recorded Future highlighted that attackers believed to be associated with a Russian threat actor group used LLMs to create, disseminate and weaponize influential content at a large scale.

Defending Against the Threat of AI

We’ve observed conversations on a variety of platforms including Telegram, hacker forums and Discord channels related to malicious LLMs, KYC verification bypasses and deepfake generation. However, despite the plentiful discourse and speculation, the real-world impact of these technologies on cybersecurity and criminal activities remains to be fully realized.

As the threat landscape continues to evolve, the impact of AI is expected to grow. Threat actors are likely to use innovative and creative methods to exploit this technology in the future - a prediction that played out in Palo Alto Networks recent “The State of Cloud-Native Security” report, where 47% of security professionals report anticipating AI-driven supply chain attacks and 43% of predict that AI-powered threats will evade traditional detection techniques.

Continuous monitoring and adaptation in the form of AI security posture management (AI-SPM) will be essential to address emerging threats and harness the potential benefits of generative AI responsibly. To get a better understanding on how other organizations are responding to these changes in cloud security, check out the “The State of Cloud-Native Security” report.

 

 

 

The post Understanding Three Real Threats of Generative AI appeared first on Palo Alto Networks Blog.


Viewing all articles
Browse latest Browse all 66

Trending Articles