Generative AI is being exploited by extremists

by nativetechdoctor
2 minute read

Experts have found cases where AI-generated content allows extremist groups to bypass automated detection systems.

According to Wired, extremist groups have begun experimenting with AI to create a new wave of propaganda. Experts worry that the growing use of generative AI tools by these groups will undo the work Big Tech has done to keep such content off the internet.

For years, online platforms have built databases of known extremist content so it can be identified and removed quickly and automatically. However, according to Adam Hadley, executive director of Tech Against Terrorism, his colleagues now collect about 5,000 examples of AI-generated content each week. These include images shared by groups linked to Hezbollah and Hamas that appear designed to influence narratives around the Israel-Hamas conflict.

Researchers at Tech Against Terrorism recently discovered AI-generated images with racist and anti-Semitic content embedded in an app available on the Google Play store.

In addition to detailing the threat posed by AI tools that can manipulate images, Tech Against Terrorism published a report outlining ways AI tools could aid extremist groups. These include using automated translation tools to convert propaganda into multiple languages, and the ability to create personalized messages at scale to support online recruitment efforts.

At the same time, Hadley believes AI also offers an opportunity to get ahead of extremist groups and use the technology for prevention. His team has been working with Microsoft to find ways to use its archive of collected material to build a next-generation detection system capable of combating AI-generated terrorist content at scale.

Last month, the Internet Watch Foundation (IWF), a UK-based non-profit organization that works to remove child exploitation content from the internet, published a report detailing the growing presence of child sexual abuse material (CSAM) generated by AI tools on the dark web.

Researchers found more than 20,000 AI-generated images posted to a dark web CSAM forum in just one month, 11,108 of which IWF researchers judged most likely to be criminal. The IWF says these AI images can be so convincing that they are indistinguishable from real ones.
