
Meta to fight AI-generated fake news with ‘invisible watermarks’


Social media giant Meta (formerly Facebook) will include an invisible watermark in all images it creates using artificial intelligence (AI) as it steps up measures to prevent misuse of the technology.

In a Dec. 6 report detailing updates to Meta AI — the company’s virtual assistant — Meta revealed it will soon add invisible watermarking to all AI-generated images created with the “imagine with Meta AI” experience. The move is aimed at preventing bad actors from treating the service as yet another tool for duping the public.

Like numerous other AI image generators, Meta AI produces images and content based on user prompts. The new watermarking feature is intended to make it more difficult for creators to remove the watermark.

“In the coming weeks, we’ll add invisible watermarking to the imagine with Meta AI experience for increased transparency and traceability.”

Meta says it will use a deep-learning model to apply watermarks, invisible to the human eye, to images generated with its AI tool. The watermarks can, however, be detected by a corresponding model.

Meta claims that, unlike traditional watermarks, the watermarks produced by its imagine with Meta AI tool are “resilient to common image manipulations like cropping, color change (brightness, contrast, etc.), screenshots and more.” While the watermarking will initially be rolled out for images created via Meta AI, the company plans to bring the feature to other Meta services that use AI-generated images.
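
Meta has not published its watermarking model, but the general idea of an invisible, machine-detectable mark can be illustrated with a toy example. The Python sketch below embeds a short bit signature in the least-significant bits of an image’s pixels and later checks for it; the SIGNATURE tag and the embed/detect helpers are hypothetical and for illustration only — this is not Meta’s deep-learning approach.

```python
# Minimal sketch of an invisible watermark using a toy least-significant-bit
# (LSB) scheme -- NOT Meta's unpublished deep-learning method.
import numpy as np
from PIL import Image

# 56-bit signature derived from a hypothetical tag string.
SIGNATURE = np.unpackbits(np.frombuffer(b"meta-ai", dtype=np.uint8))

def embed(img: Image.Image) -> Image.Image:
    """Hide SIGNATURE in the least-significant bits of the first pixel values."""
    pixels = np.array(img.convert("RGB"), dtype=np.uint8)
    flat = pixels.reshape(-1)
    flat[: SIGNATURE.size] = (flat[: SIGNATURE.size] & 0xFE) | SIGNATURE
    return Image.fromarray(pixels)

def detect(img: Image.Image) -> bool:
    """Return True if the hidden signature is present in the image."""
    flat = np.array(img.convert("RGB"), dtype=np.uint8).reshape(-1)
    return bool(np.array_equal(flat[: SIGNATURE.size] & 1, SIGNATURE))

if __name__ == "__main__":
    marked = embed(Image.new("RGB", (64, 64), "white"))
    print(detect(marked))   # True
    cropped = marked.crop((8, 8, 64, 64))
    print(detect(cropped))  # False -- a naive mark does not survive cropping
```

As the cropping check shows, a naive mark like this is destroyed by simple edits. A learned watermark, by contrast, spreads its signal across the whole image so that it can survive crops, brightness changes and screenshots — the resilience Meta is claiming for its system.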

In its latest update, Meta AI also introduced the “reimagine” feature for Facebook Messenger and Instagram, which allows users to send AI-generated images to each other. As a result, both messaging services will also receive the invisible watermark feature.

Related: Tom Hanks, MrBeast and other celebrities warn over AI deep fake scams

AI services such as Dall-E and Midjourney already offer the option to add traditional watermarks to the content they churn out. However, such watermarks can be removed by simply cropping out the edge of the image. Moreover, certain AI tools can automatically strip watermarks from images, something Meta claims will be impossible with its output.

Ever since generative AI tools went mainstream, numerous entrepreneurs and celebrities have called out AI-powered scam campaigns. Scammers use readily available tools to create fake videos, audio and images of popular figures and spread them across the internet.

In May, an AI-generated image showing an explosion near the Pentagon — the headquarters of the United States Department of Defense — caused the stock market to dip briefly.

The fake image was later picked up and circulated by other news media outlets, resulting in a snowball effect. However, local authorities, including the Pentagon Force Protection Agency, which is in charge of the building’s security, said they were aware of the circulating report and confirmed there was “no explosion or incident.”

In the same month, human rights advocacy group Amnesty International fell for an AI-generated image depicting police brutality and used it to run campaigns against the authorities.

AI-generated image from Amnesty International. Source: Twitter

“We have removed the images from social media posts, as we don’t want the criticism for the use of AI-generated images to distract from the core message in support of the victims and their calls for justice in Colombia,” stated Erika Guevara Rosas, director for the Americas at Amnesty.

Magazine: Lawmakers’ fear and doubt drives proposed crypto regulations in US