
UN AI adviser warns about the destructive use of deepfakes



Speaking to CTVNews.ca on Friday, a United Nations artificial intelligence (AI) adviser warned about the dangers posed by increasingly realistic deepfakes.


"A digital twin is essentially a replica of something from the real world… Deepfakes are the mirror image of digital twins, meaning that someone had created a digital replica without the permission of that person, and usually for malicious purposes, usually to trick somebody," Neil Sahota, a California-based AI expert who has served as an adviser to the United Nations, told the news outlet.

Deepfakes have already shown up in viral fake content, ranging from political recreations, such as a video showing Ukrainian President Volodymyr Zelenskyy telling his country to surrender to Russia, to celebrity endorsements, such as a clip of Elon Musk appearing to promote an investment scam.


But it’s not just famous people who are at risk. Ordinary people can also be affected.

"We hear the stories about the famous people, it can actually be done to anybody. And deepfake actually got started in revenge porn," Sahota said. "You really have to be on guard."

Sahota added that manipulated media can often be spotted by looking for video and audio that seem off.

"You got to have a vigilant eye. If it's a video, you got to look for weird things, like body language, weird shadowing, that kind of stuff. For audio, you got to ask… 'Are they saying things they would normally say? Do they seem out of character? Is there something off?'" he noted.

Not enough

Sahota also argued that policymakers are currently not doing enough to educate the public about the dangers of deepfakes and how to spot them. He recommended implementing a content verification system that would use digital tokens to authenticate media and identify deepfakes.

"Even celebrities are trying to figure out a way to create a trusted stamp, some sort of token or authentication system so that if you're having any kind of non-in-person engagement, you have a way to verify," he told CTVNews.ca.

"That's kind of what's starting to happen at the UN-level. Like, how do we authenticate conversations, authenticate video?”
