Artificial intelligence: “96% of generated videos are used for pornography and malicious purposes,” says expert

This Saturday, March 16, the ISCOM School of Communication and Advertising is organizing a national awareness day about “deepfakes,” the AI-generated videos of people that are proliferating on social networks. On this occasion, ISCOM welcomes Disiz Yyov, an entrepreneur and content creator specialized in artificial intelligence, to Toulouse. Interview.

What are the benefits of generating images with artificial intelligence?

With artificial intelligence, it is possible to create around thirty images in less than two minutes, a task that would take a designer a full day. So we can genuinely speak of productivity gains. Moreover, imagination no longer knows any limits: for example, I could depict the Eiffel Tower as a crescent moon for an advertising campaign.

But creating images with artificial intelligence also entails risks…

Absolutely, many risks indeed. Last year, an image of an explosion near the Pentagon in the United States circulated before journalists could verify the information. In Spain, students created fake nude images of their female classmates and used them to threaten them. This involves disinformation, image manipulation, cyberbullying and cybercrime.

Should this phenomenon be regulated?

Yes, we need regulation, but it is also necessary to develop artificial intelligence powerful enough to detect these images. Social networks must also deploy software that identifies and blocks them.

Can we imagine a future in which influencers and other social media users will have to disclose that the images they publish are created by artificial intelligence, just as they currently have to disclose any edits or retouching applied to their photos?

Yes, we could also require both companies and individuals to state that the content they publish on social networks is generated, in whole or in part, by artificial intelligence. This is already happening on TikTok, where users who violate this rule can be banned. But it remains difficult to verify.

As a social media user, how can you distinguish a real photo from an image generated by artificial intelligence?

It’s quite easy. The first thing to check is the fingers: artificial intelligence today struggles to generate perfect human fingers, since it trains on images freely available on the internet. The image is also often too perfect, the skin too smooth. You can also run a reverse image search to check the source of the image.

What role do traditional media play in this phenomenon?

They can produce programs aimed at the general public to explain what a “deepfake” is and how to recognize one. The media have an awareness-raising role to play.

“Deepfakes” are videos of a person generated by artificial intelligence. They also entail risks…

Yes. For example, we can take a public figure and make them say things they have never said. Recently, a company employee was scammed: criminals used artificial intelligence to generate a video of his financial advisor and his CEO asking him to make transfers. So there are risks of cybercrime and fraud. Moreover, 96% of deepfakes are used for pornography and other malicious purposes.

How can we spot these videos?

They can often be spotted with the naked eye. There are frequently problems with blinking: the eyes blink too quickly or unnaturally. In addition, a video is made up of many individual frames; if you play it in slow motion, you may notice the mouth or head moving abnormally.
