
Deepfake projects on Colab no longer accepted by Google

Google has added “creating deepfakes” to its list of projects banned from its Colab service.

Colab is a product from Google Research that lets AI researchers, data scientists, and students write and execute Python code in the browser.

With little fanfare, Google added deepfakes to its list of banned projects.

Deepfakes use generative neural network architectures – such as autoencoders or generative adversarial networks (GANs) – to manipulate or generate visual and audio content.
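
At the core of most face-swap deepfakes is the autoencoder: a network that compresses a face into a small latent code and reconstructs it from that code. The following is a minimal sketch of the idea in plain NumPy — an illustrative toy, not any real deepfake tool, with made-up data and arbitrarily chosen sizes:

```python
import numpy as np

# Illustrative toy only: random data stands in for face images, and the
# layer sizes, learning rate, and step count are arbitrary choices.
# Real face-swap tools train one shared encoder with two decoders (one
# per identity); here a single encoder-decoder pair learns to compress
# and reconstruct.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 64))                # 100 "images" of 64 values

W_enc = rng.normal(scale=0.1, size=(64, 8))   # encoder: 64 -> 8
W_dec = rng.normal(scale=0.1, size=(8, 64))   # decoder: 8 -> 64

def encode(x):
    return np.tanh(x @ W_enc)                 # compact latent code

def decode(z):
    return z @ W_dec                          # reconstruction

def mse():
    return float(np.mean((decode(encode(X)) - X) ** 2))

before = mse()
lr = 0.01
for _ in range(500):                          # plain gradient descent on MSE
    Z = encode(X)
    err = decode(Z) - X                       # reconstruction error
    dW_dec = Z.T @ err / len(X)
    dZ = (err @ W_dec.T) * (1 - Z ** 2)       # backprop through tanh
    dW_enc = X.T @ dZ / len(X)
    W_dec -= lr * dW_dec
    W_enc -= lr * dW_enc

print(f"reconstruction MSE: {before:.3f} -> {mse():.3f}")
```

Swapping faces exploits the shared latent space: encode person A's face, then decode it with person B's decoder. GANs add a second, adversarial network that learns to tell real images from generated ones, pushing the generator toward more convincing output.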


The technology is often used maliciously: to generate sexual imagery of people without their consent, to commit fraud, and to create deceptive content aimed at swaying opinion and influencing democratic processes.

Such concerns are likely the reason behind Google’s decision to ban these projects.

It’s a controversial decision. Banning such projects isn’t going to stop anyone from developing them and may also hinder efforts to build tools for countering deepfakes at a time when they’re most needed.

In March, a deepfake purportedly showing Ukrainian President Volodymyr Zelenskyy asking troops to lay down their arms in their fight to defend their homeland from Russia’s invasion was posted to a hacked news website.

“I only advise that the troops of the Russian Federation lay down their arms and return home,” Zelenskyy said in an official video to refute the fake. “We are at home and defending Ukraine.”

Fortunately, the deepfake was low quality by today’s standards: the fake Zelenskyy had a comically large, noticeably pixelated head compared to the rest of his body. The video probably fooled no one, but it could have had serious consequences had people believed it.

One Russia-linked influence campaign – removed by Facebook and Twitter in March – used AI-generated faces for a fake “editor-in-chief” and “columnist” for a linked propaganda website. That one was more believable and likely fooled some people.

Not all deepfakes are malicious, however. The same techniques are used for music, activism, and satire, and have even helped police solve crimes.

Source: Artificialintellegent