Deepfake Software For PC Archives

'Politicians fear this like fire': The rise of the deepfake and the threat to democracy

by Simon Parkin

On 4 May 2016, Jimmy Fallon, the host of NBC’s The Tonight Show, appeared in a sketch dressed as Donald Trump, then the Republican presidential nominee. Wearing a blond wig and three coats of bronzer, he pretended to phone Barack Obama – played by Dion Flynn – to brag about his latest primary win in Indiana. Both men appeared side by side in split screen, facing the camera. Flynn’s straight-man impression of Obama, particularly his soothing, expectant voice, was convincing, while Fallon played the exaggerated caricature that all of Trump’s mimics – and the man himself – settle into.

Three years later, on 5 March 2019, footage of the sketch was posted on the YouTube channel derpfakes under the title The Presidents. The first half of the clip shows the opening 10 seconds or so of the sketch as it originally aired. Then the footage is replayed, except the faces of Fallon and Flynn have been transformed into, seemingly, the real Trump and Obama, delivering the same lines in the same voices, but with features rendered almost indistinguishable from those of the presidents.

The video, uploaded to YouTube by the founder of derpfakes, a 28-year-old Englishman called James (he asked us not to use his surname), is a forgery created by a neural network, a type of “deep” machine-learning model that analyses video footage until it is able algorithmically to transpose the “skin” of one human face on to the movements of another – as if applying a latex mask. The result is known as a deepfake.
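The mechanics described here, a model learning a shared representation of faces and then rendering one person's movements with another person's appearance, can be caricatured in a few lines of pure Python. This is a hypothetical toy, not the code of any real deepfake tool: the "faces" are just pairs of numbers, the shared encoder is a fixed linear map, and each identity gets its own tiny decoder fitted by least squares.

```python
# Toy sketch of the classic deepfake architecture: one SHARED encoder
# compresses any face to a latent code, and one decoder PER IDENTITY
# reconstructs a face from that code. "Swapping" means encoding person
# A's face and decoding it with person B's decoder.
# (Faces here are 2-number vectors, not images; this is a pure-Python toy.)

def encode(face):
    """Shared encoder: a fixed linear compression to a single latent number."""
    return 0.5 * face[0] + 0.5 * face[1]

def fit_decoder(faces):
    """Fit a per-identity linear decoder (latent -> face) by least squares."""
    codes = [encode(f) for f in faces]
    den = sum(c * c for c in codes)
    weights = [sum(c * f[i] for c, f in zip(codes, faces)) / den for i in (0, 1)]
    return lambda z: (weights[0] * z, weights[1] * z)

# Identity A's faces lie along direction (1, 1); identity B's along (1, 3).
faces_a = [(s, s) for s in (1.0, 2.0, 3.0)]
faces_b = [(s, 3.0 * s) for s in (1.0, 2.0, 3.0)]

decoder_a = fit_decoder(faces_a)
decoder_b = fit_decoder(faces_b)

# Reconstruction: each decoder recovers its own identity's faces.
print(decoder_a(encode((2.0, 2.0))))  # (2.0, 2.0)

# The swap: A's pose (the latent code) rendered with B's appearance.
print(decoder_b(encode((2.0, 2.0))))  # (1.0, 3.0)
```

Real tools such as DeepFaceLab apply the same cross-wiring idea, but with deep convolutional encoders and decoders trained on thousands of video frames rather than linear maps.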

James’s video wasn’t intended to fool anyone – it was, he says, created “purely for laughs”. But the lifelike rendering of the presidents, along with thousands of similar deepfakes posted on the internet in the past two years, has alarmed many observers, who believe the technology could be used to disgrace politicians and even swing elections. Democracies appear to be gravely threatened by the speed at which disinformation can be created and spread via social media, where the incentive to share the most sensationalist content outweighs the incentive to perform the tiresome work of verification.

Last month, a digitally altered video showing Nancy Pelosi, the speaker of the US House of Representatives, appearing to slur drunkenly through a speech was widely shared on Facebook and YouTube. According to The Daily Beast, the clip was first posted by Shawn Brooks, 34, a sports blogger and “Trump superfan” from New York, who uploaded the doctored footage to Facebook. Trump then posted the clip on Twitter with the caption: “PELOSI STAMMERS THROUGH NEWS CONFERENCE”. The video was quickly debunked, but not before it had been viewed millions of times; the president did not delete his tweet, which at the time of writing has received nearly 98,000 likes. Facebook declined to take down the clip, qualifying its decision with the statement: “Once the video was fact-checked as false, we dramatically reduced its distribution.”

In response, a team including the artists Bill Posters and Daniel Howe two weeks ago posted a video on Instagram, in which Facebook founder Mark Zuckerberg boasts that he has “total control of billions of people’s stolen data, all their secrets, their lives, their futures”. The film formed part of an installation at the 2019 Sheffield Doc Fest earlier this month and was posted, the artists said, in an attempt “to interrogate the power of these new forms of computational propaganda”. It was also a test of whether or not Facebook would allow the film to be distributed via its platforms – in this case, Instagram – when the content was damaging to the company’s reputation. At the time of writing, the fake Zuckerberg video remains live. “We will treat this content the same way we treat all misinformation on Instagram,” a spokesperson said. “If third-party factcheckers mark it as false, we will filter it.”

When James, whose day job is unrelated to technology, launched his channel in January 2018, most deepfakes had nothing to do with politics. Using publicly available software such as FakeApp, amateurs typically would transpose the faces of celebrity women on to those of pornographic actors (one pornography site that specialises in deepfakes contains more than 60 films “starring” the singer Ariana Grande).

“The technology intrigued me, but the early uses didn’t, so I tried my hand at something more wholesome,” James says over online chat. He set his neural network the task of examining the face of Carrie Fisher, as she had appeared, aged 21, in the original Star Wars film, in order to transpose her into the 2016 sequel, Rogue One. James hoped to show how a desktop PC could produce special effects comparable with those that might cost a Hollywood studio tens of thousands of dollars in CGI work (proponents argue that deepfake technology has a variety of applications to offer film companies, potentially enabling automated dubbing and lip-syncing). The resulting clip, in which 1977-era Fisher lands intact in the 2016 movie, was created “in the time it takes to watch an episode of The Simpsons”, James says, and viewed thousands of times within a few days.

The Star Wars clip helped to kickstart a community of meme-creating film fans around the world, who use deepfake technology to place actors in films in which they never appeared, often to comic or meaningful effect. A popular subgenre of deepfakes places Nicolas Cage into films such as Terminator 2 and The Sound Of Music, or recasts him as every character in Friends. One deepfake convincingly transposes Heath Ledger’s The Joker into the actor’s role in A Knight’s Tale. In February, a video grafting the face of one of China’s best-known actors, Yang Mi, into a 25-year-old Hong Kong television drama, The Legend Of The Condor Heroes, went viral, picking up an estimated 240m views before it was removed by Chinese authorities. Its creator wrote on the video-sharing platform Bilibili that he had made the video as a warning.

Since then, deepfake technology has continued to gain momentum. In May, researchers at Samsung’s AI lab in Moscow published “footage” of Marilyn Monroe, Salvador Dalí and the Mona Lisa, each clip generated from one still image. While it is still fairly easy to discern a deepfake from genuine footage, foolproof fabrications appear to be disconcertingly close. Recent electoral upsets have demonstrated the unprecedented power of political entities to microtarget individuals with news and content that confirms their biases. The incentive to use deepfakes to injure political opponents is great.

There is only one confirmed attempt by a political party to use a deepfake video to influence an election (although a deepfake may also have played a role in a political crisis in Gabon in December). In May 2018, a Flemish socialist party called sp.a posted a deepfake video to its Twitter and Facebook pages showing Trump appearing to taunt Belgium for remaining in the Paris climate agreement. The video, which remains on the party’s social media, is a poor forgery: Trump’s hair is curiously soft-focus, while his mouth moves with a Muppet-like elasticity. Indeed, the video concludes with Trump saying: “We all know that climate change is fake, just like this video,” although this sentence alone is not subtitled in Flemish Dutch. (The party declined to comment, but a spokesperson previously told the site Politico that it commissioned the video to “draw attention to the necessity to act on climate change”.)

But James believes forgeries may have gone undetected. “The idea that deepfakes have already been used politically isn’t so farfetched,” he says. “It could be the case that deepfakes have already been widely used for propaganda.”

At a US Senate intelligence committee hearing in May last year, the Republican senator Marco Rubio warned that deepfakes would be used in “the next wave of attacks against America and western democracies”. Rubio imagined a scenario in which a provocative clip could go viral on the eve of an election, before analysts were able to determine it was a fake. A report in the Washington Times in December claimed that policy insiders and Democratic and Republican senators believe “the Russian president or other actors hostile to the US will rely on deepfakes to throw the 2020 presidential election cycle into chaos”.

Some question the scale of this threat. Russell Brandom, policy editor at the Verge, the US tech news site, argued recently that deepfake propaganda is “a crisis that doesn’t exist”, while the New York Times has called deepfakes “emerging, long-range threats” that “pale in comparison” with established peddlers of political falsity, such as Fox News. But many experts disagree. Eileen Donahoe, the director of the Transatlantic Commission on Election Integrity (TCEI) and an adjunct professor at Stanford University, has been studying the deepfake threat to democracy for the past year. “There is little to no doubt that Russia’s digital disinformation conglomerate has people working on deepfakes,” she says. So far, the TCEI has not seen evidence that the Russians have tried to deploy deepfakes in a political context. “But that doesn’t mean it’s not coming, or that Russia-generated deepfakes haven’t already been tried elsewhere.”

'Those who seek to undermine democracy won’t be deterred by the law'

Ivan is a 33-year-old Russian programmer who, having earned a fortune in the video-game industry, is enjoying an extended sabbatical spent cycling, running and camping near where he lives, on the banks of the Volga. He is the creator of DeepFaceLab, one of the most popular pieces of software used by the public to create forged videos. Ivan, who claims to be an “ordinary programmer” and not a political activist, discovered the technology on Reddit in 2017. The software he used to create his first deepfake left a watermark on his video, which irritated him. After the creator of the software rejected a number of changes Ivan suggested, he decided to create his own program.

In the past 12 months, DeepFaceLab’s popularity has brought Ivan numerous offers of work, including regular approaches from Chinese TV companies. “This is not interesting to me,” he says, via email. For Ivan, creating deepfake software is like solving an intellectual puzzle. Currently, DeepFaceLab can only replace the target’s face below the forehead. Ivan is working to get to the stage where an entire head can be grafted from one body to another. This will allow deepfake makers to assume “full control of another person”, he says, an evolutionary step that “all politicians fear like fire”. But while such technology exists behind closed doors, there is no source code in the public domain. (Ivan cites a 2018 presentation, Deep Video Portraits, delivered at a conference by Stanford researchers, as the gold standard towards which he is working.)

The most sophisticated deepfakes require advanced machine-learning skills and their development is computationally intensive and expensive. One expert estimates the cost to be about £1,000 a day. For an amateur creating fake celebrity pornography, this is a major barrier to entry. But for a government or a well-funded political organisation, the cost is insignificant – and falling every month. Ivan flip-flops in his assessment of the threat. “I do not think that so many stupid rulers… are capable of such complicated schemes as deepfakes,” he says. Then, when asked if politicians and journalists have overestimated the risk of deepfake propaganda, he says: “Did the gods overestimate the risk of giving people fire?”



Professor Hao Li used to think it would take two to three years for deepfake videos to become so polished that copycats would be indistinguishable from reality.

But now the associate professor of computer science at the University of Southern California says the technology could be perfected in as little as six to 12 months.

Deepfakes are realistic manipulated videos that can, for example, make it look as if a person said or did something they didn’t.

“The best possible algorithm will not be able to distinguish,” he says of the difference between a perfect deepfake and real videos.

Li says he's changed his mind because developments in computer graphics and artificial intelligence are accelerating the development of deepfake applications.

A Chinese app called Zao, which lets users convincingly swap their faces with film or TV characters right on their smartphone, impressed Li. When Zao launched on Aug. 30, a Friday, it became the most downloaded app in China’s iOS app store over the weekend, Forbes reports.

“You can generate very, very convincing deepfakes out of a single picture and also blend them inside videos and they have high-resolution results,” he says. “It's highly accessible to anyone.”

Interview Highlights

On the problems with deepfakes

“There are two specific problems. One of them is privacy. And the other one is potential disinformation. But since they are curating the type of videos where you put your face into it, in that case, disinformation isn’t really the biggest concern.”

On the threat of fake news

“You don't really need deepfake videos to spread disinformation. I don't even think that deepfakes are the real threat. In some ways, by raising this awareness by showing the capabilities, deepfakes are helping us to think about if things are real or not.”

On whether deepfakes are harmful

“Maybe we shouldn't really focus on detecting if they are fake or not, but we should maybe try to analyze what are the intentions of the videos.

“First of all, not all deepfakes are harmful. Nonharmful content is obviously for entertainment, for comedy or satire, if it's clear. And I think one thing that would help is … something that is based on AI or something that's data-driven that is capable of discerning if the purpose is not to harm people. That's a mechanism that has to be put in place in domains where the spread of fake news could be the most damaging and I believe that some of those are mostly in social media platforms.”

On the importance of people understanding this technology

“This is the same like when Photoshop was invented. It was never designed to deceive the public. It was designed for creative purposes. And if you have the ability to manipulate videos now, specifically targeting the identity of a person, it's important to create awareness and that's sort of like the first step. The second step would be we have to be able to flag certain content. Flagging the content would be something that social media platforms have to be involved in.

“Government agencies like DARPA, Defense Advanced Research Projects Agency, their purpose is basically to prepare America against potential threats at a technological level. And now in a digital age, one of the things that they're heavily investing into is, how to address concerns around disinformation? In 2015, they started a program called MediFor for media forensics and the idea is that while now we have all the tools that allow us to manipulate images, videos, multimedia, what can we do to detect those? And at the same time, AI advanced so much specifically in the area of deep learning where people can generate photorealistic content. Now they are taking this to another level and starting a new program called SemaFor, which is semantic forensics.”

On why the idea behind a deepfake is valuable

“Deepfake is a very scary word but I would say that the underlying technology is actually important. It has a lot of positive-use cases, especially in the area of communication. If we ever wanted to have immersive communication, we need to have the ability to generate photo-realistic appearances of ourselves in order to enable that. For example, in the fashion space, that's something that we're working on. Imagine if you ever wanted to create a digital twin of yourself and you wanted to see yourself in different clothing and do online shopping, really virtualizing the entire experience. And deepfake specifically are focusing on video manipulations, which is not necessarily the primary goal that we have in mind.”

On whether he feels an obligation to play a role in combating the use of deepfakes for spreading disinformation

“First of all, we develop these technologies for creating, for example, avatars right, which is one thing that we're demonstrating with our startup company called Pinscreen. And the deepfakes are sort of like a derivative. We were all caught by surprise.

“Now you have these capabilities and we really have an additional responsibility in terms of what are the applications. And this goes beyond our field. This is an overall concern in the area of artificial intelligence.”

Ciku Theuri produced and edited this interview for broadcast with Kathleen McKenna. Allison Hagan adapted it for the web.


Deepfake videos are everywhere. So how do we know what’s real?

Remember the phrase “seeing is believing”? Deepfake videos have people second-guessing what they are watching.

Deepfakes are videos manufactured by AI technology that can superimpose someone’s face on another person’s face and manipulate them into saying or doing things that didn’t happen. These videos have been used to spread propaganda on social media networks especially in politics.

Special effect video techniques that were once limited to movie studios and expensive software are now readily available and getting into the wrong hands.

Security experts believe that deepfakes were used by deceptive Facebook groups to influence the 2016 US presidential election. As the next presidential election approaches in 2020, companies are working quickly on new technology that can detect deepfakes.

How did deepfakes begin?

In 2015, Google released TensorFlow, a powerful open-source machine-learning framework that was later misused to build deepfake tools. Software built on it could automatically graft the image of any face onto another face in a video, almost seamlessly.

A Reddit user built on this software to create FakeApp, then released it in a Reddit community, allowing anyone to download AI software for creating deepfakes. Reddit has since banned that community, but it was too late: the approach has been adapted to create FaceSwap and, most recently, the viral Chinese app Zao, which can replace the faces of movie stars with your own.

Deepfake audio is becoming more common

We have to be careful about what we hear. Cybercriminals can use AI audio software to mimic someone’s voice. In March 2019, deepfake audio was used to fool the CEO of a large energy firm based in the UK. The CEO thought he was speaking to his boss and unknowingly transferred €220,000 ($243,000) to the hackers.

As audio software tools become more common, it is likely that more criminals will use them to their advantage. Cybersecurity experts report a 47% increase in voice fraud between 2016 and 2017.
