Deepfake video app Archives

Professor Hao Li used to think it would take two to three years for deepfake videos to be perfected to the point of being indistinguishable from reality.

But now the associate professor of computer science at the University of Southern California says the technology could be perfected in as little as six to 12 months.

Deepfakes are realistic manipulated videos that can, for example, make it look a person said or did something they didn’t.

“The best possible algorithm will not be able to distinguish,” he says of the difference between a perfect deepfake and real videos.

Li says he's changed his mind because developments in computer graphics and artificial intelligence are accelerating the development of deepfake applications.

A Chinese app called Zao, which lets users convincingly swap their faces with film or TV characters right on their smartphone, impressed Li. When Zao launched on Friday, Aug. 30, it became the most downloaded app in China’s iOS app store over the weekend, Forbes reports.

“You can generate very, very convincing deepfakes out of a single picture and also blend them inside videos and they have high-resolution results,” he says. “It's highly accessible to anyone.”

Interview Highlights

On the problems with deepfakes

“There are two specific problems. One of them is privacy. And the other one is potential disinformation. But since they are curating the type of videos you put your face into, in that case disinformation isn’t really the biggest concern.”

On the threat of fake news

“You don't really need deepfake videos to spread disinformation. I don't even think that deepfakes are the real threat. In some ways, by raising this awareness by showing the capabilities, deepfakes are helping us to think about if things are real or not.”

On whether deepfakes are harmful

“Maybe we shouldn't really focus on detecting if they are fake or not, but we should maybe try to analyze what are the intentions of the videos.

“First of all, not all deepfakes are harmful. Nonharmful content is obviously for entertainment, for comedy or satire, if it's clear. And I think one thing that would help is … something that is based on AI or something that's data-driven that is capable of discerning if the purpose is not to harm people. That's a mechanism that has to be put in place in domains where the spread of fake news could be the most damaging and I believe that some of those are mostly in social media platforms.”

On the importance of people understanding this technology

“This is the same as when Photoshop was invented. It was never designed to deceive the public. It was designed for creative purposes. And if you have the ability to manipulate videos now, specifically targeting the identity of a person, it's important to create awareness and that's sort of like the first step. The second step would be we have to be able to flag certain content. Flagging the content would be something that social media platforms have to be involved in.

“Government agencies like DARPA, Defense Advanced Research Projects Agency, their purpose is basically to prepare America against potential threats at a technological level. And now in a digital age, one of the things that they're heavily investing into is, how to address concerns around disinformation? In 2015, they started a program called MediFor for media forensics and the idea is that while now we have all the tools that allow us to manipulate images, videos, multimedia, what can we do to detect those? And at the same time, AI advanced so much specifically in the area of deep learning where people can generate photorealistic content. Now they are taking this to another level and starting a new program called SemaFor, which is semantic forensics.”

On why the idea behind a deepfake is valuable

“Deepfake is a very scary word but I would say that the underlying technology is actually important. It has a lot of positive use cases, especially in the area of communication. If we ever wanted to have immersive communication, we need to have the ability to generate photo-realistic appearances of ourselves in order to enable that. For example, in the fashion space, that's something that we're working on. Imagine if you ever wanted to create a digital twin of yourself and you wanted to see yourself in different clothing and do online shopping, really virtualizing the entire experience. And deepfakes specifically are focusing on video manipulations, which is not necessarily the primary goal that we have in mind.”

On whether he feels an obligation to play a role in combating the use of deepfakes for spreading disinformation

“First of all, we develop these technologies for creating, for example, avatars, which is one thing that we're demonstrating with our startup company called Pinscreen. And the deepfakes are sort of like a derivative. We were all caught by surprise.

“Now you have these capabilities and we really have an additional responsibility in terms of what are the applications. And this goes beyond our field. This is an overall concern in the area of artificial intelligence.”

Ciku Theuri produced and edited this interview for broadcast with Kathleen McKenna. Allison Hagan adapted it for the web.


What do we do about deepfake video?

There exist, on the internet, any number of videos that show people doing things they never did. Real people, real faces, close to photorealistic footage; entirely unreal events.

These videos are called deepfakes, and they’re made using a particular kind of AI. Inevitably enough, they began in porn – there is a thriving online market for celebrity faces superimposed on porn actors’ bodies – but the reason we’re talking about them now is that people are worried about their impact on our already fervid political debate. Those worries are real enough to prompt the British government and the US Congress to look at ways of regulating them.

The rise of the deepfake and the threat to democracy

The video that kicked off the sudden concern last month was, in fact, not a deepfake at all. It was a good old-fashioned doctored video of Nancy Pelosi, the speaker of the US House of Representatives. There were no fancy AIs involved; the video had simply been slowed down to about 75% of its usual speed, and the pitch of her voice raised to keep it sounding natural. It could have been done 50 years ago. But it made her look convincingly drunk or incapable, and was shared millions of times across every platform, including by Rudy Giuliani – Donald Trump’s lawyer and the former mayor of New York.

It got people worrying about fake videos in general, and deepfakes in particular. Since the Pelosi video came out, a deepfake of Mark Zuckerberg apparently talking about how he has “total control of billions of people’s stolen data” and how he “owe[s] it all to Spectre”, the product of a team of satirical artists, went viral as well. Last year, the Oscar-winning director Jordan Peele and his brother-in-law, BuzzFeed CEO Jonah Peretti, created a deepfake of Barack Obama apparently calling Trump a “complete and utter dipshit” to warn of the risks to public discourse.

A lot of our fears about technology are overstated. For instance, despite worries about screen time and social media, in general, high-quality research shows that there’s little evidence of it having a major impact on our mental health. Every generation has its techno-panic: video nasties, violent computer games, pulp novels.

But, says Sandra Wachter, a professor in the law and ethics of AI at the Oxford Internet Institute, deepfakes might be a different matter. “I can understand the public concern,” she says. “Any tech developing so quickly could have unforeseen and unintended consequences.” It’s not that fake videos or misinformation are new, but things are changing so fast, she says, that it’s challenging our ability to keep up. “The sophisticated way in which fake information can be created, how fast it can be created, and how endlessly it can be disseminated is on a different level. In the past, I could have spread lies, but my range was limited.”

Here’s how deepfakes work. They are the product of not one but two AI algorithms, which work together in something called a “generative adversarial network”, or Gan. The two algorithms are called the generator and the discriminator.

Imagine a Gan that has been designed to create believable spam emails. The discriminator would be exactly the same as a real spam filter algorithm: it would simply sort all emails into either “spam” or “not spam”. It would do that by being given a huge folder of emails, and determining which elements were most often associated with the ones it was told were spam: perhaps words like “enlarger” or “pills” or “an accident that wasn’t your fault”. That folder is its “training set”. Then, as new emails came in, it would give each one a rating based on how many of these features it detected: 60% likely to be spam, 90% likely, and so on. All emails above a certain threshold would go into the spam folder. The bigger its training set, the better it gets at telling real from fake.
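To make the discriminator side concrete, here is a minimal sketch of the kind of feature-counting spam scorer described above. The marker words and the 50% threshold are illustrative only; real spam filters learn weights from large training sets rather than using a fixed list.

```python
# Toy discriminator: score an email by counting known spam markers.
# The word list and threshold are illustrative, not a real filter.

SPAM_MARKERS = ["enlarger", "pills", "an accident that wasn't your fault"]

def spam_score(email: str) -> float:
    """Return the fraction of known spam markers found in the email."""
    email = email.lower()
    hits = sum(marker in email for marker in SPAM_MARKERS)
    return hits / len(SPAM_MARKERS)

def is_spam(email: str, threshold: float = 0.5) -> bool:
    return spam_score(email) > threshold

print(is_spam("Cheap pills and enlarger deals!"))   # flagged
print(is_spam("Minutes of Tuesday's meeting"))      # not flagged
```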

But the generator algorithm works the other way. It takes that same dataset and uses it to build new emails that don’t look like spam. It knows to avoid words like “penis” or “won an iPad”. And when it makes them, it puts them into the stream of data going through the discriminator. The two are in competition: if the discriminator is fooled, the generator “wins”; if it isn’t, the discriminator “wins”. And either way, it’s a new piece of data for the Gan. The discriminator gets better at telling fake from real, so the generator has to get better at creating the fakes. It is an arms race, a self-reinforcing cycle. This same system can be used for creating almost any digital product: spam emails, art, music – or, of course, videos.
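The adversarial loop itself can be sketched on a toy problem. In this hypothetical example, “real” data are numbers clustering near 5.0; the discriminator judges a sample real if it lies close to its learned estimate of the real mean, and the generator shifts its own output distribution whenever it fails to fool the discriminator. The update rules here are deliberately simplified stand-ins for the gradient updates a real Gan would use.

```python
import random

# Toy adversarial loop on 1-D data. Real samples cluster near 5.0.
# All names and update rules are illustrative simplifications.

random.seed(0)

def real_sample() -> float:
    return random.gauss(5.0, 0.5)

disc_mean = 0.0   # discriminator's running estimate of "real"
gen_mean = -3.0   # generator starts far from the real data

def discriminator(x: float) -> bool:
    """Judge a sample real if it is close to the learned mean."""
    return abs(x - disc_mean) < 1.0

for step in range(2000):
    # Discriminator update: learn from a labelled real sample.
    disc_mean += 0.05 * (real_sample() - disc_mean)

    # Generator update: propose a fake; if it is caught, step toward
    # the region the discriminator currently accepts as real.
    fake = random.gauss(gen_mean, 0.5)
    if not discriminator(fake):
        gen_mean += 0.05 * (disc_mean - gen_mean)

# After training, the generator's output should sit near the real
# mean of 5.0 -- it has learned to produce convincing "fakes".
print(f"generator mean ~ {gen_mean:.2f}")
```

The self-reinforcing cycle the article describes is visible here: every rejected fake makes the generator's next fake harder to reject.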

Gans are hugely powerful, says Christina Hitrova, a researcher in digital ethics at the Alan Turing Institute for AI, and have many interesting uses – they’re not just for creating deepfakes. The photorealistic portraits of imaginary people circulating online are all created with Gans. Discriminator algorithms (such as spam filters) can be improved by Gans creating ever better things to test them with. Gans can also do amazing things with pictures, including sharpening up fuzzy ones or colourising black-and-white ones. “Scientists are also exploring using Gans to create virtual chemical molecules,” says Hitrova, “to speed up materials science and medical discoveries: you can generate new molecules and simulate them to see what they can do.” Gans were only invented in 2014, but have already become one of the most exciting tools in AI.

But they are widely available, easy to use, and increasingly sophisticated, able to create ever more believable videos. “There’s some way to go before the fakes are undetectable,” says Hitrova. “For instance, with CGI faces, they haven’t quite perfected the generation of teeth or eyes that look natural. But this is changing, and I think it’s important that we explore solutions – technological solutions, and digital literacy solutions, as well as policy solutions.”

With Gans, one technological solution presents itself immediately: simply use the discriminator to tell whether a given video is fake. But, says Hitrova, “Obviously that’s going to feed into the fake generator to produce even better fakes.” For instance, she says, one tool was able to identify deepfakes by looking at the pattern of blinking. But then the next generation will take that into account, and future discriminators will have to use something else. The arms race that goes on inside Gans will go on outside, as well.

Other technological solutions include hashing – essentially a form of digital watermarking, giving a video file a short string of numbers which no longer matches if the video is tampered with – or, controversially, “authenticated alibis”, wherein public figures constantly record where they are and what they’re doing, so that if a deepfake circulates apparently showing them doing something they want to disprove, they can show what they were really doing. That idea has been tentatively floated by the AI law specialist Danielle Citron, but as Hitrova points out, it has “dystopian” implications.
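The hashing idea can be illustrated in a few lines: publish a cryptographic digest alongside the file, and any later edit to the bytes produces a different digest. This is a minimal sketch of the integrity-check principle only; real video provenance schemes are considerably more involved, and the byte strings below stand in for actual video data.

```python
import hashlib

# Sketch of hash-based tamper detection: a SHA-256 digest is
# published with the file; editing any byte changes the digest.
# The byte strings below are placeholders for real video data.

def fingerprint(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

original = b"frame-data-of-the-published-video"
tampered = b"frame-data-of-the-doctored-video"

assert fingerprint(original) == fingerprint(original)  # stable
assert fingerprint(original) != fingerprint(tampered)  # edit detected
print(fingerprint(original)[:16], "...")  # first bytes of the digest
```

Note the limitation the article hints at: a hash proves a file was altered, but says nothing about whether the original was truthful in the first place.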

None of these solutions can entirely remove the risk of deepfakes. Some form of authentication may work to tell you that certain things are real, but what if someone wants to deny the reality of something real? If there had been deepfakes in 2016, says Hitrova, “Trump could have said, ‘I never said “grab them by the pussy”.’” Most would not have believed him – it came from Access Hollywood tapes and was confirmed by the show’s presenter – but it would have given an excuse for people to doubt them.

Education – critical thinking and digital literacy – will be important too. Finnish children score highly on their ability to spot fake news, a trait that is credited to the country’s policy of teaching critical thinking skills at school. But that can only be part of the solution. For one thing, most of us are not at school. Even if the current generation of schoolchildren becomes more wary – as they naturally are anyway, having grown up with digital technology – their elders will remain less so, as can be seen in the case of British MPs being fooled by obvious fake tweets. “Older people are much less tech-savvy,” says Hitrova. “They’re much more likely to share something without fact-checking it.”

Wachter and Hitrova agree that some sort of regulatory framework will be necessary. Both the US and the UK are grappling with the idea. At the moment, in the US, social media platforms are not held responsible for their content. Congress is considering changing that, and making such immunity dependent on “reasonable moderation practices”. Some sort of requirement to identify fake content has also been floated.

Wachter says that something like copyright, by which people have the right for their face not to be used falsely, may be useful, but that by the time you’ve taken down a deepfake, the reputational damage may already be done, so preemptive regulation is needed too.

A European Commission report two weeks ago found that digital disinformation was rife in the recent European elections, and that platforms are failing to take steps to reduce it. Facebook, for instance, has entirely washed its hands of responsibility for fact-checking, saying that it will only take down fake videos after a third-party fact-checker has declared it to be false.

Britain, though, is taking a more active role, says Hitrova. “The EU is using the threat of regulation to force platforms to self-regulate, which so far they have not,” she says. “But the UK’s recent online harms white paper and the Department for Digital, Culture, Media and Sport subcommittee [on disinformation, which has not yet reported but is expected to recommend regulation] show that the UK is really planning to regulate. It’s an important moment; they’ll be the first country in the world to do so, they’ll have a lot of work – it’s no simple task to balance fake news against the rights to parody and art and political commentary – but it’s truly important work.” Wachter agrees: “The sophistication of the technology calls for new types of law.”

In the past, as new forms of information and disinformation have arisen, society has developed antibodies to deal with them: few people would be fooled by first world war propaganda now. But, says Wachter, the world is changing so fast that we may not be able to develop those antibodies this time around – and even if we do, it could take years, and we have a real problem to sort out right now. “Maybe in 10 years’ time we’ll look back at this stuff and wonder how anyone took it seriously, but we’re not there now.”

• The AI Does Not Hate You by Tom Chivers is published by Weidenfeld &amp; Nicolson (£16.99).


A ‘deep fake’ app will make us film stars – but will we regret our narcissism?

“You oughta be in pictures,” goes the 1934 Rudy Vallée song. And, as of last week, pretty much anyone can be. The entry requirements for being a star fell dramatically thanks to the launch, in China, of a face-swapping app that can decant users into film and TV clips.

Zao, which has quickly become China’s most downloaded free app, fuses the face in the original clip with your features. All that is required is a single selfie and the man or woman in the street is transformed into a star of the mobile screen, if not quite the silver one. In other words, anyone who yearns to be part of Titanic or Game of Thrones, The Big Bang Theory or the latest J-Pop sensation can now bypass the audition and go straight to the limelight without all that pesky hard work, talent and dedication. A whole new generation of synthetic movie idols could be unleashed upon the world: a Humphrey Bogus, a Phony Curtis, a Fake Dunaway.

Zao already has its first star: the 30-year-old artist and games developer Allan Xia, who unwittingly became the face of the app last weekend after inserting himself into a Leonardo DiCaprio montage. Western media outlets hadn’t paid much attention to Zao, which can only be accessed by users with a Chinese phone account, until Xia, who is based in Auckland but has a Chinese number, uploaded his experiments. After that, every media story covering the app came embedded with a clip of him strutting around in a Hawaiian shirt in Romeo + Juliet, and basking in the golden sunset on deck in Titanic. For the sake of fairness, he also uploaded his image on to Kate Winslet’s, thereby hogging both the male and female leads in one of the biggest films of all time.

How long did it take to claw himself to the top of the A-list? “Eight seconds,” he laughs. “It was so simple. All I did was take a selfie, which was then ranked by the app to give me an idea of how well it would be able to generate videos based on my photo. It’s looking to match your facial features to what is already there in its library of clips.”

Many people are by now familiar with the deep-fake videos showing Barack Obama and Mark Zuckerberg speaking words that never, in reality, left their lips – or the recent one in which the comic actor Bill Hader seems to metamorphose into Tom Cruise midway through impersonating him. But Zao is different: its aim is not to hoodwink or to produce a perfect replica. “With deep fake, if you don’t do a good job it messes up and glitches, whereas in Zao it’s always smooth,” says Xia. You can also only add your face to specific clips in the app’s library of programmes or movies – of which there are hundreds including, in addition to the DiCaprio titles, scenes from Leon, Fast and Furious 8 and Gentlemen Prefer Blondes.

“Sometimes you can tell my features aren’t really working and it resorts [to being more like] Leo than me. In deep fakes, the goal is to reach maximum accuracy, whereas Zao – where the videos are already set up with tracking markers, 3D mesh and transformation for each actor’s face – isn’t about accuracy. In my Leo clips, he has my eyes but it’s definitely his jawline; it’s not a one-to-one. This is more a BeautyCam-type meme effect [the app that gives users smoothed and idealised features], which is why it’s been so successful in going viral.”

The technology can’t yet be applied to feature-length films, and any extension beyond meme length – five to 10 seconds – would bring copyright issues. But Xia predicts that it could become a publicity tool putting consumers into their favourite movies. “If rights owners like Disney or Netflix wanted to use this as a marketing exercise, I could totally see that happening. It would be very smart.”

Indeed, film-makers have been dreaming about this interface since the earliest days of cinema. In the 1924 comedy Sherlock Jr, Buster Keaton played a projectionist who enters the flickering on-screen action, while the traffic flowed in both directions in Woody Allen’s The Purple Rose of Cairo and the Arnold Schwarzenegger vehicle Last Action Hero, in which characters were able to step on and off the cinema screen at will. As high-tech as it is, Zao is only the latest manifestation of this age-old wish to erode the divide between reality and fantasy.

Almost as soon as it was being shared, though, a backlash was under way in China, with vocal reservations expressed about the app’s reach. They weren’t related to worries that the great performers of our time might be usurped by ordinary Joes, leading to a situation where, say, Meryl Streep is beaten to the best actress Oscar by Donna from Sidcup. After all, says Xia, it’s still DiCaprio who is doing all the heavy lifting in his clip. “You can’t fake the performance. You still need the actors for the acting part.” And though there were initial fears that images of users’ faces were being harvested by the Chinese authorities, that, too, seems unfounded: Chinese citizens are already called upon to provide photographs of themselves in so many other contexts that an app such as this is by the by. “There are much easier ways to get that data,” the Chinese tech expert Matthew Brennan told Wired magazine.

It is also unlikely that Zao’s technology could be used to plant images of third parties into footage without their knowledge: the app requires a selfie from a front-facing camera (in other words, you can’t use a pre-existing picture; the subject has to be there in the room). Of far greater concern to those not wowed by Zao is the small print.

There, among the terms and conditions, lurks the kicker: the user is giving “free, irrevocable, permanent, transferable, and relicenseable” rights to any content generated by the app. The outcry manifested itself in a wave of one-star user reviews. Zao, which is owned by the dating service Momo Inc, had no option but to issue a statement insisting that “we understand the concerns about privacy” and promising to “fix the issues that we didn’t take into consideration, which will take some time”.

The irony is not lost on Xia that he is currently the only person whose privacy is at stake. “Because it’s so popular in China, everyone there is sharing it and there’s no single iconic Zao clip. I made the initial post with the assumption that people in western media would try the app themselves, but that didn’t happen and I’ve ended up becoming the face for this whole thing. I feel like I might be arguably the only person in the world right now who really is experiencing a loss of privacy from using Zao.”

The technology journalist Chris Stokel-Walker was not taken aback by the speed of the backlash so much as where it came from. “China is usually much more comfortable about giving away stuff like facial recognition than we are,” he tells me. “You would expect this reaction outside China, but now there seems to be more of a widespread consciousness in the post-FaceApp world.”

FaceApp, which came to prominence this year, instantly ages users’ photographs to predict how they would look at pensionable age – lending crows’ feet and turkey necks to spring chickens. But much like its users’ pictures, that app got old real quick – right around the time that people realised they had surrendered their images for all eternity to a company based in Russia. “FaceApp is very much the precedent here,” says Stokel-Walker. “It’s the one that made us circumspect about apps like these. Without FaceApp, you wouldn’t have people meticulously going through Zao’s terms and conditions.”

Although fears that FaceApp would turn us all into Putin’s playthings turned out to be misplaced, the controversy tapped into a pervasive paranoia about the exploitation of images willingly surrendered. “It’s the trade-off we make in any tech we use,” says Stokel-Walker.

Couldn’t all these problems be solved in an instant if we could only break this compulsion to upload our faces to every available platform? “It’s a difficult one,” says Stokel-Walker. “If we did hold back, it would force tech companies to produce more equitable terms and conditions. But people won’t do that. We’re lazy and we like new things, and most of us don’t want to wade through pages and pages of terms and conditions. For all the fire and fury we saw toward FaceApp, people will still try that stuff because they’re impressed by it. It’s a cool toy.”

It says something unflattering about us that we want constantly to insinuate ourselves into the centre of every narrative. A generous interpretation might be that Zao is the benign digital equivalent of a green-screen experience, such as riding a broomstick at Harry Potter World. And perhaps it’s not really any different to those personalised books in which the names of children and their friends and family members feature among the text of anything from Alice in Wonderland to Paw Patrol. Children love to find themselves in the company of their favourite characters.

The sense that adults demand the same thrill is a sight more worrying, and suggests an empathy shortfall or a failure of imagination – a reluctance to process any scenario unless it in some way overlaps with our own life or experience. Surely a phenomenon such as Zao can only inflame that tendency.

Of course, this app is no different in essence to any social media fad with a narcissistic component: the Snapchat filter that gives cat’s ears or bunny noses to people who have never even heard of Ovid, or the ice-bucket challenge, a charitable endeavour that also doubled as another excuse to upload footage of ourselves to social media. It’s hard in this context not to have some sympathy with the actor Armie Hammer, who responded to the death of Stan Lee by objecting to the onslaught of celebrity selfies. “So touched by all of the celebrities posting pictures of themselves with Stan Lee … no better way to commemorate an absolute legend than putting up a picture of yourself,” he tweeted. Hammer later apologised, and pledged to work on his “Twitter impulse control”, but the point surely stands: it doesn’t reflect well on us if the first thing we do when a celebrity dies is elbow our way to the front of the online mourners to claim our space at the wake, honouring a rich and exemplary life by marking the fleeting moment at which it intersected with our own.

Zao can turn us all into screen gods but contentment and wisdom surely lie elsewhere. We may think we are the stars of our own movie. True maturity, though, can only come once we realise that we may not even rank as supporting actors – and make our peace with that.
