Techrecipe

Deepfakes … the age of distrust that AI will bring

Problems are arising as artificial intelligence technology improves. Last year, fake adult videos appeared in which AI had been used to learn the faces of famous actresses and composite them onto existing adult footage; one such fake video featured Gal Gadot, the star of Wonder Woman. It was reportedly created with machine learning tools such as TensorFlow. As the technique spread, FakeApp appeared, an app that lets anyone easily create this kind of fake content; no programming or technical background is needed to produce such a fake video.
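The core trick behind early face-swap tools of this kind is surprisingly small: a single shared encoder learns a common representation of faces, and a separate decoder is trained for each identity, so encoding person A and decoding with person B's decoder yields B's face wearing A's expression. The sketch below is a minimal toy version of that idea in TensorFlow/Keras; the layer sizes and names are illustrative assumptions, not FakeApp's actual code.

```python
# Minimal sketch of the face-swap idea behind tools like FakeApp:
# one shared encoder learns a common face representation, and one
# decoder per identity reconstructs that identity's face. Swapping
# means encoding a frame of A and decoding it with B's decoder.
# Layer sizes here are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, Model

IMG = 64  # assumed crop size of aligned face images

def build_encoder():
    inp = layers.Input((IMG, IMG, 3))
    x = layers.Conv2D(64, 5, strides=2, padding="same", activation="relu")(inp)
    x = layers.Conv2D(128, 5, strides=2, padding="same", activation="relu")(x)
    x = layers.Flatten()(x)
    z = layers.Dense(256, activation="relu")(x)  # shared latent face code
    return Model(inp, z, name="shared_encoder")

def build_decoder(name):
    z = layers.Input((256,))
    x = layers.Dense(16 * 16 * 128, activation="relu")(z)
    x = layers.Reshape((16, 16, 128))(x)
    x = layers.Conv2DTranspose(64, 5, strides=2, padding="same", activation="relu")(x)
    out = layers.Conv2DTranspose(3, 5, strides=2, padding="same", activation="sigmoid")(x)
    return Model(z, out, name=f"decoder_{name}")

encoder = build_encoder()
decoder_a, decoder_b = build_decoder("a"), build_decoder("b")

# Each autoencoder is trained to reconstruct its own identity's faces.
face = layers.Input((IMG, IMG, 3))
autoencoder_a = Model(face, decoder_a(encoder(face)))
autoencoder_b = Model(face, decoder_b(encoder(face)))
autoencoder_a.compile("adam", "mae")
autoencoder_b.compile("adam", "mae")

# After training, "swapping" = decoder_b(encoder(frame_of_a))
```

In practice this needs many aligned face crops of both people, which is one reason heavily photographed celebrities were the first targets.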

Of course, AI-based video compositing of this kind means that CG and digital editing work, which today is all but monopolized by big SFX companies, can be done by anyone without expert knowledge. On the one hand this raises ethical issues, but on the other, as the algorithms improve, it also points to the possibility of real innovation in video production.

That’s not all. Beyond video, AI is also being developed that can hold phone conversations with people. This is Google Duplex, announced at Google I/O 2018 in May. Google developed WaveNet, a neural network that generates artificial speech through deep learning, and built it into Google Assistant. It can produce natural-sounding speech that comes fairly close to a human voice, though it still has a somewhat ‘computerized’ tone.
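WaveNet's key idea is a stack of dilated, causal 1D convolutions that model raw audio one sample at a time, with the receptive field doubling at each layer. The toy sketch below builds only that skeleton in Keras; the depth and channel counts are assumptions, and the real model additionally uses gated activations and skip connections.

```python
# Toy sketch of WaveNet's core building block: a stack of dilated,
# causal 1D convolutions that predicts the next audio sample from
# past samples only. Channel counts and depth are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, Model

Q = 256  # 8-bit mu-law quantized audio, as in the WaveNet paper

inp = layers.Input((None, Q))            # one-hot encoded waveform
x = layers.Conv1D(32, 2, padding="causal")(inp)
for dilation in (1, 2, 4, 8, 16, 32):    # receptive field doubles each layer
    residual = x
    x = layers.Conv1D(32, 2, padding="causal",
                      dilation_rate=dilation, activation="tanh")(x)
    x = layers.Add()([residual, x])      # residual connection
out = layers.Conv1D(Q, 1, activation="softmax")(x)  # next-sample distribution

wavenet_toy = Model(inp, out)
wavenet_toy.compile("adam", "categorical_crossentropy")
wavenet_toy.summary()
```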

Google Duplex is an AI that can hold natural conversations over the phone and carry out tasks such as booking a hair salon or a restaurant. It can communicate like a human precisely because its training data and purpose are limited: it is a conversational AI that cannot handle general small talk and is specialized for specific tasks.

Google Duplex first states the intent of the phone call so that the other party (a human, of course) can talk about the same topic. Natural conversation between humans is in fact more complicated than it seems: people repeat themselves or skip things the other person probably already knows, and noise on the line adds further sources of error from the AI's point of view. Google Duplex not only recognizes the other party's voice but also generates the most suitable response based on information such as the conversation history, the purpose of the call, and the time, analyzing context to infer meaning and keep the conversation going.
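Google has not published Duplex's internals, so the sketch below is only an illustration of why restricting the purpose of a call matters: for a hypothetical restaurant booking, the "context" reduces to a few slots plus the turn history, and each reply can be chosen from whatever information is still missing.

```python
# Illustration only: Google has not published Duplex's internals.
# A hypothetical restaurant-booking bot shows why a narrow purpose
# helps: the "context" is just a few slots plus the turn history,
# so each reply can be derived from the missing information.
from dataclasses import dataclass, field

@dataclass
class BookingContext:
    purpose: str = "restaurant reservation"   # fixed goal of the call
    slots: dict = field(default_factory=lambda: {"time": None, "party_size": None})
    history: list = field(default_factory=list)

def next_reply(ctx: BookingContext, heard: str) -> str:
    ctx.history.append(heard)
    # Crude keyword matching stands in for real speech recognition.
    if "how many" in heard.lower():
        ctx.slots["party_size"] = 2
        return "It will be for two people."
    missing = [k for k, v in ctx.slots.items() if v is None]
    if missing:
        return f"I'd like to book a table; could we settle the {missing[0]}?"
    return "Great, thank you. See you then."

ctx = BookingContext()
print(next_reply(ctx, "Hello, how can I help you?"))
print(next_reply(ctx, "Sure, how many people?"))
```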

Google Duplex is significant above all because it can converse and complete a task autonomously, without human intervention. Beyond handling things like scheduling, it could also help people with hearing impairments or people who do not speak the local language.

In any case, this kind of AI technology can now create fake content such as video or audio, and it is becoming increasingly difficult to judge which images or voices are real. In fact, a technique for transferring another person's facial expressions onto footage of former US President Barack Obama has already been demonstrated; the internet media outlet BuzzFeed made such a video in April.

Even more sophisticated video editing technology has emerged since then. Deep Video, developed by Stanford University researchers, uses deep learning algorithms to edit the people in existing videos, so that the expressions and movements of one person can be transferred onto another person in the footage.

This technique can transplant not only facial expressions but also head position and movement, eye movement, and even blinking. Of course, it needs source footage of an actor to supply the expressions and movements. Combined with voice synthesis, which is also advancing, as Google Duplex above shows, footage can be manipulated so that a person appears to be saying whatever you want; given only an audio file, the mouth movements can be matched to it. Until now, video editing has meant adjusting a person's movements against a background, but Deep Video can optimize the background as well, and the details and movements look so real that it is hard to tell which video is the fake.

As noted above, there is a risk that combining a technology like Deep Video with a voice synthesis system could be used to slander someone or to produce fake news. Because of this, there are calls for watermarking so that genuine footage can be identified, or for using AI itself to detect fakes. In fact, last year saw projects such as the Fake News Challenge, an attempt to use machine learning, natural language processing, and AI to find ways of identifying hidden manipulation and misinformation in news articles.
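The Fake News Challenge framed its first task as stance detection: given a headline and an article body, decide whether the body agrees with, disagrees with, discusses, or is unrelated to the claim. The sketch below shows one simple baseline-style approach using TF-IDF features and logistic regression on a made-up toy dataset; it is not the challenge's official code.

```python
# Baseline-style sketch in the spirit of the Fake News Challenge's
# stance-detection task: given a headline and an article body, predict
# whether the body agrees, disagrees, or is unrelated to the claim.
# The tiny training set here is made up purely for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

pairs = [
    ("Politician denies resignation rumors", "A spokesperson confirmed the rumors are false."),
    ("Politician denies resignation rumors", "Sources say the resignation letter was already signed."),
    ("Politician denies resignation rumors", "Today's match ended in a draw after extra time."),
    ("New phone battery lasts two days", "Reviewers measured roughly two days of battery life."),
]
labels = ["agree", "disagree", "unrelated", "agree"]

# Join headline and body so the vectorizer sees vocabulary from both.
texts = [h + " || " + b for h, b in pairs]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression(max_iter=1000))
model.fit(texts, labels)

test = "Politician denies resignation rumors || The politician repeated that the rumors are untrue."
print(model.predict([test])[0])  # toy data, so treat the output as illustrative only
```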

Professor Robert Chesney of the University of Texas and Danielle Citron of the University of Maryland published a paper on this new technology, “Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security.”

Deepfake is a portmanteau of “deep learning” and “fake,” a coined term for creating false information and fake news with deep learning technology. The paper argues that deepfake technology will keep evolving and that the speed at which fake news and false information spread will increase with it; the proliferation of fake news, already a problem today, is expected to accelerate.

The paper notes that deepfake technology can spread fake news around the world by combining formats that are easy to believe, such as video or audio, with the reach of social media, and that it can cause enormous damage in areas such as elections, security, and journalism. It also points out that such false information is more dangerous when it takes a negative form, such as “no such thing happened” or “that particular statement is false,” than when it simply asserts “this happened.” Video or audio can even be fabricated to contradict what someone actually said, making that person look like a liar.

It is still not easy for ordinary people to judge whether something is fake news, and it is even harder when there is “evidence” they can see and hear right in front of them, such as video or audio. Democracy can only function effectively when the press is credible, and it has been pointed out that shaking confidence in information in this way could open a gap for authoritarian regimes to exploit. Deepfake technology lends itself to aggressive uses. As mentioned earlier, guidelines such as watermarking may be needed, but readers will also need to pay careful attention to the sources of what they read.

lswcap

Having come up through the era of monthly magazines such as AHC PC and HowPC, he has watched the “age of technology” in online IT media such as ZDNet, serving as internet manager at an electronic newspaper, editor of the consumer journal Ivers, publisher of TechHolic, and editor at Venture Square. He is curious about this market, which is still full of vitality.
