Deepfakes, which use AI to synthesize fake portraits, are a problematic technology: they are easy to exploit for fake news and have caused widespread harm, including non-consensual adult content. As deepfake technology develops, the need for technology to identify deepfakes is also growing. As one such effort, Facebook announced that it is developing a technology that can determine the characteristics of the AI model that created a given deepfake.
Since 2018, fake videos that use machine learning to graft the faces of famous actresses onto existing adult videos have been pouring out. Since then, the technology for generating images and videos of real people with machine learning tools has advanced rapidly, causing psychological harm to many people, and some regions have introduced regulations in response.
Because AI makes it possible to create convincing fake images, the reliability of information is also seriously undermined. Beyond fake adult material, deepfakes of public figures such as Facebook CEO Mark Zuckerberg and former President Trump can be used to fabricate statements or events that never happened, and the more public the target, the greater the impact.
For this reason, developing technology that can discriminate between real images and fake images has become urgent. The deepfake reverse-engineering method newly announced by Facebook analyzes an AI-generated image to determine the characteristics of the machine learning model that generated it.
Past studies have also been able to identify known machine learning models from fake images. However, according to Facebook, deepfake software is easily customizable, so deepfake creators can modify a model to erase its traces. The key point of Facebook's new method is that it can extract model characteristics from generated images even for machine learning models that have never been seen before.
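Facebook has not released the implementation details, but the general idea of model attribution can be illustrated with a minimal sketch: high-pass filter the image so that mostly generator artifacts (the "fingerprint") remain, then feed that residual to a network that estimates properties of the unknown generator. Everything below, including the estimator architecture and the hyperparameter targets, is an assumption for illustration only, not Facebook's published model.

```python
# Illustrative sketch only: NOT Facebook's published method. It shows the
# general model-attribution idea of estimating a "fingerprint" from an image
# and predicting assumed generator properties from it.
import torch
import torch.nn as nn
import torch.nn.functional as F


def high_pass_residual(img: torch.Tensor) -> torch.Tensor:
    """Remove low-frequency content so that mostly generator artifacts remain."""
    blurred = F.avg_pool2d(img, kernel_size=5, stride=1, padding=2)
    return img - blurred


class HyperparamEstimator(nn.Module):
    """Maps a fingerprint to a vector of assumed generator properties
    (e.g. depth, parameter count, loss family) -- targets are hypothetical."""

    def __init__(self, num_targets: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, num_targets)

    def forward(self, img: torch.Tensor) -> torch.Tensor:
        fingerprint = high_pass_residual(img)
        feats = self.features(fingerprint).flatten(1)
        return self.head(feats)


if __name__ == "__main__":
    model = HyperparamEstimator()
    fake_batch = torch.rand(2, 3, 128, 128)  # stand-in for suspected deepfakes
    print(model(fake_batch).shape)           # torch.Size([2, 4])
```

In a setup like this, images whose estimated properties match closely could be flagged as likely coming from the same (possibly unseen) generator, which is the attribution use case described above.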
The new technology is expected to make it possible to detect that images posted in multiple places came from the same creator, and to provide evidence when a suspect's confiscated PC was used to generate them. However, caution is needed because the technology is still at the research stage. Some also point out that deepfake detection in general is not yet reliable enough, given that the algorithm that won the 2020 deepfake detection contest achieved a detection rate of only 65.18%. Related information can be found here.
The Facebook AI research team has also developed TextStyleBrush, an AI that can change the font of existing text or replace words in photos without looking out of place.
TextStyleBrush recognizes the handwriting or font of a word in a photo and applies it to new text. It can handle difficult fonts and fonts in various colors, imitate white characters, and even reproduce fonts close to cursive handwriting. In addition, it can not only apply a font to pre-entered text, but also replace words in a photograph while preserving the original font.
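TextStyleBrush itself is not a public API, but the one-shot style transfer idea described above can be sketched conceptually: an encoder compresses a single word crop into a style vector capturing font, color, and slant, and a renderer produces new text conditioned on that style. The class names, toy decoder, and dimensions below are all hypothetical, intended only to illustrate the separation of style and content.

```python
# Hypothetical sketch, not the real TextStyleBrush code: it only illustrates
# encoding appearance from one word image and rendering new text in that style.
import torch
import torch.nn as nn


class StyleEncoder(nn.Module):
    """Compresses a single word image into a style vector (font, color, slant)."""

    def __init__(self, style_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, style_dim),
        )

    def forward(self, word_crop: torch.Tensor) -> torch.Tensor:
        return self.net(word_crop)


class TextRenderer(nn.Module):
    """Renders a new string conditioned on the extracted style vector."""

    def __init__(self, style_dim: int = 128, vocab_size: int = 128):
        super().__init__()
        self.char_embed = nn.Embedding(vocab_size, style_dim)
        self.to_image = nn.Linear(style_dim, 3 * 32 * 32)  # toy decoder

    def forward(self, style: torch.Tensor, char_ids: torch.Tensor) -> torch.Tensor:
        content = self.char_embed(char_ids).mean(dim=1)  # pooled text content
        pixels = self.to_image(style + content)          # fuse style and content
        return pixels.view(-1, 3, 32, 32)


if __name__ == "__main__":
    style = StyleEncoder()(torch.rand(1, 3, 64, 256))     # one example word crop
    new_word = torch.tensor([[ord(c) for c in "hello"]])  # replacement text
    print(TextRenderer()(style, new_word).shape)          # torch.Size([1, 3, 32, 32])
```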
Facebook explains that TextStyleBrush is still under research, but in the future it could be used for various purposes, such as translating text in images into other languages or making road signs easier to understand through augmented reality. Related information can be found here.
Meanwhile, Facebook announced that it will begin testing ads on Oculus Quest, its virtual reality platform. In the coming weeks, ads will appear in several virtual reality games, including Blaston.
Oculus VR, a subsidiary of Facebook, changed its privacy policy in 2019 so that virtual reality device data could be used for advertising. In May 2021 it announced that ads would be extended to its mobile app, but this is the first time ads will actually run on a virtual reality platform. Oculus VR said on its blog that it will announce when advertising on the Oculus platform and mobile app will roll out in full after testing begins and developer and community feedback has been incorporated.
According to Facebook and Oculus VR, settings such as blocking specific advertisements will be available in virtual reality, just as in the Facebook app, and the collection and analysis of user information will follow Facebook's existing advertising policy. The company explained that biometric information, such as images captured by the Oculus headset cameras or the weight and height collected by the fitness tracker Oculus Move, remains on the device and is not sent to Facebook's servers. It also said there are no plans to use recordings from Facebook's voice assistant or the movement data recorded by the device for ad targeting.
In its announcement, Facebook emphasizes that the purpose of expanding ads is to create new ways for developers to monetize. Until now, Facebook's Oculus app policy has prohibited third-party advertising services, which limited developer monetization, but that changes with this test. Virtual reality can give users a powerful advertising experience that differs from conventional advertising delivered through TV and websites; the same applies whether the ads are for movies, TV shows, or physical products. Integrating advertising into a virtual reality platform could be a strong incentive for developers to work within the Facebook ecosystem. Related information can be found here.