A Google software engineer has created a system that uses a neural network to give a virtual YouTuber (VTuber) natural movement from just a single photo and camera-based face recognition.
This deep-learning VTuber system takes the face image of an animated character as input, along with head, eye, and mouth pose parameters adjusted by sliders, and outputs the character's face image modified to match those parameters.
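To make the data flow concrete, here is a minimal sketch of that slider-driven interface. The names and the pose layout are illustrative assumptions, not the project's actual API, and the network itself is replaced by a stub:

```python
from dataclasses import dataclass

# Hypothetical pose parameters mirroring the sliders described above
# (the real system's parameter names and ranges may differ).
@dataclass
class Pose:
    head_x: float = 0.0   # head tilt, illustrative range [-1, 1]
    head_y: float = 0.0   # head turn, illustrative range [-1, 1]
    eyes: float = 0.0     # eye closedness, 0 = open, 1 = closed
    mouth: float = 0.0    # mouth openness, 0 = closed, 1 = open

    def as_vector(self):
        # The network consumes the slider values as a flat vector.
        return [self.head_x, self.head_y, self.eyes, self.mouth]

def pose_character(image, pose):
    """Stand-in for the neural network: takes a character image and a
    Pose and returns a posed output image. Here we only tag the input
    with the pose vector to show the interface, not the rendering."""
    return {"source": image, "pose": pose.as_vector()}

# Turn the head halfway to the side and open the mouth fully.
frame = pose_character("character.png", Pose(head_y=0.5, mouth=1.0))
```

In the real system, the stub above would be a trained image-to-image network that repaints the character to match the requested pose.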
By connecting this system to a face tracker, a single image can be animated to mimic the facial movements captured by a camera. Facial movements can also be transferred from existing footage rather than a live webcam. There are still some limitations, but the system makes it possible to create interactive animation without building a character model, which could drastically reduce the cost of producing VTubers or animations. It is also easy to use: you simply feed in a character image and control the character intuitively. Related information can be found here.
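The tracker-to-animator pipeline described above can be sketched as a simple loop: each camera frame is reduced to pose parameters, which drive the posing network. Everything here is a self-contained stand-in under assumed names; a real tracker would estimate these values from detected facial landmarks:

```python
def estimate_pose(camera_frame):
    """Stand-in for a face tracker. A real tracker would compute these
    values from facial landmarks in the camera image; here the frame is
    a dict that already carries the measurements (illustrative only)."""
    return {
        "head_y": camera_frame["yaw"],        # left/right head turn
        "eyes": camera_frame["eye_close"],    # 0 = open .. 1 = closed
        "mouth": camera_frame["mouth_open"],  # 0 = closed .. 1 = open
    }

def pose_character(image, pose):
    # Stand-in for the neural network that repaints the character image.
    return {"source": image, "pose": pose}

def animate(image, camera_frames):
    # One posed character frame per captured camera frame: this is the
    # tracker-to-network connection the article describes.
    return [pose_character(image, estimate_pose(f)) for f in camera_frames]

frames = animate(
    "character.png",
    [{"yaw": 0.0, "eye_close": 0.0, "mouth_open": 1.0},
     {"yaw": 0.5, "eye_close": 1.0, "mouth_open": 0.0}],
)
```

The same loop works whether `camera_frames` comes from a live webcam or from pre-recorded footage, which is why the system can animate a character from either source.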