A deep learning model that animates water flow from a single photo

A research team at the University of Washington in the U.S. has developed a technology that uses deep learning to create moving images from a single photo, without needing a reference video shot at the same location.

Machine learning has shown potential in the field of image augmentation. The company MyHeritage, for example, is already working on technology to animate old portraits. The new method developed by the University of Washington team, meanwhile, focuses on the flow of water, clouds, and smoke in the natural world.

The technical principle is as follows. First, the deep learning model is trained on a large amount of video data; any footage showing the movement of water, such as rivers, waterfalls, or the sea, will do. Only the first frame of each clip is kept, and the model predicts the subsequent water movement from it alone. By comparing its predictions with the actual video, the model gradually learns how water moves.
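The training step described above can be illustrated with a toy sketch: predict the next frame by moving each pixel along a motion field, then score the prediction against the real next frame. This is a minimal NumPy stand-in, not the team's actual model; the `forward_warp` function and the integer flow field are simplifying assumptions for illustration.

```python
import numpy as np

def forward_warp(frame, flow):
    """Move each pixel of `frame` by its (dy, dx) flow vector.

    frame: (H, W) grayscale image; flow: (H, W, 2) integer offsets.
    Pixels that land outside the image are dropped, and unfilled
    target pixels stay 0 -- a crude stand-in for the learned
    per-frame warping the model performs.
    """
    H, W = frame.shape
    out = np.zeros_like(frame)
    ys, xs = np.mgrid[0:H, 0:W]
    ty = ys + flow[..., 0]
    tx = xs + flow[..., 1]
    valid = (ty >= 0) & (ty < H) & (tx >= 0) & (tx < W)
    out[ty[valid], tx[valid]] = frame[ys[valid], xs[valid]]
    return out

# Toy "training" comparison: predict frame 1 from frame 0,
# then measure how far the prediction is from the real frame 1.
frame0 = np.zeros((4, 4))
frame0[1, 1] = 1.0                 # a single bright "water" pixel
flow = np.zeros((4, 4, 2), dtype=int)
flow[..., 1] = 1                   # everything drifts one pixel right
predicted = forward_warp(frame0, flow)
real_frame1 = np.zeros((4, 4))
real_frame1[1, 2] = 1.0
loss = float(((predicted - real_frame1) ** 2).mean())
```

In the real system this comparison would drive gradient updates to the network that predicts the flow field; here the loss is simply computed once to show the idea.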

Once trained, the model takes a single photo as input, predicts the per-pixel motion of the water frame by frame, and produces a short video. One problem here is that real rivers and waterfalls flow continuously: as pixels are moved to reproduce the water's motion, new water must fill the positions they vacate so that the original spots do not empty out. The technique the research team developed, symmetric splatting, therefore predicts how the water moves forward in time and, at the same time, how it moves backward in time, then carefully blends the two so that a realistic looping video can be produced.
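The forward/backward blending can be sketched in a few lines. This is a toy NumPy version under strong assumptions (a uniform horizontal flow and a periodic image, so `np.roll` stands in for warping), not the paper's implementation: each output frame cross-fades a copy warped forward in time with a copy warped backward from the loop's end, so the last frame lands back on the first.

```python
import numpy as np

def shift(frame, dx, steps):
    """Advect the whole frame `steps` pixels along dx (periodic toy domain)."""
    return np.roll(frame, steps * dx, axis=1)

def symmetric_splat_loop(frame, dx, n_frames):
    """Toy symmetric splatting: blend a forward-warped and a backward-warped
    copy of the input so frame n_frames coincides with frame 0, giving a
    seamless loop."""
    video = []
    for t in range(n_frames + 1):
        fwd = shift(frame, dx, t)              # motion with the passage of time
        bwd = shift(frame, dx, t - n_frames)   # motion against the passage of time
        alpha = t / n_frames                   # cross-fade weight
        video.append((1 - alpha) * fwd + alpha * bwd)
    return video

frame = np.arange(16, dtype=float).reshape(4, 4)
loop = symmetric_splat_loop(frame, dx=1, n_frames=4)
# the first and last frames match, so the clip repeats without a visible seam
```

At `t = 0` the forward copy dominates and is unwarped; at `t = n_frames` the backward copy dominates and is likewise unwarped, which is what closes the loop.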

The results of actually generating videos with this model were reportedly mixed. Some clips are reproduced so well they look real, while others feel slightly off. One reason is that the deep learning model does not account at all for how the movement of water and smoke affects light. The complex reflections on a water surface make even slightly unnatural motion feel wrong, and the same goes for smoke or fog that hides or distorts the scenery behind it.

However, since this is a problem that could likely be overcome by feeding more training video into the deep learning model, the technique may eventually be handled by the image editing functions already built into smartphones.



Through the era of the monthly magazines AHC PC and HowPC, he has watched the 'age of technology' at online IT media outlets, serving as a manager at ZDNet and the electronic newspaper's internet edition, editor of the Consumer Journal Ivers, publisher of TechHolic, and editor of Venture Square. He remains curious about this market, which is still full of vitality.
