
Will Chinese AI pretrained models outperform Google and OpenAI models?

On June 1 (local time), a research team led by the Beijing Academy of Artificial Intelligence (BAAI), which is funded by the Chinese government, announced a new pretrained model, WuDao 2.0. WuDao 2.0 uses 1.75 trillion parameters, more than the pretrained models developed by OpenAI or by Google Brain under Google.

WuDao 2.0 is a deep learning model developed by more than 100 researchers from various institutions, centered on BAAI, a non-profit research institute. Its 1.75 trillion parameters are claimed to exceed both the 175 billion of GPT-3, the language model OpenAI announced in June 2020, and the 1.6 trillion of Switch Transformer, a language model developed by Google Brain.

A parameter is a variable that a machine learning model learns from data; as training refines the parameters, the model produces more accurate results. In general, therefore, the more parameters a model contains, the more sophisticated the machine learning model tends to be.
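To make the term concrete, here is a minimal sketch (assuming PyTorch, which the article itself does not mention) of counting a toy model's trainable parameters; WuDao 2.0's 1.75 trillion refers to the same kind of count at vastly larger scale.

```python
# A minimal sketch of what "parameters" means here: the learned
# weights of a model, countable layer by layer. Assumes PyTorch.
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(1024, 4096),  # weight: 1024*4096, bias: 4096
    nn.ReLU(),
    nn.Linear(4096, 1024),  # weight: 4096*1024, bias: 1024
)

total = sum(p.numel() for p in model.parameters())
print(f"{total:,} trainable parameters")  # 8,393,728
```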

WuDao 2.0 was trained on 4.9 TB of text and image data, including 1.2 TB each of Chinese and English text. In addition, unlike deep generative models specialized for a single task such as image generation or face recognition, WuDao 2.0 is reportedly able to write essays and poems, generate descriptive sentences for still images, and generate images from sentence descriptions.

The researchers say that such sophisticated models, trained on large datasets, need only small amounts of new data to pick up a particular capability, because, as with humans, knowledge once learned can be applied to new tasks. WuDao 2.0 is said to be in partnership with 22 companies, including smartphone maker Xiaomi.
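As an illustration of the "small amounts of new data" idea, here is a minimal few-shot prompting sketch, assuming the Hugging Face transformers library and GPT-2 as a small stand-in model (neither is mentioned in the article): the new task is specified by a handful of in-context examples, with no retraining at all.

```python
# Few-shot prompting sketch: the model sees a few examples of a task
# in its prompt and continues the pattern. Assumes the Hugging Face
# `transformers` library, with GPT-2 as a small stand-in model.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = (
    "Translate English to French.\n"
    "sea -> mer\n"
    "sky -> ciel\n"
    "moon ->"
)
print(generator(prompt, max_new_tokens=5)[0]["generated_text"])
```

A model as small as GPT-2 will not translate reliably; the point is the mechanism, which very large pretrained models exploit far more effectively.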

The research team said that a large-scale pretrained model is one of the best shortcuts to general-purpose AI, suggesting that WuDao 2.0 was designed with general-purpose AI in mind. The Chinese government is investing heavily in BAAI, providing 340 million yuan in funding in 2018 and 2019 alone. The U.S.-China technology competition is also intensifying, with the U.S. government announcing an investment of 1 trillion won (roughly 1 billion dollars) in AI and quantum computing in 2020.

Regarding the WuDao 2.0 announcement, some point out that the number of parameters is not the only thing that matters for AI performance; the amount and content of the dataset matter as well. For example, GPT-3 was trained on only 570 GB of data, but that data was extracted from a 45 TB dataset through preprocessing. So while the figures WuDao 2.0 shows are impressive, some argue that they cannot be taken at face value as a measure of model performance. Related information can be found here.
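To show what such preprocessing can look like, here is a minimal sketch of filtering and deduplicating a raw corpus so that only a fraction of it survives. The heuristics here (a length cutoff and exact-hash deduplication) are toy assumptions for illustration, not GPT-3's actual pipeline, which used learned quality classifiers and fuzzy deduplication.

```python
# Toy corpus cleaning: drop short fragments and exact duplicates,
# so a large raw dataset shrinks to a smaller curated one.
import hashlib

def clean_corpus(docs):
    seen = set()
    for doc in docs:
        text = doc.strip()
        if len(text) < 100:          # drop fragments
            continue
        digest = hashlib.sha1(text.encode()).hexdigest()
        if digest in seen:           # drop exact duplicates
            continue
        seen.add(digest)
        yield text

raw = ["short", "a" * 200, "a" * 200, "b" * 150]
print(len(list(clean_corpus(raw))))  # 2 of 4 documents survive
```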

lswcap


Through the era of the monthly magazines AHC PC and HowPC, he has watched the "age of technology" in online IT media, serving at ZDNet, as internet manager of an electronic newspaper, editor of Consumer Journal Ivers, publisher of TechHolic, and editor at Venture Square. He remains curious about this market, which is still full of vitality.
