A child learns a language by listening to parents and other people speaking, watching what is happening around them, and connecting words to objects and situations. A grasp of the grammar and word order that make up the language also develops rapidly during this period.
Computer languages work differently. A programming language is understood by a program that parses the syntax of the source code: because the syntax and semantics of each language are fully specified in advance, the interpreter can execute what it reads.
Human language, however, is difficult to teach to computers this way. Its rules cannot be exhaustively specified in advance as annotations on a syntax, so AI built on that approach struggles to move beyond stilted, unnatural language. For this reason, MIT researchers have developed a parser that behaves like a human child: it observes words in context and learns from what it sees.
An AI equipped with this parser watches captioned video and associates the words of each caption with the objects and actions it observes in the footage, gradually learning how the language fits together. After a period of training, it can draw on what it has learned to infer the likely meaning of a new sentence. This approach offers the same flexibility children have when learning a language: because the system learns from observed situations rather than rigid example phrases, it can pick up words correctly.
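The core idea, learning word meanings from repeated co-occurrence with observed situations rather than from explicit annotations, can be illustrated with a toy sketch. This is not MIT's actual parser; it is a minimal cross-situational learner over hypothetical caption/scene pairs, where each scene is represented by a set of made-up observation labels:

```python
from collections import defaultdict

# Hypothetical training data: each caption is paired with labels
# describing what was observed in the accompanying video clip.
examples = [
    ("the woman picks up the cup",  {"woman", "pick-up", "cup"}),
    ("the man picks up the ball",   {"man", "pick-up", "ball"}),
    ("the woman drops the ball",    {"woman", "drop", "ball"}),
]

# Count how often each word co-occurs with each observed label.
counts = defaultdict(lambda: defaultdict(int))
for caption, scene in examples:
    for word in caption.split():
        for label in scene:
            counts[word][label] += 1

def best_label(word):
    """Return the scene label most often observed alongside `word`."""
    labels = counts.get(word)
    if not labels:
        return None
    return max(labels, key=labels.get)

# Words seen across several different scenes become unambiguous:
print(best_label("picks"))  # -> "pick-up"
print(best_label("ball"))   # -> "ball"
```

A single scene leaves a word's meaning ambiguous, but across many scenes the true association dominates, which mirrors how the article describes the system disambiguating words through repeated observation.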
The MIT researchers hope to use this technique to build robots that can adapt to the speech habits of the people around them, to create AI that picks up and retains words the way a child does, and to handle languages for which example phrases and annotations are difficult to prepare.
As the research progresses, AI systems may become able to learn language not only from video but also from one another. Conversely, studying such systems may shed light on how children learn about the world through language.