By using artificial intelligence to automatically identify people and objects in pictures, machines can now perform various tasks that previously only humans could do. However, even cutting-edge image recognition AI has unexpected weaknesses: researchers have discovered that it can be tricked with nothing more than a handwritten note.
OpenAI, an AI research laboratory, has released an image classification model called CLIP. CLIP is distinctive in that it learns not only from images but also from natural language descriptions of those images. This lets the AI associate words with pictures in much the same way a human pictures an apple on reading the word "apple".
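As a rough sketch of what this looks like in practice, the following uses the open-source `clip` package OpenAI released alongside the model to score an image against candidate text labels. The file name and label list are placeholders chosen for illustration:

```python
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# Placeholder image path and candidate labels
image = preprocess(Image.open("apple.jpg")).unsqueeze(0).to(device)
labels = ["an apple", "an iPod", "a dog"]
text = clip.tokenize(labels).to(device)

with torch.no_grad():
    # CLIP scores the image against each text label
    logits_per_image, _ = model(image, text)
    probs = logits_per_image.softmax(dim=-1)

for label, p in zip(labels, probs[0].tolist()):
    print(f"{label}: {p:.2%}")
```

Because the labels are free-form text rather than a fixed set of classes, the same model can classify against whatever categories are typed in, with no retraining.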
However, the very property that makes CLIP good at tagging images and recalling images from text, its tendency to link text literally to what it sees, is also a weakness. OpenAI acknowledges that "typographic attacks," which can trick the AI with nothing more than handwriting, are effective against CLIP.
For example, CLIP correctly recognizes an apple in an image, but if a handwritten note reading "iPod" is attached to it, the apple is almost always classified as an iPod. OpenAI explains that this misrecognition stems from the highly abstract way CLIP performs classification.
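The attack is simple to reproduce in outline. The sketch below, with placeholder file names standing in for a plain apple photo and the same apple with a handwritten "iPod" note stuck to it, compares the scores for the two labels:

```python
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)
text = clip.tokenize(["an apple", "an iPod"]).to(device)

# Placeholder file names: a plain apple, and the same apple
# with a handwritten "iPod" label attached
for path in ["apple.jpg", "apple_with_ipod_note.jpg"]:
    image = preprocess(Image.open(path)).unsqueeze(0).to(device)
    with torch.no_grad():
        logits_per_image, _ = model(image, text)
        probs = logits_per_image.softmax(dim=-1)[0]
    print(path, f"apple: {probs[0]:.2%}", f"iPod: {probs[1]:.2%}")
```

In OpenAI's demonstration, the second image flips from "apple" to "iPod" with high confidence, even though nothing about the fruit itself has changed.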
Methods for deceiving image recognition AI are an active area of research. For example, Google researchers showed in 2018 how to confuse an image classifier with a single sticker. There are also reports that Tesla's driver-assistance system, which drives by recognizing the surroundings from camera images, was made to change lanes at will simply by placing stickers resembling lane markings on the road.
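The sticker in the Google work is an "adversarial patch": an image region optimized so that any photo containing it gets pushed toward a target class. The following is a minimal from-scratch sketch of that idea, not the researchers' actual code; the model, patch size, placement, and random stand-in images are all illustrative assumptions:

```python
import torch
import torch.nn.functional as F
import torchvision.models as models

# Any pretrained classifier works as the victim model
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()

patch = torch.rand(3, 50, 50, requires_grad=True)  # the "sticker"
optimizer = torch.optim.Adam([patch], lr=0.01)
target_class = 859  # ImageNet "toaster", the target used in the 2018 paper

def apply_patch(image, patch, x=80, y=80):
    # Paste the patch onto a fixed region of the image
    patched = image.clone()
    patched[:, :, y:y+50, x:x+50] = patch.clamp(0, 1)
    return patched

for step in range(200):
    # Stand-in for a batch of real photos; a real attack would also
    # apply ImageNet normalization and random patch placement
    image = torch.rand(1, 3, 224, 224)
    logits = model(apply_patch(image, patch))
    # Push the classifier toward the target class regardless of content
    loss = F.cross_entropy(logits, torch.tensor([target_class]))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Because the patch is optimized over many backgrounds, it can be printed out and physically placed in a scene, which is what makes this family of attacks a real-world concern.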
The existence of adversarial images that deceive image recognition AI could become a vulnerability in any future system that relies on it. There was also a case in which Google apologized after its image recognition AI labeled a Black man as a gorilla. Because such systems tag directly from images, they can form crude associations, and the AI can end up encoding bias.
In fact, neurons linking "terrorism" to "the Middle East," "Latin America" to "immigration," and dark-skinned people to "gorilla" have been found inside CLIP. OpenAI says the connections behind these biases are barely visible from the outside, making them difficult to predict in advance and potentially difficult to correct.
CLIP remains an experimental system, and OpenAI says its understanding of the model is still at an early stage. This underscores the point that before entrusting our lives to AI, we need to take such systems apart and understand how they work. Related information can be found here.