Paper Detail

Paper ID: C-3-3.5
Paper Title: SUPPORTIVE AND SELF ATTENTIONS FOR IMAGE CAPTION
Authors: Jen-Tzung Chien, Ting-An Lin, National Chiao Tung University, Taiwan
Session: C-3-3: Machine Learning for Small-sample Data Analysis
Time: Thursday, 10 December, 17:30 - 19:30
Presentation Time: Thursday, 10 December, 18:30 - 18:45
All times are in New Zealand Time (UTC +13)
Topic: Machine Learning and Data Analytics (MLDA): Special Session: Machine Learning for Small-sample Data Analysis
Abstract: Attention over an observed image or natural sentence works by spotting or locating the region or position of interest for pattern classification. The attention parameter is treated as a latent variable that is indirectly estimated by minimizing the classification loss. With such an attention mechanism, the target information may not be correctly identified. Therefore, in addition to minimizing the classification error, we directly attend to the region of interest by minimizing the reconstruction error with respect to supporting data. Our idea is to learn how to attend through the so-called supportive attention when supporting information is available. A new attention mechanism is developed to conduct attentive learning for translation invariance, which is applied to image captioning. The derived information is helpful for generating a caption from an input image. Moreover, this paper presents an association network which not only implements word-to-image attention but also carries out image-to-image attention via self attention. The relations between image and text are thus sufficiently represented. Experiments on the MS-COCO task show the benefit of the proposed supportive and self attentions for image captioning with the key-value memory network.
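
The sketch below is a minimal, hypothetical PyTorch illustration of the two ideas named in the abstract, not the authors' implementation: image-to-image self attention over region features followed by word-to-image attention, trained with a captioning (classification) loss plus a reconstruction loss computed against supporting data. All module names, feature dimensions, the vocabulary size, and the loss weighting alpha are assumptions made for illustration; the paper's key-value memory network is not reproduced here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SupportiveSelfAttention(nn.Module):
    """Hypothetical sketch of self attention plus word-to-image attention."""
    def __init__(self, region_dim=2048, word_dim=512, hidden=512, vocab=10000):
        super().__init__()
        # projections for image-to-image self attention over region features
        self.q = nn.Linear(region_dim, hidden)
        self.k = nn.Linear(region_dim, hidden)
        self.v = nn.Linear(region_dim, hidden)
        # word-to-image attention: the current word state queries the regions
        self.wq = nn.Linear(word_dim, hidden)
        # reconstruct the supporting feature from the attended context
        self.reconstruct = nn.Linear(hidden, region_dim)
        # predict the next caption word from the attended context
        self.classify = nn.Linear(hidden, vocab)

    def forward(self, regions, word_state):
        # regions: (batch, num_regions, region_dim); word_state: (batch, word_dim)
        q, k, v = self.q(regions), self.k(regions), self.v(regions)
        self_attn = F.softmax(q @ k.transpose(1, 2) / k.size(-1) ** 0.5, dim=-1)
        regions_ctx = self_attn @ v                     # image-to-image attention
        wq = self.wq(word_state).unsqueeze(1)           # (batch, 1, hidden)
        word_attn = F.softmax(
            wq @ regions_ctx.transpose(1, 2) / wq.size(-1) ** 0.5, dim=-1)
        context = (word_attn @ regions_ctx).squeeze(1)  # word-to-image attention
        return self.classify(context), self.reconstruct(context), word_attn

def supportive_loss(logits, target_word, recon, support_feature, alpha=0.5):
    # classification error plus reconstruction error on the supporting data;
    # alpha is an assumed weighting, not a value taken from the paper
    return F.cross_entropy(logits, target_word) + alpha * F.mse_loss(recon, support_feature)

# toy usage with random tensors in place of real detected regions and word states
regions = torch.randn(2, 36, 2048)
word_state = torch.randn(2, 512)
support = torch.randn(2, 2048)
target = torch.randint(0, 10000, (2,))
model = SupportiveSelfAttention()
logits, recon, attn = model(regions, word_state)
loss = supportive_loss(logits, target, recon, support)
```

The intent of the sketch is that the attention weights receive gradients from both loss terms, so the attended regions are shaped directly by the supporting data rather than only indirectly through the classification loss, which is the motivation stated in the abstract.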