Table-to-text generation has been widely studied in the Natural Language Processing community in recent years. Given the significant increase in multimodal data, it is critical to incorporate signals from both images and tables to generate relevant text. While tables contain a structured list of facts, images are a rich source of unstructured visual information. For example, in the tourism domain, images can be used to infer knowledge such as the type of landmark (e.g., Church), its architecture (e.g., Ancient Roman), and its composition (e.g., White marble). However, previous work has largely focused on table-to-text generation without images. In this paper, we introduce the novel task of Vision-augmented Table-To-Text Generation (VisToT), defined as follows: given a table and an associated image, produce a descriptive sentence conditioned on the multimodal input. For this task, we present a novel multimodal table-to-text dataset, WikiLandmarks, covering 73,084 unique world landmarks. We also present a novel, competitive multimodal architecture, VT3, that generates accurate sentences conditioned on image-table pairs. Through extensive experiments and analyses, we show that visual cues from images are helpful in (i) inferring missing information from incomplete tables, and (ii) strengthening the importance of useful information from noisy tables for natural language generation.
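As a minimal sketch of what a VisToT input/output pair looks like, the snippet below models one example as a table of attribute-value pairs, an image, and a reference sentence. The field names and the record shown are illustrative assumptions, not the actual WikiLandmarks schema.

from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class VisToTSample:
    # One VisToT example: a (possibly incomplete) table, the associated image,
    # and the reference sentence the model should generate.
    table: List[Tuple[str, str]]  # (attribute, value) pairs from the landmark's table
    image_path: str               # path to the associated landmark image
    target_sentence: str          # reference descriptive sentence

# Hypothetical record for illustration only. Note the empty "architecture" cell,
# which a multimodal model could fill in from visual cues in the image.
sample = VisToTSample(
    table=[("name", "St. Example Church"), ("country", "Italy"), ("architecture", "")],
    image_path="images/st_example_church.jpg",
    target_sentence="St. Example Church is an Ancient Roman-style church in Italy built of white marble.",
)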
Please cite our work as follows:
@inproceedings{vistot2022emnlp,
  author    = "Gatti, Prajwal and Mishra, Anand and Gupta, Manish and Das Gupta, Mithun",
  title     = "VisToT: Vision-Augmented Table-to-Text Generation",
  booktitle = "EMNLP",
  year      = "2022",
}