Show Me the World in My Language: Establishing the First Baseline
for Scene-Text to Scene-Text Translation
1 IIT Jodhpur | 2 University of Bristol
(*: Equal Contribution)
| Paper | Code | Dataset | Short Talk | Poster |
Update: The dataset and an initial release of the code are now available.
Abstract
In this work, we study the task of "visually" translating scene text from a source language (e.g., Hindi) to a target language (e.g., English). Visual translation involves not just the recognition and translation of scene text but also the generation of a translated image that preserves visual features of the source scene text, such as font, size, and background. The task poses several challenges: translation with limited context, deciding between translation and transliteration, accommodating varying text lengths within fixed spatial boundaries, and preserving the font and background styles of the source scene text in the target language. To address this problem, we make the following contributions: (i) we study visual translation as a standalone problem for the first time in the literature; (ii) we present a cascaded framework for visual translation that combines state-of-the-art modules for scene text recognition, machine translation, and scene text synthesis as a baseline for the task; (iii) we propose a set of task-specific design enhancements that yield a variant of the baseline with improved performance; (iv) since the existing literature lacks any comprehensive performance evaluation for this novel task, we introduce several automatic and user-assisted evaluation metrics designed explicitly for evaluating visual translation. We further evaluate the presented baselines on translating scene text between Hindi and English. Our experiments demonstrate that although we can effectively perform visual translation over a large collection of scene text images, the presented baselines only partially address the challenges posed by the visual translation task. We firmly believe that this new task, together with the limitations of existing models reported in this paper, should encourage further research in visual translation.
Keywords: Visual Translation, Scene Text Synthesis, Evaluation Metrics, Cross-lingual Scene Text Editing.
The Visual Translation Problem
Suppose you are visiting Delhi, India, and arrive at the Rithala (Hindi: रिठाला) metro station. If you are not familiar with Hindi, the signboard on the left may be incomprehensible. The output of our proposed baseline, shown on the right, seamlessly transliterates the station name रिठाला into English. In this work, we aim to visually translate (or transliterate, when necessary, as in this case) text from the source language to the target language while preserving the visual attributes of the source scene text. Specifically, we focus on visual translation from Hindi to English and vice versa.
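The cascaded baseline described in the abstract (scene text recognition, machine translation, scene text synthesis) can be pictured as three pluggable stages. The Python sketch below shows one way such a pipeline could be wired together; the stage interfaces (recognize, translate, render) and the WordBox container are hypothetical placeholders for illustration, not the actual modules used in our implementation.

from dataclasses import dataclass

@dataclass
class WordBox:
    text: str    # recognized source-language string
    bbox: tuple  # (x, y, w, h) image region occupied by the word
    style: dict  # visual attributes to preserve (font, size, colors, background)

def recognize(image):
    """Stage 1: scene text detection and recognition -- returns WordBox list."""
    raise NotImplementedError  # plug in any state-of-the-art recognizer

def translate(text, src, tgt):
    """Stage 2: machine translation, falling back to transliteration when
    appropriate (e.g., for proper nouns such as station names)."""
    raise NotImplementedError  # plug in an MT model and a transliterator

def render(image, box, target_text):
    """Stage 3: scene text synthesis -- erase the source word and write
    target_text in the same region with the same visual style."""
    raise NotImplementedError  # plug in a scene-text-editing model

def visually_translate(image, src="hi", tgt="en"):
    """Cascade the three stages over every word found in the image."""
    for box in recognize(image):
        image = render(image, box, translate(box.text, src, tgt))
    return image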
Dataset
VT-Real: Real Scene Image Dataset for evaluating Visual Translation between Hindi and English
VT-Syn: Synthetic Training Data for Visual Translation between Hindi and English
VT-Syn is a synthetically generated corpus of ~600K visually diverse English-Hindi word-image pairs. It can be used for training visual translation, cross-lingual scene text editing, scene text removal, or scene text binarization.
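As a usage illustration, the snippet below sketches one way to iterate over such word-image pairs for training. The directory layout (hindi/ and english/ folders with matching filenames) and the use of Pillow are assumptions for illustration only; consult the dataset release for the actual file organization.

from pathlib import Path
from PIL import Image

def load_pairs(root):
    """Yield (hindi_image, english_image) word-image pairs from an assumed layout."""
    root = Path(root)
    for src_path in sorted((root / "hindi").glob("*.png")):
        tgt_path = root / "english" / src_path.name  # same filename, other language
        if tgt_path.exists():
            yield Image.open(src_path), Image.open(tgt_path)

# Example: iterate over pairs when training a visual translation model.
# for src_img, tgt_img in load_pairs("VT-Syn"):
#     ...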
Short Talk

Paper
Show Me the World in My Language: Establishing the First Baseline for Scene-Text to Scene-Text Translation
BibTeX