Deep Context-Encoding Network For Retinal Image Captioning

Jia-Hong Huang, Ting-Wei Wu, Chao-Han Huck Yang, Marcel Worring

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Automatically generating medical reports for retinal images is a promising way to help ophthalmologists reduce their workload and improve efficiency. In this work, we propose a new context-driven encoding network that automatically generates medical reports for retinal images. The proposed model is composed mainly of a multi-modal input encoder and a fused-feature decoder. Our experimental results show that the proposed method effectively leverages the interactive information between the input image and its context, i.e., keywords in our case. The proposed method generates more accurate and meaningful reports for retinal images than baseline models and achieves state-of-the-art performance on several metrics commonly used for the medical report generation task: BLEUavg (+16%), CIDEr (+10.2%), and ROUGE (+8.6%).
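The abstract describes an encoder-decoder design in which image features and keyword-context features are fused before decoding the report. The following is a minimal NumPy sketch of that general idea only; all dimensions, the mean-pooled keyword embedding, the tanh fusion projection, and the one-step greedy decoder are illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions, chosen for illustration (not from the paper).
IMG_DIM, TXT_DIM, FUSED_DIM, VOCAB = 256, 128, 192, 1000

# Toy image feature, standing in for a CNN backbone's output (assumed).
image_feat = rng.standard_normal(IMG_DIM)

# Keyword context: embed each keyword and mean-pool (an assumed fusion input).
keyword_embeds = rng.standard_normal((3, TXT_DIM))  # e.g., 3 keywords
keyword_feat = keyword_embeds.mean(axis=0)

# Multi-modal fusion: concatenate both modalities, then a linear projection.
W_fuse = rng.standard_normal((FUSED_DIM, IMG_DIM + TXT_DIM)) * 0.01
fused = np.tanh(W_fuse @ np.concatenate([image_feat, keyword_feat]))

# One-step "decoder": project fused features to vocabulary logits and
# greedily pick the next report token (stand-in for a sequence decoder).
W_out = rng.standard_normal((VOCAB, FUSED_DIM)) * 0.01
next_token = int(np.argmax(W_out @ fused))
print(fused.shape, next_token)
```

In a real report generator the final step would run repeatedly inside a recurrent or attention-based decoder; the sketch only shows how the two modalities can be combined into one fused feature.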
Original language: English (US)
Title of host publication: 2021 IEEE International Conference on Image Processing (ICIP)
Publisher: IEEE
DOIs
State: Published - Aug 23 2021
Externally published: Yes