This paper summarizes our contribution to the Technicolor Rich Multimedia Retrieval from Input Videos Grand Challenge. We hold the view that semantic analysis of a given news video is best performed in the text domain. Starting with the noisy text obtained from Automatic Speech Recognition (ASR), we apply a graph-based approach to enrich the text by propagating labels from visually similar videos culled from parallel (YouTube) news sources. From the enriched text, we extract salient keywords to form a query to a news video search engine, retrieving a larger corpus of related news videos. Compared to a baseline method that uses only the ASR text, a significant improvement in precision is obtained,
indicating that retrieval benefits from the ingestion of the external labels. Capitalizing on the enriched metadata, we find that videos become more amenable to Wikipedia-based Explicit Semantic Analysis (ESA), resulting in better support for subtopic news video retrieval. We apply our methods to an in-house live news search portal and report on several best practices.
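The label-enrichment step described above can be illustrated with a minimal sketch of label propagation over a video-similarity graph. The graph construction, weighting scheme, and damping factor below are illustrative assumptions, not the paper's exact formulation: videos are nodes, edge weights encode visual similarity, and label scores flow from already-labeled (e.g. YouTube) videos to the unlabeled query video.

```python
from collections import defaultdict

def propagate_labels(sim, labels, iterations=10, alpha=0.5):
    """Propagate label scores over a video-similarity graph (illustrative sketch).

    sim:    dict mapping (video_i, video_j) -> visual similarity in [0, 1]
    labels: dict mapping video -> {label: score} for already-labeled videos
    alpha:  share of a node's score taken from its neighbors; labeled
            nodes retain a (1 - alpha) share of their original labels.
    """
    # Build an adjacency list, treating similarity pairs as symmetric.
    neighbors = defaultdict(list)
    for (a, b), w in sim.items():
        neighbors[a].append((b, w))
        neighbors[b].append((a, w))

    nodes = set(neighbors) | set(labels)
    scores = {v: dict(labels.get(v, {})) for v in nodes}

    for _ in range(iterations):
        new_scores = {}
        for v in nodes:
            agg = defaultdict(float)
            total = sum(w for _, w in neighbors[v]) or 1.0
            # Weighted average of neighbors' current label scores.
            for u, w in neighbors[v]:
                for lab, s in scores[u].items():
                    agg[lab] += alpha * (w / total) * s
            # Clamp: labeled nodes keep part of their original labels.
            for lab, s in labels.get(v, {}).items():
                agg[lab] += (1 - alpha) * s
            new_scores[v] = dict(agg)
        scores = new_scores
    return scores
```

In a usage scenario matching the paper's setting, a query news video with only noisy ASR text would receive label scores from its visually similar YouTube neighbors; the highest-scoring labels then augment the ASR text before keyword extraction.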