In this paper we describe our automatic approach to the NTCIR-13 Lifelog Semantic Access Task. The task is to retrieve relevant lifelog images from a user's daily life given an event topic. A major challenge is bridging the semantic gap between lifelog images and event-level topics. We propose a general framework to address this problem, whose key components are: various CNNs that translate lifelog images into object and scene features; searching for object/scene concepts relevant to each event; feature weighting adapted to each event; and temporal smoothing that incorporates semantic coherence into the similarity between each image and the query event. We
achieved an official result of 57.6% in terms of mean precision over 20 topics. We also analyze the effect of the key components on the retrieval system.
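To make the temporal-smoothing idea concrete, the sketch below averages each image's query-similarity score with those of its temporal neighbors, so that isolated spurious matches are damped and semantically coherent stretches are reinforced. This is a minimal illustration under assumed names (`temporal_smooth`, `window`), not the authors' actual implementation.

```python
def temporal_smooth(scores, window=1):
    """Replace each image's similarity score with the mean over a
    temporal neighborhood of +/- `window` images (hypothetical sketch).
    """
    n = len(scores)
    smoothed = []
    for i in range(n):
        lo = max(0, i - window)          # clamp window at sequence start
        hi = min(n, i + window + 1)      # clamp window at sequence end
        smoothed.append(sum(scores[lo:hi]) / (hi - lo))
    return smoothed

# An isolated high score (0.9) between low neighbors is pulled down,
# while scores inside a coherent high-scoring run stay high.
raw = [0.1, 0.9, 0.2, 0.8, 0.9, 0.8]
print(temporal_smooth(raw, window=1))
```

A larger `window` enforces coherence over longer events at the cost of blurring event boundaries.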