Life-logging applications generate vast amounts of personalized data that provide vital insights into a user's daily life. One such key insight is the set of people the user has encountered or interacted with in everyday life, which can be obtained from the faces extracted from images acquired by a wearable life-logging camera. However, manual inspection and tagging of life-logging images is cumbersome and highly subjective. Therefore, in this paper, a fully automatic method to extract and cluster faces from the images obtained by a life-logging camera is designed and evaluated. It is shown that such a practical system, built from commercial off-the-shelf devices and commercially available face recognition APIs, achieves human-like precision, although its recall may be lower than human performance.