While much effort has been devoted to deep-learning-based object detection, relatively little attention has been paid to object detection in bad weather, e.g. rain, snow, or haze. In heavy rain, raindrops on the windshield can make it difficult to detect objects from an in-car camera. The conventional way to cope with this has been to use radar as the main detection sensor. However, radar is highly susceptible to false positives. Furthermore, many entry-level radar sensors return only the centroid of each detected object, rather than its
size and extent. In addition, due to the lack of texture information, radar cannot discriminate a vehicle from a non-vehicle object, e.g. a roadside pole. This motivates us to detect vehicles by fusing radar and vision. In this paper, we first calibrate the radar and camera with respect to the ground plane. The radar detections are then projected onto the camera image for target width estimation. Empirical evaluation on a large database shows a natural synergy between the two sensors, as the image-based estimation is greatly facilitated by the accuracy of the radar detections.
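The projection step above can be sketched as follows. This is a minimal illustration, not the paper's actual calibration pipeline: it assumes the ground-plane-to-image mapping has already been recovered as a 3x3 homography `H` (here built from a hypothetical forward-facing camera with focal length 1000 px, principal point (640, 360), mounted 1.5 m above the ground), and that radar reports a ground-plane position (x lateral, y forward, in metres).

```python
import numpy as np

# Hypothetical ground-plane-to-image homography from radar-camera
# calibration. For a camera looking along +y at height h, a ground
# point (x, y) maps to u = f*x/y + cx, v = cy + f*h/y.
f, cx, cy, h = 1000.0, 640.0, 360.0, 1.5
H = np.array([
    [f, cx,     0.0],
    [0.0, cy, f * h],
    [0.0, 1.0,  0.0],
])

def project_radar_to_image(H, x, y):
    """Project a radar detection (x, y) on the ground plane (metres)
    into pixel coordinates via the homography H."""
    p = H @ np.array([x, y, 1.0])
    return p[:2] / p[2]  # normalise homogeneous coordinates

def estimate_width_px(H, x, y, width_m):
    """Estimate the on-image width (pixels) of a target of assumed
    physical width (metres) centred at radar position (x, y)."""
    left = project_radar_to_image(H, x - width_m / 2.0, y)
    right = project_radar_to_image(H, x + width_m / 2.0, y)
    return abs(right[0] - left[0])
```

For example, a 1.8 m wide vehicle straight ahead at 20 m spans 90 px under this hypothetical camera, and the span shrinks as the target moves farther away, which is why an accurate radar range anchors the image-based width estimate.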