Our work on domain adaptation of object detectors using unlabeled video has been accepted at CVPR 2019. It provides a straightforward way to adapt to novel domains with minimal dependence on hyper-parameters or a labeled validation set. This is a follow-up to our earlier work at ECCV 2018, which mined unlabeled videos to automatically collect hard examples for object detectors.
A big shout-out to everyone who made this possible: Prithvijit, Ashish, Huaizu, SouYoung, and of course our professors Liangliang and Erik.
For details, please check the project page.