In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
We present a principled approach to uncovering the structure of visual data by solving a novel deep learning task that we coin visual permutation learning. Moreover, we propose DeepPermNet, an end-to-end CNN model for this task. The utility of our proposed approach is demonstrated on two challenging computer vision problems, namely relative attributes learning and self-supervised representation learning.
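As an illustrative sketch of the mechanics of permutation learning (a common differentiable relaxation, not necessarily the exact layer used in DeepPermNet): a network scores how likely each shuffled patch is to occupy each original position, and the score matrix is relaxed into a doubly-stochastic matrix by Sinkhorn normalization, from which a hard permutation can be read off. All names and sizes below are chosen for illustration only.

import numpy as np

def sinkhorn(scores, n_iters=20):
    """Relax a square matrix of scores into a doubly-stochastic matrix
    by alternately normalizing rows and columns (Sinkhorn iterations)."""
    P = np.exp(scores)                          # ensure positivity
    for _ in range(n_iters):
        P = P / P.sum(axis=1, keepdims=True)    # row normalization
        P = P / P.sum(axis=0, keepdims=True)    # column normalization
    return P

# Toy usage: scores for a 4-patch shuffle; a hard permutation can be
# recovered from the relaxed matrix with argmax (or the Hungarian algorithm).
rng = np.random.default_rng(0)
scores = rng.normal(size=(4, 4))
P = sinkhorn(scores)
perm = P.argmax(axis=1)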
We combine motion features with the Aggregated Channel Features (ACF) pedestrian detector. We demonstrate that motion features yield more accurate detections and fewer false alarms.
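The abstract does not specify which motion features are used; purely as a hedged illustration, one simple way to combine motion cues with a channel-features detector is to append temporal-difference channels to the appearance channels before boosting. Everything below (array shapes, function names, the stand-in appearance channels) is hypothetical.

import numpy as np

def motion_channels(prev_gray, curr_gray):
    """A simple family of motion features: signed and absolute temporal
    difference of consecutive grayscale frames, stacked as extra channels."""
    diff = curr_gray.astype(np.float32) - prev_gray.astype(np.float32)
    return np.stack([np.abs(diff), diff], axis=-1)

def augment_channels(appearance_channels, prev_gray, curr_gray):
    """Stack motion channels alongside appearance channels so the same
    boosted detector can pool features over both."""
    return np.concatenate(
        [appearance_channels, motion_channels(prev_gray, curr_gray)], axis=-1)

# Toy usage with random frames; real inputs would be consecutive video frames
# and the usual ACF channels (LUV, gradient magnitude, orientation histograms).
H, W = 64, 32
prev_f, curr_f = np.random.rand(H, W), np.random.rand(H, W)
appearance = np.random.rand(H, W, 10)
channels = augment_channels(appearance, prev_f, curr_f)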
In this technical report we collect results on differentiating argmin and argmax optimization problems, with and without constraints, and discuss several motivating applications. Such results are very useful for developing end-to-end gradient-based learning methods.
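The central unconstrained result (stated here in scalar form for illustration; the notation is ours, not necessarily the report's) follows from differentiating the first-order optimality condition of the inner problem:
\[
g(x) = \operatorname{argmin}_{y} f(x, y), \qquad
f_Y(x, g(x)) = 0 \;\Longrightarrow\;
g'(x) = -\frac{f_{XY}(x, g(x))}{f_{YY}(x, g(x))},
\]
provided $f$ is twice differentiable and $f_{YY}(x, g(x)) \neq 0$; the same expression holds for $\operatorname{argmax}$ at a maximizer.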
We demonstrate that it is possible to exactly evaluate Bayesian model averaging (BMA) over the exponentially-sized powerset of Naive Bayes (NB) feature models in time linear in the number of features; this yields an algorithm about as expensive to train as a single NB model with all features, yet one that provably converges to the globally optimal feature subset in the asymptotic limit of data.
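The reason such an exponential sum can collapse to linear time is that the NB likelihood factorizes across features; under a prior in which features are included independently, the sum over all $2^n$ subsets turns into a product of $n$ per-feature terms. A hedged sketch of the kind of identity involved (the symbols $a_i$, $b_i$, $\pi_i$ are illustrative placeholders for per-feature quantities, not the paper's notation):
\[
\sum_{S \subseteq \{1,\dots,n\}} \Bigl( \prod_{i \in S} \pi_i\, a_i \Bigr) \Bigl( \prod_{i \notin S} (1-\pi_i)\, b_i \Bigr)
= \prod_{i=1}^{n} \bigl( \pi_i\, a_i + (1-\pi_i)\, b_i \bigr).
\]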
We present an evolutionary algorithm that simultaneously tackles the regenerator placement and link capacity optimization problems in translucent optical networks. Our proposed method can assist a network designer in managing resources while balancing cost and performance.
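As a hedged, minimal sketch of how such an evolutionary search could be structured (the chromosome encoding, costs, and penalty below are illustrative placeholders, not the paper's model): each candidate pairs a regenerator placement with per-link capacities, and selection plus mutation trade equipment cost against a stand-in performance penalty.

import random

# Illustrative problem sizes and cost weights (hypothetical, not from the paper).
N_NODES, N_LINKS, MAX_CAP = 10, 15, 4
REGEN_COST, CAP_COST = 5.0, 1.0

def random_solution():
    """A candidate: which nodes host regenerators, plus a capacity per link."""
    regens = [random.random() < 0.5 for _ in range(N_NODES)]
    caps = [random.randint(1, MAX_CAP) for _ in range(N_LINKS)]
    return regens, caps

def fitness(sol):
    """Toy objective: minimize equipment cost while penalizing low capacity.
    A real evaluation would check optical reach and traffic demands."""
    regens, caps = sol
    cost = REGEN_COST * sum(regens) + CAP_COST * sum(caps)
    penalty = sum(10.0 for c in caps if c < 2)   # stand-in performance penalty
    return -(cost + penalty)                     # higher is better

def mutate(sol):
    """Flip one regenerator placement and resample one link capacity."""
    regens, caps = list(sol[0]), list(sol[1])
    regens[random.randrange(N_NODES)] ^= True
    caps[random.randrange(N_LINKS)] = random.randint(1, MAX_CAP)
    return regens, caps

def evolve(pop_size=30, generations=50):
    """Simple (mu + lambda)-style loop: keep the best half, refill by mutation."""
    pop = [random_solution() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        pop = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    return max(pop, key=fitness)

best_placement, best_capacities = evolve()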