We obtained state-of-the-art results by integrating global motion features.
[paper]
We show that high-performance optical flow estimation can be achieved without using dense cost volumes.
[paper]
We theoretically solve the vanishing/exploding gradients problem in neural networks. Key idea: constrain the signal norm in both the forward and backward directions via a new class of activation functions and orthogonal weight matrices.
[paper]
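The orthogonal-weights half of the idea above can be illustrated with a small numerical check (a hedged sketch only; the paper's activation functions are not shown, and the matrix size is arbitrary): an orthogonal weight matrix preserves the norm of a signal in the forward pass, and its transpose, which propagates gradients, is also orthogonal.

```python
import numpy as np

rng = np.random.default_rng(0)

# QR decomposition of a random square matrix yields an orthogonal Q.
Q, _ = np.linalg.qr(rng.normal(size=(64, 64)))
x = rng.normal(size=64)

# Forward direction: an orthogonal matrix preserves the signal norm.
print(np.allclose(np.linalg.norm(Q @ x), np.linalg.norm(x)))    # True

# Backward direction: Q.T is also orthogonal, so gradient norms are
# preserved as well -- neither vanishing nor exploding.
print(np.allclose(np.linalg.norm(Q.T @ x), np.linalg.norm(x)))  # True
```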
We pointed out the problem that image warping causes in optical flow estimation and proposed the deformable cost volume to address it.
[paper]
We proposed a new neural network module, Contrast Association Units, to model the relations between two sets of input variables.
[paper]
We proposed a new matrix approximation method that allows efficient matrix inversion, and applied it to second-order optimization algorithms for training neural networks.
[paper]
We proposed a new method for privacy-preserving predictions with trained neural networks.
[paper]
We designed a fast graph matching algorithm with time complexity O(n^3) per iteration, where n is the size of a graph, and proved its convergence rate. It matches two graphs of 1,000 nodes within 10 seconds on a PC.
[paper]
We found that visual attributes of object classes can be learned in an unsupervised manner by applying Independent Component Analysis to the softmax outputs of a trained ConvNet. We showed that such attributes are useful for object recognition through zero-shot learning experiments on the ImageNet dataset of over 20,000 object classes.
[paper]
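The attribute-discovery procedure above can be sketched in a few lines (a hedged illustration only: the data here is random stand-in softmax output, and the component count and array shapes are assumptions, not the paper's setup). ICA recovers statistically independent components from the class-probability matrix; each component can be read as a candidate attribute shared across object classes.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)

# Stand-in for softmax outputs of a trained ConvNet over many images:
# rows are images, columns are class probabilities.
n_images, n_classes = 500, 50
logits = rng.normal(size=(n_images, n_classes))
softmax = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

# Run ICA on the softmax outputs; each independent component is a
# candidate visual attribute.
ica = FastICA(n_components=10, random_state=0)
attributes = ica.fit_transform(softmax)  # per-image attribute activations
mixing = ica.mixing_                     # class-to-attribute loading matrix

print(attributes.shape)  # (500, 10)
print(mixing.shape)      # (50, 10)
```

For zero-shot use, the class-to-attribute loadings in `mixing` would let an unseen class be described in terms of attributes learned from seen classes.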
We designed a fast Maximum Common Subgraph (MCS) algorithm for planar triangulation graphs. Its time complexity is O(mnk), where n and m are the sizes of the two graphs and k is the size of their MCS.
[paper]
We found a unique traveling bump solution, previously unknown to exist, in a set of two-dimensional neural network equations.
[paper]