Blog

Discrete de Rham-Hodge cohomology theory: application to game theory and statistical ranking (Part I)
Cohomology is a central concept in algebraic topology and geometry. Oddly enough, this theory applies to certain concrete problems arising in machine learning (e.g. globally consistent ranking of items on a platform like Amazon or Netflix) and game theory (e.g. approximating multiplayer non-cooperative games with potential games, with a view to computing approximate Nash equilibria of the former).
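For a flavour of what Hodge-theoretic ranking looks like in practice, here is a tiny pure-Python sketch (the items, scores, and step size are all made up for illustration): given pairwise preference data, the globally consistent ranking is the least-squares potential whose gradient best explains the pairwise flows.

```python
# Hypothetical data: Y[i, j] is how much item j is preferred over item i.
# With fully consistent comparisons, the least-squares fit recovers a
# global score for each item -- the "gradient" part of the Hodge decomposition.
true_scores = {"A": 2.0, "B": 1.0, "C": 0.0}
items = list(true_scores)
Y = {(i, j): true_scores[j] - true_scores[i]
     for i in items for j in items if i != j}

# Gradient descent on  sum over edges of (s[j] - s[i] - Y[i, j])^2.
s = {i: 0.0 for i in items}
for _ in range(2000):
    grad = {i: 0.0 for i in items}
    for (i, j), y in Y.items():
        r = s[j] - s[i] - y          # residual on edge (i, j)
        grad[j] += 2 * r
        grad[i] -= 2 * r
    for i in items:
        s[i] -= 0.05 * grad[i]

ranking = sorted(items, key=s.get, reverse=True)
print(ranking)  # consistent data => ['A', 'B', 'C']
```

With inconsistent (cyclic) preferences, the residual of this least-squares fit is exactly the part of the data the Hodge decomposition attributes to local and global cycles; more on that in the post.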

Beware of standardizing data before running PCA!
Standardization is important in PCA since the latter is a variance-maximizing exercise: it projects your original data onto the directions that maximize the variance. If your features have different scales, then this projection can get completely screwed up!
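A tiny pure-Python illustration (the numbers are made up): with one feature in metres and one in grams, the leading principal direction of the raw data simply tracks the grams axis, because that feature has the largest variance; after standardization it reflects the actual correlation structure.

```python
import math

# Two toy features on wildly different scales, perfectly anti-correlated.
xs = [1.0, 2.0, 3.0, 4.0]               # metres
ys = [4000.0, 3000.0, 2000.0, 1000.0]   # grams

def leading_eigvec(data):
    """Leading eigenvector of the (unnormalized) covariance, by power iteration."""
    n = len(data)
    mx = sum(p[0] for p in data) / n
    my = sum(p[1] for p in data) / n
    c = [[0.0, 0.0], [0.0, 0.0]]
    for x, y in data:
        c[0][0] += (x - mx) ** 2
        c[0][1] += (x - mx) * (y - my)
        c[1][0] += (x - mx) * (y - my)
        c[1][1] += (y - my) ** 2
    v = [1.0, 0.0]
    for _ in range(200):
        w = [c[0][0] * v[0] + c[0][1] * v[1],
             c[1][0] * v[0] + c[1][1] * v[1]]
        norm = math.hypot(*w)
        v = [w[0] / norm, w[1] / norm]
    return v

def standardize(vals):
    n = len(vals)
    m = sum(vals) / n
    sd = math.sqrt(sum((v - m) ** 2 for v in vals) / n)
    return [(v - m) / sd for v in vals]

v_raw = leading_eigvec(list(zip(xs, ys)))
v_std = leading_eigvec(list(zip(standardize(xs), standardize(ys))))
print(v_raw)  # dominated by the grams axis: second component ~ 1 in magnitude
print(v_std)  # ~ (1, -1)/sqrt(2): the anti-correlation, as intended
```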

Variational autoencoder for "Frey faces" using keras
In this post, I’ll demo variational autoencoders [Kingma et al. 2014] on the “Frey faces” dataset, using the keras deep-learning Python library.
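As a teaser, here are the two ingredients the post revolves around, sketched in plain Python for the standard Gaussian VAE (the function names are illustrative, not the Keras API): the reparameterization trick, and the closed-form KL term of the loss.

```python
import math
import random

def sample_latent(mu, log_var, rng=random):
    """Reparameterization trick: z = mu + sigma * eps with eps ~ N(0, 1),
    so that gradients can flow through mu and log_var."""
    return [m + math.exp(0.5 * lv) * rng.gauss(0.0, 1.0)
            for m, lv in zip(mu, log_var)]

def kl_to_standard_normal(mu, log_var):
    """KL( N(mu, sigma^2) || N(0, 1) ), summed over latent dimensions --
    the regularization term of the VAE loss."""
    return -0.5 * sum(1.0 + lv - m * m - math.exp(lv)
                      for m, lv in zip(mu, log_var))

z = sample_latent([0.0, 0.0], [0.0, 0.0])           # one draw from the prior
print(abs(kl_to_standard_normal([0.0, 0.0], [0.0, 0.0])))  # 0.0: already N(0, 1)
```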

Computing Nash equilibria in incomplete-information games
In our OPT2016 NIPS workshop paper, we propose a simple projection-free primal-dual algorithm for computing approximate Nash equilibria in two-person zero-sum sequential games with incomplete information and perfect recall (like Texas Hold’em Poker). Our algorithm is numerically stable, performs only basic iterations (i.e. matrix-vector multiplications, clipping, etc., with no calls to external first-order oracles and no matrix inversions), and is applicable to a broad class of two-person zero-sum games, including simultaneous games and sequential games with incomplete information and perfect recall. The applicability to the latter kind of games is thanks to the sequence-form representation, which allows one to encode such a game as a matrix game with convex polytopal strategy profiles. We prove that the number of iterations needed to produce a Nash equilibrium of a given precision is inversely proportional to that precision. We present experimental results on matrix games on simplexes and Kuhn Poker.
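For intuition on what iterative equilibrium computation in a matrix game looks like, here is a generic multiplicative-weights sketch on matching pennies (this is a standard no-regret dynamic, not the primal-dual algorithm of the paper): both players repeatedly reweight their strategies, and the averaged iterates approach the Nash equilibrium (0.5, 0.5).

```python
import math

A = [[1.0, -1.0], [-1.0, 1.0]]   # matching pennies: row player's payoffs

def mwu_nash(A, iters=10000, eta=0.01):
    """Both players run multiplicative weights; returns averaged strategies."""
    n, m = len(A), len(A[0])
    x, y = [0.9, 0.1], [0.5] * m          # deliberately bad start for the row player
    avg_x, avg_y = [0.0] * n, [0.0] * m
    for _ in range(iters):
        for i in range(n):
            avg_x[i] += x[i] / iters
        for j in range(m):
            avg_y[j] += y[j] / iters
        Ay = [sum(A[i][j] * y[j] for j in range(m)) for i in range(n)]
        xA = [sum(x[i] * A[i][j] for i in range(n)) for j in range(m)]
        x = [x[i] * math.exp(eta * Ay[i]) for i in range(n)]    # row maximizes
        y = [y[j] * math.exp(-eta * xA[j]) for j in range(m)]   # column minimizes
        sx, sy = sum(x), sum(y)
        x = [v / sx for v in x]
        y = [v / sy for v in y]
    return avg_x, avg_y

avg_x, avg_y = mwu_nash(A)
print(avg_x, avg_y)   # both close to [0.5, 0.5]
```

The number of iterations such no-regret schemes need scales with the inverse *square* of the precision; the point of the paper is a primal-dual method whose iteration count scales only with the inverse of the precision.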

Learning brain regions from large-scale online structured sparse DL
In our NIPS 2016 paper, we propose a multivariate online dictionary-learning method for obtaining decompositions of brain images with structured and sparse components (aka atoms). Sparsity is to be understood in the usual sense: the dictionary atoms are constrained to contain mostly zeros. This is imposed via an L1-norm constraint. By “structured”, we mean that the atoms are piecewise smooth and compact, thus making up blobs, as opposed to scattered patterns of activation. We propose to use a Sobolev (Laplacian) penalty to impose this type of structure. Combining the two penalties, we obtain decompositions that properly delineate brain structures from functional images. This nontrivially extends the online dictionary-learning work of Mairal et al. (2010), at the price of only a factor of 2 or 3 on the overall running time. Just like the Mairal et al. (2010) reference method, the online nature of our proposed algorithm allows it to scale to arbitrarily sized datasets. Experiments on brain data show that our proposed method extracts structured and denoised dictionaries that are more interpretable and better capture inter-subject variability in small-, medium-, and large-scale regimes alike, compared to state-of-the-art models.
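To see the mechanism behind the L1-norm constraint, here is the classical soft-thresholding operator, i.e. the proximal operator of the L1 norm, which is what drives coefficients to exactly zero in sparse coding (an illustrative textbook ingredient, not our full structured solver with the Sobolev penalty):

```python
def soft_threshold(x, t):
    """Proximal operator of t * |.|: shrinks x toward zero and zeroes out
    anything with magnitude below t -- the source of exact sparsity."""
    if x > t:
        return x - t
    if x < -t:
        return x + t
    return 0.0

# Small magnitudes are killed outright; large ones are merely shrunk.
print([soft_threshold(v, 1.0) for v in [3.0, -0.5, 1.5]])  # [2.0, 0.0, 0.5]
```

Applied element-wise inside an iterative solver (e.g. ISTA-style updates), this operator is what makes the learned atoms "mostly zeros"; the Laplacian penalty then smooths whatever survives into compact blobs.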