Compressed representations in deep learning: From algorithmic information to autoencoders and hypernetworks

Broadcast soon

TFM supervisors: Lluis Arola-Fernández, Lucas Lacasa

TFM Jury: Miguel C. Soriano, Massimiliano Zanin, Lluis Arola-Fernández



Abstract: The connection between intelligence and compression is explored, emphasizing how compression principles give rise to the principles of deep learning. First, an overview of ideal compression is provided, grounded in the theory of computation, with particular focus on Solomonoff's theory of induction. From there the minimum description length (MDL) principle can be derived. This leads to two-part codes which, by virtue of the concentration of measure, yield the minimization of a bound on the Shannon entropy and thus lossless compression. Two further contributions are presented in this work. First, compressed representations in the context of lossy compression are investigated in latent-variable generative models using variational inference. Particular focus is placed on the analysis of the quantized representations of vector-quantized variational autoencoders (VQ-VAEs), where a connection to Hopfield networks is shown through a continuous relaxation of the quantization procedure. This connection can be used to improve the training dynamics compared to previous methods, by learning richer representations. [cut to 1500 chars]
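For orientation, a minimal sketch of the two-part code objective the abstract alludes to, written in standard MDL notation (the symbols L, H, the hypothesis class, and the source p* are generic, not taken from the thesis itself):

```latex
% Two-part (MDL) description length: describe a hypothesis H, then
% the data D under the optimal (Shannon) code implied by H.
L_{\text{two-part}}(D) = \min_{H \in \mathcal{H}}
  \bigl[\, L(H) - \log_2 p(D \mid H) \,\bigr]

% In expectation over the true source p^*, the data term is the
% cross-entropy, which upper-bounds the Shannon entropy; minimizing
% the description length therefore tightens this bound:
\mathbb{E}_{p^*}\!\left[ -\log_2 p(D \mid H) \right]
  = H(p^*) + D_{\mathrm{KL}}\!\left( p^* \,\Vert\, p(\cdot \mid H) \right)
  \;\ge\; H(p^*)
```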
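Likewise, a minimal sketch of a continuous relaxation of VQ-VAE quantization of the kind the abstract mentions, assuming the standard softmax-over-distances form, which coincides with the retrieval update of modern (continuous) Hopfield networks with the codebook as stored patterns. Names such as soft_quantize and beta are illustrative, not the thesis's implementation:

```python
import numpy as np

def soft_quantize(z_e, codebook, beta=10.0):
    """Continuous relaxation of VQ-VAE quantization (a sketch).

    Hard VQ picks the nearest codebook vector (an argmin); here the
    argmin is replaced by a softmax over negative squared distances,
    which is the update rule of a modern Hopfield network with the
    codebook as stored patterns. beta -> infinity recovers hard VQ.
    """
    # Squared Euclidean distances from z_e to every codeword.
    d2 = np.sum((codebook - z_e) ** 2, axis=1)   # shape (K,)
    # Softmax over negative distances = Hopfield attention weights.
    logits = -beta * d2
    w = np.exp(logits - logits.max())
    w /= w.sum()
    # Convex combination of codewords (Hopfield retrieval step).
    return w @ codebook                           # shape (d,)

rng = np.random.default_rng(0)
codebook = rng.normal(size=(8, 4))   # K=8 codewords of dimension d=4
z_e = rng.normal(size=4)             # encoder output
print(soft_quantize(z_e, codebook, beta=1.0))    # soft, differentiable
print(soft_quantize(z_e, codebook, beta=100.0))  # approximately hard VQ
```

As beta grows, the weights concentrate on the nearest codeword and hard vector quantization is recovered, while finite beta keeps the map differentiable, which is what makes the relaxed training dynamics tractable.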



Contact details:

Lucas Lacasa


