Always on my mind-tuning: The intuition mechanism




Whether large predictive models merely imitate their training data or develop genuine reasoning still lacks a physical explanation. I present mind-tuning, a variational learning principle that balances next-token prediction with causal path-entropy maximization, controlled by a temperature-like parameter λ. I test the principle in toy mazes with random walks as training trajectories; this sandbox abstracts a reasoning task without intelligent guidance or reward and reveals a rich phase diagram. At low λ, predictive models parrot their training data, performing constrained random walks; at high λ, they hallucinate and break through walls. In a critical λ range, a goal-directed, ‘intuitive’ strategy spontaneously emerges as a fragile metastable phase, dependent on maze complexity, model capacity, data quality, and the λ-tuning protocol. The mechanism can be explained analytically and predicts the emergence of intuition when learning at a critical balance between memorizing ‘what is’ and wondering ‘what could be’.
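The trade-off described in the abstract can be illustrated with a toy objective of the form L(θ) = L_pred − λ·H_path, where L_pred is the next-token cross-entropy and H_path is the entropy of the model's move distribution. This exact form, and all names in the sketch below, are assumptions for illustration; the abstract does not give the formula.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def mind_tuning_loss(logits, target, lam):
    """Hypothetical 'mind-tuning' objective:
    next-token cross-entropy minus lam * path (move) entropy.
    lam = 0 recovers pure imitation of the data;
    large lam rewards keeping many future paths open."""
    probs = softmax(logits)
    cross_entropy = -math.log(probs[target])                 # memorize 'what is'
    path_entropy = -sum(p * math.log(p) for p in probs)      # wonder 'what could be'
    return cross_entropy - lam * path_entropy

# Four possible maze moves; the data says move 0 was taken.
logits = [2.0, 0.5, 0.1, 0.1]
loss_imitate = mind_tuning_loss(logits, target=0, lam=0.0)  # pure prediction
loss_explore = mind_tuning_loss(logits, target=0, lam=1.0)  # entropy-regularized
```

Sweeping `lam` in this sketch mimics the λ-tuning protocol: since the entropy term is subtracted, larger `lam` favors flatter move distributions over exact reproduction of the training walk.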



In person in the seminar room. Zoom stream:



https://us06web.zoom.us/j/98286706234?pwd=bm1JUFVYcTJkaVl1VU55L0FiWDRIUT09






Note the start time: 12:00.



Contact details:

Tobias Galla

Contact form

