Foundation models are becoming ubiquitous, making it essential to understand their capabilities and limitations when applied to behavioural data. This talk synthesises two recent works into a unified view of foundation models as behavioural forecasters and state estimators, and highlights open questions relevant to both science and deployment: evaluation under distribution shift, controlled specification of context, and interpretability of model-driven behavioural inferences.
I focus on behavioural inference from sequential traces in two domains. First, human mobility: I show how large language models (LLMs) can predict an individual’s next visited location and, even without task-specific training, outperform strong deep-learning baselines in data-scarce settings.
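The talk does not prescribe a particular implementation, but a minimal sketch of this kind of zero-shot prompting setup might look like the following. All names here (build_next_location_prompt, predict_next_location, llm_complete) are illustrative assumptions, not the authors' actual pipeline; llm_complete stands in for any LLM completion API.

```python
from collections import Counter

def build_next_location_prompt(history, candidate_places):
    """Encode one individual's visit history as text for zero-shot
    next-location prediction.

    `history` is a list of (weekday, hour, place_id) tuples;
    `candidate_places` bounds the answer space so the reply is parseable.
    """
    lines = [f"- {day} {hour:02d}:00 -> place {pid}" for day, hour, pid in history]
    return (
        "The following is one person's recent sequence of visited places:\n"
        + "\n".join(lines)
        + "\nCandidate places: " + ", ".join(str(p) for p in candidate_places)
        + "\nWhich place will this person most likely visit next? "
        "Answer with a single place id."
    )

def predict_next_location(history, llm_complete, top_k=10):
    """Query an LLM without any task-specific training."""
    # Restricting candidates to the person's most frequent places is a
    # common simplification; the real candidate construction is a design choice.
    counts = Counter(pid for _, _, pid in history)
    candidates = [pid for pid, _ in counts.most_common(top_k)]
    answer = llm_complete(build_next_location_prompt(history, candidates))
    # Naive parse: first candidate id mentioned in the reply wins,
    # falling back to the most frequent place.
    return next((pid for pid in candidates if str(pid) in answer), candidates[0])
```

The point of the sketch is the framing: the trace is serialised into natural language and the LLM acts as the forecaster directly, which is what makes the approach usable in data-scarce settings where deep-learning baselines would need per-user training data.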
Second, online consumption: I show that LLM-backed agentic systems can improve prediction of purchasing behaviour when equipped with task-specific retrieval and structured memory, including mechanisms for seasonality analysis and product-relation graphs. Beyond accuracy gains, this setup helps identify which information sources are needed to reconstruct latent needs and constraints that are only partially observable in transaction logs. I close by discussing risks related to gender bias and stereotyping, and how they can affect both performance and the ethical profile of behavioural prediction systems.
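As a rough illustration of what "structured memory" with seasonality statistics and a product-relation graph could mean in code, consider the sketch below. The class name PurchaseMemory, its methods, and the co-purchase-count representation of product relations are assumptions for exposition, not the system described in the talk.

```python
import datetime as dt
from collections import Counter, defaultdict

class PurchaseMemory:
    """Structured memory over a transaction log: per-month purchase counts
    (seasonality) and a co-purchase graph (product relations)."""

    def __init__(self, transactions):
        # transactions: list of (date: dt.date, basket: list[str])
        self.monthly = defaultdict(Counter)      # product -> month -> count
        self.copurchase = defaultdict(Counter)   # product -> related products
        for date, basket in transactions:
            for p in basket:
                self.monthly[p][date.month] += 1
                for q in basket:
                    if q != p:
                        self.copurchase[p][q] += 1

    def seasonal_score(self, product, month):
        """Fraction of a product's purchases that fall in the given month."""
        counts = self.monthly[product]
        total = sum(counts.values())
        return counts[month] / total if total else 0.0

    def related(self, product, k=3):
        """Most frequent co-purchase neighbours in the product graph."""
        return [q for q, _ in self.copurchase[product].most_common(k)]

    def retrieve_context(self, recent_products, month):
        """Assemble a textual context block for an LLM agent's prompt."""
        lines = []
        for p in recent_products:
            lines.append(
                f"{p}: seasonal weight for month {month} = "
                f"{self.seasonal_score(p, month):.2f}; "
                f"often bought with {', '.join(self.related(p)) or 'nothing yet'}"
            )
        return "\n".join(lines)

# Hypothetical usage: retrieved context is injected into the agent's prompt.
memory = PurchaseMemory([
    (dt.date(2024, 12, 3), ["gloves", "scarf"]),
    (dt.date(2024, 12, 20), ["gloves", "hand cream"]),
    (dt.date(2025, 6, 1), ["sunscreen"]),
])
print(memory.retrieve_context(["gloves"], month=12))
```

The design choice this illustrates is the separation of concerns: the memory stores explicit, inspectable statistics, so one can ask which of these information sources (seasonality, product relations, recency) the agent actually needed, rather than leaving the inference opaque inside the model.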
Contact details:
Juan Fernández Gracia