How cognitive science helps evaluate the reasoning capabilities of LLMs
Plus: AI that helps humans in democratic deliberation — video lecture
Greetings, fellow humans! With the DeepSeek-R1 explosion, it's a good time to discuss the reasoning capabilities of language models.
Join our next BuzzRobot talk, where Andrew Lampinen from Google DeepMind, whose research sits at the intersection of cognitive science and AI, will walk us through the question: do LLMs truly reason, or do they just repeat familiar patterns?
In this talk, our guest will explore how approaches from cognitive science can be applied to evaluate the reasoning capabilities of language models. Specifically, he will focus on comparative methods (comparing capabilities across different systems) and rational analysis (analyzing behaviors as rational adaptations to an environment).
Learn the details of the talk and sign up to attend; it's virtual.
Last year, we started hosting BuzzRobot talks in person, and it was a blast. This year, we plan to hold them more frequently (currently only in the SF Bay Area). Even though our content is research-heavy, I've been thinking about making our in-person talks more practical, for example by inviting engineers who have deployed AI agents in production to share their experience.
So, if you're working on a cool technical project whose practical details you'd like to share, or if you can recommend an interesting one, I'd appreciate a note (just reply to this email).
Speaking of in-person talks: watch the lecture from our recent offline event about an AI mediator that helped humans find common ground on controversial political and socio-economic topics.