#AcademicRunPlaylist - 7/8/24

A selfie of me in front of looming trees on a sunny day. I'm a bald, middle-aged, white man with a red beard flecked with white. I'm wearing glasses with a steel rim along the top and a blue shirt with the words "Enjoy Gradient Descent" on it in white.

I had a busy day shuttling around the kids, but at least with that driving I was able to get in some talks for my #AcademicRunPlaylist!

First was a fantastic talk by Josh Tenenbaum on understanding natural language by translating it into a probabilistic language of thought at UQAM | Université du Québec à Montréal. This is both a rigorous examination of LLMs and their limitations and an exploration of different approaches to modeling the world. Highly recommend https://www.youtube.com/watch?v=mvDxzmMpvl8

Next was an interesting talk by Xavier Gabaix on macroeconomic networks at the Faculty of Economics, University of Cambridge https://www.youtube.com/watch?v=TDNX0M2VkSk

Next was an excellent conversation with Chinmayi Sharma (👋) on how regulators should think about different degrees of openness in AI models and technology more broadly on the Lawfare Institute podcast https://www.youtube.com/watch?v=GBDi58s3EGE

Next was an incredible talk by Sanmi Koyejo on predictability and surprise in language model benchmarks at ACM FAccT. Koyejo brings the 🔥, starting with one of the lines of the year: "The new NLP is language models trained by mad libs" 😂, then thoroughly dismantles the notion of "emergence" in these models through rigorous empirical and theoretical work. Highly recommend https://www.youtube.com/watch?v=27J904Y2JGk

Next was a great talk by Hancheng Cao on measuring multitasking using communication metadata and predicting team satisfaction with chat logs at Stanford University https://www.youtube.com/watch?v=Wy45BdDVrMs

Next was an enlightening talk by Richard Futrell on situating language models in the information-theoretic science of language at UQAM. This is probably the best examination of LLMs from a linguistics perspective that I've heard, illustrating why certain features of human language are uniquely suited to modeling through next token prediction and why LLMs still struggle to capture certain aspects of language. Highly recommend https://www.youtube.com/watch?v=NJ2d2DIjmRg

Next was a short talk by Lukas B. Freund on the implications of superstar teams for the wage distribution at the University of Cambridge's Faculty of Economics https://www.youtube.com/watch?v=LE1RdMrPdlg

Last was an engaging panel on LLMs and language understanding at UQAM with Eva Portelance, Judit Gervain, Virginia Valian, Roni Katzir, Charles Yang, and Friedemann Pulvermüller https://www.youtube.com/watch?v=oCBNmY9Bahw