#AcademicRunPlaylist - 6/21/24

A New Hampshire historical marker sign that reads: Atlantic Cable Station and Sunken Forest. The receiving station for the first Atlantic cable, laid in 1874, is located on Old Beach Road opposite this location. The remains of the Sunken Forest (remnants of the Ice Age) may be seen at low tide. Intermingled with these gnarled stumps is the original Atlantic cable.

I saw this sign on my run a few days ago and it was too cool not to post. In other news, I also listened to a bunch of talks today for my #AcademicRunPlaylist!

First was an interesting talk by Sergey Levine on the role of prior data in rapid learning of motor skills at the Simons Institute for the Theory of Computing https://www.youtube.com/watch?v=ZEtI1OlF1Hc

Next was a fascinating talk by Andrew Gordon on Japan's industrial heritage sites - focusing on the Ashio Copper Mine and Refinery and the Fukushima Dai-ichi Nuclear Power Plant - at Tokyo College (with comments from Junko Okahashi Onodera) https://www.youtube.com/watch?v=8lvc1VV3XXQ

Next was a nice talk by Jitendra Malik on child development-inspired robot learning at the Simons Institute https://www.youtube.com/watch?v=MM3JeLTs_Mc

Next was a thought-provoking talk by Adaeze Okoye on the utility-responsibility dichotomy in corporate law at the University of London School of Advanced Study https://www.youtube.com/watch?v=o30eU2EznS8

Next was an incredible talk by Phillip Isola on the convergence of different large models to similar representations at the Simons Institute. Isola starts with a bold hypothesis: that all of the large models we're developing, from LLMs to diffusion models, are converging to extremely similar internal representations. Isola argues that this convergence is headed toward the true real-world distribution of features (I would argue that it's the internet-filtered real-world distribution, but many of the points still stand), not just the distribution of the training data. This work has profound implications, both near term (it implies that training LLMs on image data makes sense) and long term (things that don't have analogues in the internet-filtered real world can probably never be captured by current approaches). Highly recommend https://www.youtube.com/watch?v=1_xH2mUFpZw

Next was a great talk by Laura Dabbish on transparency and open collaboration environments at the SONIC Research Group https://www.youtube.com/watch?v=S90dbS1qrNA

Last was an intriguing talk by Andrew Zisserman on training models to synchronize audio and video and identify the speaker in a video at the Simons Institute https://www.youtube.com/watch?v=HKa17CupqhE