
Machine Learning Street Talk (MLST)

English, Technology, 1 season, 139 episodes, 2 days, 7 hours, 31 minutes
About
Welcome! The team at MLST is inspired by academic research and each week we engage in dynamic discussion with pre-eminent figures in the AI field. Our flagship show covers current affairs in AI with in-depth analysis. Our approach is unrivalled in terms of scope and rigour – we believe in intellectual diversity in AI, and we touch on all of the main ideas in the field without succumbing to hype. MLST is run by Tim Scarfe, Ph.D (https://www.linkedin.com/in/ecsquizor/)

Showdown Between e/acc Leader And Doomer - Connor Leahy + Beff Jezos

The world's second-most famous AI doomer, Connor Leahy, sits down with Beff Jezos, founder of the e/acc movement, to debate technology, AI policy, and human values. As the two discuss AI safety, the advancement of civilization, and the future of institutions, they clash over opposing perspectives on how we steer humanity towards a more optimal path.

Watch behind the scenes, get early access and join the private Discord by supporting us on Patreon. We have some amazing content going up there with Max Bennett and Kenneth Stanley this week!
https://patreon.com/mlst
(public discord) https://discord.gg/aNPkGUQtc5
https://twitter.com/MLStreetTalk

Leahy, known for his critical perspectives on AI and technology, challenges Jezos on a variety of assertions related to the accelerationist movement, market dynamics, and the need for regulation in the face of rapid technological advancements. Jezos, on the other hand, provides insights into the e/acc movement's core philosophies, emphasizing growth, adaptability, and the dangers of over-legislation and centralized control in current institutions.

Throughout the discussion, both speakers explore the concept of entropy, the role of competition in fostering innovation, and the balance needed between order and chaos to ensure the prosperity and survival of civilization. They weigh up the risks and rewards of AI, the importance of maintaining a power equilibrium in society, and the significance of cultural and institutional dynamism.

Beff Jezos (Guillaume Verdon): https://twitter.com/BasedBeffJezos https://twitter.com/GillVerd
Connor Leahy: https://twitter.com/npcollapse
YT: https://www.youtube.com/watch?v=0zxi0xSBOaQ

TOC:
00:00:00 - Intro
00:03:05 - Society library reference
00:03:35 - Debate starts
00:05:08 - Should any tech be banned?
00:20:39 - Leaded Gasoline
00:28:57 - False vacuum collapse method?
00:34:56 - What if there are dangerous aliens?
00:36:56 - Risk tolerances
00:39:26 - Optimizing for growth vs value
00:52:38 - Is vs ought
01:02:29 - AI discussion
01:07:38 - War / global competition
01:11:02 - Open source F16 designs
01:20:37 - Offense vs defense
01:28:49 - Morality / value
01:43:34 - What would Connor do
01:50:36 - Institutions/regulation
02:26:41 - Competition vs. Regulation Dilemma
02:32:50 - Existential Risks and Future Planning
02:41:46 - Conclusion and Reflection

Note from Tim: I baked the chapter metadata into the mp3 file this time - does that help the chapters show up in your app? Let me know. Also, I accidentally exported a few minutes of dead audio at the end of the file; sorry about that, just skip on when the episode finishes.
2/3/2024 · 3 hours, 18 seconds

Mahault Albarracin - Cognitive Science

Watch behind the scenes, get early access and join the private Discord by supporting us on Patreon:
https://patreon.com/mlst
(public discord) https://discord.gg/aNPkGUQtc5
https://twitter.com/MLStreetTalk
YT version: https://youtu.be/n8G50ynU0Vg

In this interview on MLST, Dr. Tim Scarfe interviews Mahault Albarracin, director of product for R&D at VERSES and a PhD student in cognitive computing at the University of Quebec in Montreal. They discuss a range of topics related to consciousness, cognition, and machine learning, touching on philosophical and computational concepts such as panpsychism, computationalism, and materiality. They consider the "hard problem" of consciousness: the question of how and why we have subjective experiences.

Albarracin shares her views on the controversial Integrated Information Theory and the open letter of opposition it received from the scientific community. She reflects on the nature of scientific critique and rivalry, advising caution in declaring entire fields of study pseudoscientific. A substantial part of the discussion is dedicated to science itself, where Albarracin talks about the threshold between legitimate science and pseudoscience, the role of evidence, and the importance of validating scientific methods and claims.

They discuss language models, including whether they can be considered to have a "theory of mind" and the implications of assigning such properties to AI systems. Albarracin challenges the idea that there is a pure form of intelligence independent of material constraints and emphasizes the role of sociality in the development of our cognitive abilities.

Albarracin offers her thoughts on scientific endeavours, the predictability of systems, the nature of intelligence, and the processes of learning and adaptation. She gives insights into using degeneracy to increase resilience within systems (see the toy sketch below) and the role of maintaining a degree of redundancy or extra capacity as a buffer against unforeseen events. The conversation concludes with the potential benefits of collective intelligence, likening the adaptability and resilience of interconnected agent systems to those found in natural ecosystems.

https://www.linkedin.com/in/mahault-albarracin-1742bb153/

TOC:
00:00:00 - Intro / IIT scandal
00:05:54 - Gaydar paper / What makes good science
00:10:51 - Language
00:18:16 - Intelligence
00:29:06 - X-risk
00:40:49 - Self modelling
00:43:56 - Anthropomorphisation
00:46:41 - Mediation and subjectivity
00:51:03 - Understanding
00:56:33 - Resiliency

Technical topics:
1. Integrated Information Theory (IIT) - Giulio Tononi
2. The "hard problem" of consciousness - David Chalmers
3. Panpsychism and Computationalism in philosophy of mind
4. Active Inference Framework - Karl Friston
5. Theory of Mind and its computation in AI systems
6. Noam Chomsky's views on language models and linguistics
7. Daniel Dennett's Intentional Stance theory
8. Collective intelligence and system resilience
9. Redundancy and degeneracy in complex systems
10. Michael Levin's research on bioelectricity and pattern formation
11. The role of phenomenology in cognitive science
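The degeneracy/redundancy point lends itself to a toy illustration. The sketch below is my own, not from the episode: it treats a function as surviving if at least one of several degenerate components that can each perform it remains intact, and shows how quickly resilience grows with the number of redundant routes.

```python
import random

def survives(n_components: int, p_fail: float, trials: int = 10_000) -> float:
    """Fraction of trials in which at least one of n degenerate
    components (any one can perform the function) still works."""
    alive = 0
    for _ in range(trials):
        if any(random.random() > p_fail for _ in range(n_components)):
            alive += 1
    return alive / trials

# A single-path system vs. one with three degenerate routes,
# each component failing independently 30% of the time.
print(survives(1, 0.3))  # ~0.70
print(survives(3, 0.3))  # ~0.97, i.e. 1 - 0.3**3
```

With three degenerate routes and a 30% independent failure rate per component, survival rises from about 70% to about 97%, which is the buffering effect described above.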
1/14/2024 · 1 hour, 7 minutes, 7 seconds

$450M AI Startup In 3 Years | Chai AI

Chai AI is the leading platform for conversational chat artificial intelligence. Note: this is a sponsored episode of MLST.

William Beauchamp is the founder of two $100M+ companies - Chai Research, an AI startup, and Seamless Capital, a hedge fund based in Cambridge, UK. Chaiverse is the Chai AI developer platform, where developers can train, submit and evaluate models on millions of real users to win their share of $1,000,000.

https://www.chai-research.com
https://www.chaiverse.com
https://twitter.com/chai_research
https://facebook.com/chairesearch/
https://www.instagram.com/chairesearch/

Download the app on iOS and Android: https://onelink.to/kqzhy9

#chai #chai_ai #chai_research #chaiverse #generative_ai #LLMs
1/9/2024 · 29 minutes, 47 seconds

DOES AI HAVE AGENCY? With Prof. Karl Friston and Riddhi J. Pitliya

Watch behind the scenes, get early access and join the private Discord by supporting us on Patreon:
https://patreon.com/mlst
(public discord) https://discord.gg/aNPkGUQtc5
https://twitter.com/MLStreetTalk

Agency in the context of cognitive science, particularly when considering the free energy principle, extends beyond just human decision-making and autonomy. It encompasses a broader understanding of how all living systems, including non-human entities, interact with their environment to maintain their existence by minimising sensory surprise. According to the free energy principle, living organisms strive to minimise the difference between their predicted states and the actual sensory inputs they receive (see the formal statement below). This principle suggests that agency arises as a natural consequence of this process, particularly when organisms appear to plan many steps ahead into the future.

Riddhi J. Pitliya is doing her Ph.D in the computational psychopathology lab at the University of Oxford and works with Professor Karl Friston at VERSES. https://twitter.com/RiddhiJP

References:
THE FREE ENERGY PRINCIPLE - A PRECIS [Ramstead] https://www.dialecticalsystems.eu/contributions/the-free-energy-principle-a-precis/
Active Inference: The Free Energy Principle in Mind, Brain, and Behavior [Thomas Parr, Giovanni Pezzulo, Karl J. Friston] https://direct.mit.edu/books/oa-monograph/5299/Active-InferenceThe-Free-Energy-Principle-in-Mind
The beauty of collective intelligence, explained by a developmental biologist | Michael Levin https://www.youtube.com/watch?v=U93x9AWeuOA
Growing Neural Cellular Automata https://distill.pub/2020/growing-ca
Carcinisation https://en.wikipedia.org/wiki/Carcinisation
Prof. KENNETH STANLEY - Why Greatness Cannot Be Planned https://www.youtube.com/watch?v=lhYGXYeMq_E
On Defining Artificial Intelligence [Pei Wang] https://sciendo.com/article/10.2478/jagi-2019-0002
Why? The Purpose of the Universe [Goff] https://amzn.to/4aEqpfm
Umwelt https://en.wikipedia.org/wiki/Umwelt
An Immense World: How Animal Senses Reveal the Hidden Realms [Yong] https://amzn.to/3tzzTb7
What Is It Like to Be a Bat? [Nagel] https://www.sas.upenn.edu/~cavitch/pdf-library/Nagel_Bat.pdf
COUNTERFEIT PEOPLE. DANIEL DENNETT. (SPECIAL EDITION) https://www.youtube.com/watch?v=axJtywd9Tbo
We live in the infosphere [FLORIDI] https://www.youtube.com/watch?v=YLNGvvgq3eg
Mark Zuckerberg: First Interview in the Metaverse | Lex Fridman Podcast #398 https://www.youtube.com/watch?v=MVYrJJNdrEg
Black Mirror: Rachel, Jack and Ashley Too | Official Trailer | Netflix https://www.youtube.com/watch?v=-qIlCo9yqpY
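For reference, the formal statement behind "minimising sensory surprise" (standard in the active inference literature, not quoted from the episode): agents minimise a variational free energy F, an upper bound on the surprise -ln p(o) of sensory observations o, by optimising an approximate posterior q(s) over hidden states s.

```latex
F[q] \;=\; \mathbb{E}_{q(s)}\!\big[\ln q(s) - \ln p(o, s)\big]
      \;=\; \underbrace{D_{\mathrm{KL}}\!\big[q(s)\,\big\|\,p(s \mid o)\big]}_{\ge\, 0} \;-\; \ln p(o)
      \;\ge\; -\ln p(o)
```

Minimising F with respect to q(s) approximates Bayesian perception; when actions can change o, the same quantity drives the agent to seek unsurprising observations, which is where the appearance of agency and planning enters.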
1/7/2024 · 1 hour, 2 minutes, 39 seconds

Understanding Deep Learning - Prof. SIMON PRINCE [STAFF FAVOURITE]

Watch behind the scenes, get early access and join the private Discord by supporting us on Patreon:
https://patreon.com/mlst
https://discord.gg/aNPkGUQtc5
https://twitter.com/MLStreetTalk

In this comprehensive exploration of deep learning with Professor Simon Prince, who has just authored an entire textbook on the subject, we investigate the technical underpinnings of the field's unexpected success and confront the enduring conundrums that still perplex AI researchers.

Key points discussed include the surprising efficiency of deep learning models, where high-dimensional loss functions are optimized in ways that defy traditional statistical expectations. Professor Prince gives an exposition on the choice of activation functions, architecture design considerations, and overparameterization. We scrutinize the generalization capabilities of neural networks, addressing the seeming paradox of well-performing overparameterized models. Professor Prince challenges popular misconceptions, shedding light on the manifold hypothesis and the role of data geometry in informing the training process (a toy illustration of the resulting piecewise-linear structure follows these notes). He speaks about how layers within neural networks collaborate, recursively reconfiguring instance representations in ways that contribute both to the stability of learning and to the emergence of hierarchical feature representations. In addition to the primary discussion of technical elements and learning dynamics, the conversation briefly turns to the ethical implications of AI advancements.

Follow Prof. Prince:
https://twitter.com/SimonPrinceAI
https://www.linkedin.com/in/simon-prince-615bb9165/

Get the book now!
https://mitpress.mit.edu/9780262048644/understanding-deep-learning/
https://udlbook.github.io/udlbook/

Panel: Dr. Tim Scarfe - https://www.linkedin.com/in/ecsquizor/ https://twitter.com/ecsquendor

TOC:
[00:00:00] Introduction
[00:11:03] General Book Discussion
[00:15:30] The Neural Metaphor
[00:17:56] Back to Book Discussion
[00:18:33] Emergence and the Mind
[00:29:10] Computation in Transformers
[00:31:12] Studio Interview with Prof. Simon Prince
[00:31:46] Why Deep Neural Networks Work: Spline Theory
[00:40:29] Overparameterization in Deep Learning
[00:43:42] Inductive Priors and the Manifold Hypothesis
[00:49:31] Universal Function Approximation and Deep Networks
[00:59:25] Training vs Inference: Model Bias
[01:03:43] Model Generalization Challenges
[01:11:47] Purple Segment: Unknown Topic
[01:12:45] Visualizations in Deep Learning
[01:18:03] Deep Learning Theories Overview
[01:24:29] Tricks in Neural Networks
[01:30:37] Critiques of ChatGPT
[01:42:45] Ethical Considerations in AI

References are on the YT version: https://youtu.be/sJXn4Cl4oww
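The "spline theory" segment rests on a simple fact worth seeing directly: a ReLU network computes a continuous piecewise-linear function, so its behaviour can be read off from how many linear regions it carves the input into. A minimal illustration (my own sketch, not from the book) counts the distinct on/off activation patterns of a tiny random network along an input interval; each pattern corresponds to one linear piece.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 1)), rng.normal(size=8)   # 8 hidden ReLU units, 1-D input
W2, b2 = rng.normal(size=(1, 8)), rng.normal(size=1)

def forward(x):
    h = np.maximum(W1 @ x + b1, 0.0)        # ReLU activations
    return W2 @ h + b2, (h > 0)             # output and on/off pattern

xs = np.linspace(-3, 3, 2001)
patterns = {tuple(forward(np.array([x]))[1]) for x in xs}
# Each distinct on/off pattern is one linear piece of the function.
print(f"linear regions found on [-3, 3]: {len(patterns)}")
```

With 8 hidden units on a 1-D input there are at most 9 regions (one breakpoint per unit); composing layers multiplies rather than merely adds to such counts, which is one intuition behind why depth helps.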
12/26/2023 · 2 hours, 6 minutes, 38 seconds

Prof. BERT DE VRIES - ON ACTIVE INFERENCE

Watch behind the scenes with Bert on Patreon: https://www.patreon.com/posts/bert-de-vries-93230722
https://discord.gg/aNPkGUQtc5
https://twitter.com/MLStreetTalk

Note: there is some mild background music in chapter 1 (Least Action), 3 (Friston) and 5 (Variational Methods) - please skip ahead if it's annoying. It's a tiny fraction of the overall podcast.

YT version: https://youtu.be/2wnJ6E6rQsU

Bert de Vries is Professor in the Signal Processing Systems group at Eindhoven University of Technology. His research focuses on the development of intelligent autonomous agents that learn from in-situ interactions with their environment, drawing inspiration from diverse fields including computational neuroscience, Bayesian machine learning, active inference and signal processing. Bert believes that the development of signal processing systems will in the future be largely automated by autonomously operating agents that learn purposefully from situated environmental interactions.

Bert received his M.Sc. (1986) and Ph.D. (1991) degrees in Electrical Engineering from Eindhoven University of Technology (TU/e) and the University of Florida, respectively. From 1992 to 1999, he worked as a research scientist at Sarnoff Research Center in Princeton (NJ, USA). Since 1999, he has been employed in the hearing aids industry, in both engineering and managerial positions. De Vries was appointed part-time professor in the Signal Processing Systems group at TU/e in 2012.

Contact:
https://twitter.com/bertdv0
https://www.tue.nl/en/research/researchers/bert-de-vries
https://www.verses.ai/about-us

Panel: Dr. Tim Scarfe / Dr. Keith Duggar

TOC:
[00:00:00] Principle of Least Action
[00:05:10] Patreon Teaser
[00:05:46] On Friston
[00:07:34] Capm Peterson (VERSES)
[00:08:20] Variational Methods
[00:16:13] Dan Mapes (VERSES)
[00:17:12] Engineering with Active Inference
[00:20:23] Jason Fox (VERSES)
[00:20:51] Riddhi Jain Pitliya
[00:21:49] Hearing Aids as Adaptive Agents
[00:33:38] Steven Swanson (VERSES)
[00:35:46] Main Interview Kick Off, Engineering and Active Inference
[00:43:35] Actor / Streaming / Message Passing
[00:56:21] Do Agents Lose Flexibility with Maturity?
[01:00:50] Language Compression
[01:04:37] Marginalisation to Abstraction
[01:12:45] Online Structural Learning
[01:18:40] Efficiency in Active Inference
[01:26:25] SEs become Neuroscientists
[01:35:11] Building an Automated Engineer
[01:38:58] Robustness and Design vs Grow
[01:42:38] RXInfer
[01:51:12] Resistance to Active Inference?
[01:57:39] Diffusion of Responsibility in a System
[02:10:33] Chauvinism in "Understanding"
[02:20:08] On Becoming a Bayesian

Refs:
RXInfer https://biaslab.github.io/rxinfer-website/
Prof. Ariel Caticha https://www.albany.edu/physics/faculty/ariel-caticha
Pattern Recognition and Machine Learning (Bishop) https://www.microsoft.com/en-us/research/uploads/prod/2006/01/Bishop-Pattern-Recognition-and-Machine-Learning-2006.pdf
Data Analysis: A Bayesian Tutorial (Sivia) https://www.amazon.co.uk/Data-Analysis-Bayesian-Devinderjit-Sivia/dp/0198568320
Probability Theory: The Logic of Science (E. T. Jaynes) https://www.amazon.co.uk/Probability-Theory-Principles-Elementary-Applications/dp/0521592712/

#activeinference #artificialintelligence
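As a flavour of the "Actor / Streaming / Message Passing" style of Bayesian engineering discussed in the episode: each incoming observation can be treated as a message that updates a belief locally and incrementally. RxInfer does this on full factor graphs in Julia; the toy below is my own Python sketch of the simplest case, a streaming Gaussian belief updated in closed form by adding precisions.

```python
# Streaming precision-weighted update of a Gaussian belief p(x) ~ N(m, v)
# from noisy observations y = x + noise, noise ~ N(0, r).
def update(m, v, y, r):
    """One conjugate message-passing step: combine prior and likelihood
    by adding precisions (1/v) and precision-weighted means (m/v)."""
    precision = 1.0 / v + 1.0 / r
    mean = (m / v + y / r) / precision
    return mean, 1.0 / precision

m, v = 0.0, 100.0               # vague prior belief
for y in [1.2, 0.9, 1.1, 1.0]:  # observations stream in one at a time
    m, v = update(m, v, y, r=0.5)
    print(f"belief: mean={m:.3f}, var={v:.3f}")
```

The appeal of the approach is that nothing global is recomputed: every arriving message tightens the belief a little, which is what makes it a natural fit for reactive, always-on agents such as hearing aids.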
11/20/2023 · 2 hours, 27 minutes, 39 seconds

MULTI AGENT LEARNING - LANCELOT DA COSTA

Please support us:
https://www.patreon.com/mlst
https://discord.gg/aNPkGUQtc5
https://twitter.com/MLStreetTalk

Lance Da Costa aims to advance our understanding of intelligent systems by modelling cognitive systems and improving artificial systems. He's a PhD candidate with Greg Pavliotis and Karl Friston jointly at Imperial College London and UCL, and a student in the Mathematics of Random Systems CDT run by Imperial College London and the University of Oxford. He completed an MRes in Brain Sciences at UCL with Karl Friston and Biswa Sengupta, an MASt in Pure Mathematics at the University of Cambridge with Oscar Randal-Williams, and a BSc in Mathematics at EPFL and the University of Toronto.

Summary: Lance did pure mathematics originally but became interested in the brain and AI. He started working with Karl Friston on the free energy principle, which claims that all intelligent agents minimise free energy for perception, action, and decision-making. Lance has worked to provide mathematical foundations and proofs for why the free energy principle holds, starting from basic assumptions about agents interacting with their environment; the aim is to justify the principle from first physical principles.

Dr. Scarfe and Da Costa discuss different approaches to AI: the free energy / active inference approach focused on mimicking human intelligence versus approaches focused on maximizing capability, such as deep reinforcement learning. Lance argues active inference provides advantages for explainability and safety compared to black-box AI systems: it provides a simple, sparse description of intelligence based on a generative model and free energy minimisation (a toy sketch of the action-selection side follows below).

They discuss the need for structured learning and acquiring core knowledge to achieve more human-like intelligence. Lance highlights work from Josh Tenenbaum's lab that shows learning trajectories similar to humans in a simple Atari-like environment. Incorporating core knowledge constrains the space of possible generative models the agent can use to represent the world, making learning more sample-efficient. Lance argues active inference agents with core knowledge can match human learning capabilities.

They discuss how to make generative models interpretable, for example through factor graphs; the goal is to be able to understand the representations and message passing in the model that lead to decisions.

In summary, Lance argues active inference provides a principled approach to AI with advantages for explainability, safety, and human-like learning, and that combining it with core knowledge and structural learning aims to achieve more human-like artificial intelligence.

https://www.lancelotdacosta.com/
https://twitter.com/lancelotdacosta

Interviewer: Dr. Tim Scarfe

TOC:
00:00:00 - Start
00:09:27 - Intelligence
00:12:37 - Priors / structure learning
00:17:21 - Core knowledge
00:29:05 - Intelligence is specialised
00:33:21 - The magic of agents
00:39:30 - Intelligibility of structure learning

#artificialintelligence #activeinference
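To make the "simple, sparse description of intelligence" slightly more concrete, here is a minimal sketch of the action-selection side: standard discrete-state active inference bookkeeping, my own code rather than anything from the interview. An action is scored by its expected free energy, the sum of risk (divergence of predicted outcomes from preferred ones) and ambiguity (expected observation uncertainty), and the agent picks the action with the lowest score.

```python
import numpy as np

def efe(q_s_a, A, log_c):
    """Expected free energy of an action: risk + ambiguity.
    q_s_a: predicted state distribution under the action, shape (S,)
    A:     likelihood p(o|s), shape (O, S)
    log_c: log preferences over observations, shape (O,)"""
    q_o = A @ q_s_a                                   # predicted observations
    risk = q_o @ (np.log(q_o + 1e-16) - log_c)        # KL[q(o) || p_pref(o)]
    entropy = -(A * np.log(A + 1e-16)).sum(axis=0)    # H[p(o|s)] per state
    return risk + q_s_a @ entropy                     # risk + expected ambiguity

# Two states, two observations, two candidate actions.
A = np.array([[0.9, 0.2],      # p(o=0 | s)
              [0.1, 0.8]])     # p(o=1 | s)
log_c = np.log(np.array([0.8, 0.2]))          # the agent prefers o=0
actions = {"stay": np.array([0.5, 0.5]),
           "go":   np.array([0.9, 0.1])}
best = min(actions, key=lambda a: efe(actions[a], A, log_c))
print(best)  # "go": its predicted outcomes best match the preferences
```

The explainability claim in the episode maps onto exactly these ingredients: the generative model (A), the preferences (log_c) and the beliefs (q_s_a) are all explicit objects one can inspect, rather than weights in a black box.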
11/5/2023 · 49 minutes, 56 seconds

THE HARD PROBLEM OF OBSERVERS - WOLFRAM & FRISTON [SPECIAL EDITION]

Please support us!
https://www.patreon.com/mlst
https://discord.gg/aNPkGUQtc5
https://twitter.com/MLStreetTalk

YT version (with intro not found here): https://youtu.be/6iaT-0Dvhnc

This is the epic special edition show you have been waiting for, with two of the most brilliant scientists alive today. Atoms, things, agents, ... observers. What even defines an "observer", and what properties must all observers share? How do objects persist in our universe given that their material composition changes over time? What does it mean for a thing to be a thing? And do things supervene on our lower-level physical reality? What does it mean for a thing to have agency? What's the difference between a complex dynamical system with and without agency? Could a rock or an AI catflap have agency? Can the universe be factorised into distinct agents, or is agency diffused? Have you ever pondered these deep questions about reality? Prof. Friston and Dr. Wolfram have spent their entire careers, some 40+ years each, thinking long and hard about these very questions, and have developed significant frameworks of reference on their respective journeys (the Wolfram Physics Project and the free energy principle).

Panel: MIT Ph.D Keith Duggar
Production: Dr. Tim Scarfe

Refs:
TED Talk with Stephen: https://www.ted.com/talks/stephen_wolfram_how_to_think_computationally_about_ai_the_universe_and_everything
https://writings.stephenwolfram.com/2023/10/how-to-think-computationally-about-ai-the-universe-and-everything/

TOC:
00:00:00 - Show kickoff
00:02:38 - Wolfram gets to grips with FEP
00:27:08 - How much control does an agent/observer have
00:34:52 - Observer persistence, what the universe seems like to us
00:40:31 - Black holes
00:45:07 - Inside vs outside
00:52:20 - Moving away from the predictable path
00:55:26 - What can observers do
01:06:50 - Self modelling gives agency
01:11:26 - How do you know a thing has agency?
01:22:48 - Deep link between dynamics, ruliad and AI
01:25:52 - Does agency entail free will? Defining Agency
01:32:57 - Where do I probe for agency?
01:39:13 - Why is the universe the way we see it?
01:42:50 - Alien intelligence
01:43:40 - The hard problem of Observers
01:46:20 - Summary thoughts from Wolfram
01:49:35 - Factorisability of FEP
01:57:05 - Patreon interview teaser
10/29/2023 · 1 hour, 59 minutes, 29 seconds

DR. JEFF BECK - THE BAYESIAN BRAIN

Support us! https://www.patreon.com/mlst
MLST Discord: https://discord.gg/aNPkGUQtc5
YT version: https://www.youtube.com/watch?v=c4praCiy9qU

Dr. Jeff Beck is a computational neuroscientist studying probabilistic reasoning (decision making under uncertainty) in humans and animals, with emphasis on neural representations of uncertainty and cortical implementations of probabilistic inference and learning. His line of research incorporates information-theoretic and hierarchical statistical analysis of neural and behavioural data, as well as reinforcement learning and active inference.

https://www.linkedin.com/in/jeff-beck...
https://scholar.google.com/citations?...

Interviewer: Dr. Tim Scarfe

TOC:
00:00:00 Intro
00:00:51 Bayesian / Knowledge
00:14:57 Active inference
00:18:58 Mediation
00:23:44 Philosophy of mind / science
00:29:25 Optimisation
00:42:54 Emergence
00:56:38 Steering emergent systems
01:04:31 Work plan
01:06:06 Representations/Core knowledge

#activeinference
10/16/2023 · 1 hour, 10 minutes, 6 seconds

Prof. Melanie Mitchell 2.0 - AI Benchmarks are Broken!

Patreon: https://www.patreon.com/mlst
Discord: https://discord.gg/ESrGqhf5CB

Prof. Melanie Mitchell argues that the concept of "understanding" in AI is ill-defined and multidimensional - we can't simply say an AI system does or doesn't understand. She advocates rigorously testing AI systems' capabilities using proper experimental methods from cognitive science. Popular benchmarks for intelligence often rely on the assumption that if a human can perform a task, an AI that performs the same task must have human-like general intelligence; but benchmarks should evolve as capabilities improve.

Large language models show surprising skill on many human tasks but lack common sense and fail at simple things young children can do. Their knowledge comes from statistical relationships in text, not grounded concepts about the world, and we don't know whether their internal representations actually align with human-like concepts. More granular testing focused on generalization is needed. There are open questions around whether large models' abilities constitute a fundamentally different, non-human form of intelligence based on vast statistical correlations across text.

Mitchell argues intelligence is situated, domain-specific and grounded in physical experience and evolution. The brain computes, but in a specialized way honed by evolution for controlling the body; extracting "pure" intelligence may not work.

Other key points:
- Need more focus on proper experimental method in AI research. Developmental psychology offers examples for rigorous testing of cognition.
- Reporting instance-level failures rather than just aggregate accuracy can provide insights (a sketch of such a harness follows the book list below).
- Scaling laws and complex systems science are an interesting area of complexity theory, with applications to understanding cities.
- Concepts like "understanding" and "intelligence" in AI force refinement of fuzzy definitions.
- Human intelligence may be more collective and social than we realize. AI forces us to rethink concepts we apply anthropomorphically.

The overall emphasis is on rigorously building the science of machine cognition through proper experimentation and benchmarking as we assess emerging capabilities.

TOC:
[00:00:00] Introduction and Munk AI Risk Debate Highlights
[00:05:00] Douglas Hofstadter on AI Risk
[00:06:56] The Complexity of Defining Intelligence
[00:11:20] Examining Understanding in AI Models
[00:16:48] Melanie's Insights on AI Understanding Debate
[00:22:23] Unveiling the Concept Arc
[00:27:57] AI Goals: A Human vs Machine Perspective
[00:31:10] Addressing the Extrapolation Challenge in AI
[00:36:05] Brain Computation: The Human-AI Parallel
[00:38:20] The Arc Challenge: Implications and Insights
[00:43:20] The Need for Detailed AI Performance Reporting
[00:44:31] Exploring Scaling in Complexity Theory

Errata: Tim said around 39 mins that a recent Stanford/DM paper modelling ARC "on GPT-4 got around 60%". This is not correct; he misremembered. It was actually davinci3, and around 10%, which is still extremely good for a blank-slate approach with an LLM and no ARC-specific knowledge. Folks on our forum couldn't reproduce the result. See the paper linked below.
Books (MUST READ):
Artificial Intelligence: A Guide for Thinking Humans (Melanie Mitchell) https://www.amazon.co.uk/Artificial-Intelligence-Guide-Thinking-Humans/dp/B07YBHNM1C/?&_encoding=UTF8&tag=mlst00-21&linkCode=ur2&linkId=44ccac78973f47e59d745e94967c0f30&camp=1634&creative=6738
Complexity: A Guided Tour (Melanie Mitchell) https://www.amazon.co.uk/Audible-Complexity-A-Guided-Tour?&_encoding=UTF8&tag=mlst00-21&linkCode=ur2&linkId=3f8bd505d86865c50c02dd7f10b27c05&camp=1634&creative=6738

Show notes (transcript, full references etc): https://atlantic-papyrus-d68.notion.site/Melanie-Mitchell-2-0-15e212560e8e445d8b0131712bad3000?pvs=25

YT version: https://youtu.be/29gkDpR2orc
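Mitchell's recommendation to report instance-level failures rather than a single aggregate number is straightforward to operationalise. A hypothetical sketch of such a harness follows; the field names and model interface are my assumptions, not anything from the episode.

```python
from collections import Counter

def evaluate(model, dataset):
    """Log every failing instance with its category instead of
    reporting only an aggregate accuracy figure."""
    failures = []
    for item in dataset:
        if model(item["input"]) != item["target"]:
            failures.append(item)
    accuracy = 1 - len(failures) / len(dataset)
    by_category = Counter(f["category"] for f in failures)
    return accuracy, by_category   # the aggregate AND where it breaks

# Toy demo with a rule-based "model" and labelled instances.
data = [{"input": 2, "target": 4,  "category": "easy"},
        {"input": 7, "target": 15, "category": "hard"}]
print(evaluate(lambda x: x * 2, data))   # (0.5, Counter({'hard': 1}))
```

Two models with identical aggregate accuracy can fail on completely different slices of a benchmark; only the per-instance breakdown reveals that, which is the point Mitchell presses.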
9/10/2023 · 1 hour, 1 minute, 47 seconds

Autopoietic Enactivism and the Free Energy Principle - Prof. Friston, Prof. Buckley, Dr. Ramstead

We explore connections between the free energy principle (FEP) and enactivism, including tensions raised in a paper critiquing FEP from an enactivist perspective. Dr. Maxwell Ramstead provides background on enactivism emerging from autopoiesis, with a focus on embodied cognition and a rejection of information-processing/computational views of mind.

Chris shares his journey from robotics into FEP, starting as a skeptic but becoming convinced it's the right framework. He notes there are both "high road" and "low road" versions, ranging from embodied to more radically anti-representational stances. He doesn't see a definitive fork between dynamical systems and information theory as the source of conflict; rather, the notion of operational closure in enactivism seems to be the main sticking point.

The group explores definitional issues around structure/organization, boundaries, and operational closure. Maxwell argues the generative model in FEP captures organizational dependencies akin to operational closure, while the Markov blanket formalism models structural interfaces (a formal statement follows below).

We discuss the concept of goals in cognitive systems. Chris advocates an intentional-stance perspective: use notions of goals/intentions if they help explain system dynamics; goals emerge from beliefs about dynamical trajectories. Prof. Friston provides an elegant explanation of how goal-directed behavior naturally falls out of the FEP mathematics in a particular "goldilocks" regime of system scale/dynamics. The conversation explores the idea that many systems simply act "as if" they have goals or models, without necessarily possessing explicit representations, which helps resolve tensions between the enactivist and computational perspectives.

Throughout the dialogue, Maxwell presses philosophical points about the FEP abolishing what he perceives as false dichotomies in cognitive science, such as internalism/externalism, and is critical of enactivists' commitment to bright-line divides between subject areas.

Prof. Karl Friston - Inventor of the free energy principle https://scholar.google.com/citations?user=q_4u0aoAAAAJ
Prof. Chris Buckley - Professor of Neural Computation at Sussex University https://scholar.google.co.uk/citations?user=nWuZ0XcAAAAJ&hl=en
Dr. Maxwell Ramstead - Director of Research at VERSES https://scholar.google.ca/citations?user=ILpGOMkAAAAJ&hl=fr

We address the critique in this paper:
Laying down a forking path: Tensions between enaction and the free energy principle (Ezequiel A. Di Paolo, Evan Thompson, Randall D. Beer) https://philosophymindscience.org/index.php/phimisci/article/download/9187/8975

Other refs:
Multiscale integration: beyond internalism and externalism (Maxwell J. D. Ramstead) https://pubmed.ncbi.nlm.nih.gov/33627890/

MLST panel: Dr. Tim Scarfe and Dr. Keith Duggar

TOC (auto generated):
0:00 - Introduction
0:41 - Defining enactivism and its variants
6:58 - The source of the conflict between dynamical systems and information theory
8:56 - Operational closure in enactivism
10:03 - Goals and intentions
12:35 - The link between dynamical systems and information theory
15:02 - Path integrals and non-equilibrium dynamics
18:38 - Operational closure defined
21:52 - Structure vs. organization in enactivism
24:24 - Markov blankets as interfaces
28:48 - Operational closure in FEP
30:28 - Structure and organization again
31:08 - Dynamics vs. information theory
33:55 - Goals and intentions emerge in the FEP mathematics
36:58 - The Good Regulator Theorem
49:30 - Enactivism and its relation to ecological psychology
52:00 - Goals, intentions and beliefs
55:21 - Boundaries and meaning
58:55 - Enactivism's rejection of information theory
1:02:08 - Beliefs vs goals
1:05:06 - Ecological psychology and FEP
1:08:41 - The Good Regulator Theorem
1:18:38 - How goal-directed behavior emerges
1:23:13 - Ontological vs metaphysical boundaries
1:25:20 - Boundaries as maps
1:31:08 - Connections to the maximum entropy principle
1:33:45 - Relations to quantum and relational physics
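The Markov blanket formalism referred to above has a compact formal statement (standard FEP background, not a quotation from the discussion). Partition states into internal μ, external η, and blanket states b = (s, a), sensory and active; the blanket renders inside and outside conditionally independent:

```latex
p(\mu, \eta \mid b) \;=\; p(\mu \mid b)\, p(\eta \mid b),
\qquad b = (s, a)
```

It is this conditional-independence structure that Ramstead offers as the formal counterpart of the "structural interface" between a system and its environment, playing a role analogous to the enactivists' operational closure.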
9/5/2023 · 1 hour, 34 minutes, 46 seconds

The Lottery Ticket Hypothesis with Jonathan Frankle

In this episode of Machine Learning Street Talk, we chat with Jonathan Frankle, author of The Lottery Ticket Hypothesis. Frankle has continued researching sparse neural networks, pruning, and lottery tickets, leading to some really exciting follow-on papers! This chat discusses some of these papers, such as Linear Mode Connectivity and the Lottery Ticket Hypothesis and Comparing Rewinding and Fine-tuning in Neural Network Pruning (full list of papers linked below, with a sketch of the prune-and-rewind loop after the list). We also chat about how Jonathan got into deep learning research, his information diet, and his work on developing technology policy for artificial intelligence!

This was a really fun chat; I hope you enjoy listening to it and learn something from it! Thanks for watching and please subscribe! Huge thanks to everyone on r/MachineLearning who asked questions!

Paper links discussed in the chat:
The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks: https://arxiv.org/abs/1803.03635
Linear Mode Connectivity and the Lottery Ticket Hypothesis: https://arxiv.org/abs/1912.05671
Dissecting Pruned Neural Networks: https://arxiv.org/abs/1907.00262
Training BatchNorm and Only BatchNorm: On the Expressive Power of Random Features in CNNs: https://arxiv.org/abs/2003.00152
What is the State of Neural Network Pruning? https://arxiv.org/abs/2003.03033
The Early Phase of Neural Network Training: https://arxiv.org/abs/2002.10365
Comparing Rewinding and Fine-tuning in Neural Network Pruning: https://arxiv.org/abs/2003.02389

Also mentioned:
Block-Sparse GPU Kernels: https://openai.com/blog/block-sparse-gpu-kernels/
Balanced Sparsity for Efficient DNN Inference on GPU: https://arxiv.org/pdf/1811.00206.pdf
Playing the Lottery with Rewards and Multiple Languages: Lottery Tickets in RL and NLP: https://arxiv.org/pdf/1906.02768.pdf

r/MachineLearning question list: https://www.reddit.com/r/MachineLearning/comments/g9jqe0/d_lottery_ticket_hypothesis_ask_the_author_a/

#machinelearning #deeplearning
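The experimental loop at the heart of these papers, iterative magnitude pruning with rewinding to the original initialization, is compact enough to sketch. Below is a framework-agnostic outline in NumPy; the train callback and the weight-dictionary layout are hypothetical placeholders for illustration, not Frankle's actual code.

```python
import numpy as np

def find_winning_ticket(weights_init, train, prune_frac=0.2, rounds=3):
    """Iterative magnitude pruning with rewinding (lottery ticket style).
    weights_init: dict of initial weight arrays (the 'ticket' we rewind to)
    train: hypothetical callback mapping (weights, mask) -> trained weights
    """
    mask = {k: np.ones_like(w) for k, w in weights_init.items()}
    for _ in range(rounds):
        trained = train(weights_init, mask)          # train with mask applied
        for k, w in trained.items():
            alive = np.abs(w[mask[k] == 1])
            if alive.size == 0:
                continue
            # Prune the smallest-magnitude prune_frac of surviving weights.
            threshold = np.quantile(alive, prune_frac)
            mask[k] = np.where(np.abs(w) < threshold, 0, mask[k])
        # Rewind: the next round restarts from the ORIGINAL initial
        # weights, keeping only surviving connections - the winning ticket.
    return mask
```

The crucial design choice, and the hypothesis's surprise, is the rewind step: rather than continuing from trained weights, each round restarts from the original initialization under the new sparsity mask, and such subnetworks can train to match the full network's accuracy.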
5/19/2020 · 1 hour, 26 minutes, 43 seconds