Neuroscience and artificial intelligence work better together. Brain Inspired is a celebration and exploration of the ideas driving our progress to understand intelligence. I interview experts about their work at the interface of neuroscience, artificial intelligence, cognitive science, philosophy, psychology, and more: the symbiosis of these overlapping fields, how they inform each other, where they differ, what the past brought us, and what the future brings. Topics include computational neuroscience, supervised machine learning, unsupervised learning, reinforcement learning, deep learning, convolutional and recurrent neural networks, decision-making science, AI agents, backpropagation, credit assignment, neuroengineering, neuromorphics, emergence, philosophy of mind, consciousness, general AI, spiking neural networks, data science, and a lot more. The podcast is not produced for a general audience. Instead, it aims to educate, challenge, inspire, and hopefully entertain those interested in learning more about neuroscience and AI.
BI 182: John Krakauer Returns… Again
Support the show to get full episodes and join the Discord community.
Check out my free video series about what's missing in AI and Neuroscience
John Krakauer has been on the podcast multiple times (see links below). Today we discuss some topics framed around what he's been working on and thinking about lately. Things like
Whether brains actually reorganize after damage
The role of brain plasticity in general
The path toward and the path not toward understanding higher cognition
How to fix motor problems after strokes
AGI
Functionalism, consciousness, and much more.
Relevant links:
John's Lab.
Twitter: @blamlab
Related papers
What are we talking about? Clarifying the fuzzy concept of representation in neuroscience and beyond.
Against cortical reorganisation.
Other episodes with John:
BI 025 John Krakauer: Understanding Cognition
BI 077 David and John Krakauer: Part 1
BI 078 David and John Krakauer: Part 2
BI 113 David Barack and John Krakauer: Two Views On Cognition
Time stamps
0:00 - Intro
2:07 - It's a podcast episode!
6:47 - Stroke and Sherrington neuroscience
19:26 - Thinking vs. moving, representations
34:15 - What's special about humans?
56:35 - Does cortical reorganization happen?
1:14:08 - Current era in neuroscience
1/19/2024 • 1 hour, 25 minutes, 42 seconds
BI 181 Max Bennett: A Brief History of Intelligence
Support the show to get full episodes and join the Discord community.
Check out my free video series about what's missing in AI and Neuroscience
By day, Max Bennett is an entrepreneur. He has cofounded and CEO'd multiple AI and technology companies. In countless other hours, he has studied the brain-related sciences. Those long hours of research have paid off in the form of this book, A Brief History of Intelligence: Evolution, AI, and the Five Breakthroughs That Made Our Brains.
Three lines of research formed the basis for how Max synthesized knowledge into the ideas in his current book: findings from comparative psychology (comparing brains and minds of different species), evolutionary neuroscience (how brains have evolved), and artificial intelligence, especially the algorithms developed to carry out functions. We go through, I think, all five of the breakthroughs in some capacity. A recurring theme is that each breakthrough may explain multiple new abilities. For example, the evolution of the neocortex may have endowed early mammals with the ability to simulate or imagine what isn't immediately present, and this ability might further explain mammals' capacity to engage in vicarious trial and error (imagining possible actions before trying them out), the capacity to engage in counterfactual learning (what would have happened if things went differently than they did), and the capacity for episodic memory and imagination.
The book is filled with unifying accounts like that, and it makes for a great read. Strap in, because Max gives a sort of masterclass about many of the ideas in his book.
Twitter:
@maxsbennett
Book:
A Brief History of Intelligence: Evolution, AI, and the Five Breakthroughs That Made Our Brains.
0:00 - Intro
5:26 - Why evolution is important
7:22 - MacLean's triune brain
14:59 - Breakthrough 1: Steering
29:06 - Fish intelligence
40:38 - Breakthrough 3: Mentalizing
52:44 - How could we improve the human brain?
1:00:44 - What is intelligence?
1:13:50 - Breakthrough 5: Speaking
12/25/2023 • 1 hour, 27 minutes, 30 seconds
BI 180 Panel Discussion: Long-term Memory Encoding and Connectome Decoding
Support the show to get full episodes and join the Discord community.
Welcome to another special panel discussion episode.
I was recently invited to moderate a discussion among six panelists at the annual Aspirational Neuroscience meetup. Aspirational Neuroscience is a nonprofit community run by Kenneth Hayworth. Ken has been on the podcast before, on episode 103. Ken helps me introduce the meetup and panel discussion for a few minutes. The goal, in general, was to discuss how current and developing neuroscience technologies might be used to decode a nontrivial memory from a static connectome - what the obstacles are, how to surmount those obstacles, and so on.
There isn't video of the event, just audio, and because we were all sharing microphones that were being passed around, you'll hear some microphone-handling noise along the way - but I did my best to optimize the audio quality, and I believe it turned out mostly quite listenable.
Aspirational Neuroscience
Panelists:
Anton Arkhipov, Allen Institute for Brain Science.
@AntonSArkhipov
Konrad Kording, University of Pennsylvania.
@KordingLab
Tomás Ryan, Trinity College Dublin.
@TJRyan_77
Srinivas Turaga, Janelia Research Campus.
Dong Song, University of Southern California.
@dongsong
Zhihao Zheng, Princeton University.
@zhihaozheng
0:00 - Intro
1:45 - Ken Hayworth
14:09 - Panel Discussion
12/11/2023 • 1 hour, 29 minutes, 27 seconds
BI 179 Laura Gradowski: Include the Fringe with Pluralism
Support the show to get full episodes and join the Discord community.
Check out my free video series about what's missing in AI and Neuroscience
Laura Gradowski is a philosopher of science at the University of Pittsburgh. Pluralism is roughly the idea that there is no unified account of any scientific field, and that we should tolerate, and even welcome, a variety of theoretical and conceptual frameworks, methods, and goals when doing science. Pluralism is kind of a buzzword right now in my little neuroscience world, but it's an old and well-trodden notion... many philosophers have been calling for pluralism for many years. But how pluralistic should we be in our studies and explanations in science? Laura suggests we should be very, very pluralistic, and to make her case, she cites examples in the history of science of theories and theorists that were once considered "fringe" but went on to become mainstream accepted theoretical frameworks. I thought it would be fun to have her on to share her ideas about fringe theories, mainstream theories, pluralism, etc.
We discuss a wide range of topics, including some specific to the brain and mind sciences. Laura goes through an example of something and someone going from fringe to mainstream - the Garcia effect, named after John Garcia, whose findings went against the grain of behaviorism, the dominant dogma of the day in psychology. But this overturning only happened after Garcia had to endure a long scientific hell of his results being ignored and shunned. There are multiple examples like that, and we discuss a handful. This has led Laura to the conclusion that we should accept almost all theoretical frameworks. We discuss her ideas about how to implement this, where to draw the line, and much more.
Laura's page at the Center for the Philosophy of Science at the University of Pittsburgh.
Facing the Fringe.
Garcia's reflections on his troubles: Tilting at the Paper Mills of Academe
0:00 - Intro
3:57 - What is fringe?
10:14 - What makes a theory fringe?
14:31 - Fringe to mainstream
17:23 - Garcia effect
28:17 - Fringe to mainstream: other examples
32:38 - Fringe and consciousness
33:19 - Word meanings change over time
40:24 - Pseudoscience
43:25 - How fringe becomes mainstream
47:19 - More fringe characteristics
50:06 - Pluralism as a solution
54:02 - Progress
1:01:39 - Encyclopedia of theories
1:09:20 - When to reject a theory
1:20:07 - How fringe becomes fringe
1:22:50 - Marginalization
1:27:53 - Recipe for fringe theorist
11/27/2023 • 1 hour, 39 minutes, 6 seconds
BI 178 Eric Shea-Brown: Neural Dynamics and Dimensions
Support the show to get full episodes and join the Discord community.
Check out my free video series about what's missing in AI and Neuroscience
Eric Shea-Brown is a theoretical neuroscientist and principal investigator of the working group on neural dynamics at the University of Washington. In this episode, we talk a lot about dynamics and dimensionality in neural networks... how to think about them, why they matter, and how Eric's perspectives have changed through his career. We discuss a handful of his specific research findings about dynamics and dimensionality, like how dimensionality changes when one is performing a task versus when you're just sort of going about your day, what we can say about dynamics just by looking at different structural connection motifs, how different modes of learning can rely on different dimensionalities, and more. We also talk about how he goes about choosing what to work on and how to work on it. You'll hear in our discussion how much credit Eric gives to those surrounding him and those who came before him - he drops tons of references and names, so get ready if you want to follow up on some of the many lines of research he mentions.
Eric's website.
Related papers
Predictive learning as a network mechanism for extracting low-dimensional latent space representations.
A scale-dependent measure of system dimensionality.
From lazy to rich to exclusive task representations in neural networks and neural codes.
Feedback through graph motifs relates structure and function in complex networks.
0:00 - Intro
4:15 - Reflecting on the rise of dynamical systems in neuroscience
11:15 - DST view on macro scale
15:56 - Intuitions
22:07 - Eric's approach
31:13 - Are brains more or less impressive to you now?
38:45 - Why is dimensionality important?
50:03 - High-D in Low-D
54:14 - Dynamical motifs
1:14:56 - Theory for its own sake
1:18:43 - Rich vs. lazy learning
1:22:58 - Latent variables
1:26:58 - What assumptions give you most pause?
11/13/2023 • 1 hour, 35 minutes, 31 seconds
BI 177 Special: Bernstein Workshop Panel
Support the show to get full episodes and join the Discord community.
I was recently invited to moderate a panel at the annual Bernstein conference - this one was in Berlin, Germany. The panel I moderated was part of a satellite workshop at the conference called How can machine learning be used to generate insights and theories in neuroscience? Below are the panelists. I hope you enjoy the discussion!
Program: How can machine learning be used to generate insights and theories in neuroscience?
Panelists:
Katrin Franke
Lab website.
Twitter: @kfrankelab.
Ralf Haefner
Haefner lab.
Twitter: @haefnerlab.
Martin Hebart
Hebart Lab.
Twitter: @martin_hebart.
Johannes Jaeger
Yogi's website.
Twitter: @yoginho.
Fred Wolf
Fred's university webpage.
Organizers:
Alexander Ecker | University of Göttingen, Germany
Fabian Sinz | University of Göttingen, Germany
Mohammad Bashiri, Pavithra Elumalai, Michaela Vystrcilová | University of Göttingen, Germany
10/30/2023 • 1 hour, 13 minutes, 54 seconds
BI 176 David Poeppel Returns
Support the show to get full episodes and join the Discord community.
David runs his lab at NYU, where they study auditory cognition, speech perception, language, and music. On the heels of the episode with David Glanzman, we discuss the ongoing mystery regarding how memory works, how to study and think about brains and minds, and the reemergence (perhaps) of the language of thought hypothesis.
David has been on the podcast a few times... once by himself, and again with Gyorgy Buzsaki.
Poeppel lab
Twitter: @davidpoeppel.
Related papers
We don’t know how the brain stores anything, let alone words.
Memory in humans and deep language models: Linking hypotheses for model augmentation.
The neural ingredients for a language of thought are available.
0:00 - Intro
11:17 - Across levels
14:59 - Nature of memory
24:12 - Using the right tools for the right question
35:46 - LLMs, what they need, how they've shaped David's thoughts
44:55 - Across levels
54:07 - Speed of progress
1:02:21 - Neuroethology and mental illness - patreon
1:24:42 - Language of Thought
10/14/2023 • 1 hour, 23 minutes, 57 seconds
BI 175 Kevin Mitchell: Free Agents
Support the show to get full episodes and join the Discord community.
Check out my free video series about what's missing in AI and Neuroscience
Kevin Mitchell is professor of genetics at Trinity College Dublin. He's been on the podcast before, and we talked a little about his previous book, Innate – How the Wiring of Our Brains Shapes Who We Are. He's back today to discuss his new book Free Agents: How Evolution Gave Us Free Will. The book is written very well and guides the reader through a wide range of scientific knowledge and reasoning that undergirds Kevin's main take home: our free will comes from the fact that we are biological organisms, biological organisms have agency, and as that agency evolved to become more complex and layered, so too did our ability to exert free will. We touch on a handful of topics in the book, like the idea of agency, how it came about at the origin of life, and how the complexity of kinds of agency, the richness of our agency, evolved as organisms became more complex.
We also discuss Kevin's reliance on the indeterminacy of the universe to tell his story, the underlying randomness at fundamental levels of physics. Although indeterminacy isn't necessary for ongoing free will, it is responsible for the capacity for free will to exist in the first place. We discuss the brain's ability to harness its own randomness when needed, creativity, whether and how it's possible to create something new, artificial free will, and lots more.
Kevin's website.
Twitter: @WiringtheBrain
Book: Free Agents: How Evolution Gave Us Free Will
4:27 - From Innate to Free Agents
9:14 - Thinking of the whole organism
15:11 - Who the book is for
19:49 - What bothers Kevin
27:00 - Indeterminacy
30:08 - How it all began
33:08 - How indeterminacy helps
43:58 - Libet's free will experiments
50:36 - Creativity
59:16 - Selves, subjective experience, agency, and free will
1:10:04 - Levels of agency and free will
1:20:38 - How much free will can we have?
1:28:03 - Hierarchy of mind constraints
1:36:39 - Artificial agents and free will
1:42:57 - Next book?
10/3/2023 • 1 hour, 46 minutes, 32 seconds
BI 174 Alicia Juarrero: Context Changes Everything
Check out my free video series about what's missing in AI and Neuroscience
Support the show to get full episodes and join the Discord community.
Alicia Juarrero is a philosopher and has been interested in complexity since before it was cool.
In this episode, we discuss many of the topics and ideas in her new book, Context Changes Everything: How Constraints Create Coherence, which makes the thorough case that constraints should be given way more attention when trying to understand complex systems like brains and minds - how they're organized, how they operate, how they're formed and maintained, and so on. Modern science, thanks in large part to the success of physics, focuses on a single kind of causation - the kind involved when one billiard ball strikes another billiard ball. But that kind of causation neglects what Alicia argues are the most important features of complex systems: the constraints that shape the dynamics and possibility spaces of systems. Much of Alicia's book describes the wide range of types of constraints we should be paying attention to, and how they interact and mutually influence each other.
I highly recommend the book, and you may want to read it before, during, and after our conversation. That's partly because, if you're like me, the concepts she discusses still aren't comfortable to think about the way we're used to thinking about how things interact. Thinking across levels of organization turns out to be hard. You might also want her book handy because, hang on to your hats, we jump around a lot among those concepts. Context Changes Everything comes about 25 years after her previous classic, Dynamics in Action, which we also discuss and which I also recommend if you want more of a primer to her newer, more expansive work. Alicia's work touches on all things complex, from self-organizing systems like whirlpools, to ecologies, businesses, societies, and of course minds and brains.
Book:
Context Changes Everything: How Constraints Create Coherence
0:00 - Intro
3:37 - 25 years thinking about constraints
8:45 - Dynamics in Action and eliminativism
13:08 - Efficient and other kinds of causation
19:04 - Complexity via context independent and dependent constraints
25:53 - Enabling and limiting constraints
30:55 - Across scales
36:32 - Temporal constraints
42:58 - A constraint cookbook?
52:12 - Constraints in a mechanistic worldview
53:42 - How to explain using constraints
56:22 - Concepts and multiple realizability
59:00 - Kevin Mitchell question
1:08:07 - Mac Shine Question
1:19:07 - 4E
1:21:38 - Dimensionality across levels
1:27:26 - AI and constraints
1:33:08 - AI and life
9/13/2023 • 1 hour, 45 minutes
BI 173 Justin Wood: Origins of Visual Intelligence
Support the show to get full episodes and join the Discord community.
In the intro, I mention the Bernstein conference workshop I'll participate in, called How can machine learning be used to generate insights and theories in neuroscience? Follow that link to learn more, and register for the conference here. Hope to see you there in late September in Berlin!
Justin Wood runs the Wood Lab at Indiana University, and his lab's tagline is "building newborn minds in virtual worlds." In this episode, we discuss his work comparing the visual cognition of newborn chicks and AI models. He uses a controlled-rearing technique with natural chicks, whereby the chicks are raised from birth in completely controlled visual environments. That way, Justin can present designed visual stimuli to test what kinds of visual abilities chicks have or can immediately learn. Then he can build models and AI agents that are trained on the same data as the newborn chicks. The goal is to use the models to better understand natural visual intelligence, and to use what we know about natural visual intelligence to help build systems that better emulate biological organisms. We discuss some of the visual abilities of the chicks and what he's found using convolutional neural networks. Beyond vision, we discuss his work studying the development of collective behavior, which compares chicks to a model that uses CNNs, reinforcement learning, and an intrinsic curiosity reward function. All of this informs the age-old nature (nativist) vs. nurture (empiricist) debates, which Justin believes should give way to embracing both nature and nurture.
Wood lab.
Related papers:
Controlled-rearing studies of newborn chicks and deep neural networks.
Development of collective behavior in newborn artificial agents.
A newborn embodied Turing test for view-invariant object recognition.
Justin mentions these papers:
Untangling invariant object recognition (Dicarlo & Cox 2007)
0:00 - Intro
5:39 - Origins of Justin's current research
11:17 - Controlled rearing approach
21:52 - Comparing newborns and AI models
24:11 - Nativism vs. empiricism
28:15 - CNNs and early visual cognition
29:35 - Smoothness and slowness
50:05 - Early biological development
53:27 - Naturalistic vs. highly controlled
56:30 - Collective behavior in animals and machines
1:02:34 - Curiosity and critical periods
1:09:05 - Controlled rearing vs. other developmental studies
1:13:25 - Breaking natural rules
1:16:33 - Deep RL collective behavior
1:23:16 - Bottom-up and top-down
8/30/2023 • 1 hour, 35 minutes, 45 seconds
BI 172 David Glanzman: Memory All The Way Down
Support the show to get full episodes and join the Discord community.
David runs his lab at UCLA, where he's also a distinguished professor. David used to believe what is currently the mainstream view, that our memories are stored in our synapses, those connections between our neurons. So as we learn, the synaptic connections strengthen and weaken until they're just right, and that serves to preserve the memory. That's been the dominant view in neuroscience for decades, and is the fundamental principle that underlies basically all of deep learning in AI. But because of his own and others' experiments, which he describes in this episode, David has come to the conclusion that memory must be stored not at the synapse, but in the nucleus of neurons, likely by some epigenetic mechanism mediated by RNA molecules. If this sounds familiar, I had Randy Gallistel on the podcast in episode 126 to discuss similar ideas, and David discusses where he and Randy differ in their thoughts. This episode starts out pretty technical as David describes the series of experiments that changed his mind, but after that we broaden our discussion to a lot of the surrounding issues regarding whether his story about memory is true. And we discuss meta-issues like how old discarded ideas in science often find their way back, what it's like studying a non-mainstream topic, including challenges trying to get funded for it, and so on.
David's Faculty Page.
Related papers
The central importance of nuclear mechanisms in the storage of memory.
David mentions Arc and virus-like transmission:
The Neuronal Gene Arc Encodes a Repurposed Retrotransposon Gag Protein that Mediates Intercellular RNA Transfer.
Structure of an Arc-ane virus-like capsid.
David mentions many of the ideas from the Pushing the Boundaries: Neuroscience, Cognition, and Life Symposium.
Related episodes:
BI 126 Randy Gallistel: Where Is the Engram?
BI 127 Tomás Ryan: Memory, Instinct, and Forgetting
8/7/2023 • 1 hour, 30 minutes, 58 seconds
BI 171 Mike Frank: Early Language and Cognition
Support the show to get full episodes and join the Discord community.
Check out my free video series about what's missing in AI and Neuroscience
My guest is Michael C. Frank, better known as Mike Frank, who runs the Language and Cognition lab at Stanford. Mike's main interests center on how children learn language - in particular he focuses a lot on early word learning, and what that tells us about our other cognitive functions, like concept formation and social cognition.
We discuss that, along with:
His love for developing open datasets that anyone can use
The dance he dances between bottom-up data-driven approaches in this big-data era, traditional experimental approaches, and top-down theory-driven approaches
How early language learning in children differs from LLM learning
Mike's rational speech act model of language use, which considers the intentions, or pragmatics, of speakers and listeners in dialogue.
Language & Cognition Lab
Twitter: @mcxfrank.
I mentioned Mike's tweet thread about saying LLMs "have" cognitive functions:
Related papers:
Pragmatic language interpretation as probabilistic inference.
Toward a “Standard Model” of Early Language Learning.
The pervasive role of pragmatics in early language.
The Structure of Developmental Variation in Early Childhood.
Relational reasoning and generalization using non-symbolic neural networks.
Unsupervised neural network models of the ventral visual stream.
7/22/2023 • 1 hour, 24 minutes, 40 seconds
BI 170 Ali Mohebi: Starting a Research Lab
Support the show to get full episodes and join the Discord community.
Check out my free video series about what's missing in AI and Neuroscience
In this episode I have a casual chat with Ali Mohebi about his new faculty position and his plans for the future.
Ali's website.
Twitter: @mohebial
7/11/2023 • 1 hour, 17 minutes, 15 seconds
BI 169 Andrea Martin: Neural Dynamics and Language
Support the show to get full episodes and join the Discord community.
Check out my free video series about what's missing in AI and Neuroscience
My guest today is Andrea Martin, who is the Research Group Leader in the department of Language and Computation in Neural Systems at the Max Planck Institute and the Donders Institute. Andrea is deeply interested in understanding how our biological brains process and represent language. To this end, she is developing a theoretical model of language. The aim of the model is to account for the properties of language, like its structure, its compositionality, and its infinite expressibility, while adhering to physiological data we can measure from human brains.
Her theoretical model of language, among other things, brings in the idea of low-dimensional manifolds and neural dynamics along those manifolds. We've discussed manifolds a lot on the podcast, but they are a kind of abstract structure in the space of possible neural population activity - the neural dynamics. And that manifold structure defines the range of possible trajectories, or pathways, the neural dynamics can take over time.
One of Andrea's ideas is that manifolds might be a way for the brain to combine two properties of how we learn and use language. One of those properties is the statistical regularities found in language - a given word, for example, occurs more often near some words and less often near some other words. This statistical approach is the foundation of how large language models are trained. The other property is the more formal structure of language: how it's arranged and organized in such a way that gives it meaning to us. Perhaps these two properties of language can come together as a single trajectory along a neural manifold. But she has lots of ideas, and we discuss many of them. And of course we discuss large language models, and how Andrea thinks of them with respect to biological cognition. We talk about modeling in general and what models do and don't tell us, and much more.
Andrea's website.
Twitter: @andrea_e_martin.
Related papers
A Compositional Neural Architecture for Language
An oscillating computational model can track pseudo-rhythmic speech by using linguistic predictions
Neural dynamics differentially encode phrases and sentences during spoken language comprehension
Hierarchical structure in language and action: A formal comparison
Andrea mentions this book: The Geometry of Biological Time.
6/28/2023 • 1 hour, 41 minutes, 30 seconds
BI 168 Frauke Sandig and Eric Black w Alex Gomez-Marin: AWARE: Glimpses of Consciousness
Check out my free video series about what's missing in AI and Neuroscience
Support the show to get full episodes and join the Discord community.
This is one in a periodic series of episodes with Alex Gomez-Marin, exploring how the arts and humanities can impact (neuro)science. Artistic creations, like cinema, have the ability to momentarily lower our ever-critical scientific mindset and allow us to imagine alternate possibilities and experience emotions outside our normal scientific routines. Might this feature of art potentially change our scientific attitudes and perspectives?
Frauke Sandig and Eric Black recently made the documentary film AWARE: Glimpses of Consciousness, which profiles six researchers studying consciousness from different perspectives. The film is filled with rich visual imagery and conveys a sense of wonder and awe in trying to understand subjective experience, while diving deep into the reflections of the scientists and thinkers approaching the topic from their various perspectives.
This isn't a "normal" Brain Inspired episode, but I hope you enjoy the discussion!
AWARE: Glimpses of Consciousness
Umbrella Films
0:00 - Intro
19:42 - Mechanistic reductionism
45:33 - Changing views during lifetime
53:49 - Did making the film alter your views?
57:49 - ChatGPT
1:04:20 - Materialist assumption
1:11:00 - Science of consciousness
1:20:49 - Transhumanism
1:32:01 - Integrity
1:36:19 - Aesthetics
1:39:50 - Response to the film
6/2/2023 • 1 hour, 54 minutes, 42 seconds
BI 167 Panayiota Poirazi: AI Brains Need Dendrites
Support the show to get full episodes and join the Discord community.
Check out my free video series about what's missing in AI and Neuroscience
Panayiota Poirazi runs the Poirazi Lab at the FORTH Institute of Molecular Biology and Biotechnology. Yiota loves dendrites, those branching tree-like structures sticking out of all your neurons, and she thinks you should love dendrites too, whether you study biological or artificial intelligence. In neuroscience, the old story was that dendrites just reach out and collect incoming signals for the all-important neuron cell body to process. Yiota, and people like Matthew Larkum, with whom I chatted in episode 138, are continuing to demonstrate that dendrites are themselves computationally complex and powerful, performing many varieties of important signal transformation before signals reach the cell body. For example, in 2003, Yiota showed that because of dendrites, a single neuron can act as a two-layer artificial neural network, and since then others have shown single neurons can act as deeper and deeper multi-layer networks. In Yiota's opinion, an even more important function of dendrites is increased computing efficiency, something evolution favors and something artificial networks will need to favor as well moving forward.
Poirazi Lab
Twitter: @YiotaPoirazi.
Related papers
Drawing Inspiration from Biological Dendrites to Empower Artificial Neural Networks.
Illuminating dendritic function with computational models.
Introducing the Dendrify framework for incorporating dendrites to spiking neural networks.
Pyramidal Neuron as Two-Layer Neural Network
0:00 - Intro
3:04 - Yiota's background
6:40 - Artificial networks and dendrites
9:24 - Dendrites special sauce?
14:50 - Where are we in understanding dendrite function?
20:29 - Algorithms, plasticity, and brains
29:00 - Functional unit of the brain
42:43 - Engrams
51:03 - Dendrites and nonlinearity
54:51 - Spiking neural networks
56:02 - Best level of biological detail
57:52 - Dendrify
1:05:41 - Experimental work
1:10:58 - Dendrites across species and development
1:16:50 - Career reflection
1:17:57 - Evolution of Yiota's thinking
5/27/2023 • 1 hour, 27 minutes, 43 seconds
BI 166 Nick Enfield: Language vs. Reality
Support the show to get full episodes and join the Discord community.
Check out my free video series about what's missing in AI and Neuroscience
Nick Enfield is a professor of linguistics at the University of Sydney. In this episode we discuss topics in his most recent book, Language vs. Reality: Why Language Is Good for Lawyers and Bad for Scientists. A central question in the book is: what is language for? What's the function of language? You might be familiar with the debate about whether language evolved for each of us to think our wonderful human thoughts, or for communicating those thoughts to each other. Nick would be on the communication side of that debate, but if by communication we mean simply the transmission of thoughts or information between people - I have a thought, I send it to you in language, and that thought is now in your head - then Nick wouldn't take either side of that debate. He argues the function of language goes beyond the transmission of information, and is instead primarily an evolved solution for social coordination - coordinating our behaviors and attention. When we use language, we're creating maps in our heads so we can agree on where to go.
For example, when I say, "This is brain inspired," I'm pointing you to a place to meet me on a conceptual map, saying, "Get ready, we're about to have a great time again!" In any case, with those 4 words, "This is brain inspired," I'm not just transmitting information from my head into your head. I'm providing you with a landmark so you can focus your attention appropriately.
From that premise, that language is about social coordination, we talk about a handful of topics in his book, like the relationship between language and reality, and the idea that all language is framing - that is, how we say something influences how we think about it. We discuss how our language changes in different social situations, the role of stories, and of course, how LLMs fit into Nick's story about language.
Nick's website
Twitter: @njenfield
Book:
Language vs. Reality: Why Language Is Good for Lawyers and Bad for Scientists.
Papers:
Linguistic concepts are self-generating choice architectures
0:00 - Intro
4:23 - Is learning about language important?
15:43 - Linguistic Anthropology
28:56 - Language and truth
33:57 - How special is language
46:19 - Choice architecture and framing
48:19 - Language for thinking or communication
52:30 - Agency and language
56:51 - Large language models
1:16:18 - Getting language right
1:20:48 - Social relationships and language
5/9/2023 • 1 hour, 27 minutes, 12 seconds
BI 165 Jeffrey Bowers: Psychology Gets No Respect
Check out my free video series about what's missing in AI and Neuroscience
Support the show to get full episodes and join the Discord community.
Jeffrey Bowers is a psychologist and professor at the University of Bristol. As you know, many of my previous guests are in the business of comparing brain activity to the activity of units in artificial neural network models, when humans or animals and the models are performing the same tasks. And a big story that has emerged over the past decade or so is that there's a remarkable similarity between the activities and representations in brains and models. This was originally found in object categorization tasks, where the goal is to name the object shown in a given image, and where researchers have compared the activity in models good at that task to the activity in the parts of our brains good at it. It's been found in various other tasks using various other models and analyses, many of which we've discussed on previous episodes, and more recently a similar story has emerged regarding a similarity between language-related activity in our brains and the activity in large language models. Namely, our brains' ability to predict an upcoming word can be correlated with the models' ability to predict an upcoming word. So the word is that these deep learning-type models are the best models of how our brains and cognition work.
However, this is where Jeff Bowers comes in and raises the psychology flag, so to speak. His message is that these predictive approaches to comparing artificial and biological cognition aren't enough, and can mask important differences between them. What we need to do is start performing more hypothesis-driven tests, like those performed in psychology, to ask whether the models are indeed solving tasks the way our brains and minds do. Jeff and his group, among others, have been doing just that, and are discovering differences between models and minds that may be important if we want to use models to understand minds. We discuss some of his work and thoughts in this regard, and a lot more.
Website
Twitter: @jeffrey_bowers
Related papers:
Deep Problems with Neural Network Models of Human Vision.
Parallel Distributed Processing Theory in the Age of Deep Networks.
Successes and critical failures of neural networks in capturing human-like speech recognition.
0:00 - Intro
3:52 - Testing neural networks
5:35 - Neuro-AI needs psychology
23:36 - Experiments in AI and neuroscience
23:51 - Why build networks like our minds?
44:55 - Vision problem spaces, solution spaces, training data
55:45 - Do we implement algorithms?
1:01:33 - Relational and combinatorial cognition
1:06:17 - Comparing representations in different networks
1:12:31 - Large language models
1:21:10 - Teaching LLMs nonsense languages
4/12/2023 • 1 hour, 38 minutes, 45 seconds
BI 164 Gary Lupyan: How Language Affects Thought
Support the show to get full episodes and join the Discord community.
Check out my free video series about what's missing in AI and Neuroscience
Gary Lupyan runs the Lupyan Lab at University of Wisconsin, Madison, where he studies how language and cognition are related. In some ways, this is a continuation of the conversation I had last episode with Ellie Pavlick, in that we partly continue to discuss large language models. But Gary is more focused on how language - naming things, categorizing things - changes our cognition related to those things. How does naming something change our perception of it, and so on? He's interested in how concepts come about and how they map onto language. So we talk about some of his work and ideas related to those topics.
And we actually start the discussion with some of Gary's work related to the variability of individual humans' phenomenal experience, and how that affects our individual cognition. For instance, some people are more visual thinkers, others are more verbal, and there seems to be an appreciable spectrum of differences that Gary is beginning to experimentally test.
Lupyan Lab.
Twitter: @glupyan.
Related papers:
Hidden Differences in Phenomenal Experience.
Verbal interference paradigms: A systematic review investigating the role of language in cognition.
Gary mentioned Richard Feynman's Ways of Thinking video.
Gary and Andy Clark's Aeon article: Super-cooperators.
0:00 - Intro
2:36 - Words and communication
14:10 - Phenomenal variability
26:24 - Co-operating minds
38:11 - Large language models
40:40 - Neuro-symbolic AI, scale
44:43 - How LLMs have changed Gary's thoughts about language
49:26 - Meaning, grounding, and language
54:26 - Development of language
58:53 - Symbols and emergence
1:03:20 - Language evolution in the LLM era
1:08:05 - Concepts
1:11:17 - How special is language?
1:18:08 - AGI
4/1/2023 • 1 hour, 31 minutes, 54 seconds
BI 163 Ellie Pavlick: The Mind of a Language Model
Support the show to get full episodes and join the Discord community.
Check out my free video series about what's missing in AI and Neuroscience
Ellie Pavlick runs her Language Understanding and Representation Lab at Brown University, where she studies lots of topics related to language. In AI, large language models, sometimes called foundation models, are all the rage these days, with their ability to generate convincing language, although they still make plenty of mistakes. One of the things Ellie is interested in is how these models work, and what kinds of representations are being generated in them to produce the language they produce. So we discuss how she's going about studying these models. For example, probing them to see whether something symbol-like might be implemented in the models, even though they are the deep learning neural network type, which aren't supposed to be able to work in a symbol-like manner. We also discuss whether grounding is required for language understanding - that is, whether a model that produces language well needs to connect with the real world to actually understand the text it generates. We talk about what language is for, the current limitations of large language models, how the models compare to humans, and a lot more.
Language Understanding and Representation Lab
Twitter: @Brown_NLP
Related papers
Semantic Structure in Deep Learning.
Pretraining on Interactions for Learning Grounded Affordance Representations.
Mapping Language Models to Grounded Conceptual Spaces.
0:00 - Intro
2:34 - Will LLMs make us dumb?
9:01 - Evolution of language
17:10 - Changing views on language
22:39 - Semantics, grounding, meaning
37:40 - LLMs, humans, and prediction
41:19 - How to evaluate LLMs
51:08 - Structure, semantics, and symbols in models
1:00:08 - Dimensionality
1:02:08 - Limitations of LLMs
1:07:47 - What do linguists think?
1:14:23 - What is language for?
3/20/2023 • 1 hour, 21 minutes, 34 seconds
BI 162 Earl K. Miller: Thoughts are an Emergent Property
Support the show to get full episodes and join the Discord community.
Check out my free video series about what's missing in AI and Neuroscience
Earl Miller runs the Miller Lab at MIT, where he studies how our brains carry out our executive functions, like working memory, attention, and decision-making. In particular he is interested in the role of the prefrontal cortex and how it coordinates with other brain areas to carry out these functions. During this episode, we talk broadly about how neuroscience has changed during Earl's career, and how his own thoughts have changed. One thing we focus on is the increasing appreciation of brain oscillations for our cognition.
Recently on BI we've discussed oscillations quite a bit. In episode 153, Carolyn Dicey-Jennings discussed her philosophical ideas relating attention to the notion of the self, and she leans a lot on Earl's research to make that argument. In episode 160, Ole Jensen discussed his work in humans showing that low frequency oscillations exert a top-down control on incoming sensory stimuli, and this is directly in agreement with Earl's work over many years in nonhuman primates. So we continue that discussion relating low-frequency oscillations to executive control. We also discuss a new concept Earl has developed called spatial computing, which is an account of how brain oscillations can dictate where in various brain areas neural activity will be on or off, and hence contribute or not to ongoing mental function. We also discuss working memory in particular, and a host of related topics.
Miller lab.
Twitter: @MillerLabMIT.
Related papers:
An integrative theory of prefrontal cortex function. Annual Review of Neuroscience.
Working Memory Is Complex and Dynamic, Like Your Thoughts.
Traveling waves in the prefrontal cortex during working memory.
0:00 - Intro
6:22 - Evolution of Earl's thinking
14:58 - Role of the prefrontal cortex
25:21 - Spatial computing
32:51 - Homunculus problem
35:34 - Self
37:40 - Dimensionality and thought
46:13 - Reductionism
47:38 - Working memory and capacity
1:01:45 - Capacity as a principle
1:05:44 - Silent synapses
1:10:16 - Subspaces in dynamics
3/8/2023 • 1 hour, 23 minutes, 27 seconds
BI 161 Hugo Spiers: Navigation and Spatial Cognition
Support the show to get full episodes and join the Discord community.
Check out my free video series about what's missing in AI and Neuroscience
Hugo Spiers runs the Spiers Lab at University College London. In general Hugo is interested in understanding spatial cognition, like navigation, in relation to other processes like planning and goal-related behavior, and how brain areas like the hippocampus and prefrontal cortex coordinate these cognitive functions. So, in this episode, we discuss a range of his research and thoughts around those topics. You may have heard about the studies he's been involved with for years, regarding London taxi drivers and how their hippocampus changes as a result of their grueling efforts to memorize how to best navigate London. We talk about that, and we discuss the concept of a schema, which is roughly an abstracted form of knowledge that helps you know how to behave in different environments. Probably the most common example is that we all have a schema for eating at a restaurant: independent of which restaurant we visit, we know about servers, menus, and so on. Hugo is interested in spatial schemas, for things like navigating a new city you haven't visited. Hugo describes his work using reinforcement learning methods to compare how humans and animals solve navigation tasks. And finally we talk about the video game Hugo has been using to collect vast amounts of data related to navigation, to answer questions like how our navigation ability changes over our lifetimes, which factors seem to matter most for our navigation skills, and so on.
Spiers Lab.
Twitter: @hugospiers.
Related papers
Predictive maps in rats and humans for spatial navigation.
From cognitive maps to spatial schemas.
London taxi drivers: A review of neurocognitive studies and an exploration of how they build their cognitive map of London.
Explaining World-Wide Variation in Navigation Ability from Millions of People: Citizen Science Project Sea Hero Quest.
2/24/2023 • 1 hour, 34 minutes, 38 seconds
BI 160 Ole Jensen: Rhythms of Cognition
Support the show to get full episodes and join the Discord community.
Check out my free video series about what's missing in AI and Neuroscience
Ole Jensen is co-director of the Centre for Human Brain Health at University of Birmingham, where he runs his Neuronal Oscillations Group lab. Ole is interested in how the oscillations in our brains affect our cognition by helping to shape the spiking patterns of neurons, and by helping to allocate resources to parts of our brains that are relevant for whatever ongoing behaviors we're performing in different contexts. People have been studying oscillations for decades, finding that different frequencies of oscillations are linked to a bunch of different cognitive functions. Some of what we discuss today is Ole's work on alpha oscillations, which are around 10 hertz, so 10 oscillations per second. The overarching story is that alpha oscillations are thought to inhibit or disrupt processing in brain areas that aren't needed during a given behavior. And therefore, by disrupting everything that's not needed, resources are allocated to the brain areas that are needed. We discuss his work in this vein on attention - you may remember the episode with Carolyn Dicey-Jennings, and her ideas about how findings like Ole's are evidence we all have selves. We also talk about the role of alpha rhythms for working memory, for moving our eyes, and for previewing what we're about to look at before we move our eyes, and more broadly we discuss the role of oscillations in cognition in general, and of course what this might mean for developing better artificial intelligence.
The Neuronal Oscillations Group.
Twitter: @neuosc.
Related papers
Shaping functional architecture by oscillatory alpha activity: gating by inhibition
FEF-Controlled Alpha Delay Activity Precedes Stimulus-Induced Gamma-Band Activity in Visual Cortex
The theta-gamma neural code
A pipelining mechanism supporting previewing during visual exploration and reading.
Specific lexico-semantic predictions are associated with unique spatial and temporal patterns of neural activity.
0:00 - Intro
2:58 - Oscillations' importance over the years
5:51 - Oscillations big picture
17:62 - Oscillations vs. traveling waves
22:00 - Oscillations and algorithms
28:53 - Alpha oscillations and working memory
44:46 - Alpha as the controller
48:55 - Frequency tagging
52:49 - Timing of attention
57:41 - Pipelining neural processing
1:03:38 - Previewing during reading
1:15:50 - Previewing, prediction, and large language models
1:24:27 - Dyslexia
2/7/2023 • 1 hour, 28 minutes, 39 seconds
BI 159 Chris Summerfield: Natural General Intelligence
Support the show to get full episodes and join the Discord community.
Check out my free video series about what's missing in AI and Neuroscience
Chris Summerfield runs the Human Information Processing Lab at University of Oxford, and he's a research scientist at Deepmind. You may remember him from episode 95 with Sam Gershman, when we discussed ideas around the usefulness of neuroscience and psychology for AI. Since then, Chris has released his book, Natural General Intelligence: How understanding the brain can help us build AI. In the book, Chris makes the case that inspiration and communication between the cognitive sciences and AI is hindered by the different languages each field speaks. But in reality, there has always been and still is a lot of overlap and convergence about ideas of computation and intelligence, and he illustrates this using tons of historical and modern examples.
Human Information Processing Lab.
Twitter: @summerfieldlab.
Book: Natural General Intelligence: How understanding the brain can help us build AI.
Other books mentioned:
Are We Smart Enough to Know How Smart Animals Are? by Frans de Waal
The Mind is Flat by Nick Chater.
0:00 - Intro
2:20 - Natural General Intelligence
8:05 - AI and Neuro interaction
21:42 - How to build AI
25:54 - Umwelts and affordances
32:07 - Different kind of intelligence
39:16 - Ecological validity and AI
48:30 - Is reward enough?
1:05:14 - Beyond brains
1:15:10 - Large language models and brains
1/26/2023 • 1 hour, 28 minutes, 53 seconds
BI 158 Paul Rosenbloom: Cognitive Architectures
Check out my free video series about what's missing in AI and Neuroscience
Support the show to get full episodes and join the Discord community.
Paul Rosenbloom is Professor Emeritus of Computer Science at the University of Southern California. In the early 1980s, Paul, along with John Laird and the early AI pioneer Allen Newell, developed one of the earliest and best-known cognitive architectures, called SOAR. A cognitive architecture, as Paul defines it, is a model of the fixed structures and processes underlying minds - in Paul's case, the human mind. And SOAR was aimed at generating general intelligence. He doesn't work on SOAR anymore, although SOAR is still alive and well in the hands of his old partner John Laird. He did go on to develop another cognitive architecture, called Sigma, and in the intervening years between those projects, among other things, Paul stepped back and explored how our various scientific domains are related, and how computing itself should be considered a great scientific domain. That's in his book On Computing: The Fourth Great Scientific Domain.
He also helped develop the Common Model of Cognition, which isn't a cognitive architecture itself, but instead a theoretical model meant to generate consensus regarding the minimal components for a human-like mind. The idea is roughly to create a shared language and framework among cognitive architecture researchers, so that whatever cognitive architecture you work on, you have a basis to compare it to others and can communicate effectively with your peers.
All of what I just said, and much of what we discuss, can be found in Paul's memoir, In Search of Insight: My Life as an Architectural Explorer.
Paul's website.
Related papers
Working memoir: In Search of Insight: My Life as an Architectural Explorer.
Book: On Computing: The Fourth Great Scientific Domain.
A Standard Model of the Mind: Toward a Common Computational Framework across Artificial Intelligence, Cognitive Science, Neuroscience, and Robotics.
Analysis of the human connectome data supports the notion of a “Common Model of Cognition” for human and human-like intelligence across domains.
Common Model of Cognition Bulletin.
0:00 - Intro
3:26 - A career of exploration
7:00 - Allen Newell
14:47 - Relational model and dichotomic maps
24:22 - Cognitive architectures
28:31 - SOAR cognitive architecture
41:14 - Sigma cognitive architecture
43:58 - SOAR vs. Sigma
53:06 - Cognitive architecture community
55:31 - Common model of cognition
1:11:13 - What's missing from the common model
1:17:48 - Brains vs. cognitive architectures
1:21:22 - Mapping the common model onto the brain
1:24:50 - Deep learning
1:30:23 - AGI
1/16/2023 • 1 hour, 35 minutes, 12 seconds
BI 157 Sarah Robins: Philosophy of Memory
Support the show to get full episodes and join the Discord community.
Check out my free video series about what's missing in AI and Neuroscience
Sarah Robins is a philosopher at the University of Kansas, one of a growing handful of philosophers specializing in memory. Much of her work focuses on memory traces, which is roughly the idea that somehow our memories leave a trace in our minds. We discuss memory traces themselves and how they relate to the engram (see BI 126 Randy Gallistel: Where Is the Engram?, and BI 127 Tomás Ryan: Memory, Instinct, and Forgetting).
Psychology has divided memories into many categories - the taxonomy of memory. Sarah and I discuss how memory traces may cross-cut those categories, suggesting we may need to re-think our current ontology and taxonomy of memory.
We discuss a couple of challenges to the idea of a stable memory trace in the brain. Neural dynamics is the notion that all our molecules and synapses are constantly changing and being recycled. Memory consolidation refers to the process of transferring our memory traces from an early unstable version to a more stable long-term version in a different part of the brain. Sarah thinks neither challenge poses a real threat to the idea of a stable memory trace.
We also discuss the impact of optogenetics on the philosophy and neuroscience and memory, the debate about whether memory and imagination are essentially the same thing, whether memory's function is future oriented, and whether we want to build AI with our often faulty human-like memory or with perfect memory.
Sarah's website.
Twitter: @SarahKRobins.
Related papers:
Her Memory chapter, with Felipe de Brigard, in the book Mind, Cognition, and Neuroscience: A Philosophical Introduction.
Memory and Optogenetic Intervention: Separating the engram from the ecphory.
Stable Engrams and Neural Dynamics.
0:00 - Intro
4:18 - Philosophy of memory
5:10 - Making a move
6:55 - State of philosophy of memory
11:19 - Memory traces or the engram
20:44 - Taxonomy of memory
25:50 - Cognitive ontologies, neuroscience, and psychology
29:39 - Optogenetics
33:48 - Memory traces vs. neural dynamics and consolidation
40:32 - What is the boundary of a memory?
43:00 - Process philosophy and memory
45:07 - Memory vs. imagination
49:40 - Constructivist view of memory and imagination
54:05 - Is memory for the future?
58:00 - Memory errors and intelligence
1:00:42 - Memory and AI
1:06:20 - Creativity and memory errors
1/2/2023 • 1 hour, 20 minutes, 59 seconds
BI 156 Mariam Aly: Memory, Attention, and Perception
Support the show to get full episodes and join the Discord community.
Check out my free video series about what's missing in AI and Neuroscience
Mariam Aly runs the Aly lab at Columbia University, where she studies the interaction of memory, attention, and perception in brain regions like the hippocampus. The short story is that memory affects our perceptions, attention affects our memories, memories affect our attention, and these effects have signatures in neural activity measurements in our hippocampus and other brain areas. We discuss her experiments testing the nature of those interactions. We also discuss a particularly difficult stretch in Mariam's graduate school years, and how she now prioritizes her mental health.
Aly Lab.
Twitter: @mariam_s_aly.
Related papers
Attention promotes episodic encoding by stabilizing hippocampal representations.
The medial temporal lobe is critical for spatial relational perception.
Cholinergic modulation of hippocampally mediated attention and perception.
Preparation for upcoming attentional states in the hippocampus and medial prefrontal cortex.
How hippocampal memory shapes, and is shaped by, attention.
Attentional fluctuations and the temporal organization of memory.
0:00 - Intro
3:50 - Mariam's background
9:32 - Hippocampus history and current science
12:34 - Hippocampus and perception
13:42 - Relational information
18:30 - How much memory is explicit?
22:32 - How attention affects hippocampus
32:40 - fMRI levels vs. stability
39:04 - How is hippocampus necessary for attention
57:00 - How much does attention affect memory?
1:02:24 - How memory affects attention
1:06:50 - Attention and memory relation big picture
1:07:42 - Current state of memory and attention
1:12:12 - Modularity
1:17:52 - Practical advice to improve attention/memory
1:21:22 - Mariam's challenges
12/23/2022 • 1 hour, 40 minutes, 45 seconds
BI 155 Luiz Pessoa: The Entangled Brain
Support the show to get full episodes and join the Discord community.
Check out my free video series about what's missing in AI and Neuroscience
Luiz Pessoa runs his Laboratory of Cognition and Emotion at the University of Maryland, College Park, where he studies how emotion and cognition interact. On this episode, we discuss many of the topics from his latest book, The Entangled Brain: How Perception, Cognition, and Emotion Are Woven Together, which is aimed at a general audience. The book argues we need to re-think how to study the brain. Traditionally, cognitive functions of the brain have been studied in a modular fashion: area X does function Y. However, modern research has revealed the brain is highly complex and carries out cognitive functions in a much more interactive and integrative fashion: a given cognitive function results from many areas and circuits temporarily coalescing (for similar ideas, see also BI 152 Michael L. Anderson: After Phrenology: Neural Reuse). Luiz and I discuss the implications of studying the brain from a complex systems perspective, why we need to go beyond thinking about anatomy and instead think about functional organization, some of the brain's principles of organization, and a lot more.
Laboratory of Cognition and Emotion.
Twitter: @PessoaBrain.
Book: The Entangled Brain: How Perception, Cognition, and Emotion Are Woven Together
0:00 - Intro
2:47 - The Entangled Brain
16:24 - How to think about complex systems
23:41 - Modularity thinking
28:16 - How to train one's mind to think complex
33:26 - Problem or principle?
44:22 - Complex behaviors
47:06 - Organization vs. structure
51:09 - Principles of organization: Massive Combinatorial Anatomical Connectivity
55:15 - Principles of organization: High Distributed Functional Connectivity
1:00:50 - Principles of organization: Networks as Functional Units
1:06:15 - Principles of Organization: Interactions via Cortical-Subcortical Loops
1:08:53 - Open and closed loops
1:16:43 - Principles of organization: Connectivity with the Body
1:21:28 - Consciousness
1:24:53 - Emotions
1:32:49 - Emotions and AI
1:39:47 - Emotion as a concept
1:43:25 - Complexity and functional organization in AI
12/10/2022 • 1 hour, 54 minutes, 26 seconds
BI 154 Anne Collins: Learning with Working Memory
Check out my free video series about what's missing in AI and Neuroscience
Support the show to get full episodes and join the Discord community.
Anne Collins runs her Computational Cognitive Neuroscience Lab at the University of California, Berkeley. One of the things she's been working on for years is how our working memory plays a role in learning, and specifically how working memory and reinforcement learning interact to affect how we learn, depending on the nature of what we're trying to learn. We discuss that interaction specifically. We also discuss more broadly how segregated versus how overlapping and interacting our cognitive functions are, what that implies about our natural tendency to think in dichotomies - like model-free vs. model-based RL, system-1 vs. system-2, etc. - and we dive into plenty of other subjects, like how to possibly incorporate these ideas into AI.
Computational Cognitive Neuroscience Lab.
Twitter: @ccnlab or @Anne_On_Tw.
Related papers:
How Working Memory and Reinforcement Learning Are Intertwined: A Cognitive, Neural, and Computational Perspective.
Beyond simple dichotomies in reinforcement learning.
The Role of Executive Function in Shaping Reinforcement Learning.
What do reinforcement learning models measure? Interpreting model parameters in cognition and neuroscience.
0:00 - Intro
5:25 - Dimensionality of learning
11:19 - Modularity of function and computations
16:51 - Is working memory a thing?
19:33 - Model-free model-based dichotomy
30:40 - Working memory and RL
44:43 - How working memory and RL interact
50:50 - Working memory and attention
59:37 - Computations vs. implementations
1:03:25 - Interpreting results
1:08:00 - Working memory and AI
11/29/2022 • 1 hour, 22 minutes, 27 seconds
BI 153 Carolyn Dicey-Jennings: Attention and the Self
Check out my free video series about what's missing in AI and Neuroscience
Support the show to get full episodes and join the Discord community.
Carolyn Dicey Jennings is a philosopher and a cognitive scientist at University of California, Merced. In her book The Attending Mind, she lays out an attempt to unify the concept of attention. Carolyn defines attention roughly as the mental prioritization of some stuff over other stuff based on our collective interests. And one of her main claims is that attention is evidence of a real, emergent self or subject, that can't be reduced to microscopic brain activity. She does connect attention to more macroscopic brain activity, suggesting slow longer-range oscillations in our brains can alter or entrain the activity of more local neural activity, and this is a candidate for mental causation. We unpack that more in our discussion, and how Carolyn situates attention among other cognitive functions, like consciousness, action, and perception.
Carolyn's website.
Books:
The Attending Mind.
Aeon article:
I Attend, Therefore I Am.
Related papers
The Subject of Attention.
Consciousness and Mind.
Practical Realism about the Self.
0:00 - Intro
12:15 - Reconceptualizing attention
16:07 - Types of attention
19:02 - Predictive processing and attention
23:19 - Consciousness, identity, and self
30:39 - Attention and the brain
35:47 - Integrated information theory
42:05 - Neural attention
52:08 - Decoupling oscillations from spikes
57:16 - Selves in other organisms
1:00:42 - AI and the self
1:04:43 - Attention, consciousness, conscious perception
1:08:36 - Meaning and attention
1:11:12 - Conscious entrainment
1:19:57 - Is attention a switch or knob?
11/18/2022 • 1 hour, 25 minutes, 30 seconds
BI 152 Michael L. Anderson: After Phrenology: Neural Reuse
Check out my free video series about what's missing in AI and Neuroscience
Support the show to get full episodes and join the Discord community.
Michael L. Anderson is a professor at the Rotman Institute of Philosophy, at Western University. His book, After Phrenology: Neural Reuse and the Interactive Brain, calls for a re-conceptualization of how we understand and study brains and minds. Neural reuse is the phenomenon that any given brain area is active for multiple cognitive functions, and partners with different sets of brain areas to carry out different cognitive functions. We discuss the implications of this, and other topics in Michael's research and the book, like evolution, embodied cognition, and Gibsonian perception. Michael also fields guest questions from John Krakauer and Alex Gomez-Marin, about representations and metaphysics, respectively.
Michael's website.
Twitter: @mljanderson.
Book:
After Phrenology: Neural Reuse and the Interactive Brain.
Related papers
Neural reuse: a fundamental organizational principle of the brain.
Some dilemmas for an account of neural representation: A reply to Poldrack.
Debt-free intelligence: Ecological information in minds and machines
Describing functional diversity of brain regions and brain networks.
0:00 - Intro
3:02 - After Phrenology
13:18 - Typical neuroscience experiment
16:29 - Neural reuse
18:37 - 4E cognition and representations
22:48 - John Krakauer question
27:38 - Gibsonian perception
36:17 - Autoencoders without representations
49:22 - Pluralism
52:42 - Alex Gomez-Marin question - metaphysics
1:01:26 - Stimulus-response historical neuroscience
1:10:59 - After Phrenology influence
1:19:24 - Origins of neural reuse
1:35:25 - The way forward
11/8/2022 • 1 hour, 45 minutes, 11 seconds
BI 151 Steve Byrnes: Brain-like AGI Safety
Support the show to get full episodes and join the Discord community.
Steve Byrnes is a physicist turned AGI safety researcher. He's concerned that when we create AGI, whenever and however that might happen, we run the risk of creating it in a less than perfectly safe way. AGI safety (AGI not doing something bad) is a wide net that encompasses AGI alignment (AGI doing what we want it to do). We discuss a host of ideas Steve writes about in his Intro to Brain-Like-AGI Safety blog series, which uses what he has learned about brains to address how we might safely make AGI.
Steve's website.
Twitter: @steve47285
Intro to Brain-Like-AGI Safety.
10/30/2022 • 1 hour, 31 minutes, 17 seconds
BI 150 Dan Nicholson: Machines, Organisms, Processes
Support the show to get full episodes and join the Discord community.
Check out my free video series about what's missing in AI and Neuroscience
Dan Nicholson is a philosopher at George Mason University. He incorporates the history of science and philosophy into modern analyses of our conceptions of processes related to life and organisms. He is also interested in re-orienting our conception of the universe as made fundamentally of things/substances, and replacing it with the idea that the universe is made fundamentally of processes (process philosophy). In this episode, we discuss both of those subjects, why the "machine conception of the organism" is incorrect, how to apply these ideas to topics like neuroscience and artificial intelligence, and much more.
Dan's website. Google Scholar.
Twitter: @NicholsonHPBio
Book: Everything Flows: Towards a Processual Philosophy of Biology.
Related papers
Is the Cell Really a Machine?
The Machine Conception of the Organism in Development and Evolution: A Critical Analysis.
On Being the Right Size, Revisited: The Problem with Engineering Metaphors in Molecular Biology.
Related episode: BI 118 Johannes Jäger: Beyond Networks.
0:00 - Intro
2:49 - Philosophy and science
16:37 - Role of history
23:28 - What Is Life? And interaction with James Watson
38:37 - Arguments against the machine conception of organisms
49:08 - Organisms as streams (processes)
57:52 - Process philosophy
1:08:59 - Alfred North Whitehead
1:12:45 - Process and consciousness
1:22:16 - Artificial intelligence and process
1:31:47 - Language and symbols and processes
10/15/2022 • 1 hour, 38 minutes, 29 seconds
BI 149 William B. Miller: Cell Intelligence
Check out my free video series about what's missing in AI and Neuroscience
Support the show to get full episodes and join the Discord community.
William B. Miller is an ex-physician turned evolutionary biologist. In this episode, we discuss topics related to his new book, Bioverse: How the Cellular World Contains the Secrets to Life's Biggest Questions. The premise of the book is that all individual cells are intelligent in their own right, and possess a sense of self. From this, Bill makes the case that cells cooperate with other cells to engineer whole organisms that in turn serve as wonderful hosts for the myriad cell types. Further, our bodies are collections of our own cells (with our DNA), and an enormous amount and diversity of foreign cells - our microbiome - that communicate and cooperate with each other and with our own cells. We also discuss how cell intelligence compares to human intelligence, what Bill calls the "era of the cell" in science, how the future of medicine will harness the intelligence of cells and their cooperative nature, and much more.
William's website.
Twitter: @BillMillerMD.
Book: Bioverse: How the Cellular World Contains the Secrets to Life's Biggest Questions.
0:00 - Intro
3:43 - Bioverse
7:29 - Bill's cell appreciation origins
17:03 - Microbiomes
27:01 - Complexity of microbiomes and the "Era of the cell"
46:00 - Robustness
55:05 - Cell vs. human intelligence
1:10:08 - Artificial intelligence
1:21:01 - Neuro-AI
1:25:53 - Hard problem of consciousness
10/5/2022 • 1 hour, 33 minutes, 54 seconds
BI 148 Gaute Einevoll: Brain Simulations
Check out my free video series about what's missing in AI and Neuroscience
Support the show to get full episodes and join the Discord community.
Gaute Einevoll is a professor at the University of Oslo and Norwegian University of Life Sciences. He develops detailed models of brain networks to use as simulations, so neuroscientists can test their various theories and hypotheses about how networks implement various functions. Thus, the models are tools. The goal is to create models that are multi-level, to test questions at various levels of biological detail, and multi-modal, to predict the handful of signals neuroscientists measure from real brains (something Gaute calls "measurement physics"). We also discuss Gaute's thoughts on Carina Curto's "beautiful vs ugly models", and his reaction to Noah Hutton's In Silico documentary about the Blue Brain and Human Brain projects (Gaute has been funded by the Human Brain Project since its inception).
Gaute's website.
Twitter: @GauteEinevoll.
Related papers:
The Scientific Case for Brain Simulations.
Brain signal predictions from multi-scale networks using a linearized framework.
Uncovering circuit mechanisms of current sinks and sources with biophysical simulations of primary visual cortex.
LFPy: a Python module for calculation of extracellular potentials from multicompartment neuron models.
Gaute's Sense and Science podcast.
0:00 - Intro
3:25 - Beautiful and messy models
6:34 - In Silico
9:47 - Goals of human brain project
15:50 - Brain simulation approach
21:35 - Degeneracy in parameters
26:24 - Abstract principles from simulations
32:58 - Models as tools
35:34 - Predicting brain signals
41:45 - LFPs closer to average
53:57 - Plasticity in simulations
56:53 - How detailed should we model neurons?
59:09 - Lessons from predicting signals
1:06:07 - Scaling up
1:10:54 - Simulation as a tool
1:12:35 - Oscillations
1:16:24 - Manifolds and simulations
1:20:22 - Modeling cortex like Hodgkin and Huxley
9/25/2022 • 1 hour, 28 minutes, 48 seconds
BI 147 Noah Hutton: In Silico
Check out my free video series about what's missing in AI and Neuroscience
Support the show to get full episodes and join the Discord community.
Noah Hutton writes, directs, and scores documentary and narrative films. On this episode, we discuss his documentary In Silico. In 2009, Noah watched a TED talk by Henry Markram, in which Henry claimed it would take 10 years to fully simulate a human brain. This claim inspired Noah to chronicle the project, visiting Henry and his team periodically throughout. The result was In Silico, which tells the science, human, and social story of Henry's massively funded projects - the Blue Brain Project and the Human Brain Project.
In Silico website.
Rent or buy In Silico.
Noah's website.
Twitter: @noah_hutton.
0:00 - Intro
3:36 - Release and premier
7:37 - Noah's background
9:52 - Origins of In Silico
19:39 - Recurring visits
22:13 - Including the critics
25:22 - Markram's shifting outlook and salesmanship
35:43 - Promises and delivery
41:28 - Computer and brain terms interchange
49:22 - Progress vs. illusion of progress
52:19 - Close to quitting
58:01 - Salesmanship vs bad at estimating timelines
1:02:12 - Brain simulation science
1:11:19 - AGI
1:14:48 - Brain simulation vs. neuro-AI
1:21:03 - Opinion on TED talks
1:25:16 - Hero worship
1:29:03 - Feedback on In Silico
9/13/2022 • 1 hour, 37 minutes, 8 seconds
BI 146 Lauren Ross: Causal and Non-Causal Explanation
Check out my free video series about what's missing in AI and Neuroscience
Support the show to get full episodes and join the Discord community.
Lauren Ross is an Associate Professor at the University of California, Irvine. She studies and writes about causal and non-causal explanations in philosophy of science, including distinctions among causal structures. Throughout her work, Lauren employs James Woodward's interventionist approach to causation, which Jim and I discussed in episode 145. In this episode, we discuss Jim's lasting impact on the philosophy of causation, the current dominance of mechanistic explanation and its relation to causation, and various causal structures of explanation, including pathways, cascades, topology, and constraints.
Lauren's website.
Twitter: @ProfLaurenRoss
Related papers
A call for more clarity around causality in neuroscience.
The explanatory nature of constraints: Law-based, mathematical, and causal.
Causal Concepts in Biology: How Pathways Differ from Mechanisms and Why It Matters.
Distinguishing topological and causal explanation.
Multiple Realizability from a Causal Perspective.
Cascade versus mechanism: The diversity of causal structure in science.
0:00 - Intro
2:46 - Lauren's background
10:14 - Jim Woodward legacy
15:37 - Golden era of causality
18:56 - Mechanistic explanation
28:51 - Pathways
31:41 - Cascades
36:25 - Topology
41:17 - Constraint
50:44 - Hierarchy of explanations
53:18 - Structure and function
57:49 - Brain and mind
1:01:28 - Reductionism
1:07:58 - Constraint again
1:14:38 - Multiple realizability
9/7/2022 • 1 hour, 22 minutes, 51 seconds
BI 145 James Woodward: Causation with a Human Face
Check out my free video series about what's missing in AI and Neuroscience
Support the show to get full episodes and join the Discord community.
James Woodward is a recently retired Professor from the Department of History and Philosophy of Science at the University of Pittsburgh. Jim has tremendously influenced the field of causal explanation in the philosophy of science. His account of causation centers around intervention - intervening on a cause should alter its effect. From this minimal notion, Jim has described many facets and varieties of causal structures. In this episode, we discuss topics from his recent book, Causation with a Human Face: Normative Theory and Descriptive Psychology. In the book, Jim advocates that how we should think about causality - the normative - needs to be studied together with how we actually do think about causal relations in the world - the descriptive. We discuss many topics around this central notion, epistemology versus metaphysics, and the nature and varieties of causal structures.
Jim's website.
Making Things Happen: A Theory of Causal Explanation.
Causation with a Human Face: Normative Theory and Descriptive Psychology.
0:00 - Intro
4:14 - Causation with a Human Face & Functionalist approach
6:16 - Interventionist causality; Epistemology and metaphysics
9:35 - Normative and descriptive
14:02 - Rationalist approach
20:24 - Normative vs. descriptive
28:00 - Varying notions of causation
33:18 - Invariance
41:05 - Causality in complex systems
47:09 - Downward causation
51:14 - Natural laws
56:38 - Proportionality
1:01:12 - Intuitions
1:10:59 - Normative and descriptive relation
1:17:33 - Causality across disciplines
1:21:26 - What would help our understanding of causation
8/28/2022 • 1 hour, 25 minutes, 52 seconds
BI 144 Emily M. Bender and Ev Fedorenko: Large Language Models
Check out my short video series about what's missing in AI and Neuroscience.
Support the show to get full episodes and join the Discord community.
Large language models, often now called "foundation models", are the models du jour in AI, based on the transformer architecture. In this episode, I bring together Evelina Fedorenko and Emily M. Bender to discuss how language models stack up to our own language processing and generation (models and brains both excel at next-word prediction), whether language evolved in humans for complex thoughts or for communication (communication, says Ev), whether language models grasp the meaning of the text they produce (Emily says no), and much more.
Evelina Fedorenko is a cognitive scientist who runs the EvLab at MIT. She studies the neural basis of language. Her lab has amassed a large amount of data suggesting language did not evolve to help us think complex thoughts, as Noam Chomsky has argued, but rather for efficient communication. She has also recently been comparing the activity in language models to activity in our brain's language network, finding commonality in the ability to predict upcoming words.
Emily M. Bender is a computational linguist at University of Washington. Recently she has been considering questions about whether language models understand the meaning of the language they produce (no), whether we should be scaling language models as is the current practice (not really), how linguistics can inform language models, and more.
EvLab.
Emily's website.
Twitter: @ev_fedorenko; @emilymbender.
Related papers
Language and thought are not the same thing: Evidence from neuroimaging and neurological patients. (Fedorenko)
The neural architecture of language: Integrative modeling converges on predictive processing. (Fedorenko)
On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? (Bender)
Climbing towards NLU: On Meaning, Form, and Understanding in the Age of Data. (Bender)
0:00 - Intro
4:35 - Language and cognition
15:38 - Grasping for meaning
21:32 - Are large language models producing language?
23:09 - Next-word prediction in brains and models
32:09 - Interface between language and thought
35:18 - Studying language in nonhuman animals
41:54 - Do we understand language enough?
45:51 - What do language models need?
51:45 - Are LLMs teaching us about language?
54:56 - Is meaning necessary, and does it matter how we learn language?
1:00:04 - Is our biology important for language?
1:04:59 - Future outlook
8/17/2022 • 1 hour, 11 minutes, 41 seconds
BI 143 Rodolphe Sepulchre: Mixed Feedback Control
Check out my free video series about what's missing in AI and Neuroscience
Support the show to get full episodes and join the Discord community.
Rodolphe Sepulchre is a control engineer and theorist at Cambridge University. He focuses on applying feedback control engineering principles to build circuits that model neurons and neuronal circuits. We discuss his work on mixed feedback control - positive and negative - as an underlying principle of the mixed digital and analog brain signals, the role of neuromodulation as a controller, applying these principles to Eve Marder's lobster/crab neural circuits, building mixed-feedback neuromorphics, some feedback control history, and how "If you wish to contribute original work, be prepared to face loneliness," among other topics.
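The core of the mixed feedback idea can be sketched in a few lines: a fast positive feedback loop paired with a slow negative feedback loop is enough to produce spiking. Below is a minimal, illustrative simulation in the style of the classic FitzHugh-Nagumo model (a standard textbook model with standard parameter values, not Rodolphe's own circuits):

```python
# Minimal mixed-feedback spiker: fast positive feedback (the cubic term)
# paired with slow negative feedback (the recovery variable w).
def simulate_mixed_feedback(I=0.5, dt=0.01, steps=10000):
    v, w = -1.0, -0.5
    vs = []
    for _ in range(steps):
        dv = v - v**3 / 3 - w + I        # fast loop: destabilizes rest (positive feedback)
        dw = 0.08 * (v + 0.7 - 0.8 * w)  # slow loop: pulls v back down (negative feedback)
        v += dt * dv
        w += dt * dw
        vs.append(v)
    return vs

vs = simulate_mixed_feedback()
# The voltage-like variable alternates between fast upstrokes and slow
# recovery, i.e. repetitive spiking from the interplay of the two loops.
```

Neither loop alone spikes; it is the mix of the two timescales and signs that generates the digital-looking events on top of analog dynamics.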
Rodolphe's website.
Related papers
Spiking Control Systems.
Control Across Scales by Positive and Negative Feedback.
Neuromorphic control. (arXiv version)
Related episodes:
BI 130 Eve Marder: Modulation of Networks
BI 119 Henry Yin: The Crisis in Neuroscience
0:00 - Intro
4:38 - Control engineer
9:52 - Control vs. dynamical systems
13:34 - Building vs. understanding
17:38 - Mixed feedback signals
26:00 - Robustness
28:28 - Eve Marder
32:00 - Loneliness
37:35 - Across levels
44:04 - Neuromorphics and neuromodulation
52:15 - Barrier to adopting neuromorphics
54:40 - Deep learning influence
58:04 - Beyond energy efficiency
1:02:02 - Deep learning for neuro
1:14:15 - Role of philosophy
1:16:43 - Doing it right
8/5/2022 • 1 hour, 24 minutes, 53 seconds
BI 142 Cameron Buckner: The New DoGMA
Check out my free video series about what's missing in AI and Neuroscience
Support the show to get full episodes and join the Discord community.
Cameron Buckner is a philosopher and cognitive scientist at The University of Houston. He is writing a book about the age-old philosophical debate on how much of our knowledge is innate (nature, rationalism) versus how much is learned (nurture, empiricism). In the book and his other works, Cameron argues that modern AI can help settle the debate. In particular, he suggests we focus on what types of psychological "domain-general faculties" underlie our own intelligence, and how different kinds of deep learning models are revealing how those faculties may be implemented in our brains. The hope is that building systems that possess the right handful of faculties, and putting those systems together so they can cooperate in a general and flexible manner, will result in cognitive architectures we would call intelligent. Thus, what Cameron calls The New DoGMA: Domain-General Modular Architecture. We also discuss his work on mental representation and how representations get their content - how our thoughts connect to the natural external world.
Cameron's website.
Twitter: @cameronjbuckner.
Related papers
Empiricism without Magic: Transformational Abstraction in Deep Convolutional Neural Networks.
A Forward-Looking Theory of Content.
Other sources Cameron mentions:
Innateness, AlphaZero, and Artificial Intelligence (Gary Marcus).
Radical Empiricism and Machine Learning Research (Judea Pearl).
Fodor’s guide to the Humean mind (Tamás Demeter).
0:00 - Intro
4:55 - Interpreting old philosophy
8:26 - AI and philosophy
17:00 - Empiricism vs. rationalism
27:09 - Domain-general faculties
33:10 - Faculty psychology
40:28 - New faculties?
46:11 - Human faculties
51:15 - Cognitive architectures
56:26 - Language
1:01:40 - Beyond dichotomous thinking
1:04:08 - Lower-level faculties
1:10:16 - Animal cognition
1:14:31 - A Forward-Looking Theory of Content
7/26/2022 • 1 hour, 43 minutes, 16 seconds
BI 141 Carina Curto: From Structure to Dynamics
Check out my free video series about what's missing in AI and Neuroscience
Support the show to get full episodes and join the Discord community.
Carina Curto is a professor in the Department of Mathematics at The Pennsylvania State University. She uses her background in mathematical physics/string theory to study networks of neurons. On this episode, we discuss the world of topology in neuroscience - the study of the geometrical structures mapped out by active populations of neurons. We also discuss her work on "combinatorial threshold-linear networks" (CTLNs). Unlike the large deep learning models popular today as models of brain activity, the CTLNs Carina builds are relatively simple, abstracted graphical models. This property is important to Carina, whose goal is to develop mathematically tractable neural network models. Carina has worked out how the structure of many CTLNs allows prediction of the model's allowable dynamics, how motifs of model structure can be embedded in larger models while retaining their dynamical features, and more. The hope is that these elegant models can tell us more about the principles our messy brains employ to generate the robust and beautiful dynamics underlying our cognition.
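To make the "structure predicts dynamics" idea concrete, here is a toy version (my sketch, not Carina's code) of these threshold-linear dynamics, built from a 3-cycle graph - the textbook case in which the graph alone predicts a sequential limit cycle. The parameter values below are commonly used defaults in this literature:

```python
import numpy as np

def graph_to_weights(adj, eps=0.25, delta=0.5):
    """Weights determined purely by the graph: W[i, j] = -1 + eps if j -> i
    is an edge, -1 - delta if not, and 0 on the diagonal."""
    W = np.where(adj > 0, -1.0 + eps, -1.0 - delta)
    np.fill_diagonal(W, 0.0)
    return W

def simulate(W, theta=1.0, x0=None, dt=0.01, steps=5000):
    """Threshold-linear dynamics: dx/dt = -x + [W x + theta]_+ (Euler steps)."""
    x = np.array(x0, dtype=float)
    traj = np.empty((steps, len(x)))
    for t in range(steps):
        x = x + dt * (-x + np.maximum(0.0, W @ x + theta))
        traj[t] = x
    return traj

# 3-cycle graph 0 -> 1 -> 2 -> 0 (adj[i, j] = 1 means an edge j -> i)
adj = np.array([[0, 0, 1],
                [1, 0, 0],
                [0, 1, 0]])
traj = simulate(graph_to_weights(adj), x0=[0.2, 0.1, 0.0])
# Activity settles into a limit cycle in which the three units peak in
# sequence, an attractor you can read off from the cyclic graph structure.
```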
Carina's website.
The Mathematical Neuroscience Lab.
Related papers
A major obstacle impeding progress in brain science is the lack of beautiful models.
What can topology tell us about the neural code?
Predicting neural network dynamics via graphical analysis.
0:00 - Intro
4:25 - Background: Physics and math to study brains
20:45 - Beautiful and ugly models
35:40 - Topology
43:14 - Topology in hippocampal navigation
56:04 - Topology vs. dynamical systems theory
59:10 - Combinatorial threshold-linear networks
1:25:26 - How much more math do we need to invent?
7/12/2022 • 1 hour, 31 minutes, 40 seconds
BI 140 Jeff Schall: Decisions and Eye Movements
Check out my free video series about what's missing in AI and Neuroscience
Support the show to get full episodes and join the Discord community.
Jeff Schall is the director of the Center for Visual Neurophysiology at York University, where he runs the Schall Lab. His research centers around studying the mechanisms of our decisions, choices, movement control, and attention within the saccadic eye movement brain systems and in mathematical psychology models - in other words, how we decide where and when to look. Jeff was my postdoctoral advisor at Vanderbilt University, and I wanted to revisit a few guiding principles he instills in all his students. Linking Propositions, by Davida Teller, are a series of logical statements to ensure we rigorously connect the brain activity we record to the psychological functions we want to explain. Strong Inference, by John Platt, is the scientific method on steroids - a way to make our scientific practice most productive and efficient. We discuss both of these topics in the context of Jeff's eye movement and decision-making science. We also discuss how neurophysiology has changed over the past 30 years, compare the relatively small models he employs with today's huge deep learning models, cover many of his current projects, and plenty more. If you want to learn more about Jeff's work and approach, I recommend reading, in order, two of his review papers we discuss as well. One was written 20 years ago (On Building a Bridge Between Brain and Behavior), and the other about two years ago (Accumulators, Neurons, and Response Time).
Schall Lab.
Twitter: @LabSchall.
Related papers
Linking Propositions.
Strong Inference.
On Building a Bridge Between Brain and Behavior.
Accumulators, Neurons, and Response Time.
0:00 - Intro
6:51 - Neurophysiology old and new
14:50 - Linking propositions
24:18 - Psychology working with neurophysiology
35:40 - Neuron doctrine, population doctrine
40:28 - Strong Inference and deep learning
46:37 - Model mimicry
51:56 - Scientific fads
57:07 - Current projects
1:06:38 - On leaving academia
1:13:51 - How academia has changed for better and worse
6/30/2022 • 1 hour, 20 minutes, 22 seconds
BI 139 Marc Howard: Compressed Time and Memory
Check out my free video series about what's missing in AI and Neuroscience
Support the show to get full episodes and join the Discord community.
Marc Howard runs his Theoretical Cognitive Neuroscience Lab at Boston University, where he develops mathematical models of cognition, constrained by psychological and neural data. In this episode, we discuss the idea that a Laplace transform and its inverse may serve as a unified framework for memory. In short, our memories are compressed on a continuous log-scale: as memories get older, their representations "spread out" in time. It turns out this kind of representation seems ubiquitous in the brain and across cognitive functions, suggesting it is likely a canonical computation our brains use to represent a wide variety of cognitive functions. We also discuss some of the ways Marc is incorporating this mathematical operation in deep learning nets to improve their ability to handle information at different time scales.
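The encoding half of this idea fits in a few lines: a bank of leaky integrators with log-spaced decay rates holds the Laplace transform of the signal's past, and the pattern across units indexes how long ago an event happened, compressed on a log scale. This is an illustrative sketch of that scheme, not Marc's published implementation:

```python
import numpy as np

def laplace_memory(signal, s_vals, dt=0.01):
    """Each unit with decay rate s leakily integrates the input, so at time t
    it holds F(s, t) = integral of f(t') * exp(-s * (t - t')) dt' -- the
    Laplace transform of the signal's past, evaluated at rate s."""
    F = np.zeros(len(s_vals))
    history = []
    for f_t in signal:
        F = F + dt * (-s_vals * F + f_t)
        history.append(F.copy())
    return np.array(history)

s_vals = np.geomspace(0.1, 10.0, 50)  # log-spaced rates -> log-compressed timeline
dt, T = 0.01, 2000
signal = np.zeros(T)
signal[0] = 1.0 / dt                  # unit-area impulse at t = 0
hist = laplace_memory(signal, s_vals, dt)
# Right after the impulse every unit holds ~1; as time passes the fast units
# decay first, so the activity profile across s encodes elapsed time, with
# older times represented by ever fewer, slower units ("spreading out").
```

Marc's framework then applies an approximate inverse Laplace transform to this bank to recover a fuzzy, log-compressed timeline of the past; the sketch above shows only the forward (encoding) step.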
Theoretical Cognitive Neuroscience Lab.
Twitter: @marcwhoward777.
Related papers:
Memory as perception of the past: Compressed time in mind and brain.
Formal models of memory based on temporally-varying representations.
Cognitive computation using neural representations of time and space in the Laplace domain.
Time as a continuous dimension in natural and artificial networks.
DeepSITH: Efficient learning via decomposition of what and when across time scales.
0:00 - Intro
4:57 - Main idea: Laplace transforms
12:00 - Time cells
20:08 - Laplace, compression, and time cells
25:34 - Everywhere in the brain
29:28 - Episodic memory
35:11 - Randy Gallistel's memory idea
40:37 - Adding Laplace to deep nets
48:04 - Reinforcement learning
1:00:52 - Brad Wyble Q: What gets filtered out?
1:05:38 - Replay and complementary learning systems
1:11:52 - Howard Goldowsky Q: Gyorgy Buzsaki
1:15:10 - Obstacles
6/20/2022 • 1 hour, 20 minutes, 11 seconds
BI 138 Matthew Larkum: The Dendrite Hypothesis
Check out my free video series about what's missing in AI and Neuroscience
Support the show to get full episodes and join the Discord community.
Matthew Larkum runs his lab at Humboldt University of Berlin, where his group studies how dendrites contribute to computations within and across layers of the neocortex. Since the late 1990s, Matthew has continued to uncover key properties of the way pyramidal neurons stretch across layers of the cortex, their dendrites receiving inputs from those different layers - and thus different brain areas. For example, layer 5 pyramidal neurons have a set of basal dendrites near the cell body that receives feedforward-like input, and a set of apical dendrites all the way up in layer 1 that receives feedback-like input. Depending on which set of dendrites is receiving input, or neither or both, the neuron's output functions in different modes: silent, regular spiking, or burst spiking. Matthew realized the different sets of dendritic inputs could signal different operations, often pairing feedforward sensory-like signals and feedback context-like signals. His research has shown this kind of coincidence detection is important for cognitive functions like perception, memory, learning, and even wakefulness. We discuss many of his ideas and research findings, why dendrites have long been neglected in favor of neuron cell bodies, the possibility of learning about computations by studying implementation-level phenomena, and much more.
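The input-to-mode logic described above amounts to a small truth table. Here is a deliberately toy coincidence detector (hypothetical thresholds, nothing like a real biophysical model) capturing the idea that feedforward drive alone gives regular spiking while coincident feedforward and feedback drive gives bursting:

```python
def l5_output_mode(basal_drive, apical_drive, threshold=1.0):
    """Toy layer-5 pyramidal neuron as a coincidence detector.
    basal_drive: feedforward, sensory-like input near the cell body.
    apical_drive: feedback, context-like input arriving in layer 1."""
    basal_on = basal_drive >= threshold
    apical_on = apical_drive >= threshold
    if basal_on and apical_on:
        return "burst"    # coincidence: dendritic calcium event -> burst firing
    if basal_on:
        return "regular"  # feedforward alone: regular spiking
    return "silent"       # apical alone or no input: little somatic output
```

The point of the toy is only that a single neuron's output mode carries one extra bit - whether feedforward and feedback streams agreed - which is what makes the pairing useful for perception and learning.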
Larkum Lab.
Twitter: @mattlark.
Related papers
Cellular Mechanisms of Conscious Processing.
Perirhinal input to neocortical layer 1 controls learning. (bioRxiv link: https://www.biorxiv.org/content/10.1101/713883v1)
Are dendrites conceptually useful?
Memories off the top of your head.
Do Action Potentials Cause Consciousness?
Blake Richards's episode discussing back-propagation in the brain (based on Matthew's experiments).
0:00 - Intro
5:31 - Background: Dendrites
23:20 - Cortical neuron bodies vs. branches
25:47 - Theories of cortex
30:49 - Feedforward and feedback hierarchy
37:40 - Dendritic integration hypothesis
44:32 - DIT vs. other consciousness theories
51:30 - Mac Shine Q1
1:04:38 - Are dendrites conceptually useful?
1:09:15 - Insights from implementation level
1:24:44 - How detailed to model?
1:28:15 - Do action potentials cause consciousness?
1:40:33 - Mac Shine Q2
6/6/2022 • 1 hour, 51 minutes, 42 seconds
BI 137 Brian Butterworth: Can Fish Count?
Check out my free video series about what's missing in AI and Neuroscience
Support the show to get full episodes and join the Discord community.
Brian Butterworth is Emeritus Professor of Cognitive Neuropsychology at University College London. In his book, Can Fish Count?: What Animals Reveal About Our Uniquely Mathematical Minds, he describes the counting and numerical abilities across many different species, suggesting our ability to count is evolutionarily very old (since many diverse species can count). We discuss many of the examples in his book, the mathematical disability dyscalculia and its relation to dyslexia, how to test counting abilities in various species, how counting may happen in brains, the promise of creating artificial networks that can do math, and many more topics.
Brian's website: The Mathematical Brain.
Twitter: @b_butterworth
The book: Can Fish Count?: What Animals Reveal About Our Uniquely Mathematical Minds.
0:00 - Intro
3:19 - Why Counting?
5:31 - Dyscalculia
12:06 - Dyslexia
19:12 - Counting
26:37 - Origins of counting vs. language
34:48 - Counting vs. higher math
46:46 - Counting some things and not others
53:33 - How to test counting
1:03:30 - How does the brain count?
1:13:10 - Are numbers real?
5/27/2022 • 1 hour, 17 minutes, 49 seconds
BI 136 Michel Bitbol and Alex Gomez-Marin: Phenomenology
Support the show to get full episodes and join the Discord community.
Check out my free video series about what's missing in AI and Neuroscience
Michel Bitbol is Director of Research at CNRS (Centre National de la Recherche Scientifique). Alex Gomez-Marin is a neuroscientist running his lab, The Behavior of Organisms Laboratory, at the Instituto de Neurociencias in Alicante. We discuss phenomenology as an alternative perspective on our scientific endeavors. Although we like to believe our science is objective and explains the reality of the world we inhabit, we can't escape the fact that all of our scientific knowledge comes through our perceptions and interpretations as conscious living beings. Michel has used phenomenology to resolve many of the paradoxes that quantum mechanics generates when it is understood as a description of reality, and more recently he has applied phenomenology to the philosophy of mind and consciousness. Alex is currently trying to apply the phenomenological approach to his research on brains and behavior. Much of our conversation revolves around how phenomenology and our "normal" scientific explorations can co-exist, including the study of minds, brains, and intelligence - our own and that of other organisms. We also discuss the "blind spot" of science, the history and practice of phenomenology, various kinds of explanation, the language we use to describe things, and more.
Michel's website.
Alex's Lab: The Behavior of Organisms Laboratory.
Twitter: @behaviOrganisms (Alex)
Related papers
The Blind Spot of Neuroscience.
The Life of Behavior.
A Clash of Umwelts.
Related events:
The Future Scientist (a conversation series).
0:00 - Intro
4:32 - The Blind Spot
15:53 - Phenomenology and interpretation
22:51 - Personal stories: appreciating phenomenology
37:42 - Quantum physics example
47:16 - Scientific explanation vs. phenomenological description
59:39 - How can phenomenology and science complement each other?
1:08:22 - Neurophenomenology
1:17:34 - Use of language
1:25:46 - Mutual constraints
5/17/2022 • 1 hour, 34 minutes, 12 seconds
BI 135 Elena Galea: The Stars of the Brain
Support the show to get full episodes and join the Discord community.
Check out my free video series about what's missing in AI and Neuroscience
Brains are often conceived as consisting of neurons and "everything else." As Elena discusses, the "everything else," including glial cells and in particular astrocytes, has largely been ignored in neuroscience. That's partly because the fast action potentials of neurons have been assumed to underlie computations in the brain, and because technology only recently afforded closer scrutiny of astrocyte activity. Now that we can record calcium signaling in astrocytes, it's possible to examine how astrocytes' signaling with each other and with neurons may complement the cognitive roles once thought the sole domain of neurons. Although the computational role of astrocytes remains unclear, it is clear that astrocytes interact with neurons and neural circuits in dynamic and interesting ways. We talk about the historical story of astrocytes, the emerging modern story, and Elena shares her views on the path forward to understand astrocyte function in cognition, disease, homeostasis, and - Elena's favorite current hypothesis - their integrative role in negative feedback control.
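As background for that last hypothesis: an "integrative" element in a negative feedback loop is what control engineers call an integral controller, and its signature is that it cancels constant disturbances exactly. A generic sketch of that behavior (purely illustrative, not a model from Elena's work):

```python
def integral_feedback(setpoint=1.0, ki=0.5, disturbance=0.4, dt=0.01, steps=4000):
    """A slow integrator (playing the hypothesized astrocyte-like role)
    accumulates the error between a setpoint and a fast, leaky 'activity'
    variable; feeding that integral back drives the steady-state error to
    zero even though a constant disturbance keeps pushing activity away."""
    x = 0.0  # fast variable being regulated (e.g., a circuit activity level)
    z = 0.0  # slowly integrated error, carried by the controller
    for _ in range(steps):
        z += dt * ki * (setpoint - x)      # integrate the error over time
        x += dt * (-x + z + disturbance)   # leaky dynamics + feedback + disturbance
    return x

# After the transient, x sits at the setpoint regardless of the disturbance
# size - the defining property of integral (integrative) feedback control.
```

A purely proportional controller would instead settle with a residual offset that grows with the disturbance; the integrator is what buys exact homeostasis.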
Elena's website.
Twitter: @elenagalea1
Related papers
A roadmap to integrate astrocytes into Systems Neuroscience.
Elena recommended this paper: Biological feedback control—Respect the loops.
0:00 - Intro
5:23 - The changing story of astrocytes
14:58 - Astrocyte research lags neuroscience
19:45 - Types of astrocytes
23:06 - Astrocytes vs neurons
26:08 - Computational roles of astrocytes
35:45 - Feedback control
43:37 - Energy efficiency
46:25 - Current technology
52:58 - Computational astroscience
1:10:57 - Do names for things matter
5/6/2022 • 1 hour, 17 minutes, 25 seconds
BI 134 Mandyam Srinivasan: Bee Flight and Cognition
Support the show to get full episodes and join the Discord community.
Check out my free video series about what's missing in AI and Neuroscience
Srini is Emeritus Professor at Queensland Brain Institute in Australia. In this episode, he shares his wide range of behavioral experiments elucidating the principles of flight and navigation in insects. We discuss how bees use optic flow signals to determine their speed, distance, proximity to objects, and to gracefully land. These abilities are largely governed via control systems, balancing incoming perceptual signals with internal reference signals. We also talk about a few of the aerial robotics projects his research has inspired, many of the other cognitive skills bees can learn, the possibility of their feeling pain, and the nature of their possible subjective conscious experience.
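One of those control-system principles - holding the image angular velocity (optic flow, roughly forward speed divided by height) constant during descent - can be shown in a tiny simulation: because speed is slaved to height, both decay together and touchdown speed approaches zero. This is an illustrative toy with made-up numbers, not the published model:

```python
def constant_flow_landing(h0=10.0, omega_ref=0.5, glide=0.3, dt=0.01, steps=3000):
    """Forward speed is commanded so the optic flow v/h stays at omega_ref,
    while the agent descends along a fixed glide angle (dh/dt = -glide * v)."""
    h = h0
    heights, speeds = [], []
    for _ in range(steps):
        v = omega_ref * h       # hold optic flow constant: speed tracks height
        h += dt * (-glide * v)  # descend; lower speed means gentler descent
        heights.append(h)
        speeds.append(v)
    return heights, speeds

heights, speeds = constant_flow_landing()
# Height and speed decay exponentially together: a soft, grazing touchdown
# achieved without the bee ever needing to measure its absolute height.
```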
Srini's website.
Related papers
Vision, perception, navigation and 'cognition' in honeybees and applications to aerial robotics.
0:00 - Intro
3:34 - Background
8:20 - Bee experiments
14:30 - Bee flight and navigation
28:05 - Landing
33:06 - Umwelt and perception
37:26 - Bee-inspired aerial robotics
49:10 - Motion camouflage
51:52 - Cognition in bees
1:03:10 - Small vs. big brains
1:06:42 - Pain in bees
1:12:50 - Subjective experience
1:15:25 - Deep learning
1:23:00 - Path forward
4/27/2022 • 1 hour, 26 minutes, 17 seconds
BI 133 Ken Paller: Lucid Dreaming, Memory, and Sleep
Support the show to get full episodes and join the Discord community.
Check out my free video series about what's missing in AI and Neuroscience
Ken discusses the recent work in his lab that allows communication with subjects while they experience lucid dreams. This new paradigm opens many avenues to study the neuroscience and psychology of consciousness, sleep, dreams, memory, and learning, and to improve and optimize sleep for cognition. Ken and his team are developing a Lucid Dreaming App which is freely available via his lab. We also discuss much of his work on memory and learning in general and specifically related to sleep, like reactivating specific memories during sleep to improve learning.
Ken's Cognitive Neuroscience Laboratory.
Twitter: @kap101.
The Lucid Dreaming App.
Related papers:
Memory and Sleep: How Sleep Cognition Can Change the Waking Mind for the Better.
Does memory reactivation during sleep support generalization at the cost of memory specifics?
Real-time dialogue between experimenters and dreamers during REM sleep.
0:00 - Intro
2:48 - Background and types of memory
14:44 - Consciousness and memory
23:32 - Phases of sleep and wakefulness
28:19 - Sleep, memory, and learning
33:50 - Targeted memory reactivation
48:34 - Problem solving during sleep
51:50 - 2-way communication with lucid dreamers
1:01:43 - Confounds to the paradigm
1:04:50 - Limitations and future studies
1:09:35 - Lucid dreaming app
1:13:47 - How sleep can inform AI
1:20:18 - Advice for students
4/15/2022 • 1 hour, 29 minutes, 14 seconds
BI 132 Ila Fiete: A Grid Scaffold for Memory
Announcement:
I'm releasing my Neuro-AI course April 10-13, after which it will be closed for some time. Learn more here.
Support the show to get full episodes and join the Discord community.
Ila discusses her theoretical neuroscience work suggesting how our memories are formed within the cognitive maps we use to navigate the world and navigate our thoughts. The main idea is that grid cell networks in the entorhinal cortex internally generate a structured scaffold, which gets sent to the hippocampus. Neurons in the hippocampus, like the well-known place cells, receive that scaffolding and also receive external signals from the neocortex: signals about what's happening in the world and in our thoughts. Thus, the place cells act to "pin" what's happening in our neocortex to the scaffold, forming a memory. We also discuss her background as a physicist and her approach as a "neurophysicist", and a review she's publishing all about the many brain areas and cognitive functions being explained as attractor landscapes within a dynamical systems framework.
The Fiete Lab.
Related papers:
A structured scaffold underlies activity in the hippocampus.
Attractor and integrator networks in the brain.
0:00 - Intro
3:36 - "Neurophysicist"
9:30 - Bottom-up vs. top-down
15:57 - Tool scavenging
18:21 - Cognitive maps and hippocampus
22:40 - Hopfield networks
27:56 - Internal scaffold
38:42 - Place cells
43:44 - Grid cells
54:22 - Grid cells encoding place cells
59:39 - Scaffold model: stacked Hopfield networks
1:05:39 - Attractor landscapes
1:09:22 - Landscapes across scales
1:12:27 - Dimensionality of landscapes
4/3/2022 • 1 hour, 17 minutes, 20 seconds
BI 131 Sri Ramaswamy and Jie Mei: Neuromodulation-aware DNNs
Support the show to get full episodes and join the Discord community.
Sri and Mei join me to discuss how including principles of neuromodulation in deep learning networks may improve network performance. It's an ever-present question how much detail to include in models, and we are in the early stages of learning how neuromodulators and their interactions shape biological brain function. But as we continue to learn more, Sri and Mei are interested in building "neuromodulation-aware DNNs".
Neural Circuits Laboratory.
Twitter: Sri: @srikipedia; Jie: @neuro_Mei.
Related papers:
Informing deep neural networks by multiscale principles of neuromodulatory systems.
0:00 - Intro
3:10 - Background
9:19 - Bottom-up vs. top-down
14:42 - Levels of abstraction
22:46 - Biological neuromodulation
33:18 - Inventing neuromodulators
41:10 - How far along are we?
53:31 - Multiple realizability
1:09:40 - Modeling dendrites
1:15:24 - Across-species neuromodulation
3/26/2022 • 1 hour, 26 minutes, 52 seconds
BI 130 Eve Marder: Modulation of Networks
Support the show to get full episodes and join the Discord community.
Eve discusses many of the lessons she has learned studying a small nervous system, the crustacean stomatogastric nervous system (STG). The STG has only about 30 neurons, and its connections and neurophysiology are well understood. Yet Eve's work has shown it functions under a remarkable diversity of conditions, and does so in a variety of ways. We discuss her work on the STG specifically, and what her work implies about trying to study much larger nervous systems, like our human brains.
The Marder Lab.
Twitter: @MarderLab.
Related to our conversation:
Understanding Brains: Details, Intuition, and Big Data.
Emerging principles governing the operation of neural networks (Eve mentions this regarding "building blocks" of neural networks).
0:00 - Intro
3:58 - Background
8:00 - Levels of ambiguity
9:47 - Stomatogastric nervous system
17:13 - Structure vs. function
26:08 - Role of theory
34:56 - Technology vs. understanding
38:25 - Higher cognitive function
44:35 - Adaptability, resilience, evolution
50:23 - Climate change
56:11 - Deep learning
57:12 - Dynamical systems
3/13/2022 • 1 hour, 56 seconds
BI 129 Patryk Laurent: Learning from the Real World
Support the show to get full episodes and join the Discord community.
Patryk and I discuss his wide-ranging background working in both the neuroscience and AI worlds, and his resultant perspective on what's needed to move forward in AI, including some principles of brain processes that are more and less important. We also discuss his own work using some of those principles to help deep learning generalize to better capture how humans behave in and perceive the world.
Patryk's homepage.
Twitter: @paklnet.
Related papers:
Unsupervised Learning from Continuous Video in a Scalable Predictive Recurrent Network.
0:00 - Intro
2:22 - Patryk's background
8:37 - Importance of diverse skills
16:14 - What is intelligence?
20:34 - Important brain principles
22:36 - Learning from the real world
35:09 - Language models
42:51 - AI contribution to neuroscience
48:22 - Criteria for "real" AI
53:11 - Neuroscience for AI
1:01:20 - What can we ignore about brains?
1:11:45 - Advice to past self
3/2/2022 • 1 hour, 21 minutes, 1 second
BI 128 Hakwan Lau: In Consciousness We Trust
Support the show to get full episodes and join the Discord community.
Hakwan and I discuss many of the topics in his new book, In Consciousness We Trust: The Cognitive Neuroscience of Subjective Experience. Hakwan describes his perceptual reality monitoring theory of consciousness, which suggests consciousness may act as a systems check between our sensory perceptions and higher cognitive functions. We also discuss his latest thoughts on mental quality space and how it relates to perceptual reality monitoring. Among many other topics, we chat about the many confounds and challenges to empirically studying consciousness, a topic featured heavily in the first half of his book. Hakwan was on a previous episode with Steve Fleming, BI 099 Hakwan Lau and Steve Fleming: Neuro-AI Consciousness.
Hakwan's lab: Consciousness and Metacognition Lab.
Twitter: @hakwanlau.
Book: In Consciousness We Trust: The Cognitive Neuroscience of Subjective Experience.
0:00 - Intro
4:37 - In Consciousness We Trust
12:19 - Too many consciousness theories?
19:26 - Philosophy and neuroscience of consciousness
29:00 - Local vs. global theories
31:20 - Perceptual reality monitoring and GANs
42:43 - Functions of consciousness
47:17 - Mental quality space
56:44 - Cognitive maps
1:06:28 - Performance capacity confounds
1:12:28 - Blindsight
1:19:11 - Philosophy vs. empirical work
2/20/2022 • 1 hour, 25 minutes, 40 seconds
BI 127 Tomás Ryan: Memory, Instinct, and Forgetting
Support the show to get full episodes and join the Discord community.
Tomás and I discuss his research and ideas on how memories are encoded (the engram), the role of forgetting, and the overlapping mechanisms of memory and instinct. Tomás uses optogenetics and other techniques to label and control neurons involved in learning and memory, and has shown that forgotten memories can be restored by stimulating "engram cells" originally associated with the forgotten memory. This line of research has led Tomás to think forgetting might be a learning mechanism itself, an adaptation our brains make based on the predictability and affordances of the environment. His work on engrams has also led Tomás to think our instincts (ingrams) may share the same mechanisms as our memories (engrams), and that memories may transition to instincts across generations. We begin by addressing Randy Gallistel's engram ideas from the previous episode: BI 126 Randy Gallistel: Where Is the Engram?
Ryan Lab.
Twitter: @TJRyan_77.
Related papers:
Engram cell connectivity: an evolving substrate for information storage.
Forgetting as a form of adaptive engram cell plasticity.
Memory and Instinct as a Continuum of Information Storage in The Cognitive Neurosciences.
The Bandwagon by Claude Shannon.
0:00 - Intro
4:05 - Response to Randy Gallistel
10:45 - Computation in the brain
14:52 - Instinct and memory
19:37 - Dynamics of memory
21:55 - Wiring vs. connection strength plasticity
24:16 - Changing one's mind
33:09 - Optogenetics and memory experiments
47:24 - Forgetting as learning
1:06:35 - Folk psychological terms
1:08:49 - Memory becoming instinct
1:21:49 - Instinct across the lifetime
1:25:52 - Boundaries of memories
1:28:52 - Subjective experience of memory
1:31:58 - Interdisciplinary research
1:37:32 - Communicating science
2/10/2022 • 1 hour, 42 minutes, 39 seconds
BI 126 Randy Gallistel: Where Is the Engram?
Support the show to get full episodes and join the Discord community.
Randy and I discuss his long-standing interest in how the brain stores information to compute. That is, where is the engram, the physical trace of memory in the brain? Modern neuroscience is dominated by the view that memories are stored among synaptic connections in populations of neurons. Randy believes a more reasonable and reliable way to store abstract symbols, like numbers, is to write them into code within individual neurons. Thus, the spiking code, whatever it is, functions to write and read memories into and out of intracellular substrates, like polynucleotides (e.g., DNA, RNA). He lays out his case in detail in his book with Adam King, Memory and the Computational Brain: Why Cognitive Science will Transform Neuroscience. We also talk about some research and theoretical work since then that supports his views.
Randy's Rutgers website.
Book: Memory and the Computational Brain: Why Cognitive Science will Transform Neuroscience.
Related papers:
The theoretical RNA paper Randy mentions: An RNA-based theory of natural universal computation.
Evidence for intracellular engram in cerebellum: Memory trace and timing mechanism localized to cerebellar Purkinje cells.
The exchange between Randy and John Lisman.
The blog post Randy mentions about universal function approximation: The Truth About the [Not So] Universal Approximation Theorem.
0:00 - Intro
6:50 - Cognitive science vs. computational neuroscience
13:23 - Brain as computing device
15:45 - Noam Chomsky's influence
17:58 - Memory must be stored within cells
30:58 - Theoretical support for the idea
34:15 - Cerebellum evidence supporting the idea
40:56 - What is the write mechanism?
51:11 - Thoughts on deep learning
1:00:02 - Multiple memory mechanisms?
1:10:56 - The role of plasticity
1:12:06 - Trying to convince molecular biologists
1/31/2022 • 1 hour, 19 minutes, 57 seconds
BI 125 Doris Tsao, Tony Zador, Blake Richards: NAISys
Support the show to get full episodes and join the Discord community.
Doris, Tony, and Blake are the organizers for this year's NAISys conference, From Neuroscience to Artificially Intelligent Systems (NAISys), at Cold Spring Harbor. We discuss the conference itself, some history of the neuroscience and AI interface, their current research interests, and a handful of topics around evolution, innateness, development, learning, and the current and future prospects for using neuroscience to inspire new ideas in artificial intelligence.
From Neuroscience to Artificially Intelligent Systems (NAISys).
Doris: @doristsao. Tsao Lab. Unsupervised deep learning identifies semantic disentanglement in single inferotemporal face patch neurons.
Tony: @TonyZador. Zador Lab. A Critique of Pure Learning: What Artificial Neural Networks can Learn from Animal Brains.
Blake: @tyrell_turing. The Learning in Neural Circuits Lab. The functional specialization of visual cortex emerges from training parallel pathways with self-supervised predictive learning.
0:00 - Intro
4:16 - Tony Zador
5:38 - Doris Tsao
10:44 - Blake Richards
15:46 - Deductive, inductive, abductive inference
16:32 - NAISys
33:09 - Evolution, development, learning
38:23 - Learning: plasticity vs. dynamical structures
54:13 - Different kinds of understanding
1:03:05 - Do we understand evolution well enough?
1:04:03 - Neuro-AI fad?
1:06:26 - Are your problems bigger or smaller now?
1/19/2022 • 1 hour, 11 minutes, 5 seconds
BI 124 Peter Robin Hiesinger: The Self-Assembling Brain
Support the show to get full episodes and join the Discord community.
Robin and I discuss many of the ideas in his book The Self-Assembling Brain: How Neural Networks Grow Smarter. The premise is that our DNA encodes an algorithmic growth process that unfolds information via time and energy, resulting in a connected neural network (our brains!) imbued with vast amounts of information from the "start". This contrasts with modern deep learning networks, which start with minimal initial information in their connectivity, and instead rely almost solely on learning to gain their function. Robin suggests we won't be able to create anything with close to human-like intelligence unless we build in an algorithmic growth process and an evolutionary selection process to create artificial networks.
Hiesinger Neurogenetics Laboratory.
Twitter: @HiesingerLab.
Book: The Self-Assembling Brain: How Neural Networks Grow Smarter.
0:00 - Intro
3:01 - The Self-Assembling Brain
21:14 - Including growth in networks
27:52 - Information unfolding and algorithmic growth
31:27 - Cellular automata
40:43 - Learning as a continuum of growth
45:01 - Robustness, autonomous agents
49:11 - Metabolism vs. connectivity
58:00 - Feedback at all levels
1:05:32 - Generality vs. specificity
1:10:36 - Whole brain emulation
1:20:38 - Changing view of intelligence
1:26:34 - Popular and wrong vs. unknown and right
1/5/2022 • 1 hour, 39 minutes, 27 seconds
BI 123 Irina Rish: Continual Learning
Support the show to get full episodes and join the Discord community.
Irina is a faculty member at MILA-Quebec AI Institute and a professor at Université de Montréal. She has worked from both ends of the neuroscience/AI interface, using AI for neuroscience applications, and using neural principles to help improve AI. We discuss her work on biologically-plausible alternatives to back-propagation, using "auxiliary variables" in addition to the normal connection weight updates. We also discuss the world of lifelong learning, which seeks to train networks in an online manner to improve on any tasks as they are introduced. Catastrophic forgetting is an obstacle in modern deep learning, where a network forgets old tasks when it is trained on new tasks. Lifelong learning strategies, like continual learning, transfer learning, and meta-learning seek to overcome catastrophic forgetting, and we talk about some of the inspirations from neuroscience being used to help lifelong learning in networks.
Irina's website.
Twitter: @irinarish.
Related papers:
Beyond Backprop: Online Alternating Minimization with Auxiliary Variables.
Towards Continual Reinforcement Learning: A Review and Perspectives.
Lifelong learning video tutorial: DLRL Summer School 2021 - Lifelong Learning - Irina Rish.
0:00 - Intro
3:26 - AI for Neuro, Neuro for AI
14:59 - Utility of philosophy
20:51 - Artificial general intelligence
24:34 - Back-propagation alternatives
35:10 - Inductive bias vs. scaling generic architectures
45:51 - Continual learning
59:54 - Neuro-inspired continual learning
1:06:57 - Learning trajectories
12/26/2021 • 1 hour, 18 minutes, 59 seconds
BI 122 Kohitij Kar: Visual Intelligence
Support the show to get full episodes and join the Discord community.
Ko and I discuss a range of topics around his work to understand our visual intelligence. Ko was a postdoc in James DiCarlo's lab, where he helped develop the convolutional neural network models that have become the standard for explaining core object recognition. He is starting his own lab at York University, where he will continue to expand and refine the models, adding important biological details and incorporating models for brain areas outside the ventral visual stream. He will also continue recording neural activity, and performing perturbation studies to better understand the networks involved in our visual cognition.
VISUAL INTELLIGENCE AND TECHNOLOGICAL ADVANCES LAB.
Twitter: @KohitijKar.
Related papers:
Evidence that recurrent circuits are critical to the ventral stream’s execution of core object recognition behavior.
Neural population control via deep image synthesis.
BI 075 Jim DiCarlo: Reverse Engineering Vision.
0:00 - Intro
3:49 - Background
13:51 - Where are we in understanding vision?
19:46 - Benchmarks
21:21 - Falsifying models
23:19 - Modeling vs. experiment speed
29:26 - Simple vs complex models
35:34 - Dorsal visual stream and deep learning
44:10 - Modularity and brain area roles
50:58 - Chemogenetic perturbation, DREADDs
57:10 - Future lab vision, clinical applications
1:03:55 - Controlling visual neurons via image synthesis
1:12:14 - Is it enough to study nonhuman animals?
1:18:55 - Neuro/AI intersection
1:26:54 - What is intelligence?
12/12/2021 • 1 hour, 33 minutes, 18 seconds
BI 121 Mac Shine: Systems Neurobiology
Support the show to get full episodes and join the Discord community.
Mac and I discuss his systems level approach to understanding brains, and his theoretical work suggesting important roles for the thalamus, basal ganglia, and cerebellum, shifting the dynamical landscape of brain function within varying behavioral contexts. We also discuss his recent interest in the ascending arousal system and neuromodulators. Mac thinks the neocortex has been the sole focus of too much neuroscience research, and that the subcortical brain regions and circuits have a much larger role underlying our intelligence.
Shine Lab.
Twitter: @jmacshine.
Related papers:
The thalamus integrates the macrosystems of the brain to facilitate complex, adaptive brain network dynamics.
Computational models link cellular mechanisms of neuromodulation to large-scale neural dynamics.
0:00 - Intro
6:32 - Background
10:41 - Holistic approach
18:19 - Importance of thalamus
35:19 - Thalamus circuitry
40:30 - Cerebellum
46:15 - Predictive processing
49:32 - Brain as dynamical attractor landscape
56:48 - System 1 and system 2
1:02:38 - How to think about the thalamus
1:06:45 - Causality in complex systems
1:11:09 - Clinical applications
1:15:02 - Ascending arousal system and neuromodulators
1:27:48 - Implications for AI
1:33:40 - Career serendipity
1:35:12 - Advice
12/2/2021 • 1 hour, 43 minutes, 12 seconds
BI 120 James Fitzgerald, Andrew Saxe, Weinan Sun: Optimizing Memories
Support the show to get full episodes and join the Discord community.
James, Andrew, and Weinan discuss their recent theory about how the brain might use complementary learning systems to optimize our memories. The idea is that our hippocampus creates our episodic memories for individual events, full of particular details, and through a complementary process slowly consolidates those memories within our neocortex through mechanisms like hippocampal replay. The new idea in their work suggests a way for the consolidated cortical memory to become optimized for generalization, something humans are known to be capable of but which deep learning networks have yet to achieve. We discuss what their theory predicts about how the "correct" process depends on the amount of noise and variability in the learning environment, how their model handles this, and how it relates to our brains and behavior.
James' Janelia page.
Weinan's Janelia page.
Andrew's website.
Twitter: Andrew: @SaxeLab; Weinan: @sunw37.
Paper we discuss: Organizing memories for generalization in complementary learning systems.
Andrew's previous episode: BI 052 Andrew Saxe: Deep Learning Theory.
0:00 - Intro
3:57 - Guest Intros
15:04 - Organizing memories for generalization
26:48 - Teacher, student, and notebook models
30:51 - Shallow linear networks
33:17 - How to optimize generalization
47:05 - Replay as a generalization regulator
54:57 - Whole greater than sum of its parts
1:05:37 - Unpredictability
1:10:41 - Heuristics
1:13:52 - Theoretical neuroscience for AI
1:29:42 - Current personal thinking
11/21/2021 • 1 hour, 40 minutes, 2 seconds
BI 119 Henry Yin: The Crisis in Neuroscience
Support the show to get full episodes and join the Discord community.
Henry and I discuss why he thinks neuroscience is in a crisis (in the Thomas Kuhn sense of scientific paradigms, crises, and revolutions). Henry thinks our current concept of the brain as an input-output device, with cognition in the middle, is mistaken. He points to the failure of neuroscience to successfully explain behavior despite decades of research. Instead, Henry proposes the brain is one big hierarchical set of control loops, trying to control their output with respect to internally generated reference signals. He was inspired by control theory, but points out that most control theory for biology is flawed by not recognizing that the reference signals are internally generated. Instead, most control theory approaches, and neuroscience research in general, assume the reference signals are supplied externally... by the experimenter.
Yin lab at Duke.
Twitter: @HenryYin19.
Related papers:
The Crisis in Neuroscience.
Restoring Purpose in Behavior.
Achieving natural behavior in a robot using neurally inspired hierarchical perceptual control.
0:00 - Intro
5:40 - Kuhnian crises
9:32 - Control theory and cybernetics
17:23 - How much of brain is control system?
20:33 - Higher order control representation
23:18 - Prediction and control theory
27:36 - The way forward
31:52 - Compatibility with mental representation
38:29 - Teleology
45:53 - The right number of subjects
51:30 - Continuous measurement
57:06 - Artificial intelligence and control theory
11/11/2021 • 1 hour, 6 minutes, 36 seconds
BI 118 Johannes Jäger: Beyond Networks
Support the show to get full episodes and join the Discord community.
Johannes (Yogi) is a freelance philosopher, researcher, and educator. We discuss many of the topics in his online course, Beyond Networks: The Evolution of Living Systems. The course is focused on the role of agency in evolution, but it covers a vast range of topics: process vs. substance metaphysics, causality, mechanistic dynamic explanation, teleology, the important role of development in mediating genotypes, phenotypes, and evolution, what makes biological organisms unique, the history of evolutionary theory, scientific perspectivism, and a view toward the necessity of including agency in evolutionary theory. I highly recommend taking his course. We also discuss the role of agency in artificial intelligence, how neuroscience and evolutionary theory are undergoing parallel re-evaluations, and Yogi answers a guest question from Kevin Mitchell.
Yogi's website and blog: Untethered in the Platonic Realm.
Twitter: @yoginho.
His YouTube course: Beyond Networks: The Evolution of Living Systems.
Kevin Mitchell's previous episode: BI 111 Kevin Mitchell and Erik Hoel: Agency, Emergence, Consciousness.
0:00 - Intro
4:10 - Yogi's background
11:00 - Beyond Networks - limits of dynamical systems models
16:53 - Kevin Mitchell question
20:12 - Process metaphysics
26:13 - Agency in evolution
40:37 - Agent-environment interaction, open-endedness
45:30 - AI and agency
55:40 - Life and intelligence
59:08 - Deep learning and neuroscience
1:03:21 - Mental autonomy
1:06:10 - William Wimsatt's biopsychological thicket
1:11:23 - Limitations of mechanistic dynamic explanation
1:18:53 - Synthesis versus multi-perspectivism
1:30:31 - Specialization versus generalization
11/1/2021 • 1 hour, 36 minutes, 8 seconds
BI 117 Anil Seth: Being You
Support the show to get full episodes and join the Discord community.
Anil and I discuss a range of topics from his book, BEING YOU A New Science of Consciousness. Anil lays out his framework for explaining consciousness, which is embedded in what he calls the "real problem" of consciousness. You know the "hard problem", David Chalmers' term for our enduring difficulty explaining why we have subjective awareness at all instead of being unfeeling, unexperiencing, machine-like organisms. Anil's "real problem" aims to explain, predict, and control the phenomenal properties of consciousness, and his hope is that, by doing so, the hard problem of consciousness will dissolve much like the mystery of explaining life dissolved with lots of good science.
Anil's account of perceptual consciousness, like seeing red, is that it's rooted in predicting our incoming sensory data. His account of our sense of self is that it's rooted in predicting our bodily states in order to control them.
We talk about that and a lot of other topics from the book, like consciousness as "controlled hallucinations", free will, psychedelics, complexity and emergence, and the relation between life, intelligence, and consciousness. Plus, Anil answers a handful of questions from Megan Peters and Steve Fleming, both previous brain inspired guests.
Anil's website.
Twitter: @anilkseth.
Anil's book: BEING YOU A New Science of Consciousness.
Megan's previous episode: BI 073 Megan Peters: Consciousness and Metacognition.
Steve's previous episodes:
BI 099 Hakwan Lau and Steve Fleming: Neuro-AI Consciousness.
BI 107 Steve Fleming: Know Thyself.
0:00 - Intro
6:32 - Megan Peters Q: Communicating Consciousness
15:58 - Human vs. animal consciousness
19:12 - BEING YOU A New Science of Consciousness
20:55 - Megan Peters Q: Will the hard problem go away?
30:55 - Steve Fleming Q: Contents of consciousness
41:01 - Megan Peters Q: Phenomenal character vs. content
43:46 - Megan Peters Q: Lempels of complexity
52:00 - Complex systems and emergence
55:53 - Psychedelics
1:06:04 - Free will
1:19:10 - Consciousness vs. life vs. intelligence
10/19/2021 • 1 hour, 32 minutes, 9 seconds
BI 116 Michael W. Cole: Empirical Neural Networks
Support the show to get full episodes and join the Discord community.
Mike and I discuss his modeling approach to study cognition. Many people I have on the podcast use deep neural networks to study brains, where the idea is to train or optimize the model to perform a task, then compare the model properties with brain properties. Mike's approach is different in at least two ways. One, he builds the architecture of his models using connectivity data from fMRI recordings. Two, he doesn't train his models; instead, he uses functional connectivity data from the fMRI recordings to assign weights between nodes of the network (in deep learning, the weights are learned through lots of training). Mike calls his networks empirically-estimated neural networks (ENNs), or network coding models. We walk through his approach, what we can learn from models like ENNs, discuss some of his earlier work on cognitive control and our ability to flexibly adapt to new task rules through instruction, and he fields questions from Kanaka Rajan, Kendrick Kay, and Patryk Laurent.
The Cole Neurocognition lab.
Twitter: @TheColeLab.
Related papers:
Discovering the Computational Relevance of Brain Network Organization.
Constructing neural network models from brain data reveals representational transformation underlying adaptive behavior.
Kendrick Kay's previous episode: BI 026 Kendrick Kay: A Model By Any Other Name.
Kanaka Rajan's previous episode: BI 054 Kanaka Rajan: How Do We Switch Behaviors?
0:00 - Intro
4:58 - Cognitive control
7:44 - Rapid Instructed Task Learning and Flexible Hub Theory
15:53 - Patryk Laurent question: free will
26:21 - Kendrick Kay question: fMRI limitations
31:55 - Empirically-estimated neural networks (ENNs)
40:51 - ENNs vs. deep learning
45:30 - Clinical relevance of ENNs
47:32 - Kanaka Rajan question: a proposed collaboration
56:38 - Advantage of modeling multiple regions
1:05:30 - How ENNs work
1:12:48 - How ENNs might benefit artificial intelligence
1:19:04 - The need for causality
1:24:38 - Importance of luck and serendipity
10/12/2021 • 1 hour, 31 minutes, 20 seconds
BI 115 Steve Grossberg: Conscious Mind, Resonant Brain
Support the show to get full episodes and join the Discord community.
Steve and I discuss his book Conscious Mind, Resonant Brain: How Each Brain Makes a Mind. The book is a huge collection of his models and their predictions and explanations for a wide array of cognitive brain functions. Many of the models spring from his Adaptive Resonance Theory (ART) framework, which explains how networks of neurons deal with changing environments while maintaining self-organization and retaining learned knowledge. ART led Steve to the hypothesis that all conscious states are resonant states, which we discuss. There are also guest questions from György Buzsáki, Jay McClelland, and John Krakauer.
Steve's BU website.
Book: Conscious Mind, Resonant Brain: How Each Brain Makes a Mind.
Previous Brain Inspired episode: BI 082 Steve Grossberg: Adaptive Resonance Theory.
0:00 - Intro
2:38 - Conscious Mind, Resonant Brain
11:49 - Theoretical method
15:54 - ART, learning, and consciousness
22:58 - Conscious vs. unconscious resonance
26:56 - György Buzsáki question
30:04 - Remaining mysteries in visual system
35:16 - John Krakauer question
39:12 - Jay McClelland question
51:34 - Any missing principles to explain human cognition?
1:00:16 - Importance of an early good career start
1:06:50 - Has modeling training caught up to experiment training?
1:17:12 - Universal development code
10/2/2021 • 1 hour, 23 minutes, 41 seconds
BI 114 Mark Sprevak and Mazviita Chirimuuta: Computation and the Mind
Support the show to get full episodes and join the Discord community.
Mark and Mazviita discuss the philosophy and science of mind, and how to think about computations with respect to understanding minds. Current approaches to explaining brain function are dominated by computational models and the computer metaphor for brain and mind. But there are alternative ways to think about the relation between computations and brain function, which we explore in the discussion. We also talk about the role of philosophy broadly and with respect to mind sciences, pluralism and perspectival approaches to truth and understanding, the prospects and desirability of naturalizing representations (accounting for how brain representations relate to the natural world), and much more.
Mark's website.
Mazviita's University of Edinburgh page.
Twitter (Mark): @msprevak.
Mazviita's previous Brain Inspired episode: BI 072 Mazviita Chirimuuta: Understanding, Prediction, and Reality.
The related book we discuss: The Routledge Handbook of the Computational Mind (2018), Mark Sprevak and Matteo Colombo (Editors).
0:00 - Intro
5:26 - Philosophy contributing to mind science
15:45 - Trend toward hyperspecialization
21:38 - Practice-focused philosophy of science
30:42 - Computationalism
33:05 - Philosophy of mind: identity theory, functionalism
38:18 - Computations as descriptions
41:27 - Pluralism and perspectivalism
54:18 - How much of brain function is computation?
1:02:11 - AI as computationalism
1:13:28 - Naturalizing representations
1:30:08 - Are you doing it right?
9/22/2021 • 1 hour, 38 minutes, 7 seconds
BI 113 David Barack and John Krakauer: Two Views On Cognition
Support the show to get full episodes and join the Discord community.
David and John discuss some of the concepts from their recent paper Two Views on the Cognitive Brain, in which they argue the recent population-based dynamical systems approach is a promising route to understanding brain activity underpinning higher cognition. We discuss mental representations, the kinds of dynamical objects being used for explanation, and much more, including David's perspectives as a practicing neuroscientist and philosopher.
David's webpage.
John's Lab.
Twitter: David: @DLBarack; John: @blamlab
Paper: Two Views on the Cognitive Brain.
John's previous episodes:
BI 025 John Krakauer: Understanding Cognition
BI 077 David and John Krakauer: Part 1
BI 078 David and John Krakauer: Part 2
Timestamps
0:00 - Intro
3:13 - David's philosophy and neuroscience experience
20:01 - Renaissance person
24:36 - John's medical training
31:58 - Two Views on the Cognitive Brain
44:18 - Representation
49:37 - Studying populations of neurons
1:05:17 - What counts as representation
1:18:49 - Does this approach matter for AI?
9/12/2021 • 1 hour, 30 minutes, 38 seconds
BI ViDA Panel Discussion: Deep RL and Dopamine
9/2/2021 • 57 minutes, 25 seconds
BI 112 Ali Mohebi and Ben Engelhard: The Many Faces of Dopamine
8/26/2021 • 1 hour, 13 minutes, 56 seconds
BI NMA 06: Advancing Neuro Deep Learning Panel
8/19/2021 • 1 hour, 20 minutes, 32 seconds
BI NMA 05: NLP and Generative Models Panel
8/13/2021 • 1 hour, 23 minutes, 50 seconds
BI NMA 04: Deep Learning Basics Panel
8/6/2021 • 59 minutes, 21 seconds
BI 111 Kevin Mitchell and Erik Hoel: Agency, Emergence, Consciousness
Erik, Kevin, and I discuss... well, a lot of things.
Erik's recent novel The Revelations is a story about a group of neuroscientists trying to develop a good theory of consciousness (with a murder mystery plot).
Kevin's book Innate - How the Wiring of Our Brains Shapes Who We Are describes the messy process of getting from DNA, traversing epigenetics and development, to our personalities.
We talk about both books, then dive deeper into topics like whether brains evolved for moving our bodies vs. consciousness, how information theory is lending insights to emergent phenomena, and the role of agency with respect to what counts as intelligence.
Kevin's website.
Erik's website.
Twitter: @WiringtheBrain (Kevin); @erikphoel (Erik)
Books:
INNATE – How the Wiring of Our Brains Shapes Who We Are
The Revelations
Papers (Erik):
Falsification and consciousness.
The emergence of informative higher scales in complex networks.
Emergence as the conversion of information: A unifying theory.
Timestamps
0:00 - Intro
3:28 - The Revelations - Erik's novel
15:15 - Innate - Kevin's book
22:56 - Cycle of progress
29:05 - Brains for movement or consciousness?
46:46 - Freud's influence
59:18 - Theories of consciousness
1:02:02 - Meaning and emergence
1:05:50 - Reduction in neuroscience
1:23:03 - Micro and macro - emergence
1:29:35 - Agency and intelligence
7/28/2021 • 1 hour, 38 minutes, 4 seconds
BI NMA 03: Stochastic Processes Panel
Panelists:
Yael Niv: @yael_niv
Konrad Kording: @KordingLab
Previous BI episodes:
BI 027 Ioana Marinescu & Konrad Kording: Causality in Quasi-Experiments
BI 014 Konrad Kording: Regulators, Mount Up!
Sam Gershman: @gershbrain
Previous BI episodes:
BI 095 Chris Summerfield and Sam Gershman: Neuro for AI?
BI 028 Sam Gershman: Free Energy Principle & Human Machines
Tim Behrens: @behrenstim
Previous BI episodes:
BI 035 Tim Behrens: Abstracting & Generalizing Knowledge, & Human Replay
BI 024 Tim Behrens: Cognitive Maps
This is the third in a series of panel discussions in collaboration with Neuromatch Academy, the online computational neuroscience summer school. In this episode, the panelists discuss their experiences with stochastic processes, including Bayes, decision-making, optimal control, reinforcement learning, and causality.
The other panels:
First panel, about model fitting, GLMs/machine learning, dimensionality reduction, and deep learning.
Second panel, about linear systems, real neurons, and dynamic networks.
Fourth panel, about basics in deep learning, including linear deep learning, PyTorch, multi-layer perceptrons, optimization, & regularization.
Fifth panel, about "doing more with fewer parameters": convnets, RNNs, attention & transformers, generative models (VAEs & GANs).
Sixth panel, about advanced topics in deep learning: unsupervised & self-supervised learning, reinforcement learning, continual learning/causality.
7/22/2021 • 1 hour, 48 seconds
BI NMA 02: Dynamical Systems Panel
Panelists:
Adrienne Fairhall: @alfairhall
Bing Brunton: @bingbrunton
Kanaka Rajan: @rajankdr
BI 054 Kanaka Rajan: How Do We Switch Behaviors?
This is the second in a series of panel discussions in collaboration with Neuromatch Academy, the online computational neuroscience summer school. In this episode, the panelists discuss their experiences with linear systems, real neurons, and dynamic networks.
Other panels:
First panel, about model fitting, GLMs/machine learning, dimensionality reduction, and deep learning.
Third panel, about stochastic processes, including Bayes, decision-making, optimal control, reinforcement learning, and causality.
Fourth panel, about basics in deep learning, including linear deep learning, PyTorch, multi-layer perceptrons, optimization, & regularization.
Fifth panel, about "doing more with fewer parameters": convnets, RNNs, attention & transformers, generative models (VAEs & GANs).
Sixth panel, about advanced topics in deep learning: unsupervised & self-supervised learning, reinforcement learning, continual learning/causality.
7/15/2021 • 1 hour, 15 minutes, 28 seconds
BI NMA 01: Machine Learning Panel
Panelists:
Athena Akrami: @AthenaAkrami
Demba Ba
Gunnar Blohm: @GunnarBlohm
Kunlin Wei
This is the first in a series of panel discussions in collaboration with Neuromatch Academy, the online computational neuroscience summer school. In this episode, the panelists discuss their experiences with model fitting, GLMs/machine learning, dimensionality reduction, and deep learning.
Other panels:
Second panel, about linear systems, real neurons, and dynamic networks.
Third panel, about stochastic processes, including Bayes, decision-making, optimal control, reinforcement learning, and causality.
Fourth panel, about basics in deep learning, including linear deep learning, PyTorch, multi-layer perceptrons, optimization, & regularization.
Fifth panel, about "doing more with fewer parameters": convnets, RNNs, attention & transformers, generative models (VAEs & GANs).
Sixth panel, about advanced topics in deep learning: unsupervised & self-supervised learning, reinforcement learning, continual learning/causality.
7/12/2021 • 1 hour, 27 minutes, 12 seconds
BI 110 Catherine Stinson and Jessica Thompson: Neuro-AI Explanation
Catherine, Jess, and I use some of the ideas from their recent papers to discuss how different types of explanations in neuroscience and AI could be unified into explanations of intelligence, natural or artificial. Catherine has written about how models are related to the target system they are built to explain. She suggests both the model and the target system should be considered as instantiations of a specific kind of phenomenon, and explanation is a product of relating the model and the target system to that specific aspect they both share. Jess has suggested we shift our focus of explanation from objects - like a brain area or a deep learning model - to the shared class of phenomena those objects perform. Doing so may help bridge the gap between the different forms of explanation currently used in neuroscience and AI. We also discuss Henk de Regt's conception of scientific understanding and its relation to explanation (they're different!), and plenty more.
Catherine's website.
Jessica's blog.
Twitter: Jess: @tsonj.
Related papers:
From Implausible Artificial Neurons to Idealized Cognitive Models: Rebooting Philosophy of Artificial Intelligence (Catherine)
Forms of explanation and understanding for neuroscience and artificial intelligence (Jess)
Jess is a postdoc in Chris Summerfield's lab; Chris and Sam Gershman were on a recent episode.
Understanding Scientific Understanding by Henk de Regt.
Timestamps:
0:00 - Intro
11:11 - Background and approaches
27:00 - Understanding distinct from explanation
36:00 - Explanations as programs (early explanation)
40:42 - Explaining classes of phenomena
52:05 - Constitutive (neuro) vs. etiological (AI) explanations
1:04:04 - Do nonphysical objects count for explanation?
1:10:51 - Advice for early philosopher/scientists
7/6/2021 • 1 hour, 25 minutes, 2 seconds
BI 109 Mark Bickhard: Interactivism
Mark and I discuss a wide range of topics surrounding his Interactivism framework for explaining cognition. Interactivism stems from Mark's account of representations and how what we represent in our minds relates to the external world - a challenge that has plagued the mind-body problem from the beginning. Basically, representations are anticipated interactions with the world, which can be true (if enacting one helps an organism maintain its thermodynamic relation with the world) or false (if it doesn't). And representations are functional, in that they serve to keep the organism far from thermodynamic equilibrium - that is, to maintain itself. Over the years, Mark has filled out Interactivism, starting with a process-metaphysics foundation and building from there to account for representations, how our brains might implement them, and why AI is hindered by our modern "encoding" version of representation. We also compare Interactivism to other similar frameworks, like enactivism, predictive processing, and the free energy principle.
For related discussions on the foundations (and issues of) representations, check out episode 60 with Michael Rescorla, episode 61 with Jörn Diedrichsen and Niko Kriegeskorte, and especially episode 79 with Romain Brette.
Mark's website.
Related papers:
Interactivism: A manifesto.
Plenty of other papers available via his website.
Also mentioned:
The First Half Second: The Microgenesis and Temporal Dynamics of Unconscious and Conscious Visual Processes (2006), Haluk Ögmen and Bruno G. Breitmeyer.
Maiken Nedergaard's work on sleep.
Timestamps
0:00 - Intro
5:06 - Previous and upcoming book
9:17 - Origins of Mark's thinking
14:31 - Process vs. substance metaphysics
27:10 - Kinds of emergence
32:16 - Normative emergence to normative function and representation
36:33 - Representation in Interactivism
46:07 - Situation knowledge
54:02 - Interactivism vs. Enactivism
1:09:37 - Interactivism vs Predictive/Bayesian brain
1:17:39 - Interactivism vs. Free energy principle
1:21:56 - Microgenesis
1:33:11 - Implications for neuroscience
1:38:18 - Learning as variation and selection
1:45:07 - Implications for AI
1:55:06 - Everything is a clock
1:58:14 - Is Mark a philosopher?
6/26/2021 • 2 hours, 3 minutes, 43 seconds
BI 108 Grace Lindsay: Models of the Mind
Grace's website.
Twitter: @neurograce.
Models of the Mind: How Physics, Engineering and Mathematics Have Shaped Our Understanding of the Brain.
We talked about Grace's work using convolutional neural networks to study vision and attention way back on episode 11.
Grace and I discuss her new book Models of the Mind, about the blossoming and conceptual foundations of the computational approach to studying minds and brains. Each chapter of the book focuses on one major topic and provides historical context, the major concepts that connect models to brain functions, and the current landscape of related research. We cover a handful of those during the episode, including the birth of AI, the difference between math in physics and neuroscience, determining the neural code and how Shannon information theory plays a role, whether it's possible to guess a brain function based on what we know about some brain structure, and "grand unified theories" of the brain. We also digress and explore topics beyond the book.
Timestamps
0:00 - Intro
4:19 - Cognition beyond vision
12:38 - Models of the Mind - book overview
14:00 - The good and bad of using math
21:33 - I quiz Grace on her own book
25:03 - Birth of AI and computational approach
38:00 - Rediscovering old math for new neuroscience
41:00 - Topology as good math to know now
45:29 - Physics vs. neuroscience math
49:32 - Neural code and information theory
55:03 - Rate code vs. timing code
59:18 - Graph theory - can you deduce function from structure?
1:06:56 - Multiple realizability
1:13:01 - Grand Unified theories of the brain
6/16/2021 • 1 hour, 26 minutes, 12 seconds
BI 107 Steve Fleming: Know Thyself
Steve and I discuss many topics from his new book Know Thyself: The Science of Self-Awareness. The book covers the full range of what we know about metacognition and self-awareness, including how brains might underlie metacognitive behavior, computational models to explain mechanisms of metacognition, how and why self-awareness evolved, which animals beyond humans harbor metacognition and how to test it, its role and potential origins in theory of mind and social interaction, how our metacognitive skills develop over our lifetimes, what our metacognitive skill tells us about our other psychological traits, and so on. We also discuss what it might look like when we are able to build metacognitive AI, and whether that's even a good idea.
Steve's lab: The MetaLab.
Twitter: @smfleming.
Steve and Hakwan Lau on episode 99 about consciousness.
Papers:
Metacognitive training: Domain-General Enhancements of Metacognitive Ability Through Adaptive Training.
The book: Know Thyself: The Science of Self-Awareness.
Timestamps
0:00 - Intro
3:25 - Steve's Career
10:43 - Sub-personal vs. personal metacognition
17:55 - Meditation and metacognition
20:51 - Replay tools for mind-wandering
30:56 - Evolutionary cultural origins of self-awareness
45:02 - Animal metacognition
54:25 - Aging and self-awareness
58:32 - Is more always better?
1:00:41 - Political dogmatism and overconfidence
1:08:56 - Reliance on AI
1:15:15 - Building self-aware AI
1:23:20 - Future evolution of metacognition
6/6/2021 • 1 hour, 29 minutes, 24 seconds
BI 106 Jacqueline Gottlieb and Robert Wilson: Deep Curiosity
Jackie and Bob discuss their research and thinking about curiosity.
Jackie's background is in studying decision making and attention by recording neurons in nonhuman primates during eye-movement tasks, and she's broadly interested in how we adapt our ongoing behavior. Curiosity is crucial for this, so she has recently focused on behavioral strategies for exercising curiosity, developing tasks that test exploration, information sampling, uncertainty reduction, and intrinsic motivation.
Bob's background is developing computational models of reinforcement learning (including the exploration-exploitation tradeoff) and decision making, and he uses behavior and neuroimaging data from humans to test the models. He's broadly interested in how, and whether, we can understand brains and cognition using mathematical models. Recently he's been working on a model of curiosity known as deep exploration, which suggests we make decisions by deeply simulating a handful of scenarios and choosing based on the simulation outcomes.
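That simulate-and-choose idea can be sketched in a few lines. Everything below is a toy stand-in, not Bob's actual model: the reward means, noise level, and rollout count are made-up values purely for illustration.

```python
import random

def simulate_outcome(action, rng):
    # Hypothetical noisy reward model: action 1 is better on average.
    means = {0: 0.2, 1: 0.8}
    return means[action] + rng.gauss(0, 0.1)

def choose_by_deep_simulation(actions, n_rollouts=5, seed=0):
    """Simulate a handful of scenarios per action, then pick the action
    whose simulated outcomes look best (deep exploration in spirit)."""
    rng = random.Random(seed)
    scores = {}
    for a in actions:
        rollouts = [simulate_outcome(a, rng) for _ in range(n_rollouts)]
        scores[a] = sum(rollouts) / n_rollouts
    return max(scores, key=scores.get)

best = choose_by_deep_simulation([0, 1])
print(best)  # action 1, whose simulated mean reward is higher
```

The point of the sketch is just that the decision falls out of a few simulated futures rather than a learned value table.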
We also discuss how one should go about their career (qua curiosity), how eye movements compare with other windows into cognition, and whether we can and should create curious AI agents (Bob is an emphatic yes, and Jackie is slightly worried that will be the time to worry about AI).
Jackie's lab: Jacqueline Gottlieb Laboratory at Columbia University.
Bob's lab: Neuroscience of Reinforcement Learning and Decision Making.
Twitter: Bob: @NRDLab (Jackie's not on Twitter).
Related papers:
Curiosity, information demand and attentional priority.
Balancing exploration and exploitation with information and randomization.
Deep exploration as a unifying account of explore-exploit behavior.
Bob mentions an influential talk by Benjamin Van Roy: Generalization and Exploration via Value Function Randomization.
Bob mentions his paper with Anne Collins: Ten simple rules for the computational modeling of behavioral data.
Timestamps:
0:00 - Intro
4:15 - Central scientific interests
8:32 - Advent of mathematical models
12:15 - Career exploration vs. exploitation
28:03 - Eye movements and active sensing
35:53 - Status of eye movements in neuroscience
44:16 - Why are we curious?
50:26 - Curiosity vs. Exploration vs. Intrinsic motivation
1:02:35 - Directed vs. random exploration
1:06:16 - Deep exploration
1:12:52 - How to know what to pay attention to
1:19:49 - Does AI need curiosity?
1:26:29 - What trait do you wish you had more of?
5/27/2021 • 1 hour, 31 minutes, 53 seconds
BI 105 Sanjeev Arora: Off the Convex Path
Sanjeev and I discuss some of the progress toward understanding how deep learning works, especially given earlier assumptions that it wouldn't or shouldn't work as well as it does. Deep learning poses a challenge for mathematics, because its methods aren't rooted in mathematical theory and are therefore a "black box" for math to open. We discuss why Sanjeev thinks optimization, the common framework for thinking about how deep nets learn, is the wrong approach. Instead, a promising alternative focuses on the learning trajectories that result from different learning algorithms. We discuss two examples from his research to illustrate this: creating deep nets with infinitely wide layers (and the networks still find solutions among the infinite possible solutions!), and massively increasing the learning rate during training (the opposite of accepted wisdom, and yet, again, the network finds solutions!). We also discuss his past focus on computational complexity and why he doesn't share the current neuroscience optimism comparing brains to deep nets.
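To make the "massively increasing learning rate" idea concrete: an exponential schedule simply multiplies the rate by a fixed factor each step, so it grows rather than decays. This is a minimal sketch of that schedule shape only; the initial rate and growth factor here are made-up values, not the recipe from Sanjeev's paper.

```python
def exponential_lr_schedule(lr0, alpha, num_steps):
    """Return the learning rate at each step: lr_t = lr0 * alpha**t.
    With alpha > 1 the rate grows exponentially during training."""
    return [lr0 * alpha**t for t in range(num_steps)]

rates = exponential_lr_schedule(lr0=0.1, alpha=1.1, num_steps=5)
print(rates)  # each step's rate is 1.1x the previous one
```

The surprise the episode discusses is that training with a growing rate like this can still find good solutions, contrary to the usual advice to decay the rate over time.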
Sanjeev's website.
His research group website.
His blog: Off The Convex Path.
Papers we discuss:
On Exact Computation with an Infinitely Wide Neural Net.
An Exponential Learning Rate Schedule for Deep Learning.
Related:
The episode with Andrew Saxe (episode 52) covers related deep learning theory.
Omri Barak discusses the importance of learning trajectories for understanding RNNs in episode 97.
Sanjeev mentions Christos Papadimitriou.
Timestamps
0:00 - Intro
7:32 - Computational complexity
12:25 - Algorithms
13:45 - Deep learning vs. traditional optimization
17:01 - Evolving view of deep learning
18:33 - Reproducibility crisis in AI?
21:12 - Surprising effectiveness of deep learning
27:50 - "Optimization" isn't the right framework
30:08 - Infinitely wide nets
35:41 - Exponential learning rates
42:39 - Data as the next frontier
44:12 - Neuroscience and AI differences
47:13 - Focus on algorithms, architecture, and objective functions
55:50 - Advice for deep learning theorists
58:05 - Decoding minds
5/17/2021 • 1 hour, 1 minute, 43 seconds
BI 104 John Kounios and David Rosen: Creativity, Expertise, Insight
What is creativity? How do we measure it? How do our brains implement it, and how might AI? Those are some of the questions John, David, and I discuss. The neuroscience of creativity is young, still in its "wild west" days. We talk about a few creativity studies they've performed that distinguish different creative processes with respect to different levels of expertise (in this case, in jazz improvisation), along with the underlying brain circuits and activity, including using transcranial direct current stimulation to alter the creative process. Related to creativity, we also discuss the phenomenon and neuroscience of insight (the topic of John's book, The Eureka Factor), unconscious automatic type 1 processes versus conscious deliberate type 2 processes, states of flow, creative process versus creative products, and a lot more.
John Kounios.
Secret Chord Laboratories (David's company).
Twitter: @JohnKounios; @NeuroBassDave.
John's book (with Mark Beeman) on insight and creativity: The Eureka Factor: Aha Moments, Creative Insight, and the Brain.
The papers we discuss or mention:
All You Need to Do Is Ask? The Exhortation to Be Creative Improves Creative Performance More for Nonexpert Than Expert Jazz Musicians.
Anodal tDCS to Right Dorsolateral Prefrontal Cortex Facilitates Performance for Novice Jazz Improvisers but Hinders Experts.
Dual-process contributions to creativity in jazz improvisations: An SPM-EEG study.
Timestamps
0:00 - Intro
16:20 - Where are we broadly in science of creativity?
18:23 - Origins of creativity research
22:14 - Divergent and convergent thought
26:31 - Secret Chord Labs
32:40 - Familiar surprise
38:55 - The Eureka Factor
42:27 - Dual process model
52:54 - Creativity and jazz expertise
55:53 - "Be creative" behavioral study
59:17 - Stimulating the creative brain
1:02:04 - Brain circuits underlying creativity
1:14:36 - What does this tell us about creativity?
1:16:48 - Intelligence vs. creativity
1:18:25 - Switching between creative modes
1:25:57 - Flow states and insight
1:34:29 - Creativity and insight in AI
1:43:26 - Creative products vs. process
5/7/2021 • 1 hour, 50 minutes, 32 seconds
BI 103 Randal Koene and Ken Hayworth: The Road to Mind Uploading
Randal, Ken, and I discuss a host of topics around the future goal of uploading our minds into non-brain systems, to continue our mental lives and expand our range of experiences. The basic requirement for such a substrate-independent mind is whole brain emulation, and we discuss two basic approaches to it. The "scan and copy" approach proposes we somehow scan the entire structure of our brains (at whatever scale is necessary) and store that scan until some future date when we have figured out how to use that information to build a substrate that can house the mind. The "gradual replacement" approach proposes we slowly replace parts of the brain with functioning alternative machines, eventually replacing the entire brain with non-biological material while retaining a functioning mind.
Randal and Ken are neuroscientists who understand the magnitude and challenges of a massive project like mind uploading, who also understand what we can do right now, with current technology, to advance toward that lofty goal, and who are thoughtful about what steps we need to take to enable further advancements.
Randal A. Koene
Twitter: @randalkoene
Carboncopies Foundation.
Randal's website.
Ken Hayworth
Twitter: @KennethHayworth
Brain Preservation Foundation.
YouTube videos.
Timestamps
0:00 - Intro
6:14 - What Ken wants
11:22 - What Randal wants
22:29 - Brain preservation
27:18 - Aldehyde stabilized cryopreservation
31:51 - Scan and copy vs. gradual replacement
38:25 - Building a roadmap
49:45 - Limits of current experimental paradigms
53:51 - Our evolved brains
1:06:58 - Counterarguments
1:10:31 - Animal models for whole brain emulation
1:15:01 - Understanding vs. emulating brains
1:22:37 - Current challenges
4/26/2021 • 1 hour, 27 minutes, 26 seconds
BI 102 Mark Humphries: What Is It Like To Be A Spike?
Mark and I discuss his book, The Spike: An Epic Journey Through the Brain in 2.1 Seconds. It chronicles how a series of action potentials fire through the brain in a couple seconds of someone's life. Starting with light hitting the retina as a person looks at a cookie, Mark describes how that light gets translated into spikes, how those spikes get processed in our visual system and eventually transform into motor commands to grab that cookie. Along the way, he describes some of the big ideas throughout the history of studying brains (like the mechanisms to explain how neurons seem to fire so randomly), the big mysteries we currently face (like why do so many neurons do so little?), and some of the main theories to explain those mysteries (we're prediction machines!). A fun read and discussion. This is Mark's second time on the podcast - he was on episode 4 in the early days, talking more in depth about some of the work we discuss in this episode!
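The light-to-spikes translation the book opens with is often introduced via the leaky integrate-and-fire model: inputs accumulate on a leaky membrane potential, and the neuron emits a spike whenever the potential crosses threshold. A minimal sketch, with illustrative parameter values not taken from the book:

```python
def lif_spikes(inputs, threshold=1.0, leak=0.9):
    """Leaky integrate-and-fire: decay the membrane potential by `leak`
    each step, add the input, and emit a spike (1) with a reset to 0
    whenever the potential reaches `threshold`; otherwise emit 0."""
    v, spikes = 0.0, []
    for i in inputs:
        v = leak * v + i
        if v >= threshold:
            spikes.append(1)
            v = 0.0
        else:
            spikes.append(0)
    return spikes

print(lif_spikes([0.5, 0.5, 0.5, 0.0, 0.6, 0.6]))  # → [0, 0, 1, 0, 0, 1]
```

Sub-threshold inputs only spike once they pile up faster than the leak drains them, which is the simplest version of the integration Mark's spike undergoes at every synapse along its journey.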
The Humphries Lab.
Twitter: @markdhumphries
Book: The Spike: An Epic Journey Through the Brain in 2.1 Seconds.
Related papers:
A spiral attractor network drives rhythmic locomotion.
Timestamps:
0:00 - Intro
3:25 - Writing a book
15:37 - Mark's main interest
19:41 - Future explanation of brain/mind
27:00 - Stochasticity and excitation/inhibition balance
36:56 - Dendritic computation for network dynamics
39:10 - Do details matter for AI?
44:06 - Spike failure
51:12 - Dark neurons
1:07:57 - Intrinsic spontaneous activity
1:16:16 - Best scientific moment
1:23:58 - Failure
1:28:45 - Advice
4/16/2021 • 1 hour, 32 minutes, 20 seconds
BI 101 Steve Potter: Motivating Brains In and Out of Dishes
Steve and I discuss his book, How to Motivate Your Students to Love Learning, which is both a memoir and a guide for teachers and students to optimize the learning experience for intrinsic motivation. Steve taught neuroscience and engineering courses while running his own lab studying the activity of live cultured neural populations (which we discuss at length in his previous episode). He relentlessly tested and tweaked his teaching methods, including constant feedback from the students, to optimize their learning experiences. He settled on real-world, project-based learning approaches, like writing wikipedia articles and helping groups of students design and carry out their own experiments. We discuss that, plus the science behind learning, principles important for motivating students and maintaining that motivation, and many of the other valuable insights he shares in the book.
The first half of the episode we discuss diverse neuroscience and AI topics, like brain organoids, mind-uploading, synaptic plasticity, and more. Then we discuss many of the stories and lessons from his book, which I recommend for teachers, mentors, and life-long students who want to ensure they're optimizing their own learning.
Potter Lab.
Twitter: @stevempotter.
The book: How to Motivate Your Students to Love Learning.
The glial cell activity movie.
0:00 - Intro
6:38 - Brain organoids
18:48 - Glial cell plasticity
24:50 - Whole brain emulation
35:28 - Industry vs. academia
45:32 - Intro to book: How To Motivate Your Students To Love Learning
48:29 - Steve's childhood influences
57:21 - Developing one's own intrinsic motivation
1:02:30 - Real-world assignments
1:08:00 - Keys to motivation
1:11:50 - Peer pressure
1:21:16 - Autonomy
1:25:38 - Wikipedia real-world assignment
1:33:12 - Relation to running a lab
4/6/2021 • 1 hour, 45 minutes, 22 seconds
BI 100.6 Special: Do We Have the Right Vocabulary and Concepts?
We made it to the last bit of our 100th episode celebration. These have been super fun for me, and I hope you've enjoyed the collections as well. If you're wondering where the missing 5th part is, I reserved it exclusively for Brain Inspired's magnificent Patreon supporters (thanks guys!!!!). The final question I sent to previous guests:
Do we already have the right vocabulary and concepts to explain how brains and minds are related? Why or why not?
Timestamps:
0:00 - Intro
5:04 - Andrew Saxe
7:04 - Thomas Naselaris
7:46 - John Krakauer
9:03 - Federico Turkheimer
11:57 - Steve Potter
13:31 - David Krakauer
17:22 - Dean Buonomano
20:28 - Konrad Kording
22:00 - Uri Hasson
23:15 - Rodrigo Quian Quiroga
24:41 - Jim DiCarlo
25:26 - Marcel van Gerven
28:02 - Mazviita Chirimuuta
29:27 - Brad Love
31:23 - Patrick Mayo
32:30 - György Buzsáki
37:07 - Pieter Roelfsema
37:26 - David Poeppel
40:22 - Paul Cisek
44:52 - Talia Konkle
47:03 - Steve Grossberg
3/28/2021 • 50 minutes, 3 seconds
BI 100.4 Special: What Ideas Are Holding Us Back?
In the 4th installment of our 100th episode celebration, previous guests responded to the question:
What ideas, assumptions, or terms do you think are holding back neuroscience/AI, and why?
As usual, the responses are varied and wonderful!
Timestamps:
0:00 - Intro
6:41 - Pieter Roelfsema
7:52 - Grace Lindsay
10:23 - Marcel van Gerven
11:38 - Andrew Saxe
14:05 - Jane Wang
16:50 - Thomas Naselaris
18:14 - Steve Potter
19:18 - Kendrick Kay
22:17 - Blake Richards
27:52 - Jay McClelland
30:13 - Jim DiCarlo
31:17 - Talia Konkle
33:27 - Uri Hasson
35:37 - Wolfgang Maass
38:48 - Paul Cisek
40:41 - Patrick Mayo
41:51 - Konrad Kording
43:22 - David Poeppel
44:22 - Brad Love
46:47 - Rodrigo Quian Quiroga
47:36 - Steve Grossberg
48:47 - Mark Humphries
52:35 - John Krakauer
55:13 - György Buzsáki
59:50 - Stefan Leijnen
1:02:18 - Nathaniel Daw
3/21/2021 • 1 hour, 4 minutes, 26 seconds
BI 100.3 Special: Can We Scale Up to AGI with Current Tech?
Part 3 in our 100th episode celebration. Previous guests answered the question:
Given the continual surprising progress in AI powered by scaling up parameters and using more compute, while using fairly generic architectures (e.g., GPT-3):
Do you think the current trend of scaling compute can lead to human level AGI? If not, what's missing?
It likely won't surprise you that the vast majority answer "No." It also likely won't surprise you that there are differing opinions on what's missing.
Timestamps:
0:00 - Intro
3:56 - Wolfgang Maass
5:34 - Paul Humphreys
9:16 - Chris Eliasmith
12:52 - Andrew Saxe
16:25 - Mazviita Chirimuuta
18:11 - Steve Potter
19:21 - Blake Richards
22:33 - Paul Cisek
26:24 - Brad Love
29:12 - Jay McClelland
34:20 - Megan Peters
37:00 - Dean Buonomano
39:48 - Talia Konkle
40:36 - Steve Grossberg
42:40 - Nathaniel Daw
44:02 - Marcel van Gerven
45:28 - Kanaka Rajan
48:25 - John Krakauer
51:05 - Rodrigo Quian Quiroga
53:03 - Grace Lindsay
55:13 - Konrad Kording
57:30 - Jeff Hawkins
1:02:12 - Uri Hasson
1:04:08 - Jess Hamrick
1:06:20 - Thomas Naselaris
3/17/2021 • 1 hour, 8 minutes, 43 seconds
BI 100.2 Special: What Are the Biggest Challenges and Disagreements?
In this 2nd special 100th episode installment, many previous guests answer the question: What is currently the most important disagreement or challenge in neuroscience and/or AI, and what do you think the right answer or direction is? The variety of answers is itself revealing, and highlights how many interesting problems there are to work on.
Timestamps:
0:00 - Intro
7:10 - Rodrigo Quian Quiroga
8:33 - Mazviita Chirimuuta
9:15 - Chris Eliasmith
12:50 - Jim DiCarlo
13:23 - Paul Cisek
16:42 - Nathaniel Daw
17:58 - Jessica Hamrick
19:07 - Russ Poldrack
20:47 - Pieter Roelfsema
22:21 - Konrad Kording
25:16 - Matt Smith
27:55 - Rafal Bogacz
29:17 - John Krakauer
30:47 - Marcel van Gerven
31:49 - György Buzsáki
35:38 - Thomas Naselaris
36:55 - Steve Grossberg
48:32 - David Poeppel
49:24 - Patrick Mayo
50:31 - Stefan Leijnen
54:24 - David Krakauer
58:13 - Wolfgang Maass
59:13 - Uri Hasson
59:50 - Steve Potter
1:01:50 - Talia Konkle
1:04:30 - Matt Botvinick
1:06:36 - Brad Love
1:09:46 - Jon Brennan
1:19:31 - Grace Lindsay
1:22:28 - Andrew Saxe
3/12/2021 • 1 hour, 25 minutes
BI 100.1 Special: What Has Improved Your Career or Well-being?
Brain Inspired turns 100 (episodes) today! To celebrate, my patreon supporters helped me create a list of questions to ask my previous guests, many of whom contributed by answering any or all of the questions. I've collected all their responses into separate little episodes, one for each question. Starting with a light-hearted (but quite valuable) one, this episode has responses to the question, "In the last five years, what new belief, behavior, or habit has most improved your career or well being?" See below for links to each previous guest. And away we go...
Timestamps:
0:00 - Intro
6:13 - David Krakauer
8:50 - David Poeppel
9:32 - Jay McClelland
11:03 - Patrick Mayo
11:45 - Marcel van Gerven
12:11 - Blake Richards
12:25 - John Krakauer
14:22 - Nicole Rust
15:26 - Megan Peters
17:03 - Andrew Saxe
18:11 - Federico Turkheimer
20:03 - Rodrigo Quian Quiroga
22:03 - Thomas Naselaris
23:09 - Steve Potter
24:37 - Brad Love
27:18 - Steve Grossberg
29:04 - Talia Konkle
29:58 - Paul Cisek
32:28 - Kanaka Rajan
34:33 - Grace Lindsay
35:40 - Konrad Kording
36:30 - Mark Humphries
3/9/2021 • 42 minutes, 32 seconds
BI 099 Hakwan Lau and Steve Fleming: Neuro-AI Consciousness
Hakwan, Steve, and I discuss many issues around the scientific study of consciousness. Steve and Hakwan focus on higher order theories (HOTs) of consciousness, related to metacognition. So we discuss HOTs in particular and their relation to other approaches/theories, the idea of approaching consciousness as a computational problem to be tackled with computational modeling, we talk about the cultural, social, and career aspects of choosing to study something as elusive and controversial as consciousness, we talk about two of the models they're working on now to account for various properties of conscious experience, and, of course, the prospects of consciousness in AI. For more on metacognition and awareness, check out episode 73 with Megan Peters.
Hakwan's lab: Consciousness and Metacognition Lab.
Steve's lab: The MetaLab.
Twitter: @hakwanlau; @smfleming.
Hakwan's brief Aeon article: Is consciousness a battle between your beliefs and perceptions?
Related papers
An Informal Internet Survey on the Current State of Consciousness Science.
Opportunities and challenges for a maturing science of consciousness.
What is consciousness, and could machines have it?
Understanding the higher-order approach to consciousness.
Awareness as inference in a higher-order state space. (Steve's Bayesian predictive generative model)
Consciousness, Metacognition, & Perceptual Reality Monitoring. (Hakwan's reality-monitoring model a la generative adversarial networks)
Timestamps
0:00 - Intro
7:25 - Steve's upcoming book
8:40 - Challenges to study consciousness
15:50 - Gurus and backscratchers
23:58 - Will the problem of consciousness disappear?
27:52 - Will an explanation feel intuitive?
29:54 - What do you want to be true?
38:35 - Lucid dreaming
40:55 - Higher order theories
50:13 - Reality monitoring model of consciousness
1:00:15 - Higher order state space model of consciousness
1:05:50 - Comparing their models
1:10:47 - Machine consciousness
1:15:30 - Nature of first order representations
1:18:20 - Consciousness prior (Yoshua Bengio)
1:20:20 - Function of consciousness
1:31:57 - Legacy
1:40:55 - Current projects
2/28/2021 • 1 hour, 46 minutes, 35 seconds
BI 098 Brian Christian: The Alignment Problem
Brian and I discuss a range of topics related to his latest book, The Alignment Problem: Machine Learning and Human Values. The alignment problem asks how we can build AI that does what we want it to do, as opposed to building AI that will compromise our own values by accomplishing tasks that may be harmful or dangerous to us. Using some of the stories Brian relates in the book, we talk about:
The history of machine learning and how we got to this point;
Some methods researchers are creating to understand what's being represented in neural nets and how they generate their output;
Some modern proposed solutions to the alignment problem, like programming machines to learn our preferences so they can help achieve those preferences - an idea called inverse reinforcement learning;
The thorny issue of accurately knowing our own values - if we get those wrong, will machines also get them wrong?
Links:
Brian's website.
Twitter: @brianchristian.
The Alignment Problem: Machine Learning and Human Values.
Related papers
Norbert Wiener from 1960: Some Moral and Technical Consequences of Automation.
Timestamps:
4:22 - Increased work on AI ethics
8:59 - The Alignment Problem overview
12:36 - Stories as important for intelligence
16:50 - What is the alignment problem
17:37 - Who works on the alignment problem?
25:22 - AI ethics degree?
29:03 - Human values
31:33 - AI alignment and evolution
37:10 - Knowing our own values?
46:27 - What have we learned about ourselves?
58:51 - Interestingness
1:00:53 - Inverse RL for value alignment
1:04:50 - Current progress
1:10:08 - Developmental psychology
1:17:36 - Models as the danger
1:25:08 - How worried are the experts?
2/18/2021 • 1 hour, 32 minutes, 38 seconds
BI 097 Omri Barak and David Sussillo: Dynamics and Structure
Omri, David, and I discuss using recurrent neural network models (RNNs) to understand brains and brain function. Omri and David both use dynamical systems theory (DST) to describe how RNNs solve tasks, and to compare the dynamical structure/landscape/skeleton of RNNs with real neural population recordings. We talk about how their thoughts have evolved since their 2013 Opening the Black Box paper, which began these lines of research and thinking. Some of the other topics we discuss:
The idea of computation via dynamics, which sees computation as a process of evolving neural activity in a state space;
Whether DST offers a description of mental function (that is, something beyond brain function, closer to the psychological level);
The difference between classical approaches to modeling brains and the machine learning approach;
The concept of universality - that the variety of artificial RNNs and natural RNNs (brains) adhere to some similar dynamical structure despite differences in the computations they perform;
How learning is influenced by the dynamics in an ongoing and ever-changing manner, and how learning (a process) is distinct from optimization (a final trained state).
David was on episode 5, for a more introductory episode on dynamics, RNNs, and brains.
Barak Lab
Twitter: @SussilloDavid
The papers we discuss or mention:
Sussillo, D. & Barak, O. (2013). Opening the Black Box: Low-dimensional dynamics in high-dimensional recurrent neural networks.
Computation Through Neural Population Dynamics.
Implementing inductive bias for different navigation tasks through diverse RNN attractors.
Dynamics of random recurrent networks with correlated low-rank structure.
Quality of internal representation shapes learning performance in feedback neural networks.
Feigenbaum's universality constant original paper: Feigenbaum, M. J. (1976) "Universality in complex discrete dynamics", Los Alamos Theoretical Division Annual Report 1975-1976
Talks
Universality and individuality in neural dynamics across large populations of recurrent networks.
World Wide Theoretical Neuroscience Seminar: Omri Barak, January 6, 2021
Timestamps:
0:00 - Intro
5:41 - Best scientific moment
9:37 - Why do you do what you do?
13:21 - Computation via dynamics
19:12 - Evolution of thinking about RNNs and brains
26:22 - RNNs vs. minds
31:43 - Classical computational modeling vs. machine learning modeling approach
35:46 - What are models good for?
43:08 - Ecological task validity with respect to using RNNs as models
46:27 - Optimization vs. learning
49:11 - Universality
1:00:47 - Solutions dictated by tasks
1:04:51 - Multiple solutions to the same task
1:11:43 - Direct fit (Uri Hasson)
1:19:09 - Thinking about the bigger picture
2/8/2021 • 1 hour, 23 minutes, 57 seconds
BI 096 Keisuke Fukuda and Josh Cosman: Forking Paths
K, Josh, and I were postdocs together in Jeff Schall's and Geoff Woodman's labs. K and Josh had backgrounds in psychology and were getting their first experience with neurophysiology, recording single neuron activity in awake behaving primates. This episode is a discussion surrounding their reflections and perspectives on neuroscience and psychology, given their backgrounds and experience (we reference episode 84 with György Buzsáki and David Poeppel). We also talk about their divergent paths - K stayed in academia and runs an EEG lab studying human decision-making and memory, and Josh left academia and has worked for three different pharmaceutical and tech companies. So this episode doesn't get into gritty science questions, but is a light discussion about the state of neuroscience, psychology, and AI, and reflections on academia and industry, life in lab, and plenty more.
The Fukuda Lab.
Josh's website.
Twitter: @KeisukeFukuda4
Time stamps
0:00 - Intro
4:30 - K intro
5:30 - Josh Intro
10:16 - Academia vs. industry
16:01 - Concern with legacy
19:57 - Best scientific moment
24:15 - Experiencing neuroscience as a psychologist
27:20 - Neuroscience as a tool
30:38 - Brain/mind divide
33:27 - Shallow vs. deep knowledge in academia and industry
36:05 - Autonomy in industry
42:20 - Is this a turning point in neuroscience?
46:54 - Deep learning revolution
49:34 - Deep nets to understand brains
54:54 - Psychology vs. neuroscience
1:06:42 - Is language sufficient?
1:11:33 - Human-level AI
1:13:53 - How will history view our era of neuroscience?
1:23:28 - What would you have done differently?
1:26:46 - Something you wish you knew
1/29/2021 • 1 hour, 34 minutes, 10 seconds
BI 095 Chris Summerfield and Sam Gershman: Neuro for AI?
It's generally agreed machine learning and AI provide neuroscience with tools for analysis and theoretical principles to test in brains, but there is less agreement about what neuroscience can provide AI. Should computer scientists and engineers care about how brains compute, or will it just slow them down, for example? Chris, Sam, and I discuss how neuroscience might contribute to AI moving forward, considering the past and present. This discussion also leads into related topics, like the role of prediction versus understanding, AGI, explainable AI, value alignment, the fundamental conundrum that humans specify the ultimate values of the tasks AI will solve, and more. Plus, a question from previous guest Andrew Saxe. Also, check out Sam's previous appearance on the podcast.
Chris's lab: Human Information Processing lab.
Sam's lab: Computational Cognitive Neuroscience Lab.
Twitter: @gershbrain; @summerfieldlab
Papers we discuss, mention, or that are related:
If deep learning is the answer, then what is the question?
Neuroscience-Inspired Artificial Intelligence.
Building Machines that Learn and Think Like People.
0:00 - Intro
5:00 - Good ol' days
13:50 - AI for neuro, neuro for AI
24:25 - Intellectual diversity in AI
28:40 - Role of philosophy
30:20 - Operationalization and benchmarks
36:07 - Prediction vs. understanding
42:48 - Role of humans in the loop
46:20 - Value alignment
51:08 - Andrew Saxe question
53:16 - Explainable AI
58:55 - Generalization
1:01:09 - What has AI revealed about us?
1:09:38 - Neuro for AI
1:20:30 - Concluding remarks
1/19/2021 • 1 hour, 25 minutes, 28 seconds
BI 094 Alison Gopnik: Child-Inspired AI
Alison and I discuss her work to accelerate learning and thus improve AI by studying how children learn, as Alan Turing suggested in his famous 1950 paper. Children learn via imitation, by building abstract causal models, and through active learning with a high exploration/exploitation ratio. We also discuss child consciousness, psychedelics, the concept of life history, the role of grandparents and elders, and lots more.
Alison's Website.
Cognitive Development and Learning Lab.
Twitter: @AlisonGopnik.
Related papers:
Childhood as a solution to explore-exploit tensions.
The Aeon article about grandparents, children, and evolution: Vulnerable Yet Vital.
Books:
The Gardener and the Carpenter: What the New Science of Child Development Tells Us About the Relationship Between Parents and Children.
The Scientist in the Crib: What Early Learning Tells Us About the Mind.
The Philosophical Baby: What Children's Minds Tell Us About Truth, Love, and the Meaning of Life.
Take-home points:
Children learn by imitation, and not just unthinking imitation. They pay attention to and evaluate the intentions of others and judge whether a person seems to be a reliable source of information. That is, they learn by sophisticated socially-constrained imitation.
Children build abstract causal models of the world. This allows them to simulate potential outcomes and test their actions against those simulations, accelerating learning.
Children keep their foot on the exploration pedal, actively learning by exploring a wide spectrum of actions to determine what works. As we age, our exploratory cognition decreases, and we begin to exploit more what we've learned.
Timestamps
0:00 - Intro
4:40 - State of the field
13:30 - Importance of learning
20:12 - Turing's suggestion
22:49 - Patience for one's own ideas
28:53 - Learning via imitation
31:57 - Learning abstract causal models
41:42 - Life history
43:22 - Learning via exploration
56:19 - Explore-exploit dichotomy
58:32 - Synaptic pruning
1:00:19 - Breakthrough research in careers
1:04:31 - Role of elders
1:09:08 - Child consciousness
1:11:41 - Psychedelics as child-like brain
1:16:00 - Build consciousness into AI?
1/8/2021 • 1 hour, 19 minutes, 13 seconds
BI 093 Dileep George: Inference in Brain Microcircuits
Dileep and I discuss his theoretical account of how the thalamus and cortex work together to implement visual inference. We talked previously about his Recursive Cortical Network (RCN) approach to visual inference, which is a probabilistic graph model that can solve hard problems like CAPTCHAs, and more recently we talked about using his RCNs with cloned units to account for cognitive maps related to the hippocampus. On this episode, we walk through how RCNs can map onto thalamo-cortical circuits so a given cortical column can signal whether it believes some concept or feature is present in the world, based on bottom-up incoming sensory evidence, top-down attention, and lateral related features. We also briefly compare this bio-RCN version with Randy O'Reilly's Deep Predictive Learning account of thalamo-cortical circuitry.
Vicarious website - Dileep's AGI robotics company.
Twitter: @dileeplearning
The papers we discuss or mention:
A detailed mathematical theory of thalamic and cortical microcircuits based on inference in a generative vision model.
From CAPTCHA to Commonsense: How Brain Can Teach Us About Artificial Intelligence.
Probabilistic graphical models.
Hierarchical temporal memory.
Time Stamps:
0:00 - Intro
5:18 - Levels of abstraction
7:54 - AGI vs. AHI vs. AUI
12:18 - Ideas and failures in startups
16:51 - Thalamic cortical circuitry computation
22:07 - Recursive cortical networks
23:34 - bio-RCN
27:48 - Cortical column as binary random variable
33:37 - Clonal neuron roles
39:23 - Processing cascade
41:10 - Thalamus
47:18 - Attention as explaining away
50:51 - Comparison with O'Reilly's predictive coding framework
55:39 - Subjective contour effect
1:01:20 - Necker cube
12/29/2020 • 1 hour, 6 minutes, 31 seconds
BI 092 Russ Poldrack: Cognitive Ontologies
Russ and I discuss cognitive ontologies - the "parts" of the mind and their relations - as an ongoing dilemma of how to map onto each other what we know about brains and what we know about minds. We talk about whether we have the right ontology now, how he uses both top-down and data-driven approaches to analyze and refine current ontologies, and how all this has affected his own thinking about minds. We also discuss some of the current meta-science issues and challenges in neuroscience and AI, and Russ answers guest questions from Kendrick Kay and David Poeppel.
Russ’s website.
Poldrack Lab.
Stanford Center For Reproducible Neuroscience.
Twitter: @russpoldrack.
Book: The New Mind Readers: What Neuroimaging Can and Cannot Reveal about Our Thoughts.
The papers we discuss or mention:
Atlases of cognition with large-scale human brain mapping.
Mapping Mental Function to Brain Structure: How Can Cognitive Neuroimaging Succeed?
From Brain Maps to Cognitive Ontologies: Informatics and the Search for Mental Structure.
Uncovering the structure of self-regulation through data-driven ontology discovery
Talks:
Reproducibility: NeuroHackademy: Russell Poldrack - Reproducibility in fMRI: What is the problem?
Cognitive Ontology: Cognitive Ontologies, from Top to Bottom
A good series of talks about cognitive ontologies: Online Seminar Series: Problem of Cognitive Ontology.
Some take-home points:
Our folk psychological cognitive ontology hasn't changed much since early Greek philosophy, and especially since William James wrote about attention, consciousness, and so on.
Using encoding models, we can predict brain responses pretty well based on what task a subject is performing or what "cognitive function" a subject is engaging, at least to a coarse approximation.
Using a data-driven approach has potential to help determine mental structure, but important human decisions must still be made regarding how exactly to divide up the various "parts" of the mind.
Time points
0:00 - Introduction
5:59 - Meta-science issues
19:00 - Kendrick Kay question
23:00 - State of the field
30:06 - fMRI for understanding minds
35:13 - Computational mind
42:10 - Cognitive ontology
45:17 - Cognitive Atlas
52:05 - David Poeppel question
57:00 - Does ontology matter?
59:18 - Data-driven ontology
1:12:29 - Dynamical systems approach
1:16:25 - György Buzsáki's inside-out approach
1:22:26 - Ontology for AI
1:27:39 - Deep learning hype
12/15/2020 • 1 hour, 42 minutes, 12 seconds
BI 091 Carsen Stringer: Understanding 40,000 Neurons
Carsen and I discuss how she uses 2-photon calcium imaging data from over 10,000 neurons to understand the information processing of such large neural population activity. We talk about the tools she makes and uses to analyze the data, and the type of high-dimensional neural activity structure they found, which seems to allow efficient and robust information processing. We also talk about how these findings may help build better deep learning networks, and Carsen's thoughts on how to improve the diversity, inclusivity, and equality in neuroscience research labs. Guest question from Matt Smith.
Stringer Lab.
Twitter: @computingnature.
The papers we discuss or mention:
High-dimensional geometry of population responses in visual cortex
Spontaneous behaviors drive multidimensional, brain-wide population activity.
Timestamps:
0:00 - Intro
5:51 - Recording > 10k neurons
8:51 - 2-photon calcium imaging
14:56 - Balancing scientific questions and tools
21:16 - Unsupervised learning tools and rastermap
26:14 - Manifolds
32:13 - Matt Smith question
37:06 - Dimensionality of neural activity
58:51 - Future plans
1:00:30 - What can AI learn from this?
1:13:26 - Diversity, inclusivity, equality
12/4/2020 • 1 hour, 28 minutes, 19 seconds
BI 090 Chris Eliasmith: Building the Human Brain
Chris and I discuss his Spaun large scale model of the human brain (Semantic Pointer Architecture Unified Network), as detailed in his book How to Build a Brain. We talk about his philosophical approach, how Spaun compares to Randy O'Reilly's Leabra networks, the Applied Brain Research Chris co-founded, and I have guest questions from Brad Aimone, Steve Potter, and Randy O'Reilly.
Chris's website.
Applied Brain Research.
The book: How to Build a Brain.
Nengo (you can run Spaun).
Paper summary of Spaun: A large-scale model of the functioning brain.
Some takeaways:
Spaun is an embodied, fully functional cognitive architecture with one eye for task instructions and an arm for responses.
Chris uses elements from symbolic, connectionist, and dynamical systems approaches in cognitive science.
The neural engineering framework (NEF) is how functions get instantiated in spiking neural networks.
The semantic pointer architecture (SPA) is how representations are stored and transformed - i.e., the symbolic-like cognitive processing.
Time Points:
0:00 - Intro
2:29 - Sense of awe
6:20 - Large-scale models
9:24 - Descriptive pragmatism
15:43 - Asking better questions
22:48 - Brad Aimone question: Neural engineering framework
29:07 - Engineering to build vs. understand
32:12 - Why is AI world not interested in brains/minds?
37:09 - Steve Potter neuromorphics question
44:51 - Spaun
49:33 - Semantic Pointer Architecture
56:04 - Representations
58:21 - Randy O'Reilly question 1
1:07:33 - Randy O'Reilly question 2
1:10:31 - Spaun vs. Leabra
1:32:43 - How would Chris start over?
11/23/2020 • 1 hour, 38 minutes, 57 seconds
BI 089 Matt Smith: Drifting Cognition
Matt and I discuss how cognition and behavior drifts over the course of minutes and hours, and how global brain activity drifts with it. How does the brain continue to produce steady perception and action in the midst of such drift? We also talk about how to think about variability in neural activity. How much of it is noise and how much of it is hidden important activity? Finally, we discuss the effect of recording more and more neurons simultaneously, collecting bigger and bigger datasets, plus guest questions from Adam Snyder and Patrick Mayo.
Smith Lab.
Twitter: @SmithLabNeuro.
Related:
Slow drift of neural activity as a signature of impulsivity in macaque visual and prefrontal cortex.
Artwork by Melissa Neely
Take home points:
The “noise” in the variability of neural activity is likely just activity devoted to processing other things.
Recording lots of neurons simultaneously helps resolve the question of what’s noise and how much information is in a population of neurons.
There’s a neural signature of the behavioral “slow drift” of our internal cognitive state.
The neural signature is global, and it’s an open question how the brain compensates to produce steady perception and action.
Timestamps:
0:00 - Intro
4:35 - Adam Snyder question
15:26 - Multi-electrode recordings
17:48 - What is noise in the brain?
23:55 - How many neurons is enough?
27:43 - Patrick Mayo question
33:17 - Slow drift
54:10 - Impulsivity
57:32 - How does drift happen?
59:49 - Relation to AI
1:06:58 - What AI and neuro can teach each other
1:10:02 - Ecologically valid behavior
1:14:39 - Brain mechanisms vs. mind
1:17:36 - Levels of description
1:21:14 - Hard things to make in AI
1:22:48 - Best scientific moment
11/12/2020 • 1 hour, 26 minutes, 52 seconds
BI 088 Randy O’Reilly: Simulating the Human Brain
Randy and I discuss his LEABRA cognitive architecture that aims to simulate the human brain, plus his current theory about how a loop between cortical regions and the thalamus could implement predictive learning and thus solve how we learn with so few examples. We also discuss what Randy thinks is the next big thing neuroscience can contribute to AI (thanks to a guest question from Anna Schapiro), and much more.
Computational Cognitive Neuroscience Laboratory.
The papers we discuss or mention:
The Leabra Cognitive Architecture: How to Play 20 Principles with Nature and Win!
Deep Predictive Learning in Neocortex and Pulvinar.
Unraveling the Mysteries of Motivation.
His YouTube series detailing the theory and workings of Leabra: Computational Cognitive Neuroscience.
The free textbook: Computational Cognitive Neuroscience
A few take-home points:
Leabra has been a slow, incremental project, inspired in part by Allen Newell’s suggested approach.
Randy began by developing a learning algorithm that incorporated both kinds of biological learning (error-driven and associative).
Leabra's core is three brain areas - frontal cortex, parietal cortex, and hippocampus - and has grown from there.
There’s a constant balance between biological realism and computational feasibility.
It’s important that a cognitive architecture address multiple levels - micro-scale, macro-scale, mechanisms, functions, and so on.
Deep predictive learning is a possible brain mechanism whereby predictions from higher-layer cortex precede input from lower-layer cortex in the thalamus, where an error is computed and used to drive learning.
Randy believes our metacognitive ability to know what we do and don’t know is a key next function to build into AI.
Timestamps:
0:00 - Intro
3:54 - Skip Intro
6:20 - Being in awe
18:57 - How current AI can inform neuro
21:56 - Anna Schapiro question - how current neuro can inform AI.
29:20 - Learned vs. innate cognition
33:43 - LEABRA
38:33 - Developing Leabra
40:30 - Macroscale
42:33 - Thalamus as microscale
43:22 - Thalamocortical circuitry
47:25 - Deep predictive learning
56:18 - Deep predictive learning vs. backprop
1:01:56 - 10 Hz learning cycle
1:04:58 - Better theory vs. more data
1:08:59 - Leabra vs. Spaun
1:13:59 - Biological realism
1:21:54 - Bottom-up inspiration
1:27:26 - Biggest mistake in Leabra
1:32:14 - AI consciousness
1:34:45 - How would Randy begin again?
11/2/2020 • 1 hour, 39 minutes, 8 seconds
BI 087 Dileep George: Cloning for Cognitive Maps
When a waiter hands me the bill, how do I know whether to pay it myself or let my date pay? On this episode, I get a progress update from Dileep on his company, Vicarious, since Dileep's last episode. We also talk broadly about his experience running Vicarious to develop AGI and robotics. Then we turn to his latest brain-inspired AI efforts using cloned structured probabilistic graph models to develop an account of how the hippocampus makes a model of the world and represents our cognitive maps in different contexts, so we can simulate possible outcomes to choose how to act.
Special guest questions from Brad Love (episode 70: How We Learn Concepts).
Vicarious website - Dileep's AGI robotics company.
Twitter: @dileeplearning.
Papers we discuss:
Learning cognitive maps as structured graphs for vicarious evaluation.
A detailed mathematical theory of thalamic and cortical microcircuits based on inference in a generative vision model.
Probabilistic graphical models.
Hierarchical temporal memory.
Time stamps:
0:00 - Intro
3:00 - Skip Intro
4:00 - Previous Dileep episode
10:22 - Is brain-inspired AI over-hyped?
14:38 - Competition in the robotics field
15:53 - Vicarious robotics
22:12 - Choosing what product to make
28:13 - Running a startup
30:52 - Old brain vs. new brain
37:53 - Learning cognitive maps as structured graphs
41:59 - Graphical models
47:10 - Cloning and merging, hippocampus
53:36 - Brad Love Question 1
1:00:39 - Brad Love Question 2
1:02:41 - Task examples
1:11:56 - What does hippocampus do?
1:14:14 - Intro to thalamic cortical microcircuit
1:15:21 - What AI folks think of brains
1:16:57 - Which levels inform which levels
1:20:02 - Advice for an AI startup
10/23/2020 • 1 hour, 23 minutes
BI 086 Ken Stanley: Open-Endedness
Ken and I discuss open-endedness, the pursuit of ambitious goals by seeking novelty and interesting products instead of advancing directly toward defined objectives. We talk about evolution as a prime example of an open-ended system that has produced astounding organisms, Ken relates how open-endedness could help advance artificial intelligence and neuroscience, we discuss a range of topics related to the general concept of open-endedness, and Ken takes a couple of questions from Stefan Leijnen and Melanie Mitchell.
Related:
Ken's website.
Twitter: @kenneth0stanley.
The book: Why Greatness Cannot Be Planned: The Myth of the Objective by Kenneth Stanley and Joel Lehman.
Papers:
Evolving Neural Networks Through Augmenting Topologies (2002)
Minimal Criterion Coevolution: A New Approach to Open-Ended Search
Some key take-aways:
Many of the best inventions were not the result of trying to achieve a specific objective.
Open-endedness is the pursuit of ambitious advances without a clearly defined objective.
Evolution is a quintessential example of an open-ended process: it produces a vast array of complex beings by searching the space of possible organisms, constrained by the environment, survival, and reproduction.
Perhaps the key to developing artificial general intelligence is following an open-ended path rather than pursuing objectives (solving the same old benchmark tasks, etc.).
0:00 - Intro
3:46 - Skip Intro
4:30 - Evolution as an Open-ended process
8:25 - Why Greatness Cannot Be Planned
20:46 - Open-endedness in AI
29:35 - Constraints vs. objectives
36:26 - The adjacent possible
41:22 - Serendipity
44:33 - Stefan Leijnen question
53:11 - Melanie Mitchell question
1:00:32 - Efficiency
1:02:13 - Gentle Earth
1:05:25 - Learning vs. evolution
1:10:53 - AGI
1:14:06 - Neuroscience, AI, and open-endedness
1:26:06 - OpenAI