Let’s say you typically eat eggs for breakfast but were running late and ate cereal. As you crunched on a spoonful of Raisin Bran, other contextual similarities remained: You ate at the same table, at the same time, preparing to go to the same job. When someone asks later what you had for breakfast, you incorrectly remember eating eggs.
This would be a real-world example of a false memory. But what happens in your brain before recalling eggs, compared to what would happen if you correctly recalled cereal?
In a paper published in Proceedings of the National Academy of Sciences, University of Pennsylvania neuroscientists show for the first time that electrical signals in the human hippocampus differ immediately before recollection of true and false memories. They also found that low-frequency activity in the hippocampus decreases as a function of contextual similarity between a falsely recalled word and the target word.
“Whereas prior studies established the role of the hippocampus in event memory, we did not know that electrical signals generated in this region would distinguish the imminent recall of true from false memories,” says psychology professor Michael Jacob Kahana, director of the Computational Memory Lab and the study’s senior author. He says this shows that the hippocampus stores information about an item with the context in which it was presented.
Researchers also found that, relative to correct recalls, the brain exhibited reduced theta and high-frequency oscillatory activity and increased alpha/beta activity immediately preceding false memories. The findings came from recording neural activity in epilepsy patients who were already undergoing invasive monitoring to pinpoint the source of their seizures.
Noa Herz, lead author and a postdoctoral fellow in Kahana’s lab at the time of the research, explains that the monitoring was done through intracranial electrodes, the methodology researchers wanted to use for this study. She says that, compared to scalp electrodes, this method “allowed us to more precisely, and directly, measure the neural signals that were generated in deep brain structures, so the activity we are getting is much more localized.”
Michael Kahana is the Edmund J. and Louise W. Kahn Term Professor of Psychology in the School of Arts & Sciences and director of the Computational Memory Lab at the University of Pennsylvania. He is a member of the Penn Bioengineering Graduate Group.
Ongoing clinical trials have demonstrated that psychedelics like psilocybin and LSD can have rapid and long-lived antidepressant and anti-anxiety effects. A related clinical problem is chronic pain, which is notoriously difficult to treat and often associated with depression and anxiety.
This summer, Ahmad Hammo, a rising third-year student in bioengineering in the School of Engineering and Applied Science, is conducting a pilot study to explore psilocybin’s potential as a therapy for chronic pain and the depression that often accompanies it.
“There’s a strong correlation between chronic pain and depression, so I’m looking at how a psychedelic might be used for treating both of these things simultaneously,” says Hammo, who is originally from Amman, Jordan.
Hammo’s project focuses on neuropathic pain, pain associated with nerve damage. Most experts believe that, like other forms of chronic pain, chronic neuropathic pain is stored in the brain.
“Neuropathic pain can lead to a centralized pain syndrome where the pain is still being processed in the brain,” Cichon says. “It’s as if there’s a loop that keeps playing over and over again, and this chronic form is completely divorced from that initial injury.”
Traumatic brain injury (TBI) has disabled 1 to 2% of the population, and one of the most common resulting disabilities is impaired short-term memory. Electrical stimulation has emerged as a viable tool to improve brain function in people with other neurological disorders.
Led by University of Pennsylvania psychology professor Michael Jacob Kahana, a team of neuroscientists studied TBI patients with implanted electrodes, analyzed neural data as patients studied words, and used a machine learning algorithm to predict momentary memory lapses. Other lead authors included Wesleyan University psychology professor Youssef Ezzyat and Penn research scientist Paul Wanda.
“The last decade has seen tremendous advances in the use of brain stimulation as a therapy for several neurological and psychiatric disorders including epilepsy, Parkinson’s disease, and depression,” Kahana says. “Memory loss, however, represents a huge burden on society. We lack effective therapies for the 27 million Americans suffering.”
Machine learning (ML) programs computers to learn the way we do – through the continual assessment of data and identification of patterns based on past outcomes. ML can quickly pick out trends in big datasets, operate with little to no human interaction and improve its predictions over time. Due to these abilities, it is rapidly finding its way into medical research.
People with breast cancer may soon be diagnosed through ML faster than through a biopsy. Those suffering from depression might be able to predict mood changes through smartphone recordings of daily activities, such as the time they wake up and the amount of time they spend exercising. ML may also help paralyzed people regain autonomy using prosthetics controlled by patterns identified in brain scan data. ML research promises these and many other possibilities to help people lead healthier lives.
But while the number of ML studies grows, the actual use of ML in doctors’ offices has not expanded much beyond simple functions such as converting voice to text for notetaking.
The limitations lie in medical research’s small sample sizes and unique datasets. Small data makes it hard for machines to identify meaningful patterns; the more data, the more accurate ML diagnoses and predictions become. Many diagnostic applications would require thousands of subjects, but most studies enroll only dozens.
But there are ways to coax significant-looking results out of small datasets if you know how to manipulate the numbers. Running statistical tests over and over again on different subsets of the data can make what are really random outliers look like meaningful effects.
This tactic, known as P-hacking or feature hacking in ML, leads to the creation of predictive models that are too limited to be useful in the real world. What looks good on paper doesn’t translate to a doctor’s ability to diagnose or treat us.
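The trap described above can be demonstrated with a short simulation. In the sketch below, an outcome and many candidate "features" are all pure random noise, yet testing enough of them still produces "significant" correlations. The subject and feature counts and the critical correlation value are illustrative assumptions, not figures from the research discussed here.

```python
import random

random.seed(42)

N_SUBJECTS = 20   # a typically small medical-study sample
N_FEATURES = 200  # many candidate measures tested against one outcome
R_CRIT = 0.444    # approx. critical |r| for p < 0.05 at n = 20

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

# Pure noise: the outcome and every "feature" are independent draws.
outcome = [random.gauss(0, 1) for _ in range(N_SUBJECTS)]

false_positives = 0
for _ in range(N_FEATURES):
    feature = [random.gauss(0, 1) for _ in range(N_SUBJECTS)]
    if abs(pearson_r(feature, outcome)) > R_CRIT:
        false_positives += 1

# On average, about 5% of the tests come up "significant" by chance alone.
print(f"{false_positives} of {N_FEATURES} tests falsely significant")
```

Because none of the features has any real relationship to the outcome, every hit is a false positive, which is exactly the kind of result that looks good on paper but fails in the clinic.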
These statistical mistakes, oftentimes done unknowingly, can lead to dangerous conclusions.
To help scientists avoid these mistakes and push ML applications forward, Konrad Kording, Nathan Francis Mossell University Professor with appointments in the Departments of Bioengineering and Computer and Information Science in Penn Engineering and the Department of Neuroscience at Penn’s Perelman School of Medicine, is leading an aspect of a large, NIH-funded program known as CENTER (Creating an Educational Nexus for Training in Experimental Rigor). Kording will lead Penn’s cohort by creating the Community for Rigor, which will provide open-access resources on conducting sound science. Members of this inclusive scientific community will be able to engage with ML simulations and discussion-based courses.
“The reason for the lack of ML in real-world scenarios is due to statistical misuse rather than the limitations of the tool itself,” says Kording. “If a study publishes a claim that seems too good to be true, it usually is, and many times we can track that back to their use of statistics.”
Such studies that make their way into peer-reviewed journals contribute to misinformation and mistrust in science and are more common than one might expect.
Brain development does not occur uniformly across the brain, but follows a newly identified developmental sequence, according to a new Penn Medicine study. Brain regions that support cognitive, social, and emotional functions appear to remain malleable—or capable of changing, adapting, and remodeling—longer than other brain regions, rendering youth sensitive to socioeconomic environments through adolescence. The findings are published in Nature Neuroscience.
Researchers charted how developmental processes unfold across the human brain from the ages of 8 to 23 years old through magnetic resonance imaging (MRI). The findings indicate a new approach to understanding the order in which individual brain regions show reductions in plasticity during development.
Brain plasticity refers to the capacity for neural circuits—connections and pathways in the brain for thought, emotion, and movement—to change or reorganize in response to internal biological signals or the external environment. While it is generally understood that children have higher brain plasticity than adults, this study provides new insights into where and when reductions in plasticity occur in the brain throughout childhood and adolescence.
The findings reveal that reductions in brain plasticity occur earliest in “sensory-motor” regions, such as visual and auditory regions, and occur later in “associative” regions, such as those involved in higher-order thinking (problem solving and social learning). As a result, brain regions that support executive, social, and emotional functions appear to be particularly malleable and responsive to the environment during early adolescence, as plasticity occurs later in development.
“Studying brain development in the living human brain is challenging. A lot of neuroscientists’ understanding about brain plasticity during development actually comes from studies conducted with rodents. But rodent brains do not have many of what we refer to as the association regions of the human brain, so we know less about how these important areas develop,” says corresponding author Theodore D. Satterthwaite, the McLure Associate Professor of Psychiatry in the Perelman School of Medicine and director of the Penn Lifespan Informatics and Neuroimaging Center (PennLINC).
People with epilepsy are often prescribed anti-seizure medications, and, while these are effective for many, about 30% of patients still continue to experience seizures. Litt sought new ways to offer patients better treatment options by investigating neurostimulation devices, a class of devices that electrically stimulate brain cells to modulate their activity.
Litt’s research on implantable neurostimulation devices has led to significant breakthroughs in the technology and has broadened scientists’ understanding of the brain. This work began not long after he came to Penn in 2002, when he licensed algorithms that helped drive a groundbreaking device by NeuroPace, the first closed-loop, responsive neurostimulator to treat epilepsy.
Building on this work, Litt noted in 2011 how the implantable neurostimulation devices being used at the time had rigid wires that didn’t conform to the brain’s surface, and he received support from CURE Epilepsy to accelerate the development of newer, flexible wires to monitor and stimulate the brain.
“CURE is one of the epilepsy community’s most influential funding organizations,” Litt says. “Their support for my lab has been incredibly helpful in enabling the cutting-edge research that we hope will change epilepsy care for our patients.”
Artist-in-residence and visiting scholar Rebecca Kamen has blended AI and art to produce animated illustrations representing how a dyslexic brain interprets information.
Communicating thoughts with words is considered a uniquely human evolutionary adaptation known as language processing. Fundamentally, it is an information exchange, a lot like data transfer between devices, but one riddled with discrete layers of complexity, as the ways in which our brains interpret and express ideas differ from person to person.
Learning challenges such as dyslexia are underpinned by these differences in language processing and can be characterized by difficulty learning and decoding information from written text.
Artist-in-residence in Penn’s Department of Physics and Astronomy Rebecca Kamen has explored her personal relationship with dyslexia and information exchange to produce works that reflect elements of both her creative process and understanding of language. Kamen unveiled her latest exhibit at Arion Press Gallery in San Francisco, where nine artists with dyslexia were invited to produce imaginative interpretations of learning and experiencing language.
The artists were presented with several prompts in varying formats, including books, words, poems, quotes, articles, and even a single letter, and tasked with creating a dyslexic dictionary: an exploration of the ways in which their dyslexia empowered them to engage in information exchange in unique ways.
“[For the exhibit], each artist selected a word representing the way they learn, and mine was ‘lens,’” explains Kamen. “It’s a word that captures how being dyslexic provides me with a unique perspective for viewing and interacting with the world.”
From an early age, Kamen enjoyed learning about the natural sciences and was excited about the process of discovery. She struggled, however, with reading at school, which initially presented an obstacle to achieving her dreams of becoming a teacher. “I had a difficult time getting into college,” says Kamen. “When I graduated high school, the word ‘dyslexia’ didn’t really exist, so I assumed everyone struggled with reading.”
Kamen was diagnosed with dyslexia well into her tenure as a professor. “Most dyslexic people face challenges that may go unnoticed by others,” she says, “but they usually find creative ways to overcome them.”
This perspective on seeing and experiencing the world through the lens of dyslexia not only informed Kamen’s latest work for the exhibition “Dyslexic Dictionary,” but also showcased her background in merging art and science. For decades, Kamen’s work has investigated the intersection of the two, creating distinct ways of exploring new relationships and similarities.
“Artists and scientists are curious creatures always looking for patterns,” explains Kamen. “And that’s because patterns communicate larger insights about the world around us.”
The researchers studied different information-seeking approaches by monitoring how participants explore Wikipedia pages and categorically related these to two ideas rooted in philosophical understandings of learning: a “busybody,” who typically jumps between diverse ideas and collects loosely connected information; and a more purpose-driven “hunter,” who systematically ties in closely related concepts to fill their knowledge gaps.
They used these classifications to inform their computational model, the knowledge network. This uses text and context to determine the degree of relatedness between the Wikipedia pages and their content—represented by dots connected with lines of varying thickness to illustrate the strength of association.
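As a toy illustration of the idea behind such a network, relatedness between pages can be scored from overlapping vocabulary and stored as weighted edges (the "lines of varying thickness"). The page snippets and the simple Jaccard word-overlap measure below are illustrative assumptions; the researchers' actual model uses richer text and context information.

```python
# Hypothetical page texts standing in for Wikipedia articles.
pages = {
    "Neuron": "cell brain signal synapse electrical",
    "Synapse": "neuron signal chemical brain connection",
    "Galaxy": "star space gravity cluster light",
}

def jaccard(a, b):
    """Word-overlap similarity: shared words / total distinct words."""
    wa, wb = set(a.split()), set(b.split())
    return len(wa & wb) / len(wa | wb)

# Build weighted edges between every pair of pages;
# a higher weight would be drawn as a thicker connecting line.
names = list(pages)
edges = {}
for i, u in enumerate(names):
    for v in names[i + 1:]:
        edges[(u, v)] = jaccard(pages[u], pages[v])

for pair, weight in edges.items():
    print(pair, round(weight, 2))
```

In this sketch, "Neuron" and "Synapse" share vocabulary and so get a stronger edge than either does with "Galaxy", mirroring how the model groups closely related concepts.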
In an adaptation of the knowledge network, Kamen was classified as a dancer, an archetype elaborated on in an accompanying review paper by Dale Zhou, a Ph.D. candidate in Bassett’s Complex Systems Lab, who had also collaborated with Kamen on “Reveal.”
“The dancer can be described as an individual that breaks away from the traditional pathways of investigation,” says Zhou. “Someone who takes leaps of creative imagination and in the process, produces new concepts and radically remodels knowledge networks.”
Dani Smith Bassett is J. Peter Skirkanich Professor in Bioengineering with secondary appointments in the Departments of Physics & Astronomy, Electrical & Systems Engineering, Neurology, and Psychiatry.
David Lydon-Staley is an Assistant Professor in the Annenberg School for Communication and in Bioengineering and is an alumnus of the Bassett Lab.
We hope you will join us for the 2022 Grace Hopper Distinguished Lecture by Dr. Jennifer Lewis, presented by the Department of Bioengineering and hosted by Dani S. Bassett, J. Peter Skirkanich Professor in Bioengineering, Electrical and Systems Engineering, Physics & Astronomy, Neurology and Psychiatry.
Date: Thursday, December 8, 2022
Start Time: 3:30 PM EST
Location: Glandt Forum, Singh Center for Nanotechnology, 3205 Walnut Street, Philadelphia, PA 19104
Join us after the live lecture for a light reception!
Speaker: Daphna Shohamy, Ph.D.
Kavli Professor of Brain Science, Co-Director of the Kavli Institute for Brain Science, Professor in the Department of Psychology & Zuckerman Mind Brain Behavior Institute, Columbia University
From robots to humans, the ability to learn from experience turns a rigid response system into a flexible, adaptive one. In the past several decades, major advances have been made in understanding how humans and other animals learn from experience to make decisions. However, most of this progress has focused on rather simple forms of stimulus-response learning, such as automatic responses or habits. In this talk, I will turn to consider how past experience guides more complex decisions, such as those requiring flexible reasoning, inference, and deliberation. Across a range of behavioral contexts, I will demonstrate a critical role for memory in such decisions and will discuss how multiple brain regions interact to support learning, what this means for how memories are used, and the consequences for how decisions are made. Uncovering the pervasive role of memory in decision-making challenges the way we think about what memory is for, suggesting that memory’s primary purpose may be to guide future behavior and that storing a record of the past is just one way to do so.
Dr. Shohamy Bio:
Daphna Shohamy, PhD, is a professor at Columbia University, where she co-directs the Kavli Institute for Brain Science and is Associate Director of the Zuckerman Mind Brain Behavior Institute. Dr. Shohamy’s work focuses on the link between memory and decision-making. Combining brain imaging in healthy humans with studies of patients with neurological and psychiatric disorders, Dr. Shohamy seeks to understand how the brain transforms experiences into memories; how memories shape decisions and actions; and how motivation and exploration affect human behavior.
Information on the Grace Hopper Lecture: In support of its educational mission of promoting the role of all engineers in society, the School of Engineering and Applied Science presents the Grace Hopper Lecture Series. This series is intended to serve the dual purpose of recognizing successful women in engineering and of inspiring students to achieve at the highest level.
Rear Admiral Grace Hopper was a mathematician, computer scientist, systems designer and the inventor of the compiler. Her outstanding contributions to computer science benefited academia, industry and the military. In 1928 she graduated from Vassar College with a B.A. in mathematics and physics and joined the Vassar faculty. While an instructor, she continued her studies in mathematics at Yale University, where she earned an M.A. in 1930 and a Ph.D. in 1934. Grace Hopper is known worldwide for her work with the first large-scale digital computer, the Navy’s Mark I. In 1949 she joined Philadelphia’s Eckert-Mauchly, founded by the builders of ENIAC, which was building UNIVAC I. Her work on compilers and on making machines understand ordinary language instructions led ultimately to the development of the business language COBOL. Grace Hopper served on the faculty of the Moore School for 15 years, and in 1974 received an honorary degree from the University. In support of the accomplishments of women in engineering, each department within the School invites a prominent speaker for a one or two-day visit that incorporates a public lecture, various mini-talks and opportunities to interact with undergraduate and graduate students and faculty.
Twin academics Dani S. Bassett, J. Peter Skirkanich Professor and director of the Complex Systems Lab, and Perry Zurn, a professor of philosophy at American University, were recently featured as guests on NPR radio show “Detroit Today” to discuss their new book, “Curious Minds: The Power of Connection.”
In their book, Bassett and Zurn draw on their previous research, as well as an expansive network of ideas from philosophy, history, education and art, to explore how and why people experience curiosity, as well as the different forms it can take.
Bassett, who holds appointments in the Departments of Bioengineering and Electrical and Systems Engineering, as well as the Department of Physics and Astronomy in Penn Arts & Sciences and the Departments of Neuroscience and Psychiatry in Penn’s Perelman School of Medicine, and Zurn spoke with “Detroit Today” producer Sam Corey about what types of things make people curious, and how to stimulate more curiosity in our everyday lives.
According to the twin experts, curiosity is not a standalone facet of one’s personality. Bassett and Zurn’s work has shown that a person’s capacity for inquiry is very much tied to the overall state of their health.
“There’s a lot of scientific research focusing on intellectual humility and also openness to ideas,” says Bassett. “And there are really interesting relationships between someone’s openness to ideas, someone’s intellectual humility and their curiosity, and also their wellbeing or flourishing.”