As a neuroscientist surveying the landscape of generative AI—artificial intelligence capable of generating text, images, or other media—Konrad Kording cites two potential directions forward: One is the “weird future” of political use and manipulation, and the other is the “power tool direction,” where people use ChatGPT to get information as they would use a drill to build furniture.
“I’m not sure which of those two directions we’re going, but I think a lot of the AI people are working to move us into the power tool direction,” says Kording, a Penn Integrates Knowledge (PIK) University Professor with appointments in the Perelman School of Medicine and School of Engineering and Applied Science. Reflecting on how generative AI is shifting the paradigm of science as a discipline, Kording said he thinks “it will push science as a whole into a much more collaborative direction,” though he has concerns about ChatGPT’s blind spots.
Kording joined three University of Pennsylvania researchers from the chemistry, political science, and psychology departments, who shared their perspectives in the recent panel “ChatGPT turns one: How is generative AI reshaping science?” PIK Professor René Vidal opened the event, which was hosted by the School of Arts & Sciences’ Data Driven Discovery Initiative (DDDI), and Bhuvnesh Jain, physics and astronomy professor and co-faculty director of DDDI, moderated the discussion.
“Generative AI is moving so rapidly that even if it’s a snapshot, it will be very interesting for all of us to get that snapshot from these wonderful experts,” Jain said. OpenAI launched ChatGPT, a large language model (LLM)-based chatbot, on Nov. 30, 2022, and it rapidly ascended to ubiquity in news reports, faculty discussions, and research papers. Colin Twomey, interim executive director of DDDI, told Penn Today that it’s an open question as to how it will change the landscape of scientific research, and the idea of the event was to solicit colleagues’ opinions on interesting directions in their fields.
Machine learning (ML) programs computers to learn the way we do – through the continual assessment of data and identification of patterns based on past outcomes. ML can quickly pick out trends in big datasets, operate with little to no human interaction, and improve its predictions over time. Thanks to these abilities, it is rapidly finding its way into medical research.
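That pattern-learning loop can be sketched with a toy nearest-centroid classifier: it “learns” by averaging past examples of each class, then assigns new data to the closest class mean. All labels and numbers below are invented for illustration; no real ML diagnostic works on a single measurement like this.

```python
import random

random.seed(1)

def nearest_centroid(train, point):
    """Classify a point by the closest class mean -- a minimal
    example of learning a pattern from past outcomes."""
    centroids = {}
    for label in set(l for _, l in train):
        xs = [x for x, l in train if l == label]
        centroids[label] = sum(xs) / len(xs)
    return min(centroids, key=lambda l: abs(point - centroids[l]))

# Two toy classes drawn around different means (hypothetical data).
train = [(random.gauss(0, 1), "healthy") for _ in range(50)] + \
        [(random.gauss(3, 1), "disease") for _ in range(50)]

# A new measurement near the "disease" mean is classified accordingly.
print(nearest_centroid(train, 2.8))
```

With more training examples, the class means are estimated more precisely, which is one concrete sense in which predictions improve as data accumulates.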
People with breast cancer may soon be diagnosed through ML faster than through a biopsy. Those suffering from depression might be able to predict mood changes through smartphone recordings of daily activities such as the time they wake up and the amount of time they spend exercising. ML may also help paralyzed people regain autonomy using prosthetics controlled by patterns identified in brain scan data. ML research promises these and many other possibilities to help people lead healthier lives.
But while the number of ML studies grows, its actual use in doctors’ offices has not expanded much past simple functions such as converting voice to text for notetaking.
The limitations lie in medical research’s small sample sizes and unique datasets. Small data makes it hard for machines to identify meaningful patterns: the more data, the more accurate ML’s diagnoses and predictions. Many diagnostic uses would require thousands of subjects, but most studies enroll only dozens.
But there are ways to coax seemingly significant results out of small datasets if you know how to manipulate the numbers. Running statistical tests over and over on different subsets of your data can suggest significance in what are really just random outliers.
This tactic, known as P-hacking or feature hacking in ML, leads to the creation of predictive models that are too limited to be useful in the real world. What looks good on paper doesn’t translate to a doctor’s ability to diagnose or treat us.
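The mechanics can be made concrete with a small simulation (a sketch, not any particular study’s analysis): the data below are pure noise, yet repeatedly testing random subsets eventually turns up an apparently “significant” result.

```python
import random
import statistics

random.seed(0)

def t_stat(sample, mu=0.0):
    """One-sample t statistic against a hypothesized mean mu."""
    n = len(sample)
    m = statistics.mean(sample)
    s = statistics.stdev(sample)
    return (m - mu) / (s / n ** 0.5)

# Pure noise: no real effect exists in this dataset.
data = [random.gauss(0, 1) for _ in range(100)]

# The "hack": test many random subsets and keep the best-looking one.
best = max(abs(t_stat(random.sample(data, 20))) for _ in range(200))
print(f"largest |t| found across 200 subsets: {best:.2f}")

# The two-sided critical value for p < 0.05 at 19 degrees of freedom
# is about 2.09; given enough tries, the largest |t| will typically
# exceed it even though the data contain no effect at all.
```

Correcting for the number of tests performed (or pre-registering a single analysis) removes exactly this source of false positives.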
These statistical mistakes, oftentimes done unknowingly, can lead to dangerous conclusions.
To help scientists avoid these mistakes and push ML applications forward, Konrad Kording, Nathan Francis Mossell University Professor with appointments in the Departments of Bioengineering and Computer and Information Science in Penn Engineering and the Department of Neuroscience at Penn’s Perelman School of Medicine, is leading an aspect of a large, NIH-funded program known as CENTER – Creating an Educational Nexus for Training in Experimental Rigor. Kording will lead Penn’s cohort by creating the Community for Rigor, which will provide open-access resources on conducting sound science. Members of this inclusive scientific community will be able to engage with ML simulations and discussion-based courses.
“The reason for the lack of ML in real-world scenarios is due to statistical misuse rather than the limitations of the tool itself,” says Kording. “If a study publishes a claim that seems too good to be true, it usually is, and many times we can track that back to their use of statistics.”
Such studies that make their way into peer-reviewed journals contribute to misinformation and mistrust in science and are more common than one might expect.
Neuroscientists frequently say that neural activity ‘represents’ certain phenomena. PIK Professor Konrad Kording and postdoc Ben Baker led a study that took a philosophical approach to teasing out what the term means.
One of neuroscience’s greatest challenges is to bridge the gaps between the external environment, the brain’s internal electrical activity, and the abstract workings of behavior and cognition. Many neuroscientists rely on the word “representation” to connect these phenomena: A burst of neural activity in the visual cortex may represent the face of a friend or neurons in the brain’s memory centers may represent a childhood memory.
But with the many complex relationships between mind, brain, and environment, it’s not always clear what neuroscientists mean when they say neural activity “represents” something. Lack of clarity around this concept can lead to miscommunication, flawed conclusions, and unnecessary disagreements.
To tackle this issue, an interdisciplinary paper takes a philosophical approach to delineating the many aspects of the word “representation” in neuroscience. The work, published in Trends in Cognitive Sciences, comes from the lab of Konrad Kording, a Penn Integrates Knowledge University Professor and senior author on the study whose research lies at the intersection of neuroscience and machine learning.
In 2005, John Ioannidis published a bombshell paper titled “Why Most Published Research Findings Are False.” In it, Ioannidis argued that a lack of scientific rigor in biomedical research — such as poor study design, small sample sizes, and improper assessment of the significance of data — meant that a large percentage of experiments would not return the same results if they were conducted again.
Since then, researchers’ awareness of this “replication crisis” has grown, especially in fields that directly impact the health and wellbeing of people, where lapses in rigor can have life-or-death consequences. Despite this attention and motivation, however, little progress has been made in addressing the roots of the problem. Formal training in rigorous research practices remains rare; while mentors advise their students on how to properly construct and conduct experiments to produce the most reliable evidence, few educational resources exist to support them.
Konrad Kording, a Penn Integrates Knowledge Professor with appointments in the Departments of Bioengineering and Computer and Information Science in Penn Engineering and the Department of Neuroscience in Penn’s Perelman School of Medicine, has been awarded one of the initiative’s first five grants.
“The replication crisis is real,” says Kording. “I’ve tried to replicate the research of others and failed. I’ve reanalyzed my own data and found major mistakes that needed to be corrected. I was never properly taught how to do rigorous science, and I want to improve that for the next generation.”
Though the technology for brain-computer interfaces (or BCIs) has existed for decades, recent strides have been made to create BCI devices that are safer, smaller, and more effective. Konrad Kording, Nathan Francis Mossell University Professor in Bioengineering, Neuroscience, and Computer and Information Science, helps to elucidate the potential future of this technology in a recent feature in Wired. In the article, he discusses the “invasive” aspects of previous BCI technology, in contrast to recent innovations, such as a new device by Synchron, which do not require surgery and are consequently much less risky:
“The device, called a Stentrode, has a mesh-like design and is about the length of a AAA battery. It is implanted endovascularly, meaning it’s placed into a blood vessel in the brain, in the region known as the motor cortex, which controls movement. Insertion involves cutting into the jugular vein in the neck, snaking a catheter in, and feeding the device through it all the way up into the brain, where, when the catheter is removed, it opens up like a flower and nestles itself into the blood vessel’s wall. Most neurosurgeons are already up to speed on the basic approach required to put it in, which reduces a high-risk surgery to a procedure that could send the patient home the very same day. ‘And that is the big innovation,’ Kording says.”
Konrad Kording, Nathan Francis Mossell University Professor in Bioengineering, Neuroscience, and Computer and Information Sciences, was appointed the Co-Director of the CIFAR Program in Learning in Machines & Brains. The appointment will start April 1, 2022.
CIFAR is a global research organization that convenes extraordinary minds to address the most important questions facing science and humanity. CIFAR was founded in 1982 and now includes over 400 interdisciplinary fellows and scholars, representing over 130 institutions and 22 countries. CIFAR supports research at all levels of development in areas ranging from Artificial Intelligence and child and brain development, to astrophysics and quantum computing. The program in Learning in Machines & Brains brings together international scientists to examine “how artificial neural networks could be inspired by the human brain, and developing the powerful technique of deep learning.” Scientists, industry experts, and policymakers in the program are working to understand the computational and mathematical principles behind learning, whether in brains or in machines, in order to understand human intelligence and improve the engineering of machine learning. As Co-Director, Kording will oversee the collective intellectual development of the LMB program, which includes over 30 Fellows, Advisors, and Global Scholars. The program is also co-directed by Yoshua Bengio, the Canada CIFAR AI Chair and Professor in Computer Science and Operations Research at Université de Montréal.
Kording, a Penn Integrates Knowledge (PIK) Professor, was previously named an associate fellow of CIFAR in 2017. Kording’s groundbreaking interdisciplinary research uses data science to advance a broad range of topics that include understanding brain function, improving personalized medicine, collaborating with clinicians to diagnose diseases based on mobile phone data and even understanding the careers of professors. Across many areas of biomedical research, his group analyzes large datasets to test new models and thus get closer to an understanding of complex problems in bioengineering, neuroscience and beyond.
From smartphones and fitness trackers to social media posts and COVID-19 cases, the past few years have seen an explosion in the amount and types of data that are generated daily. To help make sense of these large, complex datasets, the field of data science has grown, providing methodologies, tools, and perspectives across a wide range of academic disciplines.
As part of its $750 million investment in science, engineering, and medicine, the University has committed to supporting the future needs of this field. To this end, the Innovation in Data Engineering and Science (IDEAS) initiative will help Penn become a leader in developing data-driven approaches that can transform scientific discovery, engineering research, and technological innovation.
“The IDEAS initiative is game-changing for our University,” says President Amy Gutmann. “This new investment allows us to boost our interdisciplinary efforts across campus, recruit phenomenal additional team members, and generate an even more sound foundation for discovery, experimentation, and design. This initiative is a clear statement that Penn is committed to taking data science head-on.”
“One of the unique things about data science and data engineering is that it’s a very horizontal technology, one that is going to be impacting every department on campus,” says George Pappas, Electrical and Systems Engineering Department chair. “When you have a horizontal technology in a competitive area, we have to figure out specific areas where Penn can become a worldwide leader.”
To do this, IDEAS aims to recruit new faculty across three research areas: artificial intelligence (AI) to transform scientific discovery, trustworthy AI for autonomous systems, and understanding connections between the human brain and AI.
In the area of neuroscience and how the human brain is similar to AI and machine learning approaches, research from PIK Professor Konrad Kording and Dani Bassett’s Complex Systems lab exemplifies the types of cross-disciplinary efforts that are essential for addressing complex questions. By recruiting additional faculty in this area, IDEAS will help Penn make strides in bio-inspired computing and in future life-changing discoveries that could address cognitive disorders and nervous system diseases.
When Nathan Francis Mossell graduated in 1882, he became the first African American to earn a medical degree from Penn. He soon became a prominent African American physician, the first to be elected to the Philadelphia County Medical Society. He helped found the Frederick Douglass Memorial Hospital and Training School, which treated Black patients and helped train the next generation of Black doctors and nurses.
“Dr. Mossell was truly inspiring. He had to fight for everything, yet never reneged on his principles. He pretty much started a hospital and was a major champion for the advancement of equality for African Americans,” Kording said. “In my research, where I study how intelligence works, I am inspired by scholars like him who combine many different insights. He was a wonderful man, and I will be proud to carry his name.”
While artificial intelligence is becoming a bigger part of nearly every industry and increasingly present in everyday life, even the most impressive AI is no match for a toddler, chimpanzee, or even a honeybee when it comes to learning, creativity, abstract thinking, or connecting cause and effect in ways it hasn’t been explicitly programmed to recognize.
This discrepancy gets at one of the field’s fundamental questions: what does it mean to say an artificial system is “intelligent” in the first place?
Seventy years ago, Alan Turing famously proposed such a benchmark: a machine could be considered to have artificial intelligence if it could successfully fool a person into thinking it was human. Now, many artificial systems could pass a “Turing Test” in certain limited domains, but none come close to imitating the holistic sense of intelligence we recognize in animals and people.
Understanding how AI might someday be more like this kind of biological intelligence — and developing new versions of the Turing Test with those principles in mind — is the goal of a new collaboration between researchers at the University of Pennsylvania, Carnegie Mellon University and Johns Hopkins University.
The project, called “From Biological Intelligence to Human Intelligence to Artificial General Intelligence,” is led by Konrad Kording, a Penn Integrates Knowledge Professor with appointments in the Departments of Bioengineering and Computer and Information Science in Penn Engineering and the Department of Neuroscience at Penn’s Perelman School of Medicine. Kording will collaborate with Timothy Verstynen of Carnegie Mellon University, as well as Joshua T. Vogelstein and Leyla Isik, both of Johns Hopkins University, on the project.
When the COVID-19 pandemic began taking hold in the United States, one of the first “superspreader” events was an academic conference. Such conferences have long been a primary way for researchers to share new findings and launch collaborations, but with thousands of people from around the world, indoors and in close proximity, it quickly became clear that the traditional format for these events would need to radically change.
Konrad Kording, a Penn Integrates Knowledge Professor with appointments in the departments of Bioengineering and Computer and Information Science in Penn Engineering and the Department of Neuroscience at Penn’s Perelman School of Medicine, was ahead of the curve on this shift. With the issues of prohibitive costs and environmental impact of travel in mind, Kording had already started brainstorming ways of reinventing the traditional conference format when the pandemic made it a necessity.
The resulting event, Neuromatch, involved algorithmically analyzing participants’ work in order to connect researchers who might not otherwise meet. Building on the success of that “unconference,” Kording and his colleagues launched the Neuromatch Academy, a free-ranging online summer school organized around the same principles.
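The matching step can be sketched in miniature: represent each researcher’s writing as a bag of words and pair people by cosine similarity. This is only a toy sketch of the general idea — the actual Neuromatch pipeline used more sophisticated text analysis, and the researchers and abstracts below are invented.

```python
from collections import Counter
from math import sqrt

# Hypothetical abstracts; a real pipeline would start from
# researchers' own papers or interest statements.
abstracts = {
    "A": "neural decoding of motor cortex activity with deep learning",
    "B": "deep learning models of visual cortex representations",
    "C": "behavioral economics of consumer choice under uncertainty",
}

def cosine(a, b):
    """Cosine similarity between two bag-of-words texts."""
    va, vb = Counter(a.split()), Counter(b.split())
    dot = sum(va[w] * vb[w] for w in va)
    na = sqrt(sum(c * c for c in va.values()))
    nb = sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb)

# Pair each researcher with their most similar colleague.
for name, text in abstracts.items():
    match = max(
        (other for other in abstracts if other != name),
        key=lambda other: cosine(text, abstracts[other]),
    )
    print(name, "->", match)
```

Here the two neuroscience abstracts pair with each other on shared vocabulary, which is the basic intuition behind matching researchers who might not otherwise meet.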
Kording already had experience quickly pulling together online events. Early in the pandemic, together with Dan Goodman, Titipat Achakulvisut and Brad Wyble, he developed an online ‘unconference,’ which featured both lectures and a virtual networking component designed to mimic the in-person interactions that make conferences so valuable. (For more, see “Designing a Virtual Neuroscience Conference.”) Soon after, they decided to spin that success into a full-fledged summer school offering live lectures with top computational neuroscientists, guided coding exercises to teach mathematical approaches to neural modeling and analysis, and community support from mentors and teaching assistants (TAs).
The result was a summer school with well-designed content, a diverse student body, including participants from U.S.-sanctioned Iran, and a determined group of organizers who managed to pull off the most inclusive computational neuroscience school yet. NMA now has its eye on a future with even broader representation across countries, languages and skill levels. This year has been incredibly difficult for many, but NMA has provided an important precedent for how to collaborate across, and even dismantle, all sorts of barriers.