Konrad Kording, Nathan Francis Mossell University Professor in Bioengineering, Neuroscience, and Computer and Information Sciences, was appointed Co-Director of the CIFAR Program in Learning in Machines & Brains. The appointment begins April 1, 2022.
CIFAR is a global research organization that convenes extraordinary minds to address the most important questions facing science and humanity. Founded in 1982, CIFAR now includes over 400 interdisciplinary fellows and scholars representing over 130 institutions and 22 countries, and supports research at all levels of development in areas ranging from artificial intelligence and child and brain development to astrophysics and quantum computing.

The program in Learning in Machines & Brains brings together international scientists to examine “how artificial neural networks could be inspired by the human brain, and developing the powerful technique of deep learning.” Scientists, industry experts, and policymakers in the program are working to understand the computational and mathematical principles behind learning, whether in brains or in machines, in order to understand human intelligence and improve the engineering of machine learning.

As Co-Director, Kording will oversee the collective intellectual development of the LMB program, which includes over 30 Fellows, Advisors, and Global Scholars. The program is also co-directed by Yoshua Bengio, Canada CIFAR AI Chair and Professor in Computer Science and Operations Research at Université de Montréal.
Kording, a Penn Integrates Knowledge (PIK) Professor, was previously named an associate fellow of CIFAR in 2017. Kording’s groundbreaking interdisciplinary research uses data science to advance a broad range of topics that include understanding brain function, improving personalized medicine, collaborating with clinicians to diagnose diseases based on mobile phone data and even understanding the careers of professors. Across many areas of biomedical research, his group analyzes large datasets to test new models and thus get closer to an understanding of complex problems in bioengineering, neuroscience and beyond.
Carl H. June, the Richard W. Vague Professor in Immunotherapy in Pathology and Laboratory Medicine at Penn Medicine, director of the Center for Cellular Immunotherapies and the Parker Institute for Cancer Immunotherapy, and member of the Penn Bioengineering Graduate Group at the University of Pennsylvania, has led a new analytical study published in Nature that examines the longest persistence of CAR T cell therapy recorded to date against chronic lymphocytic leukemia (CLL). The study shows that the CAR T cells remained detectable at least a decade after infusion, with sustained remission in both patients. June’s pioneering work in gene therapy led to FDA approval of the CAR T therapy (sold by Novartis as Kymriah) for treating leukemia, transforming the fight against cancer. His lab develops new forms of T cell-based therapies.
The human brain uses more energy than any other organ in the body, requiring as much as 20% of the body’s total energy. While this may sound like a lot, the amount of energy would be even higher if the brain were not equipped with an efficient way to represent only the most essential information within the vast, constant stream of stimuli taken in by the five senses. The hypothesis for how this works, known as efficient coding, was first proposed in the 1960s by vision scientist Horace Barlow.
Now, new research from the Scuola Internazionale Superiore di Studi Avanzati (SISSA) and the University of Pennsylvania provides evidence of efficient visual information coding in the rodent brain, adding support to this theory and its role in sensory perception. Published in eLife, these results also pave the way for experiments that can help understand how the brain works and can aid in developing novel artificial intelligence (AI) systems based on similar principles.
According to information theory—the study of how information is quantified, stored, and communicated—an efficient sensory system should only allocate resources to how it represents, or encodes, the features of the environment that are the most informative. For visual information, this means encoding only the most useful features that our eyes detect while surveying the world around us.
Vijay Balasubramanian, a computational neuroscientist at Penn, has been working on this topic for the past decade. “We analyzed thousands of images of natural landscapes by transforming them into binary images, made up of black and white pixels, and decomposing them into different textures defined by specific statistics,” he says. “We noticed that different kinds of textures have different variability in nature, and human subjects are better at recognizing those which vary the most. It is as if our brains assign resources where they are most necessary.”
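The texture analysis Balasubramanian describes can be sketched in a few lines. The following is a minimal, illustrative version only: the synthetic image, binarization threshold, and choice of 2x2 block patterns as the texture statistics are assumptions for the sketch, not the study’s actual pipeline.

```python
import random

def binarize(image, threshold=0.5):
    """Convert a grayscale image (rows of floats in [0, 1])
    into black (0) / white (1) pixels."""
    return [[1 if px > threshold else 0 for px in row] for row in image]

def block_pattern_counts(binary):
    """Count the 16 possible 2x2 black/white patterns, a simple
    family of texture statistics over a binarized image."""
    counts = {}
    for i in range(len(binary) - 1):
        for j in range(len(binary[0]) - 1):
            pattern = (binary[i][j], binary[i][j + 1],
                       binary[i + 1][j], binary[i + 1][j + 1])
            counts[pattern] = counts.get(pattern, 0) + 1
    return counts

# Stand-in for a natural image; the study used thousands of photographs.
random.seed(0)
image = [[random.random() for _ in range(32)] for _ in range(32)]
counts = block_pattern_counts(binarize(image))
total = sum(counts.values())
# Relative frequency of each texture; the efficient-coding hypothesis
# predicts perceptual resources go to the statistics that vary most.
freqs = {p: c / total for p, c in counts.items()}
```

Comparing these frequency distributions across many natural images gives the variability of each texture statistic, which is what the study related to human sensitivity.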
Jennifer E. Phillips-Cremins, Associate Professor and Dean’s Faculty Fellow in Bioengineering and Genetics, has been awarded the 2022 Dr. Susan Lim Award for Outstanding Young Investigator by the International Society for Stem Cell Research (ISSCR), the preeminent global organization dedicated to stem cell research.
This award recognizes the exceptional achievements of an investigator in the early part of his or her independent career in stem cell research. Cremins works in the field of epigenetics, and is a pioneer in understanding how chromatin, the substance within a chromosome, works:
“Dr. Phillips-Cremins is a gifted researcher with diverse skills across cell, molecular, and computational biology. She is a shining star in the stem cell field who has already made landmark contributions in bringing long-range chromatin folding mechanisms to stem cell research,” said ISSCR President Melissa Little, Ph.D. “In addition to her skills as an outstanding researcher, she has flourished as an independent investigator, providing the stem cell field with unique and creative approaches that have facilitated conceptual leaps in our understanding of long-range spatial regulation of stem cell fate. Congratulations, Jennifer, on this prestigious honor.”
Cremins was awarded an NIH Director’s Pioneer Award in 2021 and a Chan Zuckerberg Initiative (CZI) grant as part of the CZI Collaborative Pairs Pilot Project in 2020. The long-term goal of her lab is to understand the mechanisms by which chromatin architecture governs genome function. The ISSCR will recognize Cremins and her research in a plenary session during the ISSCR annual meeting on June 15.
From COVID vaccines to cancer immunotherapies to the potential for correcting developmental disorders in utero, mRNA-based approaches are a promising tool in the fight against a wide range of diseases. These treatments all depend on providing a patient’s cells with genetic instructions for custom proteins and other small molecules, meaning that getting those instructions inside the target cells is of critical importance.
The current delivery method of choice uses lipid nanoparticles (LNPs). Thanks to surfaces customized with binding and signaling molecules, they encapsulate mRNA sequences and smuggle them through the cell membrane. But with a practically unlimited number of variables in the makeup of those surfaces and molecules, figuring out how to design the most effective LNP is a fundamental challenge.
Now, in a study featured on the cover of the journal Nano Letters, researchers from the University of Pennsylvania’s School of Engineering and Applied Science and Perelman School of Medicine have shown how to computationally optimize the design of these delivery vehicles.
Using an established methodology for comparing a wide range of variables known as “orthogonal design of experiments,” the researchers simultaneously tested 256 candidate LNPs. They found the frontrunner was three times better at delivering mRNA sequences into T cells than the current standard LNP formulation for mRNA delivery.
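To illustrate the balance property that makes orthogonal designs efficient, here is a minimal sketch using the smallest standard orthogonal array, L4 (three two-level factors in four runs). The factor names and levels are hypothetical placeholders, not the formulation variables from the study.

```python
from itertools import combinations

# A standard L4 (2^3) orthogonal array: four runs covering three
# two-level factors so that every pair of factors is tested at each
# level combination equally often.
L4 = [
    (0, 0, 0),
    (0, 1, 1),
    (1, 0, 1),
    (1, 1, 0),
]

# Hypothetical LNP formulation factors (names are illustrative only).
factors = {
    "ionizable_lipid": ["lipid_A", "lipid_B"],
    "peg_content":     ["low", "high"],
    "helper_lipid":    ["DOPE", "DSPC"],
}

names = list(factors)
runs = [{name: factors[name][levels[k]] for k, name in enumerate(names)}
        for levels in L4]

def is_orthogonal(array):
    """Check the balance property: every pair of columns contains
    each level combination the same number of times."""
    for a, b in combinations(range(len(array[0])), 2):
        pairs = [(row[a], row[b]) for row in array]
        counts = {p: pairs.count(p) for p in set(pairs)}
        if len(set(counts.values())) != 1:
            return False
    return True
```

Because the columns are balanced, each factor’s effect can be estimated independently from far fewer runs than a full factorial, which is how 256 candidate LNPs can be screened simultaneously rather than varying one ingredient at a time.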
The study was led by Michael Mitchell, Skirkanich Assistant Professor of Innovation in the Department of Bioengineering in Penn’s School of Engineering and Applied Science, and Margaret Billingsley, a graduate student in his lab.
The rise of drug-resistant bacterial infections is one of the world’s most severe global health issues, estimated to cause 10 million deaths annually by the year 2050. Some of the most virulent and antibiotic-resistant bacterial pathogens are the leading cause of life-threatening, hospital-acquired infections, which are particularly dangerous for immunocompromised and critically ill patients. Traditional, continual synthesis of new antibiotics simply cannot keep up with bacterial evolution.
To avoid the continual process of synthesizing new antibiotics to target bacteria as they evolve, Penn Engineers have looked at a new, natural resource for antibiotic molecules.
A recent study searching the human proteome for encrypted peptides with antimicrobial properties has located naturally occurring antibiotics within our own bodies. Using an algorithm to pinpoint specific sequences in our protein code, a team of Penn researchers and collaborators, led by César de la Fuente, Presidential Assistant Professor in Psychiatry, Bioengineering, Microbiology, and Chemical and Biomolecular Engineering, and Marcelo Torres, a postdoc in de la Fuente’s lab, located novel peptides, or amino acid chains, that, when cleaved, showed potential to fend off harmful bacteria.
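The core idea of mining a proteome for encrypted peptides can be sketched as a sliding-window scan. This is a simplified stand-in, not the published algorithm: the scoring features (net charge and hydrophobic fraction), thresholds, and the toy sequence are all assumptions for illustration.

```python
# Crude physicochemical features often associated with
# antimicrobial activity (illustrative values).
CHARGE = {"K": 1, "R": 1, "H": 0.5, "D": -1, "E": -1}
HYDROPHOBIC = set("AILMFWVY")

def score_peptide(peptide):
    """Return (net charge, hydrophobic fraction) for a peptide."""
    net_charge = sum(CHARGE.get(aa, 0) for aa in peptide)
    hydro_frac = sum(aa in HYDROPHOBIC for aa in peptide) / len(peptide)
    return net_charge, hydro_frac

def scan(protein, window=12, min_charge=2, min_hydro=0.3):
    """Slide a window across a protein sequence and keep candidate
    peptides whose charge and hydrophobicity clear simple thresholds."""
    hits = []
    for i in range(len(protein) - window + 1):
        pep = protein[i:i + window]
        charge, hydro = score_peptide(pep)
        if charge >= min_charge and hydro >= min_hydro:
            hits.append((i, pep, charge, hydro))
    return hits

# Toy sequence; real inputs would be proteins from the human proteome.
hits = scan("MKKRLAVILAAVLLVDEKRKQGWFAILKK")
```

A real pipeline would scan every protein in the proteome and validate top-scoring candidates experimentally, as the team did in mouse models.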
Now, in a new study published in ACS Nano, the team, along with Angela Cesaro, the lead author and a postdoc in de la Fuente’s lab, has identified three distinct antimicrobial peptides derived from a protein in human plasma and demonstrated their abilities in mouse models. Cesaro performed much of this work during her Ph.D. under the supervision of corresponding author Angela Arciello, Professor at the University of Naples Federico II. The collaborative study also includes Utrecht University in the Netherlands.
“We identified the cardiovascular system as a hot spot for potential antimicrobials using an algorithmic approach,” says de la Fuente. “Then we looked closer at a specific protein in the plasma.”
Our bodies are equipped with specialized white blood cells that protect us from foreign invaders, such as viruses and bacteria. These T cells identify threats using antigen receptors, proteins expressed on the surface of individual T cells that recognize specific amino acid sequences found in or on those invaders. Once a T cell’s antigen receptors bind to the corresponding antigen, it can directly kill infected cells or call for backup from the rest of the immune system.
We have hundreds of billions of T cells, each with unique receptors that recognize unique antigens, so profiling this T cell antigen specificity is essential in our understanding of the immune response. It is especially critical in developing targeted immunotherapies, which equip T cells with custom antigen receptors that recognize threats they would otherwise miss, such as the body’s own mutated cancer cells.
Jenny Jiang, Peter and Geri Skirkanich Associate Professor of Innovation in Bioengineering, along with lab members and colleagues at the University of Texas at Austin, recently published a study in Nature Immunology describing their technology, which simultaneously provides information on four dimensions of T cell profiling. Ke-Yue Ma, a former postdoc, and Yu-Wan Guo, a current graduate student in Jiang’s Penn Engineering lab, also contributed to this study.
This technology, called TetTCR-SeqHD, is the first to provide such detailed information about single T cells in a high-throughput manner, opening doors for personalized immune diagnostics and immunotherapy development.
There are many pieces of information needed to comprehensively understand the immune response of T cells, and gathering all of these measurements simultaneously has been a challenge in the field. Comprehensive profiling of T cells includes sequencing the antigen receptors, understanding how specific those receptors are in their recognition of invading antigens, and understanding T cell gene and protein expression. Current technologies only screen for one or two of these dimensions due to various constraints.
“Current technologies that measure T cell immune response all have limitations,” says Jiang. “Those that use cultured or engineered T cells cannot tell us about their original phenotype, because once you take a cell out of the body to culture, its gene and protein expression will change. The technologies that address T cell and antigen sequencing with mass spectrometry damage genetic information of the sample. And current technologies that do provide information on antigen specificity use a very expensive binding ligand that can cost more than a thousand dollars per antigen, so it is not feasible if we want to look at hundreds of antigens. There is clearly room for advancement here.”
The TetTCR-SeqHD technology combines Jiang’s previously developed T cell receptor sequencing tool, TetTCR-Seq, described in a Nature Biotechnology paper published in 2018, with the new ability of characterizing both gene and protein expression.
The Society for Biomaterials is a multidisciplinary society of academic, healthcare, governmental and business professionals dedicated to promoting advancements in all aspects of biomaterial science, education and professional standards to enhance human health and quality of life.
Mitchell, whose research lies at the interface of biomaterials science, drug delivery, and cellular and molecular bioengineering to fundamentally understand and therapeutically target biological barriers, is specifically being recognized for his development of the first nanoparticle RNAi therapy to treat multiple myeloma, an incurable hematologic cancer that colonizes the bone marrow.
“Before this, no one in the drug delivery field has developed an effective gene delivery system to target bone marrow,” said United States National Medal of Science recipient Robert S. Langer in Mitchell’s award citation. “Mike is a standout young investigator and leader that intimately understands the importance of research and collaboration at the interface of nanotechnology and medicine.”
Academic recipients of the SFB Young Investigator Award should not exceed the rank of Assistant Professor and must not be tenured at the time of nomination. The award includes a $1,000 endowment.
Most organisms have proteins that react to light. Even creatures that don’t have eyes or other visual organs use these proteins to regulate many cellular processes, such as transcription, translation, cell growth and cell survival.
The field of optogenetics relies on such proteins to better understand and manipulate these processes. Using lasers and genetically engineered versions of these naturally occurring proteins, known as probes, researchers can precisely activate and deactivate a variety of cellular pathways, just like flipping a switch.
Now, Penn Engineering researchers have described a new type of optogenetic protein that can be controlled not only by light, but also by temperature, allowing for a higher degree of control in the manipulation of cellular pathways. The research will open new horizons for both basic science and translational research.
Lukasz Bugaj, Assistant Professor in Bioengineering (BE), Bomyi Lim, Assistant Professor in Chemical and Biomolecular Engineering, Brian Chow, Associate Professor in BE, and graduate students William Benman in Bugaj’s lab, Hao Deng in Lim’s lab, and Erin Berlew and Ivan Kuznetsov in Chow’s lab, published their study in Nature Chemical Biology. Arndt Siekmann, Associate Professor of Cell and Developmental Biology at the Perelman School of Medicine, and Caitlyn Parker, a research technician in his lab, also contributed to this research.
The team’s original aim was to develop a single-component probe that would be able to manipulate specific cellular pathways more efficiently. The model for their probe was a protein called BcLOV4, and through further investigation of this protein’s function, they made a fortuitous discovery: that the protein is controlled by both light and temperature.
More data is being produced across diverse fields within science, engineering, and medicine than ever before, and our ability to collect, store, and manipulate it grows by the day. With scientists of all stripes reaping the raw materials of the digital age, there is an increasing focus on developing better strategies and techniques for refining this data into knowledge, and that knowledge into action.
Enter data science, where researchers try to sift through and combine this information to understand relevant phenomena, build or augment models, and make predictions.
One powerful technique in data science’s armamentarium is machine learning, a type of artificial intelligence that enables computers to automatically generate insights from data without being explicitly programmed as to which correlations they should attempt to draw.
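A minimal illustration of “generating insights from data without being explicitly programmed” is fitting a line from examples: the program never encodes the rule, it estimates the rule from data. The data below are synthetic and purely illustrative.

```python
def fit_line(xs, ys):
    """Learn slope and intercept from examples by ordinary least
    squares -- the relationship is estimated from data rather than
    hand-coded."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    return slope, mean_y - slope * mean_x

# Noisy examples generated from y = 2x + 1; the program is never
# told those coefficients, yet recovers them approximately.
xs = [0, 1, 2, 3, 4, 5]
ys = [1.1, 2.9, 5.2, 6.8, 9.1, 10.9]
slope, intercept = fit_line(xs, ys)
```

Modern machine learning scales this same idea, estimating models with millions of parameters from correspondingly larger datasets.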
Advances in computational power, storage, and sharing have enabled machine learning to be more easily and widely applied, but new tools for collecting reams of data from massive, messy, and complex systems—from electron microscopes to smart watches—are what have allowed it to turn entire fields on their heads.
“This is where data science comes in,” says Susan Davidson, Weiss Professor in Computer and Information Science (CIS) at Penn’s School of Engineering and Applied Science. “In contrast to fields where we have well-defined models, like in physics, where we have Newton’s laws and the theory of relativity, the goal of data science is to make predictions where we don’t have good models: a data-first approach using machine learning rather than using simulation.”
Penn Engineering’s formal data science efforts include the establishment of the Warren Center for Network & Data Sciences, which brings together researchers from across Penn with the goal of fostering research and innovation in interconnected social, economic and technological systems. Other research communities, including Penn Research in Machine Learning and the student-run Penn Data Science Group, bridge the gap between schools, as well as between industry and academia. Programmatic opportunities for Penn students include a Data Science minor for undergraduates, and a Master of Science in Engineering in Data Science, which is directed by Davidson and jointly administered by CIS and Electrical and Systems Engineering.
Penn academic programs and researchers on the leading edge of the data science field will soon have a new place to call home: Amy Gutmann Hall. The 116,000-square-foot, six-floor building, located on the northeast corner of 34th and Chestnut Streets near Lauder College House, will centralize resources for researchers and scholars across Penn’s 12 schools and numerous academic centers while making the tools of data analysis more accessible to the entire Penn community.
Faculty from all six departments in Penn Engineering are at the forefront of developing innovative data science solutions, primarily relying on machine learning, to tackle a wide range of challenges. Researchers show how they use data science in their work to answer fundamental questions in topics as diverse as genetics, “information pollution,” medical imaging, nanoscale microscopy, materials design, and the spread of infectious diseases.
Bioengineering: Unraveling the 3D genomic code
Scattered throughout the genomes of healthy people are tens of thousands of repetitive DNA sequences called short tandem repeats (STRs). But the unstable expansion of these repetitions is at the root of dozens of inherited disorders, including Fragile X syndrome, Huntington’s disease, and ALS. Why some of these STRs are susceptible to disease-causing expansion while most remain relatively stable is a major conundrum.
Complicating this effort is the fact that disease-associated STR tracts exhibit tremendous diversity in sequence, length, and localization in the genome. Moreover, that localization has a three-dimensional element because of how the genome is folded within the nucleus. Mammalian genomes are organized into a hierarchy of structures called topologically associated domains (TADs). Each one spans millions of nucleotides and contains smaller subTADs, which are separated by linker regions called boundaries.
“The genetic code is made up of three billion base pairs. Stretched out end to end, it is 6 feet 5 inches long, and must be subsequently folded into a nucleus that is roughly the size of a head of a pin,” says Jennifer Phillips-Cremins, associate professor and dean’s faculty fellow in Bioengineering. “Genome folding is an exciting problem for engineers to study because it is a problem of big data. We not only need to look for patterns along the axis of three billion base pairs of letters, but also along the axis of how the letters are folded into higher-order structures.”
To address this challenge, Phillips-Cremins and her team recently developed a new mathematical approach called 3DNetMod to accurately detect these chromatin domains in 3D maps of the genome in collaboration with the lab of Dani Bassett, J. Peter Skirkanich Professor in Bioengineering.
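The core idea behind network-based domain detection can be sketched as follows. This is a hedged illustration, not the actual 3DNetMod implementation: treat a chromatin contact matrix as a weighted graph and score candidate domain partitions by Newman modularity, which is high when contacts concentrate within domains rather than between them. The toy matrix and partitions are assumptions for the sketch.

```python
def modularity(contacts, partition):
    """Newman modularity Q for a weighted adjacency matrix and a
    list assigning each genomic bin to a community (domain)."""
    m2 = sum(sum(row) for row in contacts)   # twice the total edge weight
    degree = [sum(row) for row in contacts]
    n = len(contacts)
    q = 0.0
    for i in range(n):
        for j in range(n):
            if partition[i] == partition[j]:
                q += contacts[i][j] - degree[i] * degree[j] / m2
    return q / m2

# Toy 6-bin contact matrix with two clear 3-bin domains on the diagonal.
contacts = [
    [0, 9, 9, 1, 0, 0],
    [9, 0, 9, 0, 1, 0],
    [9, 9, 0, 0, 0, 1],
    [1, 0, 0, 0, 9, 9],
    [0, 1, 0, 9, 0, 9],
    [0, 0, 1, 9, 9, 0],
]
good = modularity(contacts, [0, 0, 0, 1, 1, 1])  # matches the blocks
bad  = modularity(contacts, [0, 1, 0, 1, 0, 1])  # ignores the blocks
```

A domain-detection method searches over partitions for high-modularity ones; applied genome-wide at multiple scales, that search yields the nested TAD and subTAD structures described above.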
“In our group, we use an integrated, interdisciplinary approach relying on cutting-edge computational and molecular technologies to uncover biologically meaningful patterns in large data sets,” Phillips-Cremins says. “Our approach has enabled us to find patterns in data that classic biology training might overlook.”
In a recent study, Phillips-Cremins and her team used 3DNetMod to identify tens of thousands of subTADs in human brain tissue. They found that nearly all disease-associated STRs are located at boundaries demarcating 3D chromatin domains. Additional analyses of cells and brain tissue from patients with Fragile X syndrome revealed severe boundary disruption at a specific disease-associated STR.
“To our knowledge, these findings represent the first report of a possible link between STR instability and the mammalian genome’s 3D folding patterns,” Phillips-Cremins says. “The knowledge gained may shed new light into how genome structure governs function across development and during the onset and progression of disease. Ultimately, this information could be used to create molecular tools to engineer the 3D genome to control repeat instability.”