Could the Age of the Universe Be Twice as Old as Current Estimates Suggest?

by Nathi Magubane

NASA’s James Webb Space Telescope has produced the deepest and sharpest infrared image of the distant universe to date. Known as Webb’s First Deep Field, this image of galaxy cluster SMACS 0723 is rich with detail. Thousands of galaxies—including the faintest objects ever observed in the infrared—have appeared in Webb’s view for the first time. The image shows the galaxy cluster SMACS 0723 as it appeared 4.6 billion years ago. The combined mass of this galaxy cluster acts as a gravitational lens, magnifying much more distant galaxies behind it. Webb’s Near-Infrared Camera (NIRCam) has brought those distant galaxies into sharp focus, revealing tiny, faint structures that have never been seen before, including star clusters and diffuse features. (Image: NASA, ESA, CSA, and STScI)

Could the universe be twice as old as current estimates suggest? Rajendra Gupta of the University of Ottawa recently published a paper suggesting just that. Gupta claims the universe may be around 26.7 billion years old rather than the commonly accepted 13.8 billion. The news has generated many headlines as well as criticism from astronomers and the larger scientific community.

Penn Today met with professors Vijay Balasubramanian and Mark Devlin to discuss Gupta’s findings and better understand the rationale of these claims and how they fit in the broader context of problems astronomers are attempting to solve.

How do we know how old the universe actually is?

Balasubramanian: The universe is often reported to be 13.8 billion years old, but, truth be told, this is an amalgamation of various measurements that factor in different kinds of data involving the apparent ages of ‘stuff’ in the universe.

This stuff includes observable or ordinary matter like you, me, galaxies far and near, stars, radiation, and the planets, then dark matter—the sort of matter that doesn’t interact with light and which makes up about 27% of the universe—and finally, dark energy, which makes up a massive chunk of the universe, around 68%, and is what we believe is causing the universe to expand.

And so, we take as much information as we can about the stuff and build what we call a consensus model of the universe, essentially a line of best fit. We call this the Lambda Cold Dark Matter (ΛCDM) model.

Lambda represents the cosmological constant, which is linked to dark energy, namely how it drives the expansion of the universe according to Einstein’s theory of general relativity. In this framework, how matter and energy behave in the universe determines the geometry of spacetime, which in turn influences how matter and energy move throughout the cosmos. Including this cosmological constant, Lambda, allows for an explanation of a universe that expands at an accelerating rate, which is consistent with our observations.

Now, the Cold Dark Matter part represents a hypothetical form of dark matter. ‘Dark’ here means that it neither interacts with nor emits light, so it’s very hard to detect. ‘Cold’ refers to the fact that its particles move slowly relative to the speed of light: when things cool down, their components move less, whereas when they heat up, the components get excited and move around more.

So, when you consider the early formation of the universe, this ‘slowness’ influences the formation of structures in the universe like galaxies and clusters of galaxies, in that smaller structures like galaxies form before larger ones like clusters.

Devlin: And then taking a step back, the way cosmology works and pieces together how old things are is that we look at the way the universe looks today, how all the structures are arranged within it, and we compare it to how it used to be using observations like the Cosmic Microwave Background (CMB) radiation, which is the afterglow of the Big Bang and the oldest known source of electromagnetic radiation, or light. We also refer to it as the baby picture of the universe because it offers us a glimpse of what it looked like at 380,000 years old, long before stars and galaxies were formed.

And what we know about the physical nature of the universe from the CMB is that it was something really smooth, dense, and hot. And as it continued to expand and cool, the density started to vary, and these variations became the seeds for the formation of cosmic structures.
The denser regions of the universe began to collapse under their own gravity, forming the first stars, galaxies, and clusters of galaxies. So, this is why, when we look at the universe today, we see this massive cosmic web of galaxies and clusters separated by vast voids. This process of structure formation is still ongoing.

And, so, the ΛCDM model suggests that the primary driver of this structure formation was dark matter, which exerts gravity and which began to clump together soon after the Big Bang. These clumps of dark matter attracted the ordinary matter, forming the seeds of galaxies and larger cosmic structures.

So, with models like the ΛCDM and the knowledge of how fast light travels, we can add bits of information, or parameters, that we have from things like the CMB and other sources of light in our universe, like the ones we get from other distant galaxies, and we see this roadmap for the universe that gives us its likely age, which we think is somewhere in the ballpark of 13.8 billion years.
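To make that ballpark arithmetic concrete, here is a minimal sketch of how an age estimate falls out of the ΛCDM expansion history: integrate the Friedmann equation for a flat universe back to the Big Bang. The Hubble constant and density fractions below are illustrative, Planck-like round numbers, not the exact values from any particular survey.

```python
import numpy as np
from scipy.integrate import quad

# Illustrative, Planck-like LCDM parameters (assumed round values)
H0 = 67.7          # Hubble constant, km/s/Mpc
Omega_m = 0.31     # matter (ordinary + dark), fraction of total energy density
Omega_L = 0.69     # dark energy (Lambda), fraction of total energy density

# Convert H0 from km/s/Mpc to 1/Gyr
km_per_Mpc = 3.0857e19
s_per_Gyr = 3.156e16
H0_inv_Gyr = H0 / km_per_Mpc * s_per_Gyr

# Age of the universe: t0 = integral over the scale factor a of da / (a * H(a)),
# with H(a) = H0 * sqrt(Omega_m / a^3 + Omega_L) for a flat universe (radiation neglected)
integrand = lambda a: 1.0 / (a * np.sqrt(Omega_m / a**3 + Omega_L))
dimensionless_age, _ = quad(integrand, 1e-8, 1.0)

print(f"Age of a flat LCDM universe: {dimensionless_age / H0_inv_Gyr:.1f} billion years")  # ~13.8
```

With these inputs the integral lands near 13.8 billion years; changing the assumed expansion history changes the answer, which is exactly where proposals like Gupta’s become contentious.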

Read the full Q&A in Penn Today.

Vijay Balasubramanian is the Cathy and Marc Lasry Professor in the Department of Physics and Astronomy in the School of Arts & Sciences at the University of Pennsylvania. He is a member of the Penn Bioengineering Graduate Group.

Mark Devlin is the Reese W. Flower Professor of Astronomy and Astrophysics in the Department of Physics and Astronomy in the School of Arts & Sciences at Penn.

The Big Bang at 75

by Kristina García

A child stops by an image of the cosmic microwave background (CMB) at the Shanghai Astronomy Museum in Shanghai, China on July 18, 2021. The planetarium, with a total floor space of 38,000 square meters and claimed to be the world’s largest, opened to visitors that day. (Image: FeatureChina via AP Images)

There was a time before time when the universe was tiny, dense, and hot. In this world, time didn’t even exist. Space didn’t exist. That’s what current theories about the Big Bang posit, says Vijay Balasubramanian, the Cathy and Marc Lasry Professor of Physics. But what does this mean? What did the beginning of the universe look like? “I don’t know, maybe there was a timeless, spaceless soup,” Balasubramanian says. When we try to describe the beginning of everything, “our words fail us,” he says.

Yet, for thousands of years, humans have been trying to do just that. One attempt came 75 years ago from physicists George Gamow and Ralph Alpher. In a paper published on April 1, 1948, Alpher and Gamow imagined a universe that starts in a hot, dense state and cools as it expands. After some time, they argued, there should have been a gas of neutrons, protons, electrons, and neutrinos reacting with each other and congealing into atomic nuclei as the universe aged and cooled. As the universe changed, so did the rates of decay and the ratios of protons to neutrons. Alpher and Gamow were able to mathematically calculate how this process might have occurred.

Now known as the alpha-beta-gamma theory, the paper predicted the surprisingly large fraction of helium and hydrogen in the universe. (By weight, hydrogen comprises 74% of nuclear matter, helium 24%, and heavier elements less than 1%.)
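The size of that helium fraction can be recovered with a textbook back-of-the-envelope argument (a simplification, not Alpher and Gamow’s actual calculation): if essentially every surviving neutron ends up bound in helium-4, the neutron-to-proton ratio at the start of nucleosynthesis fixes the helium mass fraction.

```python
# Back-of-the-envelope estimate of the primordial helium mass fraction.
# Assumption (textbook simplification): every remaining neutron is bound into helium-4,
# so Y_He = 2 * (n/p) / (1 + n/p), where n/p is the neutron-to-proton ratio.
n_over_p = 1 / 7      # approximate ratio when nucleosynthesis begins
Y_helium = 2 * n_over_p / (1 + n_over_p)
print(f"Estimated helium mass fraction: {Y_helium:.2f}")   # ~0.25, close to the observed ~24%
```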

The findings of Gamow and Alpher hold up today, Balasubramanian says, part of an increasingly complex picture of matter, time and space. Penn Today spoke with Balasubramanian about the paper, the Big Bang, and the origin of the universe.

Read the full Q&A in Penn Today.

Balasubramanian is Cathy and Marc Lasry Professor in the Department of Physics and Astronomy in the Penn School of Arts and Sciences and a member of the Penn Bioengineering Graduate Group.

How Bacteria Store Information to Kill Viruses (But Not Themselves)

by Luis Melecio-Zambrano

A group of bacteriophages, viruses that infect bacteria, imaged using transmission electron microscopy. New research sheds light on how bacteria fight off these invaders without triggering an autoimmune response. (Image: ZEISS Microscopy, CC BY-NC-ND 2.0)

During the last few years, CRISPR has grabbed headlines for helping treat patients with conditions as varied as blindness and sickle cell disease. However, long before humans co-opted CRISPR to fight genetic disorders, bacteria were using CRISPR as an immune system to fight off viruses.

In bacteria, CRISPR (Clustered Regularly Interspaced Short Palindromic Repeats) works by stealing small pieces of DNA from infecting viruses and storing those chunks in the genes of the bacteria. These chunks of DNA, called spacers, are then copied to form little tags, which attach to proteins that float around until they find a matching piece of DNA. When they find a match, they recognize it as a virus and cut it up.

Now, a paper published in Current Biology by researchers from the University of Pennsylvania Department of Physics and Astronomy shows that the risk of autoimmunity plays a key role in shaping how CRISPR stores viral information, guiding how many spacers bacteria keep in their genes, and how long those spacers are.

Ideally, spacers should only match DNA belonging to the virus, but there is a small statistical chance that the spacer matches another chunk of DNA in the bacteria itself. That could spell death from an autoimmune response.

“The adaptive immune system in vertebrates can produce autoimmune disorders. They’re very serious and dangerous, but people hadn’t really considered that carefully for bacteria,” says Vijay Balasubramanian, principal investigator for the paper and the Cathy and Marc Lasry Professor of Physics in the School of Arts & Sciences.

Balancing this risk can put the bacteria in something of an evolutionary bind. Having more spacers means they can store more information and fend off more types of viruses, but it also increases the likelihood that one of the spacers might match the DNA in the bacteria and trigger an autoimmune response.
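A toy calculation (not the model in the Current Biology paper) gives a feel for that statistical risk: assuming a random genome sequence and exact matching, the chance that at least one of N spacers of a given length happens to match the bacterium’s own DNA grows with N and shrinks rapidly with spacer length.

```python
# Toy estimate of autoimmune risk from self-matching spacers.
# Assumptions (illustrative only): random genome sequence, exact matches, and an
# E. coli-sized genome of ~4.6 million base pairs.
def autoimmune_risk(num_spacers, spacer_len, genome_len=4.6e6):
    p_single = genome_len / 4 ** spacer_len          # chance one spacer matches the host genome
    return 1 - (1 - p_single) ** num_spacers         # chance that any spacer does

for n in (10, 100, 1000):
    print(f"{n:5d} spacers of length 20: risk ~ {autoimmune_risk(n, spacer_len=20):.1e}")
```

Longer spacers make a chance self-match exponentially less likely, while every additional spacer adds to the cumulative risk, which is the evolutionary bind described above.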

Read the full story in Penn Today.

Vijay Balasubramanian is the Cathy and Marc Lasry Professor of Physics at the Department of Physics and Astronomy of the University of Pennsylvania, a visiting professor at Vrije Universiteit Brussel, and a member of the Penn Bioengineering Graduate Group.

Vijay Balasubramanian Discusses Theoretical Physics in Quanta Magazine

Cathy and Marc Lasry Professor Vijay Balasubramanian at Penn’s BioPond.

In an interview with Quanta Magazine, Vijay Balasubramanian discusses his work as a theoretical physicist, noting his study of the foundations of physics and the fundamentals of space and time. He speaks of the importance of interdisciplinary study and about how literature and the humanities can contextualize scientific exploration in the study of physics, computer science, and neuroscience.

Balasubramanian is Cathy and Marc Lasry Professor in the Department of Physics and Astronomy in the Penn School of Arts and Sciences and a member of the Penn Bioengineering Graduate Group.

Read “Pondering the Bits That Build Space-Time and Brains” in Quanta Magazine.

Understanding Optimal Resource Allocation in the Brain

by Erica K. Brockmeier

A processed image representative of the types of images used in this study. Natural landscapes were transformed into binary images, made up of black and white pixels, which were then decomposed into different textures defined by specific statistics. (Image: Eugenio Piasini)

The human brain uses more energy than any other organ in the body, requiring as much as 20% of the body’s total energy. While this may sound like a lot, the amount of energy would be even higher if the brain were not equipped with an efficient way to represent only the most essential information within the vast, constant stream of stimuli taken in by the five senses. The hypothesis for how this works, known as efficient coding, was first proposed in the 1960s by vision scientist Horace Barlow.

Now, new research from the Scuola Internazionale Superiore di Studi Avanzati (SISSA) and the University of Pennsylvania provides evidence of efficient visual information coding in the rodent brain, adding support to this theory and its role in sensory perception. Published in eLife, these results also pave the way for experiments that can help understand how the brain works and can aid in developing novel artificial intelligence (AI) systems based on similar principles.

According to information theory—the study of how information is quantified, stored, and communicated—an efficient sensory system should only allocate resources to how it represents, or encodes, the features of the environment that are the most informative. For visual information, this means encoding only the most useful features that our eyes detect while surveying the world around us.

Vijay Balasubramanian, a computational neuroscientist at Penn, has been working on this topic for the past decade. “We analyzed thousands of images of natural landscapes by transforming them into binary images, made up of black and white pixels, and decomposing them into different textures defined by specific statistics,” he says. “We noticed that different kinds of textures have different variability in nature, and human subjects are better at recognizing those which vary the most. It is as if our brains assign resources where they are most necessary.”
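As a rough illustration of what “textures defined by specific statistics” means (a minimal sketch, not the study’s actual pipeline), the snippet below binarizes an image patch and computes a few simple pixel-pair statistics; the random patch, threshold, and statistic names are assumptions made only for the example.

```python
import numpy as np

# Minimal sketch: binarize a patch and compute simple pair statistics of the kind
# used to characterize binary textures. The random patch stands in for a natural image.
rng = np.random.default_rng(0)
patch = rng.random((64, 64))                            # stand-in for an image patch
binary = np.where(patch > np.median(patch), 1, -1)      # black/white pixels as -1/+1

stats = {
    "mean luminance":        binary.mean(),
    "horizontal pair corr.": (binary[:, :-1] * binary[:, 1:]).mean(),
    "vertical pair corr.":   (binary[:-1, :] * binary[1:, :]).mean(),
    "diagonal pair corr.":   (binary[:-1, :-1] * binary[1:, 1:]).mean(),
}
for name, value in stats.items():
    print(f"{name:22s} {value:+.3f}")
```

In the study, statistics like these are computed over many natural-image patches; the ones that vary most across patches are the ones human observers turn out to be most sensitive to.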

Read the full story in Penn Today.

Vijay Balasubramanian is the Cathy and Marc Lasry Professor in the Department of Physics and Astronomy in the School of Arts & Sciences at the University of Pennsylvania. He is a member of the Penn Bioengineering Graduate Group.

A New Model for How the Brain Perceives Unique Odors

by Erica K. Brockmeier

Cathy and Marc Lasry Professor Vijay Balasubramanian at Penn’s BioPond.

A study published in PLOS Computational Biology describes a new model for how the olfactory system discerns unique odors. Researchers from the University of Pennsylvania found that a simplified, statistics-based model can explain how individual odors can be perceived as more or less similar to others depending on the context. This model provides a starting point for generating new hypotheses and conducting experiments that can help researchers better understand the olfactory system, a complex, crucial part of the brain.

The sense of smell, while crucial for things like taste and hazard avoidance, is not as well studied as other senses. Study co-author Vijay Balasubramanian, a theoretical physicist with an interest in how living systems process information, says that olfaction is a prime example of a complex information-processing system found in nature, as there are far more types of volatile molecules—on the scale of tens or hundreds of thousands—than there are receptor types in the nose to detect them, on the scale of tens to hundreds depending on the species.

“Every molecule can bind to many receptors, and every receptor can bind to many molecules, so you get this combinatorial mishmash, with the nose encoding smells in a way that involves many receptor types to collectively tell you what a smell is,” says Balasubramanian. “And because there are many fewer receptor types than molecular species, you basically have to compress a very high dimensional olfactory space into a much lower dimensional space of neural responses.”
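A toy simulation (illustrative only, not the paper’s model) shows what that compression looks like: thousands of possible molecules, a few dozen receptor types, and each receptor binding a random subset of molecules, so that an odor is represented by the combined pattern of activity across the whole receptor array. All sizes and binding probabilities below are assumed for the example.

```python
import numpy as np

# Toy combinatorial code: 10,000 molecule types, 50 receptor types.
rng = np.random.default_rng(1)
n_molecules, n_receptors = 10_000, 50
binding = rng.random((n_receptors, n_molecules)) < 0.05   # each receptor binds ~5% of molecules

# An "odor": a handful of molecules present at random concentrations.
odor = np.zeros(n_molecules)
present = rng.choice(n_molecules, size=10, replace=False)
odor[present] = rng.random(10)

response = binding @ odor        # each receptor sums over the molecules it binds
print(f"{(response > 0).sum()} of {n_receptors} receptors respond")
print(f"An odor of 10 molecules is encoded as a {n_receptors}-dimensional response pattern")
```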

Read the full story in Penn Today.

Vijay Balasubramanian is the Cathy and Marc Lasry Professor in the Department of Physics & Astronomy in the School of Arts & Sciences at the University of Pennsylvania and a member of the Penn Bioengineering Graduate Group.

This research was supported by the Simons Foundation Mathematical Modeling of Living Systems (Grant 400425) and the Swartz Foundation.

Decoding How the Brain Accurately Depicts Ever-changing Visual Landscapes

A collaborative study finds that deeper regions of the brain encode visual information more slowly, enabling the brain to identify fast-moving objects and images more accurately and persistently.

by Erica K. Brockmeier

A busy pedestrian crossing in Hong Kong.

New research from the University of Pennsylvania, the Scuola Internazionale Superiore di Studi Avanzati (SISSA), and KU Leuven details the time scales of visual information processing across different regions of the brain. Using state-of-the-art experimental and analytical techniques, the researchers found that deeper regions of the brain encode visual information slowly and persistently, which provides a mechanism for explaining how the brain accurately identifies fast-moving objects and images. The findings were published in Nature Communications.

Understanding how the brain works is a major research challenge, with many theories and models developed to explain how complex information is processed and represented. One area of particular interest is vision, a major component of neural activity. In humans, for example, there is evidence that around half of the neurons in the cortex are related to vision.

Researchers are eager to understand how the visual cortex can process and retain information about objects in motion in a way that allows people to take in dynamic scenes while still retaining information about and recognizing the objects around them.

“One of the biggest challenges of all the sensory systems is to maintain a consistent representation of our surroundings, despite the constant changes taking place around us. The same holds true for the visual system,” says Davide Zoccolan, director of SISSA’s Visual Neuroscience Laboratory. “Just look around us: objects, animals, people, all on the move. We ourselves are moving. This triggers rapid fluctuations in the signals acquired by the retina, and until now it was unclear whether the same type of variations apply to the deeper layers of the visual cortex, where information is integrated and processed. If this was the case, we would live in tremendous confusion.”

Experiments using static stimuli, such as photographs, have found that information from the sensory periphery is processed in the visual cortex according to a finely tuned hierarchy. Deeper regions of the brain then translate this information about visual scenes into more complex shapes, objects, and concepts. But how this process works in more dynamic, real-world settings is not well understood.

To shed light on this, the researchers analyzed neural activity patterns in multiple visual cortical areas in rodents while they were being shown dynamic visual stimuli. “We used three distinct datasets: one from SISSA, one from a group at KU Leuven led by Hans Op de Beeck, and one from the Allen Institute for Brain Science in Seattle,” says Zoccolan. “The visual stimuli used in each were of different types. In SISSA, we created dedicated video clips showing objects moving at different speeds. The other datasets were acquired using various kinds of clips, including from films.”

Next, the researchers analyzed the signals registered in different areas of the visual cortex through a combination of sophisticated algorithms and models developed by Penn’s Eugenio Piasini and Vijay Balasubramanian. To do this, the researchers developed a theoretical framework to help connect the images in the movies to the activity of specific neurons in order to determine how neural signals evolve over different time scales.

“The art in this science was figuring out an analysis method to show that the processing of visual images is getting slower as you go deeper and deeper in the brain,” says Balasubramanian. “Different levels of the brain process information over different time scales; some things could be more stable, some quicker. It’s very hard to tell if the time scales across the brain are changing, so our contribution was to devise a method for doing this.”
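One simple way to quantify a processing timescale (a simplified illustration, not the authors’ actual analysis) is to measure how quickly a response’s autocorrelation decays. The synthetic signals and the 1/e criterion below are assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(2)

def toy_response(tau, n=5000, dt=0.01):
    """Synthetic 'neural response': an AR(1) process with correlation time tau (seconds)."""
    phi = np.exp(-dt / tau)
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal()
    return x, dt

def estimated_timescale(x, dt):
    """Lag (in seconds) at which the autocorrelation first drops below 1/e."""
    x = x - x.mean()
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]
    ac /= ac[0]
    return np.argmax(ac < 1 / np.e) * dt

for tau in (0.05, 0.2, 0.8):     # fast "shallow" areas vs. slow "deeper" areas
    x, dt = toy_response(tau)
    print(f"true timescale {tau:.2f} s -> estimated ~{estimated_timescale(x, dt):.2f} s")
```

Applied to real recordings rather than synthetic signals, a measure of this kind is what lets one ask whether responses grow slower and more persistent deeper in the visual hierarchy.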

Read the full story in Penn Today.

Vijay Balasubramanian is the Cathy and Marc Lasry Professor in the Department of Physics and Astronomy in the School of Arts & Sciences and a member of the Penn Bioengineering Graduate Group at the University of Pennsylvania.

The Optimal Immune Repertoire for Bacteria

by Erica K. Brockmeier

Transmission electron micrograph of multiple bacteriophages, viruses that infect bacteria, attached to a cell wall. New research describes how bacteria can optimize their “memory” of past viral infections in order to launch an effective immune response against a new invader. (Image: Graham Beards)

Before CRISPR became a household name as a tool for gene editing, researchers had been studying this unique family of DNA sequences and its role in the bacterial immune response to viruses. The region of the bacterial genome known as the CRISPR cassette contains pieces of viral genomes, a genomic “memory” of previous infections. But what was surprising to researchers is that rather than storing remnants of every single virus encountered, bacteria only keep a small portion of what they could hold within their relatively large genomes.

Work published in the Proceedings of the National Academy of Sciences provides a new physical model that explains this phenomenon as a tradeoff between how much memory bacteria can keep versus how efficiently they can respond to new viral infections. Conducted by researchers at the American Physical Society, Max Planck Institute, University of Pennsylvania, and University of Toronto, the study found an optimal size for a bacterium’s immune repertoire and provides fundamental theoretical insights into how CRISPR works.

In recent years, CRISPR has become the go-to biotechnology platform, with the potential to transform medicine and bioengineering. In bacteria, CRISPR is a heritable and adaptive immune system that allows cells to fight viral infections: As bacteria come into contact with viruses, they acquire chunks of viral DNA called spacers that are incorporated into the bacteria’s genome. When the bacteria are attacked by a new virus, spacers are copied from the genome and linked onto molecular machines known as Cas proteins. If the attached sequence matches that of the viral invader, the Cas proteins will destroy the virus.

Bacteria have a different type of immune system than vertebrates, explains senior author Vijay Balasubramanian, but studying bacteria is an opportunity for researchers to learn more about the fundamentals of adaptive immunity. “Bacteria are simpler, so if you want to understand the logic of immune systems, the way to do that would be in bacteria,” he says. “We may be able to understand the statistical principles of effective immunity within the broader question of how to organize an immune system.”
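The flavor of that tradeoff can be captured in a toy calculation (inspired by, but far simpler than, the PNAS model): more spacers cover more of the viruses a cell might meet, but each spacer is then loaded onto a smaller share of the cell’s Cas proteins, slowing recognition of any one invader. The functional forms and numbers below are assumptions chosen only to exhibit an interior optimum.

```python
import numpy as np

def survival_probability(n_spacers, n_virus_types=500.0, cas_complexes=100.0):
    coverage = 1 - np.exp(-n_spacers / n_virus_types)     # chance the invader is in memory
    clearance = 1 - np.exp(-cas_complexes / n_spacers)    # chance a matching complex finds it in time
    return coverage * clearance

n = np.arange(1, 2001)
best = n[np.argmax(survival_probability(n))]
print(f"Toy-model optimal repertoire size: ~{best} spacers")
```

Whatever the exact numbers, the qualitative conclusion matches the paper’s: the best repertoire is neither as small as possible nor as large as the genome could accommodate.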

Read more on Penn Today.

Vijay Balasubramanian is the Cathy and Marc Lasry Professor in the Department of Physics and Astronomy in the School of Arts & Sciences at the University of Pennsylvania and a member of the Penn Bioengineering Graduate Group.

This research was supported by the Simons Foundation (Grant 400425) and National Science Foundation Center for the Physics of Biological Function (Grant PHY-1734030).