The murder of George Floyd, an unarmed Black man who was killed by a White police officer, affected the mental well-being of many Americans. The effects were multifaceted: it was an act of police brutality and an example of systemic racism that occurred during the uncertainty of a global pandemic, creating an even more complex dynamic and emotional response.
Because poor mental health can lead to a myriad of additional ailments, including poor physical health, the inability to hold a job and an overall decrease in quality of life, it is important to understand how certain events affect it. This is especially critical when the emotional burden of these events falls most heavily on demographics affected by systemic racism. However, unlike physical health, mental health is challenging to characterize and measure, and thus, population-level data on mental health has been limited.
To better understand patterns of mental health on a population scale, Penn Engineers Lyle H. Ungar, Professor of Computer and Information Science (CIS), and Sharath Chandra Guntuku, Research Assistant Professor in CIS, take a computational approach to this challenge. Drawing on large-scale surveys as well as language analysis in social media through their work with the World Well-Being Project, they have developed visualizations of these patterns across the U.S.
Their latest study involves tracking changes in emotional and mental health following George Floyd’s murder. Combining polling data from the U.S. Census and Gallup, Guntuku, Ungar and colleagues have shown that Floyd’s murder sparked an unprecedented wave of sadness and anger across the U.S. population, the largest since relevant data began being recorded in 2009.
The retiring CIS professor chats about his recent ACM SIGGRAPH election and his expansive computer graphics path.
Norman Badler’s election to the 2021 ACM SIGGRAPH Academy Class is right on time. After nearly five decades of teaching and trailblazing in the Penn community, the Rachleff Family Professor in the Department of Computer and Information Science retired at the end of the spring semester.
When he arrived at the University in 1974, CIS itself was only about two years old, and there was virtually no computer graphics focus or program at all. Badler had no intention of teaching it.
“At that time, I was actually a computer vision researcher, but I was also working a little bit in natural language,” says Badler. “So I was literally brought in to fit between the chair, Aravind Joshi, who was a natural language person, and the computer vision person. It wasn’t until about three or four years after I came here that I switched over to computer graphics. Mostly because there was a vacuum and a need and an excitement.”
Several years after completing his dissertation in computer vision and forming a career path to head in that direction, Badler “started getting serious about computer graphics.” An organization that was getting its start around the same time as his Penn career would play a major role: ACM SIGGRAPH (the Association for Computing Machinery’s Special Interest Group on Computer Graphics and Interactive Techniques).
While artificial intelligence is becoming a bigger part of nearly every industry and increasingly present in everyday life, even the most impressive AI is no match for a toddler, chimpanzee, or even a honeybee when it comes to learning, creativity, abstract thinking or connecting cause and effect in ways they haven’t been explicitly programmed to recognize.
This discrepancy gets at one of the field’s fundamental questions: what does it mean to say an artificial system is “intelligent” in the first place?
Seventy years ago, Alan Turing famously proposed such a benchmark: a machine could be considered to have artificial intelligence if it could successfully fool a person into thinking it was human. Now, many artificial systems can pass a “Turing Test” in certain limited domains, but none come close to imitating the holistic sense of intelligence we recognize in animals and people.
Understanding how AI might someday be more like this kind of biological intelligence — and developing new versions of the Turing Test with those principles in mind — is the goal of a new collaboration between researchers at the University of Pennsylvania, Carnegie Mellon University and Johns Hopkins University.
The project, called “From Biological Intelligence to Human Intelligence to Artificial General Intelligence,” is led by Konrad Kording, a Penn Integrates Knowledge Professor with appointments in the Departments of Bioengineering and Computer and Information Science in Penn Engineering and the Department of Neuroscience at Penn’s Perelman School of Medicine. Kording will collaborate on the project with Timothy Verstynen of Carnegie Mellon University, as well as Joshua T. Vogelstein and Leyla Isik, both of Johns Hopkins University.