By Carol Garcia
Science is a constant presence in our lives. From cooking a meal to getting a vaccine, scientific advancements have improved the daily lives of people all over the world. But how much does the average person living in the U.S. trust science? To answer this question, we must look at different demographics and time periods, specifically before and after the COVID-19 pandemic. There is indeed a range of opinions on controversial topics like climate change, which can often lead to questioning the integrity of scientific work itself. Additionally, flashy headlines such as “Can the public’s trust in science - and scientists - be restored?” have placed the scientific community under scrutiny, and some Americans have come to distrust science over the last five years. An article about the decline of trust in science reported that in April of 2020, 87% of Americans expressed “at least a fair amount of confidence” in scientists, whereas in a similar survey taken in November of 2024, only 76% of the population believed that “scientists act in the best interest of the public”. Currently, as the scientific community reels from a flurry of executive orders and funding cuts, some scientists believe this will “erode the public trust in science” once again. In this piece, we briefly discuss trust in science during and after the COVID-19 pandemic and how this major event has impacted the public’s view of science.
By Carol Garcia
The mental health of academics across the country has been in decline in recent decades. From undergraduate students to early career researchers, concern is mounting about the heavy toll that academic pressures take on mental health. In a study about the prevalence of anxiety and depression among graduate students, it was found that 20-50% of graduate students report symptoms during their training. Attending graduate school in any field is stressful, but due to the nature of STEM (science, technology, engineering, and mathematics) research, mental health issues can be particularly prevalent among graduate students in the sciences. Across the country, many seek solutions from their own institutions and come to understand that often, these very institutions do not have the resources necessary to handle a mental health crisis.
By Skyler Berardi
This is the third post in our series on the consequences of outside influences on the performance and communication of science.
This November, National Geographic uploaded a post on Instagram titled, “Climate change could impact where we live. Are these cities ready?” I had already been researching public attitudes towards climate change for this article, so I was curious to see what discourse was happening in the comments section. I scrolled and found an all-too-familiar debate. A handful of folks were posting in support of climate change action, including one person who penned, “to protect Earth is a burning topic and must be focused upon.” Then, there were the dissenters:
By Gabriel Iván Vega Bellido
This is the second post in our series on the consequences of outside influences on the performance and communication of science.
Medical research has a history going back to when the Egyptians started documenting the medicinal properties of plants, but it has drastically changed in the context of the industrial revolution and capitalism. Though these changes have undoubtedly contributed to medical advancements and an increase in longevity, the interests of industry can often be in tension with optimal human health. Some notable examples of this include the enormous fast and processed food industries' contributions to the current obesity epidemic, and large oil companies' continued promotion of fossil fuels despite having known about their environmental impact for at least 50 years. Given the multibillion-dollar size of the medical research industry, there is ample room for such conflicts of interest. This post will examine how the external interests of industry and politics have recently shaped pharmaceutical and psychedelic medicine.
By Clara Raithel
This is the first post in our series on the consequences of outside influences on the performance and communication of science.
More than fifty years after the criminalization of psychedelic drugs, psychedelic research is experiencing what many call a “renaissance” [1,2]. Researchers across the world conduct clinical trials testing the efficacy of psychedelic drugs, such as ecstasy (MDMA), mescaline, lysergic acid diethylamide (LSD), and psilocybin, as a form of treatment for various mental illnesses, with some results pointing towards therapeutic potential [3,4]. This increase in scientific interest is not a first: in the 1950s and 1960s, researchers first got their hands on psychedelics and saw great potential in these compounds to allow for a breakthrough in mental health science. However, instead of delivering a breakthrough, psychedelic drugs gained a bad reputation and were effectively banned in 1970 when the US government initiated its “war on drugs”. How did the perception of psychedelic drugs change so drastically? Contemporary psychedelic scientists often put the blame on counterculture figures. The most prominent scapegoat is Dr. Timothy Leary, a Harvard psychologist who lightheartedly shared the drug at parties, provoking a wave of negative press surrounding LSD that ultimately justified the “war on drugs” in the public eye. But explaining the bad reputation of past psychedelic research takes more than just pointing fingers at “bad scientists”.
By Gabriel Iván Vega Bellido
This is the ninth post in our series about how science is communicated and the consequences thereof.
Most people don’t interact with professional scientists on a regular basis. Therefore, the depictions of scientists in popular media play a significant role in influencing the general public’s expectations, trust, and understanding of the scientific community. If you were to ask your friends and family who aren't in scientific fields about their understanding of a scientist, their responses would likely be shaped by a mix of both real-life and fictionalized portrayals encountered through various media. This post aims to contemplate how some of the most popular depictions of scientists in English-speaking media, including various works of fiction, have reflected and influenced the public’s perception of science and scientists.
By David Sidibe
This is the eighth post in our series about how science is communicated and the consequences thereof.
Mistrust in the scientific and medical community is reaching a boiling point. You don’t have to look far to find evidence of this, from the politicization of medicine and vaccines during the COVID-19 pandemic. While I would like to say that there is one cause of this mistrust, the reality is that it is multi-faceted and rooted in a history of careless and, quite frankly, horrific treatment of patients. The impact of mistrust of the medical community among people of color and people from underrepresented minority (URM) backgrounds is of particular concern. Ethnic, racial, and gender health disparities are prevalent in communities throughout the US. Current and historical events (see the Tuskegee Syphilis Study, HeLa cells, and the AIDS epidemic for examples) have only further amplified the mistrust within URM communities.
By Kay Labella
For Pride Month, the PSPDG blog is collaborating with students from LTBGS and Lambda Grads for a series of posts highlighting the LGBTQ+ community and related matters at Penn and beyond.
Dr. Ben A. Barres (MD/PhD) was born September 13, 1954, in West Orange, New Jersey. After graduating from Massachusetts Institute of Technology with a B.S. in Biology, he went on to obtain his medical degree from Dartmouth Medical School in 1979. While in his neurology residency at Weill Cornell Medicine, Barres found himself intrigued by neurodegeneration and glial cell function; he subsequently resigned his residency to pursue a PhD in neurobiology at Harvard Medical School so that he might pursue research into these subjects. In his time as a postdoc and later heading his own lab at Stanford University, Barres remained at the forefront of scientific discovery. His lab published numerous critical studies expanding upon our understanding of the role of astrocytes, microglia, and the blood-brain barrier, as well as how synapses form within the brain, among other topics. As a PI, he was well-loved and dedicated to the success of his trainees.
By Stefan Peterson
For Pride Month, the PSPDG blog is collaborating with students from LTBGS and Lambda Grads for a series of posts highlighting the LGBTQ+ community and related matters at Penn and beyond.
Members of the LGBTQ+ community in science, technology, engineering, and math (STEM) deal with many challenges beyond those of their non-LGBTQ+ peers. Scientists worldwide share experiences of fear of coming out to colleagues, which is understandable considering that harassment is 30% more likely for LGBTQ+ individuals in STEM than for their non-LGBTQ+ peers. Harassment and discrimination against LGBTQ+ people in STEM result in higher rates of queer students dropping out of STEM majors and of LGBTQ+ scientists planning to leave their careers. Nations and institutions around the world urgently need to address these issues to support the LGBTQ+ community in STEM. Over the past two years, one team of early career scientists has been working to address these issues in the United States and the United Kingdom through science diplomacy.
By Maxwell Pisciotta
For Pride Month, the PSPDG blog is collaborating with students from LTBGS and Lambda Grads for a series of posts highlighting the LGBTQ+ community and related matters at Penn and beyond.
It is no secret that gender-neutral restrooms on the University of Pennsylvania campus are often difficult to find. The difficulty, of course, depends on your department, the buildings you occupy, the age of those buildings, and the school that owns them. This is to say that yes, there are gender-inclusive restrooms throughout campus, but if you happen to be in a building that does not already have one, you may have to go as far as two or three buildings over to locate one. For faculty, staff, and students who rarely need to leave their “home” building or who spend long hours in lab, this can pose a substantial inconvenience.
By Kay Labella
For Pride Month, the PSPDG blog is collaborating with students from LTBGS and Lambda Grads for a series of posts highlighting the LGBTQ+ community and related matters at Penn and beyond.
Founded in 2022 by Dr. José Bauermeister, the Eidos LGBTQ+ Health Initiative addresses persistent health disparities facing LGBTQ+ communities. PSPDG, in collaboration with LTBGS, was fortunate enough to interview Kevin Schott, Eidos’ Director of Engagement, about this fantastic partnership that aims to streamline the translation of academic research into social impact.
By Maya Hale
This is the seventh post in our series about how science is communicated and the consequences thereof.
Academia requires publications for success. They can be a condition for defending a PhD thesis (or dissertation), part of a tenure application, or a guide for research and education.
Lack of access to primary literature doesn’t just restrict academics. It also acts as a barrier for aspiring students trying to break into science. Students submitting papers to the Journal of Emerging Investigators (JEI), a volunteer-run scientific journal for middle and high school students, are often unable to explore certain questions or even make reviewer-requested edits to their submissions because they do not have access to publications on their research topic or on the statistical analyses needed to improve publication quality.
By Alexandra Ramirez
This is the sixth post in our series about how science is communicated and the consequences thereof.
Communicating science takes many forms and is practiced by those in all stages of their scientific career. Likely the most common and most recognizable type of science communication is publishing research articles in scientific journals. These articles can be accessed by audiences around the world, both in and outside of the scientific field (though not all are available through open-access sources). Publications are also increasingly becoming the currency needed for career growth and success in academic research. However, going from project conception to publication of a research article takes years. For those just starting out in research, publishing is a goal that is highly sought after but cannot be immediately accomplished.
By Zeenat Diwan
This is the fifth post in our series about how science is communicated and the consequences thereof.
“Science has the potential to address some of the most important problems in society and for that to happen, scientists have to be trusted by society and they have to be able to trust each others' work. If we are seen as just another special interest group that are doing whatever it takes to advance our careers and that the work is not necessarily reliable, it's tremendously damaging for all of society because we need to be able to rely on science.”
—Ferric Fang quoted by Jha (2012)
By Amanda N. Weiss
This is the fourth post in our series about how science is communicated and the consequences thereof.
Peer-reviewed manuscripts often serve as the primary way to disseminate information within academia. People trust that the information they’re reading is reliable and rigorous, as other experts in the field have already vetted the paper to make sure that it’s of high quality. However, peer review can be a slow process, preventing valuable information from reaching audiences in a timely manner. This can be a barrier in fields that are rapidly advancing, and is especially problematic when information is needed as quickly as possible, as is the case during public health emergencies.
By Clara Raithel
This is the third post in our series about how science is communicated and the consequences thereof.
Whether it is an application for research funding or a manuscript sent to a scholarly journal for publication, writing is an essential aspect of scientific work. The majority of the resulting output is evaluated by other scientists, in a process referred to as peer review. The ultimate purpose of this evaluation is to ensure the originality, importance, and quality of the academic work before it is executed or made publicly available. In other words, grants are awarded and manuscripts are published only when scientific standards are met. As such, peer review represents a critical gatekeeping moment that can define the outcome of entire scientific careers - simply because so much in science depends on the amount of financial resources available, and the number of papers published. However, its consequences reach far beyond an individual’s career: peer review impacts the knowledge produced and shared with the rest of the world – thereby potentially affecting the lives of millions of people.
This is the second post in our series about how scientific findings are communicated and the consequences thereof.
“Do your own research.” It sounds harmless, even admirable. But dig a little deeper, and you’ll find that this philosophy is a major antagonist in the fight against scientific misinformation. Anti-vaxxers “did their own research” when they found a Lancet paper (later retracted) that claimed a link between the MMR vaccine and the onset of autism. Despite the paper’s retraction and its claims having no scientific basis, a nationally representative survey conducted in 2019 found that 18% of respondents believed vaccines cause autism. Ivermectin, an anti-parasitic drug most commonly used in veterinary practice, received attention after a few early studies suggested it might prevent and treat COVID-19 infection. It couldn’t have helped when Joe Rogan, currently the most popular podcast host on Spotify, shared “his own research” and personal belief in ivermectin’s efficacy. The early ivermectin research was later retracted by its authors or subjected to criticism. Still, due to the initial hyperbolic attention, the CDC reported a 24-fold increase in ivermectin prescriptions in August of 2021 compared to the pre-pandemic baseline. This sharp rise in prescriptions was accompanied by concern from poison control centers and the FDA about the number of individuals reporting medical complications and hospitalizations following inappropriate ivermectin use.
This is the first post in our series about how scientific findings are communicated and the consequences thereof.
“Abortion bans are on the ballot this year, and they are going by the name Doug Mastriano.”
So began a get-out-the-vote advertisement that I, based in Philadelphia, saw multiple times on YouTube in the weeks before the 2022 midterm elections. Similar ads centered on the anti-abortion stances of other Republican candidates for office. Abortion rights loomed large on the ballot last November, following the Supreme Court’s decision in Dobbs v. Jackson Women’s Health Organization barely five months earlier to overturn Roe v. Wade. The Roe decision had limited the ability of state governments to regulate abortion in the first two trimesters, interpreting the 14th Amendment of the Constitution as conferring a right to privacy in such cases. Now, the Dobbs decision declared that the Constitution does not protect the right to an abortion at all, and that restricting abortion at any stage of pregnancy is up to the states.
By Sanjana Hemdev & Sonia Roberts
This is the fifth post in a series about artificial intelligence, along with its uses and social/political implications.
“Artificial intelligence” (AI) is ubiquitous in today’s world. From Amazon’s Alexa to Five Nights at Freddy’s, from weather prediction systems to vaccine development, AI is a constant source of both wonder and trepidation in the popular imagination. As development in the field of AI continues at breakneck speeds, we are invited to consider not only how AI can change the physical world, but also how it can change our understanding of what it means to be “intelligent.” Can AI systems truly be considered intelligent in the same way that humans are? If so, by whose definition – if any? In search of answers to these questions, we spoke with Dr. Lisa Miracchi Titus, a Professor of Philosophy and seasoned artificial intelligence expert here at Penn. Dr. Miracchi Titus is affiliated with the GRASP Lab, one of the oldest and most well known robotics labs in the country. “This is a beautiful time to be asking all of these questions,” Dr. Miracchi Titus says. “The technological progress that we’ve made both enables us and requires us to take a step back at this moment of time and ask some pretty foundational questions.”
By Hersh Sanghvi
This is the fourth post in a series about artificial intelligence, along with its uses and social/political implications.
For a long time, Machine Learning (ML) and Artificial Intelligence (AI) have remained firmly in the domain of research applications and science fiction. This is changing thanks to the emergence of a huge variety of AI devices, from smartphones and wearables to robots and autonomous drones. Many companies are working on AI hardware and software as a way to help businesses and organizations create better, smarter solutions, like personal assistants and self-driving cars. As it advances, AI has begun to have an impact on everyday life. We are starting to see more people using AI-powered devices to help their daily lives. The question is: will it help or hurt us?
The previous paragraph, except the first sentence, was written entirely by a special kind of ML model called a “language model”… Despite recent progress, current AI still has many limitations that can make its widespread use dangerous. In light of this, it’s important to understand what these limitations are and ultimately how we as a society should take action. In particular, we’ll look at two exciting, consumer-facing applications of AI: computer vision (CV) and natural language processing (NLP). To get more insight into this topic, I interviewed Dr. CJ Taylor, professor of Computer and Information Science here at UPenn, whose research focuses on computer vision and robotics.
By Hannah Kolev
This is the third post in a series about artificial intelligence, along with its uses and social/political implications.
An enchanting video of the Spanish artist Salvador Dalí - who passed away in 1989 - remarking on the current weather with visitors of the Dalí Museum. A video of British soccer star David Beckham speaking nine different languages - only one of which he actually speaks - petitioning world leaders to end malaria. A faked pornographic video used to discredit investigative journalist Rana Ayyub for speaking out against a sex abuse case. In each of these scenarios, deepfake videos were used to dupe their audiences. These deepfakes can be used to ignite the imagination, to inspire change, or to intimidate victims. But what are deepfakes and how are they made? What is their purpose and how are they regulated? We will address these questions in the following blog post.
By Shannon Wolfman
This is the second post in a series about artificial intelligence, along with its uses and social/political implications.
If you’ve been following the news on climate change and artificial intelligence over the last few years, you might feel conflicted about the potential for AI to help us in combating global warming. For the most part, mainstream and tech publications either exalt AI as a climate savior or decry AI’s ever-increasing carbon footprint. When both issues are discussed in the same article, the focus is on whether AI is doing more harm than good for the environment.
But AI isn’t inherently carbon-intensive; it’s a tool that has tremendous capabilities for mitigating climate damage, and any technology is only as green as its power source. Considering AI’s climate costs and potential climate benefits as distinct issues and understanding the impacts they have on each other can lead to more coherent conversations about the role that this technology can and should play in our climate future.
By Amanda N. Weiss
This is the first post in a series about artificial intelligence, along with its uses and social/political implications.
When you hear the phrase “artificial intelligence,” many different thoughts may come to mind. Perhaps you think of the helpful internet assistants on your phone and computer. Maybe your mind goes to the villainous computers of science fiction stories, seeking revenge on humanity. Whichever contexts your mind wanders into, it is clear that certain types of artificial intelligence have risen to prominence both in everyday life and in popular media. At its core, artificial intelligence (AI) is the collection of computers or computational programs that are able to exhibit behaviors we would associate with human intelligence. AI algorithms can analyze data and incorporate new information into future decision-making. However, does the ability to learn in this sense truly constitute intelligence? Could advancements in AI eventually lead to sentient, self-aware computers? Or is there something unique to human psychology that cannot be modeled by algorithms and data processing? These are all questions that arise as we continue to use AI technology in our daily lives. In order to gain experienced insight, I had a conversation with Camilo Fosco, a machine learning PhD student at MIT, about artificial intelligence in the context of his research on computer vision and cognition.
By Wisberty J. Gordián Vélez
This is the second post on the big idea of the role of government funding in scientific research.
Knowledge and technologies that we often take for granted, such as the internet, Google search, global positioning system (GPS) devices and magnetic resonance imaging (MRI), are the product of federal investments in research and development (R&D). Federally-supported research promotes innovation: 30% of issued patents rely on this support, which includes government-owned patents, patents citing federal funding, or patents citing other supported patents and research. Scientific and technological innovations account for most of the exponential growth in individual income since 1880. During the COVID-19 pandemic, we have benefited from vaccines developed in part thanks to decades of federally-supported research. The NIH also partnered with Moderna to develop their vaccine, and the Trump administration allocated billions of dollars to develop and manufacture vaccines. The science and technology (S&T) output of the country is directly tied to our individual and collective wellbeing, and the government plays an irreplaceable and necessary role in creating policies that determine what is achieved. In this blog, I explore federal funding of R&D and S&T policy in the U.S., and talk about this with Dr. Kenneth Evans, an S&T policy scholar at Rice University’s Baker Institute for Public Policy and a member of the project staff for a report by the American Academy of Arts and Sciences titled “The Perils of Complacency: America at a Tipping Point in Science & Engineering”.
By Lulu Allen-Walker
This is a guest post from contributing author Lulu Allen-Walker. If you are interested in contributing an entry, please contact PSPDG.
The headlines roll in like waves, more every year. Half of the Great Barrier Reef is dead. Reefs are battered by climate change and look “ravaged by war.” But how does ocean warming actually affect how corals function? And can some corals take the heat? I’m on a research team at Penn Biology that’s trying to find answers. Our newest results suggest that heat-stressed corals slow their metabolisms and lose the power to regulate their cellular chemistry – even if they appear healthy at first glance.
By Amanda N. Weiss
This is the first post on the big idea of the role of government funding in scientific research.
The United States gained its independence as a country near the start of the first Industrial Revolution. Thus, perhaps unsurprisingly, technological research has been an element of our society for much of its existence. As the country has grown and advanced, it has undergone changes in global involvements and societal priorities, and these have been reflected in the STEM research that the federal government promotes and funds. I recently spoke with Dr. Thomas Cornell, a professor of Science, Technology, and Society at the Rochester Institute of Technology, who offered insight into the ways that government funding of STEM research in the United States has changed over time, especially during periods of upheaval such as the World Wars.
By Wisberty J. Gordián Vélez
This is our third post on the big idea of confirmation bias.
The United States is experiencing an unprecedented intermingling of crises: job losses not seen since World War II, a pandemic that has killed more than 500,000 Americans, and political division that led to an attack on the Capitol. These crises have heightened our understanding of the role of politics and policy in our lives, as reflected by the record levels of votes cast and turnout in the 2020 election. This historic engagement has been driven in part by political polarization, a phenomenon in which the beliefs of different groups regarding policy, ideology, and political institutions become increasingly oppositional. When driven to extremes, it can impair democracy and the implementation of policies that address society’s problems. Voters are now more likely to dislike the other side and see it as an existential threat to the country at levels exceeding differences in policy opinions. Political scientists have argued that polarization has been fomented by the nationalization of politics and by parties becoming more homogeneous and identifiable with specific policies, social views, race, religion, ideology, and identity. Policymakers also contribute with how they communicate and act in response to a polarized electorate to maintain their power. Every disagreement is a battle to the death between two sides, where cooperation is impossible and no victory is secure. This high-stakes feeling is reflected in the small margins and few districts or states that determine control of Congress and the presidency. As people retreat to their corners, the two parties are differentiated further and polarization is reinforced in a vicious cycle.
By Sonia Roberts
This is the second post on the big idea of confirmation bias.
“Confirmation bias” refers to a characteristically human weakness: The tendency to favor information that supports what we already believe. But if we aren’t careful, biases can also pop up in systems that use machine learning tools. Let’s say you want to develop a tool to determine the locations in a city where more crimes are committed so that you can alert the police to send more officers to those locations. The tool is trained using existing police reports. More crimes are reported in the locations where there are more officers, because officers are the ones filing reports. Thus, the tool learns that more crimes are committed in the places that already have a greater density of police officers, and keeps advising the police force to send officers to those locations. There could be other parts of the city with lots of crimes that are not being reported, but the tool will never learn about those crimes -- they simply don’t appear in the dataset. This could create a feedback loop that looks and behaves a lot like confirmation bias does in humans.
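To make that feedback loop concrete, here is a minimal simulation sketch, written in Python purely for illustration. The district names, crime counts, and allocation rule are all hypothetical, not drawn from any real predictive-policing system. In this toy setup both districts experience exactly the same amount of crime, but a crime only enters the dataset when an officer is present to report it, and the tool assigns the next day's patrols in proportion to the reports it has accumulated so far.

```python
# Hypothetical illustration of a confirmation-bias-like feedback loop in a
# machine learning pipeline. Both districts have IDENTICAL true crime rates,
# but district "A" starts with more officers, so more of its crime gets reported.

import random

random.seed(42)

TRUE_CRIMES_PER_DAY = 10           # same true rate in both districts
TOTAL_OFFICERS = 10

officers = {"A": 8, "B": 2}        # historically skewed starting allocation
total_reports = {"A": 0, "B": 0}   # the only data the tool ever sees

for day in range(365):
    for district in ("A", "B"):
        # A crime is recorded only if an officer happens to be nearby to report it.
        p_report = officers[district] / TOTAL_OFFICERS
        reported = sum(random.random() < p_report for _ in range(TRUE_CRIMES_PER_DAY))
        total_reports[district] += reported

    # The "predictive" step: send tomorrow's officers where yesterday's reports were.
    seen = max(total_reports["A"] + total_reports["B"], 1)
    officers["A"] = round(TOTAL_OFFICERS * total_reports["A"] / seen)
    officers["B"] = TOTAL_OFFICERS - officers["A"]

print("reports after one year:", total_reports)
print("final officer allocation:", officers)
# In a typical run, district A accumulates several times more reports and keeps
# most of the officers, even though both districts committed the same number of
# crimes. The dataset keeps "confirming" the initial belief that A is the hotspot,
# and crimes in B that go unobserved never enter the data to correct it.
```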
By Amanda N. Weiss
This is the first post on the big idea of confirmation bias.
In a world with an overwhelming abundance of information and opinions, it is no surprise that we do not take it all in and instead must devote our attention to a select subset of information. But, as with every instance of selective information-seeking, we risk our cognitive biases preventing us from forming a well-rounded and well-reasoned mental model of the world. Confirmation bias, the tendency to seek out information that agrees with our beliefs and reject that which opposes them, is one such element of our psychology. I recently had a discussion with Dr. Tali Sharot, the director of the Affective Brain Lab at University College London, about her work on the neuroscience of information-seeking and how it relates to confirmation bias. Dr. Sharot is especially interested in figuring out why we engage in behaviors that seem arational (not based on reason), as opposed to those with clear purposes.