Investigating Gender Differences in Trust in Science
Survey-weighted analysis of trust and attitudes toward science/healthcare across gender, education, politics, and religiosity
Motivation
Public trust in science and healthcare shapes real outcomes: people's medical decisions, their responses to public health guidance, and how they interpret uncertainty and risk. I wanted to explore how attitudes toward science and healthcare vary across gender, and how those relationships interact with education, political orientation, and religiosity, while staying careful about the statistical pitfalls common in large survey datasets.
What I analyzed
The analysis focuses on trust/attitude measures (Likert-style items) and compares distributions across groups, then examines correlations and modeled relationships with key demographic predictors. The goal is not to “win an argument” with a plot, but to produce an interpretable, reproducible workflow that a careful reader could audit.
Core comparisons
- Gender × trust: how trust/attitudes differ by gender
- Education × trust: association between education level and trust measures
- Politics × trust: association between political orientation and trust measures
- Religiosity × trust: association between religiosity and trust measures
Methodology
Survey analysis is easy to get wrong. This project treats methodology as a first-class concern: consistent recoding of numeric survey responses, explicit handling of missing values, and the use of survey weights when comparing groups so that summary statistics better represent the target population.
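For concreteness, the weighted summaries rest on the standard survey-weighted mean, where $w_i$ is respondent $i$'s survey weight:

$$
\bar{x}_w = \frac{\sum_{i=1}^{n} w_i x_i}{\sum_{i=1}^{n} w_i}
$$

Respondents from under-represented groups carry larger weights, so group summaries reflect the target population rather than the raw sample.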
Data preparation
- Converted numeric-coded survey columns safely, coercing non-responses to missing (see the sketch after this list)
- Reverse-coded negatively worded items so that higher values consistently mean higher trust
- Kept the analysis scoped to interpretable comparisons rather than overfitting to every available variable
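To make the first two steps concrete, here is a minimal sketch in pandas. The column names (`trust_item`, `distrust_item`) and the non-response codes (`98`, `99`) are illustrative placeholders, not the actual codebook values:

```python
import pandas as pd

NON_RESPONSE = {98, 99}  # hypothetical "don't know"/"refused" codes; check the codebook

def to_numeric_clean(s: pd.Series) -> pd.Series:
    """Coerce a survey column to numeric, turning non-responses into NaN."""
    out = pd.to_numeric(s, errors="coerce")  # non-numeric entries become NaN
    return out.mask(out.isin(NON_RESPONSE))  # explicit non-response codes become NaN

def reverse_code(s: pd.Series, lo: int = 1, hi: int = 5) -> pd.Series:
    """Flip a negatively worded Likert item so that higher always means more trust."""
    return (lo + hi) - s  # on a 1-5 scale, 2 -> 4 and 5 -> 1

# Toy data standing in for the real survey columns
df = pd.DataFrame({
    "trust_item": ["4", "99", "2", "refused"],
    "distrust_item": [1, 5, 98, 3],
})
df["trust_item"] = to_numeric_clean(df["trust_item"])
df["distrust_item"] = reverse_code(to_numeric_clean(df["distrust_item"]))
print(df)
```

Reverse coding propagates missing values automatically (subtracting NaN yields NaN), so the two steps compose cleanly.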
Analysis & reporting
- Used survey weights for group summaries where appropriate (illustrated in the sketch after this list)
- Focused on effect sizes and directionality, not just “significance”
- Designed plots to be readable and honest about uncertainty
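A sketch of the weighted group summary, again with illustrative column names (`gender`, `weight`, `trust_score`); the real analysis would use the dataset's actual weight variable:

```python
import numpy as np
import pandas as pd

# Toy data: a trust score, a survey weight, and a grouping variable
df = pd.DataFrame({
    "gender": ["F", "F", "M", "M", "M"],
    "weight": [1.2, 0.8, 1.0, 0.5, 1.5],
    "trust_score": [4.0, 3.0, np.nan, 2.0, 5.0],
})

def weighted_mean(g: pd.DataFrame, value: str, weight: str) -> float:
    """Weighted mean of `value`, dropping rows where it is missing."""
    d = g.dropna(subset=[value])
    return np.average(d[value], weights=d[weight])

summary = (
    df.groupby("gender")
      .apply(weighted_mean, value="trust_score", weight="weight")
      .rename("weighted_trust")
)
print(summary)
```

Missing values are dropped per item rather than listwise, which keeps each comparison on as much data as possible; whether that is appropriate depends on the missingness mechanism.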
Challenges & debugging
A recurring practical challenge in survey analysis is reshaping and aggregating data cleanly when columns contain mixed types or duplicated labels, and when missingness is encoded inconsistently across variables. Debugging involved making transformations explicit and validating intermediate outputs (counts, distributions, and spot-checks) before producing plots.
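The validation habit described above, sketched with hypothetical names; the point is to fail fast if a reshape silently drops or duplicates respondents:

```python
import pandas as pd

# Hypothetical long-format responses: one row per respondent per item
long_df = pd.DataFrame({
    "respondent_id": [1, 1, 2, 2, 3],
    "item": ["trust_sci", "trust_med", "trust_sci", "trust_med", "trust_sci"],
    "response": [4, 5, 2, None, 3],
})

# Spot-check the distribution, including missingness, before plotting
print(long_df["response"].value_counts(dropna=False))

# Guard the reshape: long -> wide must preserve the respondent count.
# (`pivot` also raises on duplicated (respondent, item) pairs, which is
# itself a useful integrity check.)
n_respondents = long_df["respondent_id"].nunique()
wide = long_df.pivot(index="respondent_id", columns="item", values="response")
assert len(wide) == n_respondents, "pivot dropped or duplicated respondents"
```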
What this demonstrates (industry framing)
This project demonstrates end-to-end data analysis skills on messy, real-world data: defining a scope, cleaning and recoding, choosing methods that match the data-generating process (including weighting), and communicating results in a way that avoids overclaiming. These skills transfer directly to product analytics, applied research, policy-facing data work, and R&D teams working with observational datasets.