Sidelined UX Research: Lessons From Meta’s Senate Hearing
Whistleblower testimony reveals how corporate pressure can twist research and undermine its integrity.

By Michal Luria, Ph.D., Research Fellow at the Center for Democracy & Technology
It is not every day that user experience (UX) researchers find themselves testifying before the United States Senate. Yet that is precisely what happened two weeks ago, when former Meta employees Jason Sattizahn and Cayce Savage stepped forward as whistleblowers. Their testimony raised serious allegations about how safety research on virtual reality (VR) products was handled within Meta, and it raised broader questions about the challenges UX researchers face when their findings conflict with corporate priorities.
The hearing came shortly after The Washington Post published a story about how safety research was allegedly shaped, constrained, ignored, or suppressed by Meta. If true, these allegations point to a significant breach of research integrity; deleting concerning data, misrepresenting findings, and removing inconvenient results directly violate the most fundamental codes of research ethics.
What made the testimony striking was not just the alleged misconduct itself, but the glimpse into the unique and complex challenges that UX researchers face when conducting research inside a tech giant. Researchers are often caught in the middle of competing pressures: leadership prioritizing brand reputation, legal teams focused on limiting liability, and product teams pushing for growth and engagement. As a result, conflicts of interest with research findings are an everyday reality, and even well-intentioned research can end up being misrepresented, skewed, or ignored entirely.
UX research teams, and especially safety research teams, are also commonly under-resourced: many report heavy workloads, limited staffing, and a lack of support when their findings clash with business goals. According to Dr. Sattizahn’s testimony, the one constant during his six years at Meta was “not having enough money to build for safety.”
But the absence of research isn’t even the worst outcome. Misleading research is worse than no research at all: a gap in research is easy to identify, while research riddled with malpractice is extremely difficult to detect and call out.
All of this adds up to a host of pressures that push against good internal UX research. But giving up on the profoundly important function of UX would be a devastating outcome; many company researchers work extremely hard to surface safety risks and push for change within the industry. Instead, companies need to take deliberate steps toward protecting and empowering their research teams.
Where Does UX Research Go From Here?
First, companies must recognize that having research teams is not enough. At a time when companies are pouring millions into hiring “superstar” AI researchers, it’s important to emphasize that research integrity isn’t about prestige hires; it demands leaders in every part of a company who respect research findings, even when those findings are uncomfortable. It also requires clear accountability structures, including internal auditing processes.
Second, there must be greater transparency and public visibility into research methods and approaches. Sharing findings and research processes (in ways that protect participant privacy) would allow the public, regulators, and other stakeholders to understand people’s experiences with products and their shortcomings, and to adopt the resulting conclusions and recommendations. This kind of openness would not only help build trust in companies and their internal research, but would also ensure that critical safety issues are not hidden behind corporate walls.
Third, industry should establish ways for independent researchers to gain access to company data so that they can conduct their own research. Recent actions, including the closing last year of Meta’s CrowdTangle program, a critical tool for conducting research and providing transparency using company data, are a step back in that regard. Providing vetted external researchers with datasets allows for both verification and replication of internal research findings, as well as additional independent research into important safety questions.
As Cayce Savage noted in her testimony, the purpose of UX research is generally to advocate for users, especially for vulnerable populations like children. This mission requires that researchers have the freedom to ask difficult questions, apply appropriate methodologies, and communicate findings clearly and accurately. When those are compromised, the researchers themselves, and the millions of people relying on technology to be safe, will likely bear the cost.
Now available for download: Global Governance of Low Earth Orbit Satellites 🚀 Featuring IX's Mallory Knodel
The new book Global Governance of Low Earth Orbit Satellites features a chapter from Mallory, “Insights from the Internet – How to Govern Outer Space” (Section IV, p. 207), where she compares the challenges of governing a borderless internet with those of managing an increasingly crowded orbit, drawing lessons from multistakeholder internet institutions like ICANN and the IGF.
The volume as a whole brings together international experts to tackle the legal, policy, and technical dimensions of Low Earth Orbit satellites, from digital sovereignty and cybersecurity to spectrum allocation and space sustainability. It’s both a scholarly resource and a practical policy guide for shaping sustainable, inclusive, and secure governance of emerging space-based internet systems.
Support the Internet Exchange
If you find our emails useful, consider becoming a paid subscriber! You'll get access to our members-only Signal community where we share ideas, discuss upcoming topics, and exchange links. Paid subscribers can also leave comments on posts and enjoy a warm, fuzzy feeling.
Not ready for a long-term commitment? You can always leave us a tip.
This Week's Links
Internet Governance
- Michigan representatives just proposed a bill to ban many types of internet content, as well as VPNs that could be used to circumvent it. Could it work? In reality it would be nearly impossible to enforce at scale, easily challenged in court, and likely to break essential services. https://www.cnet.com/tech/services-and-software/how-a-new-bill-tries-to-ban-both-adult-content-online-and-vpn-use-and-if-it-could-work
- Arizona will begin enforcing mandatory age verification for adult content sites on September 26, 2025, under HB 2112. Sites with more than 33% adult material must verify users’ ages with government IDs, digital IDs, or credit card checks. https://www.techradar.com/vpn/vpn-privacy-security/people-in-arizona-will-soon-need-to-prove-their-age-to-access-adult-sites-and-critics-warn-of-privacy-risks?trk=feed-detail_main-feed-card_feed-article-content
- The UK's Department for Science, Innovation and Technology and the AI Safety Institute have published a report on the state of advanced AI capabilities and risks – written by 100 AI experts including representatives nominated by 33 countries and intergovernmental organisations. https://www.gov.uk/government/publications/international-ai-safety-report-2025
- The United States Frequency Allocations: The Radio Spectrum Chart was updated in September 2025, reflecting data current as of March 2025. https://www.ntia.gov/page/united-states-frequency-allocation-chart
- Over 200 prominent politicians and scientists, including 10 Nobel Prize winners and many leading artificial intelligence researchers, released an urgent call for binding international measures against dangerous AI uses at UNGA. https://www.nbcnews.com/tech/tech-news/un-general-assembly-opens-plea-binding-ai-safeguards-red-lines-nobel-rcna231973
- For decades, the internet’s business model has funneled billions into platforms that weaken democracy and public knowledge. IX contributor Robin Berjon argues that Wikipedia’s scale and governance could instead build an advertising system designed for the public good. https://www.techpolicy.press/how-wikipedia-can-save-the-internet-with-advertising
- The Center for Technology Innovation at Brookings and AI Safety Asia held an online discussion about a new report examining the role of Southeast Asia in global AI governance. Watch the recording here: https://www.brookings.edu/events/ai-safety-governance-in-southeast-asia
- A new systematic review maps the legal challenges of internet intermediary liability, highlighting regulatory gaps, human rights concerns, and emerging best practices for digital governance. https://link.springer.com/article/10.1007/s11135-025-02344-y
- Killing the Messenger is a highly readable survey of the current political and legal wars over social media platforms. https://www.cambridge.org/core/books/killing-the-messenger/479C3E89365E984A765E3C7C341C2597
- The US-UK Tech Prosperity Deal touts £150bn in AI investment, but much of it is speculative and risks leaving the UK dependent on US tech firms with uncertain economic gains, write Leevi Saari and Frederike Kaltheuner for the AI Now Institute. https://euaipolicymonitor.substack.com/p/us-uk-tech-prosperity-deal
Digital Rights
- The Stop Censoring Abortion project is a multi-part blog series from EFF examining how social platforms suppress abortion-related content and the harms of this censorship. https://www.eff.org/pages/stop-censoring-abortion
- The digital rights movement faces a funding crisis, autocratic tech, and fragmentation. The antidote? Radically networking its resources and agenda, says Claudio Ruiz. https://www.openglobalrights.org/networks-over-organizations-an-infrastructural-revolution-for-digital-rights
- While world leaders gather at UNGA 80 to deliver lofty speeches, a quiet “internet coup” is unfolding as censorship technologies spread globally, warns IX contributor Konstantinos Komaitis. https://www.techpolicy.press/the-internet-coup-is-here-and-the-world-is-still-asleep
- This commentary reveals how everyday policy debates on data and emerging technologies in Turkey, Sudan, El Salvador, and India subtly reinforce digital authoritarianism by shaping narratives that legitimize state control. https://journals.sagepub.com/doi/10.1177/29768640251379940
- While Donald Trump assaults civil liberties and the social safety net, Democrats are lost. Tech capital’s continued dominance of both parties and Big Tech’s machinations in particular are key to understanding our political crisis, argues Thomas Ferguson. https://jacobin.com/2025/09/tech-capital-american-politics-ferguson
- British-Egyptian activist Alaa Abdel Fattah has been freed and reunited with his family after spending the past six years in jail in Egypt. https://www.bbc.co.uk/news/articles/cvg9772q3e1o
- To celebrate his release, the ebook edition of YOU HAVE NOT YET BEEN DEFEATED, tr. an anonymous collective, is available to download for free until September 30. https://bsky.app/profile/fitzcarraldoeds.bsky.social/post/3lzixy7okl22f
Technology for Society
- AI-generated deepfake videos are reshaping political narratives across Africa, particularly around Burkina Faso’s military leader Capt. Ibrahim Traoré. Clips of him delivering fiery, fictional speeches or being praised in AI-generated songs by celebrities like Beyoncé, R. Kelly, and Eminem have gone viral, attracting tens of millions of views on YouTube and TikTok. https://newlinesmag.com/reportage/africas-ai-strongmen/?trk=feed_main-feed-card_feed-article-content
- As the European Commission and other actors are advocating for a ‘twin transition’ where digitalisation enables sustainability, this paper shows how Google and Amazon use AI and cloud services to frame the green transition on their terms, turning sustainability into a profit-driven strategy that consolidates their control. https://www.tandfonline.com/doi/full/10.1080/14747731.2025.2548716
- This episode of FreshEd examines how ed-tech philanthropy shapes South African schools with anthropologist Amy Stambach, author of The Corporate Alibi. https://open.spotify.com/episode/5zWUDIPwr5JsDSccTlpT0d
- Steven Levy argues that Silicon Valley’s embrace of Trump in pursuit of short-term gains has become a “suicide pact” that threatens both the industry’s future and democratic norms. https://www.wired.com/story/silicon-valley-politics-shift
- As influence shifts from gatekept media to networked, creator-driven ecosystems that reward authenticity and participation, institutions must build creator-style capacity and engage communities or risk irrelevance, write Renée DiResta and Rachel Kleinfeld. https://carnegieendowment.org/research/2025/09/communications-social-media-nonprofit-institutions-new-media-environment
- This paper develops heuristics to show how human attention, treated as a scarce resource, shapes governance in online organisations, drawing on theory and blockchain case studies. https://osf.io/preprints/mediarxiv/cdrmp_v1
Privacy and Security
- This is a weird story: The US Secret Service says it disrupted a large SIM farm in the New York area ahead of the UN General Assembly, seizing 300 SIM servers and 100,000 SIM cards allegedly tied to nation-state actors, organized crime, and terrorist groups. Experts and commentators, however, are skeptical. SIM farms are typically used for spam, scams, fake accounts, or cheap international calls, not large-scale infrastructure attacks. https://www.schneier.com/blog/archives/2025/09/us-disrupts-massive-cell-phone-array-in-new-york.html
Upcoming Events
- Policy Network on AI (PNAI) Multistakeholder Working Group meeting. September 29. 1pm UTC. Online. To join, please access the PNAI page and register for the mailing list to receive the agenda and meeting link. Registration also gives you access to the archive of past updates: https://intgovforum.org/en/content/igf-policy-network-on-ai#resources
- Mainstreaming Trust and Safety in Online Games. Join the NYU Stern Center, the Center for Democracy and Technology, and AU's CSINT for a public event on trust and safety in online games. September 29. 9am-1pm EDT. Washington, DC. https://www.eventbrite.com/e/mainstreaming-trust-and-safety-in-online-games-tickets-1568036753139
- Internet Accountability Forum, conference organized by the Global Initiative on the Future of the Internet (GIFI), October 2-3. Brussels, Belgium. https://www.eui.eu/events?id=579262
- Race, Media & International Affairs 101 with Prof. Karen Attiah will examine the intersection between race, national identity, and the mass media in today's global world order. Live, online classes run October 6-November 21. 6-7:45pm ET. Online. https://www.resistancesummerschool.com/fall-2025-registration
- Solving the Standardisation Dilemma: The European Standardisation System faces the challenge of moving fast enough to meet policy and market needs while remaining inclusive, and this roundtable will examine how openness in models like open source can provide a way forward. October 22. 5.30-7pm CEST. Brussels, Belgium. https://openforumeurope.org/event/solving-the-standardisation-dilemma
- From Founder to Future. MEDLab presents author John Abrams for an insightful discussion on transitioning businesses to employee ownership. October 23. 12-1.30pm MT. Boulder, CO. https://www.colorado.edu/lab/medlab/2025/08/30/john-abrams-founder-future
- Workshop – Internet Rules: Understanding digital rights and policies in South and Southeast Asia. Apply by October 9. November 24-December 11. Online. https://www.apc.org/en/event/workshop-internet-rules-understanding-digital-rights-and-policies-south-and-southeast-asia
- Survivor AI Launch Party. Chayn will launch Survivor AI, a groundbreaking feminist AI tool that empowers survivors of image-based abuse to request takedowns of harmful content from platforms. September 30. 5-6.30 GMT+2. Berlin, Germany. https://luma.com/rg71dkhb?trk=feed_main-feed-card_feed-article-content
Careers and Funding Opportunities
- OpenForum Europe: Executive Director. Brussels, Belgium. https://openforumeurope.org/call-for-applications-join-openforum-europe-as-our-next-executive-director
- Future of Life Institute: Multiple PhD and postdoctoral fellowships open for submissions. Apply by January 5 2026. US, UK and Canada. https://futureoflife.org/grant-program/postdoctoral-fellowships
- Glitch: Advocacy & Communications Manager. Apply by October 12. Remote UK. https://docs.google.com/document/d/19SrA-bpf0ViuIaygxSzqBspq5Q7u2VqY6ReTgl45HTo/mobilebasic
- Interledger: Director of Fundraising. Remote. https://www.linkedin.com/posts/interledger-foundation_director-of-fundraising-job-description-activity-7372343184158543872-SOtL/?rcm=ACoAAAF2vmUBb7nX9-vJJu_5XpVoPz9DQWdC2iM
Opportunities to Get Involved
- Call for Reviewers: ACM CHI Conference (AI × People/Society). September 28. https://www.linkedin.com/posts/tahaei_call-for-reviewers-acm-chi-conference-activity-7376678557403684864-rYVe/
- Call for Speakers: ecoCompute 2025. September 30. https://cfp.eco-compute.io/ecocompute-2025/cfp
- Call for Papers: The Imaginative Landscape of AI – Visions, Positions, Conflicts. Abstracts due December 1. https://comai.space/en/call-for-papers-the-imaginative-landscape-of-ai-visions-positions-conflicts-2
What did we miss? Please send us a reply or write to editor@exchangepoint.tech.