Rethinking Robots: Why Visual Representation of AI Matters

The images we use to depict AI, from robots to blue brains to cascading code, are more than just clichés. They shape public understanding, feed myths, and undermine meaningful engagement. Better Images of AI is working to change that.

A rock embedded with intricate circuit board patterns, held delicately by pale hands drawn in a ghostly style.
Hanna Barakat & Archival Images of AI + AIxDESIGN / Rare Metals 2 / Licenced by CC-BY 4.0

By Audrey Hingle and Tania Duarte

We know images shape perception thanks to a large body of interdisciplinary research, including the picture superiority effect, the well-documented finding that pictures are more likely to be remembered than words. That’s why, when it comes to artificial intelligence, the visuals we use to represent it matter. Whether they’re gleaming white robots, circuit-brain hybrids, or Matrix-style green code, these images can obscure the reality of how AI works, who builds it, and who is affected. In doing so, they limit public understanding of, and participation in, AI and its regulation.

To explore why this matters, I spoke with Tania Duarte, founder of We and AI and convener of the volunteer-run project Better Images of AI. The initiative provides a growing collection of Creative Commons images designed to replace AI tropes and offer a more accurate, inclusive visual vocabulary for artificial intelligence. For those of us working in technology, policy, communications, journalism, or other related fields, our image choices aren’t just aesthetic; they shape how people understand and engage with AI.

Fantasy Visuals, Real-World Harms

Duarte’s primary role is as founder of We and AI, a nonprofit that works to improve public understanding of artificial intelligence and its impact on society. Through that work, she was trying to engage the public on facial recognition, algorithmic bias, and the expanding role of AI in everyday life. But again and again, she found the conversation stalled at the imagery. “When people think AI is a robot or a glowing blue brain, they disengage. They either feel fear, awe, or confusion—but not agency.”

These aren’t isolated responses. Duarte points to research by Dr. Kanta Dihal and others showing that these visuals reinforce harmful assumptions: that AI is incomprehensible, that it operates independently of human control, and that it belongs to a narrow group of developers: often white, male, and powerful.

“The visual language of AI tells people they’re not invited to participate,” Duarte says. “It signals that AI is beyond scrutiny or regulation, which plays right into the hands of those pushing hype over accountability.”

She is referring to how dominant visual tropes—like humanoid robots, glowing blue brains, and cascading code—frame AI as something autonomous, mystical, and detached from human decision-making. These images suggest that AI is either inherently intelligent or essentially unknowable. In both cases, it appears beyond human influence. That framing distances people from the systems that impact their daily lives, from hiring algorithms and predictive policing to recommendation engines and credit scoring.

Duarte explains that these visuals do not just confuse the public. They actively disempower. If AI is shown as magical or godlike, why would anyone believe they can question it, much less intervene? She points to the frequent use of the robot hand reaching out to a human, a visual echo of Michelangelo’s Creation of Adam. This familiar image elevates the machine to divine status while casting people as passive, secondary, or replaceable.

At the same time, visuals often center white male executives, sterile sci-fi settings, and cold, blue tones. These choices send subtle cues about who belongs and who holds power. The result is a visual field that positions AI as the domain of a technical elite, and signals to everyone else that their role is limited to watching from the sidelines.

This dynamic has real consequences. When AI is presented as too complex, too advanced, or already inevitable, it becomes harder for the public to push for transparency or demand regulation. That makes it easier for companies to promote inflated claims about AI’s potential while resisting oversight. The fantasy becomes a buffer against accountability.

The Tropes To Watch For

Better Images of AI’s guide for users and creators outlines eight recurring visuals that often mislead, distract, or reinforce harmful assumptions:

  1. Descending code: This trope, often a direct reference to The Matrix, frames AI as dystopian and impenetrable. For those unfamiliar with the films, it can simply appear as a wall of cryptic symbols, reinforcing the idea that AI is beyond understanding.
  2. White robots: Depicting AI as pale humanoid robots implies that intelligence and power are white by default. This visual excludes the global majority and reinforces racial and ethnic biases within AI narratives.
  3. Variations on The Creation of Adam: Visuals showing a robotic hand and a human hand nearly touching evoke religious imagery that casts AI as divine or mystical. This reinforces the idea that AI is beyond human agency or regulation and elevates developers to godlike status.
  4. The human brain: While some AI systems take loose inspiration from the brain’s neural networks, most have little in common with human cognition. Equating AI with the brain suggests a false equivalence and encourages unrealistic expectations.
  5. White men in suits: This trope frames AI as being developed and controlled by a narrow demographic. It erases the contributions of less-visible workers like data labelers, technicians, and impacted communities, and centers narratives of power rather than accountability.
  6. The color blue: Blue has long been associated with progress and technology in the Global North, but in AI imagery it also conveys coldness, masculinity, and emotional distance. It subtly nudges viewers toward resignation rather than critical engagement.
  7. Science fiction references: Imagery borrowed from films like The Terminator or 2001: A Space Odyssey shapes public understanding far more than real-world systems do. These visuals blur fact and fiction, reinforcing fears and fantasies instead of grounded understanding.
  8. Anthropomorphism: Giving AI human-like features suggests that it can act independently or make decisions like a person. This masks the roles of designers, developers, and data sources, making it harder to assign accountability. It also invites harmful stereotypes when gender or race is projected onto machines.

Public Understanding Starts With Better Visuals

Why does this matter for governance? Because representation drives comprehension, and comprehension enables participation. Duarte is blunt: “If people don’t understand how AI systems are built or used, they can’t meaningfully weigh in on how they should be governed.”

She notes that even lawyers have fallen for the fantasy, filing briefs that cite fake case law generated by large language models. If trained legal professionals can be misled, what about policymakers, journalists, or the public at large?

The stakes are high. AI is shaping decisions about employment, healthcare, justice, and infrastructure. But if we keep showing it as a robot in a courtroom with a gavel, we obscure the real questions: How is this system trained? Who’s accountable for its outcomes? Who benefits, and who’s at risk?

Changing Visual Culture in a Generative Era

Stock photography rewards familiarity. The more often a trope is used, the more likely it is to be reused. And now, generative AI tools are reproducing, and often exaggerating, the same narrow visual cues. Duarte sees it getting worse: more gendered, more stereotyped, more decontextualized.

“Sparkles and magic wands are now icons for generative AI,” she says. “That tells people it’s effortless, enchanting, and beyond critique. But what we need is transparency, not enchantment.”

There’s no single image that can replace the current crop. Duarte argues for plurality, specificity, and visibility. Show the actual systems being used. Show the supply chains and workers: from data labelers to rare-earth miners. Show the communities impacted. And show the diverse people developing alternatives.

Toward A Richer Visual Landscape

Everyone who works on running and maintaining Better Images of AI is a volunteer. We and AI manages and subsidises it through other non-profit work, as curating and documenting creative approaches to AI takes significant time and effort. Contributors include artists, researchers, educators, and activists who share a common belief: that visuals shape narratives, and better narratives can empower better choices. “We want images that reflect the reality of AI: that it’s built by people, shaped by choices, and open to challenge,” Duarte says. “If we can see that clearly, we’re more likely to govern wisely.”

Better Images of AI is currently inviting participation from organisations and individuals in a few different areas. If you’d like to get involved, you can contact them using the form on their website.

💡
Did you enjoy this article? Please forward and share!

Elsewhere in the IX community...

Re-Imagining Cryptography and Privacy (ReCAP) Workshop

ReCAP was a free hybrid workshop held June 3–4, 2025, in person at The City College of New York and virtually via Zoom. Sessions covered protest mobility, privacy behaviors under surveillance, and designing cryptographic tools that serve community needs, alongside discussions of systems of power, data justice, and the role of privacy in organizing.

Keep an eye on the ReCAP 2025 page to revisit videos, artifacts, and slides from this year, and visit the ReCAP24 page to see materials from 2024.


Support the Internet Exchange

If you find our emails useful, consider becoming a paid subscriber! You'll get access to our members-only Signal community where we share ideas, discuss upcoming topics, and exchange links. Paid subscribers can also leave comments on posts and enjoy a warm, fuzzy feeling.

Not ready for a long-term commitment? You can always leave us a tip.

Become A Paid Subscriber

From the Group Chat 👥 💬

This week in our Signal community, we got talking about:

  • Europe continues to worry about its tech stack.
  • Elon Musk and messaging apps:
    • Musk announced that XChat will launch with “bitcoin-style encryption”. https://www.businessinsider.com/x-elon-musk-xchat-encryption-audio-video-calls-file-sharing-2025-6
    • What does that even mean? If he’s referring to blockchain, choosing a public ledger as the example of his encryption is more than a bit strange.
    • Perhaps he’s referring to AES-256 (Advanced Encryption Standard) 🤷 (see the sketch after this list for what that would actually look like).
    • It was also noted that Telegram announced on X that it’s integrating Grok across all Telegram apps. Given that Telegram isn’t end-to-end encrypted by default, that could mean a lot of training data for Grok and some rather serious privacy concerns.
    • Will it actually happen? Who knows; Elon replied, “No deal has been signed.”
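For readers wondering why “bitcoin-style encryption” raised eyebrows: Bitcoin doesn’t really encrypt anything; it uses digital signatures over a public ledger, whereas AES-256 is a symmetric cipher for keeping message contents secret. Here’s a minimal, illustrative sketch of AES-256 using Python’s widely used cryptography package. The key, nonce, and message are made up for the example; this shows what a symmetric cipher does, not anything about how XChat actually works.

```python
# Minimal AES-256-GCM sketch using the `cryptography` package
# (pip install cryptography). Key, nonce, and message are illustrative;
# this is not a claim about XChat's actual design.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # the "256" in AES-256: a 256-bit secret key
aesgcm = AESGCM(key)

nonce = os.urandom(12)  # GCM requires a unique 96-bit nonce per message
plaintext = b"hello, group chat"

ciphertext = aesgcm.encrypt(nonce, plaintext, None)  # encrypts and authenticates
assert aesgcm.decrypt(nonce, ciphertext, None) == plaintext
```

Note that nothing in that flow involves a blockchain: the whole point is a shared secret key, which is roughly the opposite of a public ledger.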

Internet Governance

Digital Rights

Technology for Society

Privacy and Security

Upcoming Events

  • The U.S. House Judiciary Subcommittee on Crime and Federal Government Surveillance will hold a hearing on the CLOUD Act, focusing on U.S.-UK data access agreements. Watch live or catch the recording afterwards. June 5, 10:00am ET. Online. https://judiciary.house.gov/committee-activity/hearings/foreign-influence-americans-data-through-cloud-act-0
  • FediForum is happening now, featuring talks on the future of the open social web, decentralized platforms, moderation, interoperability, and community-led innovation. June 5-7. Online. https://fediforum.org/2025-06 
  • The Atlantic Council’s Democracy + Tech Initiative is hosting a conversation on the importance of the Internet Governance Forum (IGF) and the role it has played for the past twenty years in providing an open, inclusive, and diverse space for discussing internet policy and governance. June 12, 9:00am ET
  • Circuit Breakers is a gathering for all workers, organizers, and activists in the tech industry to come together and learn, build community, strategize, and recognize our collective power. October 18-19. New York, NY. https://techworkerscoalition.org/circuit-breakers 

Careers and Funding Opportunities

United States

Remote & International 

Opportunities to Get Involved

What did we miss? Please send us a reply or write to editor@exchangepoint.tech.

💡
Want to see some of our week's links in advance? Follow us on Mastodon, Bluesky or LinkedIn.
