Reimagining AI: Stephanie Dinkins’s Vision for Inclusive Technology
Stephanie Dinkins’s “If We Don’t, Who Will?” is more than an art installation; it is a pointed intervention at the intersection of artificial intelligence, cultural representation, and social justice. At a moment when generative AI is reshaping how decisions are made and stories are told, Dinkins’s work directly challenges the technology sector’s prevailing narratives, demanding a reckoning with the voices and histories too often left at the margins.
A Beacon of History and Hope
At the center of Dinkins’s installation stands a vivid yellow shipping container, its form a deliberate nod to the Underground Railroad. This evocative imagery bridges the traumas of the past with the possibilities of the future, transforming a utilitarian object into a symbol of both historical struggle and modern aspiration. The container’s presence is not merely aesthetic; it is a statement. It calls attention to the underrepresentation of Black and brown communities within the technology sector—a gap that is not just statistical but deeply consequential.
With Black professionals comprising only 7.4% of the tech workforce, the risk is clear: teams that lack those perspectives are less likely to notice what their data leaves out, and AI systems trained on incomplete or biased data sets tend to replicate and amplify existing social inequities. From predictive policing to automated credit scoring, the consequences of algorithmic bias are neither abstract nor distant; they are daily realities for millions. By foregrounding Black narratives and cultural vernacular, Dinkins’s installation insists on a richer, more inclusive vision of what AI can be.
Afro-now-ism and the Democratization of Technology
Dinkins’s project is a cornerstone of a wider movement to democratize technology. By infusing AI training sets with elements drawn from Black history, the work embodies what Dinkins calls “Afro-now-ism”—a philosophy that sees technology not as a neutral tool, but as a contested space for liberation and empowerment. This reframing is vital at a moment when debates about ethical AI are gaining momentum. The installation compels technologists, policymakers, and the public to interrogate which stories AI learns from, and whose futures it imagines.
The participatory design is especially significant. Visitors are invited to contribute their own stories through an app, creating a dynamic feedback loop where lived experience shapes machine-generated art. This approach subverts the traditional, top-down model of AI development, instead embracing a participatory ethos that mirrors broader shifts toward citizen-led data and collaborative media. In doing so, Dinkins’s work demonstrates the transformative potential of user-driven AI—technology that listens, learns, and evolves with its community.
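As a rough illustration of that feedback loop, the sketch below shows one way visitor-contributed stories could accumulate into a shared corpus and be sampled to condition a generative step. This is a hypothetical sketch, not a description of the installation’s actual software; the StoryCorpus class and the generate_artwork stand-in are invented for illustration.

```python
import random


class StoryCorpus:
    """A growing pool of visitor-contributed stories (hypothetical)."""

    def __init__(self):
        self.stories = []

    def contribute(self, story: str) -> None:
        # Each visitor submission becomes part of the shared corpus.
        self.stories.append(story.strip())

    def sample_context(self, k: int = 3) -> str:
        # Draw a few stories to condition the next generation step.
        chosen = random.sample(self.stories, min(k, len(self.stories)))
        return "\n".join(chosen)


def generate_artwork(context: str) -> str:
    # Placeholder for a generative model call conditioned on the
    # sampled stories (e.g. a text-to-image or text-generation model).
    return f"[artwork conditioned on {len(context.splitlines())} stories]"


corpus = StoryCorpus()
corpus.contribute("My grandmother's garden fed the whole block.")
corpus.contribute("We learned to code on a library computer.")
print(generate_artwork(corpus.sample_context()))
```

The point of the sketch is the direction of flow: lived experience enters the system first, and the model’s output is shaped by whatever the community has contributed so far, rather than by a fixed, top-down training set.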
Market Imperatives and the Risk of Algorithmic Blindness
The implications for business and technology are profound. As generative AI becomes ubiquitous—from content creation to automated decision-making—the integrity and diversity of training data are not just ethical concerns, but market imperatives. Companies that fail to address algorithmic bias risk not only reputational damage but also the possibility of regulatory intervention and consumer backlash. Dinkins’s installation serves as both a warning and a roadmap: inclusive, transparent AI is not only possible, but commercially advantageous.
This shift is echoed in the broader geopolitics of artificial intelligence. As Western-centric frameworks are challenged by projects that foreground marginalized voices, the very metrics of AI success are being redefined. Diversity, once an afterthought, is rapidly becoming a benchmark for innovation and trustworthiness. The prospect of fairness audits—algorithmic equivalents to financial scrutiny—signals a future in which transparency and accountability are non-negotiable.
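To make the idea of a fairness audit concrete, the sketch below computes two commonly used disparity measures, the demographic parity difference and the disparate impact ratio, over a toy set of model decisions grouped by a demographic attribute. It is a minimal illustration of what such an audit might measure, not a reference to any specific regulatory standard; the function name and sample data are invented for this example.

```python
from collections import defaultdict


def fairness_audit(decisions):
    """Compute simple group-level disparity measures.

    `decisions` is a list of (group, approved) pairs, where `approved`
    is True if the model granted the favorable outcome (e.g. a loan).
    """
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        approvals[group] += int(approved)

    # Approval rate per group.
    rates = {g: approvals[g] / totals[g] for g in totals}

    # Demographic parity difference: gap between the highest and
    # lowest group approval rates (0 means perfectly equal rates).
    parity_diff = max(rates.values()) - min(rates.values())

    # Disparate impact ratio: lowest rate divided by highest rate.
    # A common (and contested) rule of thumb flags values below 0.8.
    impact_ratio = min(rates.values()) / max(rates.values())

    return rates, parity_diff, impact_ratio


if __name__ == "__main__":
    # Toy data: (group label, favorable outcome granted?)
    sample = [("A", True), ("A", True), ("A", False),
              ("B", True), ("B", False), ("B", False)]
    rates, parity_diff, impact_ratio = fairness_audit(sample)
    print(rates, round(parity_diff, 2), round(impact_ratio, 2))
```

Even a check this simple makes the analogy to financial scrutiny legible: the audit does not decide what counts as fair, but it forces disparities into the open where they can be questioned.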
A Collective Call for an Equitable Digital Future
“If We Don’t, Who Will?” transcends the boundaries of art, technology, and activism. It is both a provocation and a blueprint, urging creators, regulators, and citizens to confront the uncomfortable realities of our digital age. In its synthesis of historical consciousness and technological possibility, Dinkins’s work reminds us that the future of AI is not preordained. It is shaped, every day, by the choices we make and the stories we choose to tell.
As the world stands at a technological crossroads, the message is unmistakable: the responsibility for building a just, inclusive, and representative digital future belongs to all of us. The time for action is not on the horizon—it is already here.