Does a Bridge Decide to Collapse?

To start, I am not an AI expert.
But I’m going to attempt to answer a deceptively simple question:
What is AI?
Yes, you can Google it.
You can ask Claude or ChatGPT.
You’ll get a clean, technical definition.
That’s not what I’m interested in.
Instead, I want to explore the answers people actually mean when they say “AI.”
Because AI isn’t easy to box in or define.
It’s a projection. A fear. A promise. A belief system.
So what is AI?
- AI is the god of everything.
- AI is utopia.
- AI is the apocalypse.
- AI will take my job.
- AI is the imagination of a few tech billionaires.
- AI is inevitable.
- AI is biased and racist.
- AI is proliferating faster than its implications can be understood — or regulated.
- AI is an opportunity for humanity.
Over the next few weeks, I'll explore each of these definitions (some I agree with, some I don't, but each deserves consideration).
Because each one reveals less about AI and more about us: the creators and regulators of AI.
AI is already embedded in our daily lives. It recommends what we watch, flags fraud, screens resumes, generates images, and predicts behavior. It creates pressure to learn, adapt, keep up, and not be left behind. And that pressure falls not just on individuals but on companies, too.
So when we ask, “What is AI?” we’re also asking:
- What is it this time?
- Who benefits?
- Who is harmed?
- Who decides?
Recently, I had the opportunity to hear Timnit Gebru speak. She opened with the same question:
What is AI?
Her answer?
“I don’t know.”
And I loved that.
Here is someone widely recognized as an expert — formerly co-lead of the Ethical AI team at Google — who began not with certainty, but with humility.
She co-authored research that outlines the risks of large language models, including bias, racism, environmental costs, and the amplification of harmful content. She was later forced out of Google after raising those concerns. Following her departure, Dr. Gebru founded the Distributed AI Research Institute (DAIR), an independent organization for AI research.
While tech billionaires like Elon Musk or Sam Altman often speak about AI with sweeping confidence, Gebru began with doubt.
And maybe that’s the most honest starting point.
Before we define it, perhaps we need to ask:
Who gets to define it? And why should we be a little skeptical of that answer?
Because, regardless of how it is defined or what intentions lie behind it, AI was created by humans.
A bridge doesn’t decide to collapse.
- If AI is racist, it was trained that way by humans, without consideration for biased or inflammatory source data.
- If it self-replicates, then someone built it without weighing the implications of AI building AI without human involvement (a very real scenario for models from Alibaba and Meta).
- If it causes harm, then humans designed, deployed, or failed to regulate it.
A bridge doesn’t decide to collapse.
And AI does not decide to exist.
It is a human construct.
Which is why the definition of AI should not be exclusively owned by those who are also accountable for its consequences.
