Does a Bridge Decide to Collapse?

February 19, 2026 Thom Dempsey

To start, I am not an AI expert.

But I’m going to attempt to answer a deceptively simple question:

What is AI?

Yes, you can Google it.
You can ask Claude or ChatGPT.

You’ll get a clean, technical definition.

That’s not what I’m interested in.

Instead, I want to explore the answers people actually mean when they say “AI.”

Because AI isn’t easy to box in or define.
It’s a projection. A fear. A promise. A belief system.

So what is AI?

  • AI is the god of everything.
  • AI is utopia.
  • AI is the apocalypse.
  • AI will take my job.
  • AI is the imagination of a few tech billionaires.
  • AI is inevitable.
  • AI is biased and racist.
  • AI is proliferating faster than its implications can be understood — or regulated.
  • AI is an opportunity for humanity.

Over the next few weeks, I’ll explore each of these definitions (some I agree with and some I don’t, but all of them are worth considering).

Because each one reveals less about AI and more about us: its creators and regulators.

AI is already embedded in our daily lives. It recommends what we watch, flags fraud, screens resumes, generates images, and predicts behavior. It creates pressure to learn, adapt, keep up, and not be left behind. And not just for humans, but for companies too.

So when we ask, “What is AI?” we’re also asking:

  • What is it this time?
  • Who benefits?
  • Who is harmed?
  • Who decides?

Recently, I had the opportunity to hear Timnit Gebru speak. She opened with the same question:

What is AI?

Her answer?

“I don’t know.”

And I loved that.

Here is someone widely recognized as an expert — formerly co-lead of the Ethical AI team at Google — who began not with certainty, but with humility.

She co-authored research that outlines the risks of large language models, including bias, racism, environmental costs, and the amplification of harmful content. She was later forced out of Google after raising those concerns. Following her departure, Dr. Gebru founded the Distributed AI Research Institute (DAIR), an independent organization for AI research.

While tech billionaires like Elon Musk and Sam Altman often speak about AI with sweeping confidence, Gebru began with doubt.

And maybe that’s the most honest starting point.

Before we define it, perhaps we need to ask:

Who gets to define it? And why should we be a little skeptical of that answer?

Because, regardless of its definition or intentions, AI was created by humans.

A bridge doesn’t decide to collapse.

  • If AI is racist, it was trained that way by humans, without consideration of inflammatory source data.
  • If it self-replicates, then someone built it without considering the implications of AI building AI without human involvement (a very real scenario for Alibaba and Meta).
  • If it causes harm, then humans designed, deployed, or failed to regulate it.

A bridge doesn’t decide to collapse.

And AI does not decide to exist.

It is a human construct.

Which is why the definition of AI should not be exclusively owned by those who are also accountable for its consequences.


Did Someone Say AI?

December 3, 2025 Thom Dempsey

To say that AI is everywhere would be an understatement. It’s fascinating to see how companies are leveraging the fear, excitement, and possibilities of AI to sell their products. One visit to San Francisco will prove this to be true. But on a human level, it’s equally clear that many people are unsure what the hell AI is, really. Other than confusing. And a little scary. Compared with the proliferation of the internet and email, AI feels very fast. So, what can you do? Well, you learn and evolve. Like you did with the internet, or email addresses, or puberty. AI has massive potential, but there are so many “experts” on the topic that it’s easy to buy into the fear, and fear is a reliable way to move products or services.

If you want to start learning, pick up Mustafa Suleyman’s book on AI, The Coming Wave. It will help ground you and challenge you to find your own personal balance with AI. It’s daunting, yes. But if you can find even one way to use AI to make yourself more productive, then you are learning. And evolving. And that’s all you can do right now, because AI is so ubiquitous.


