By Lindsay Morgia

Confessions of an AI skeptic

Updated: Apr 18

This week, I'm heading to the Nonprofit Technology Conference in Portland, OR. It's my first time going, so I'm excited to brush up on my data visualization skills and meet other data nerds. But the main reason I'm going is because of all the sessions about artificial intelligence – 18 in total. It's a hot topic, and I know I need to stay current on emerging technology. But I've got to tell you, I'm skeptical. And by skeptical, I mean my first reaction to ChatGPT was, "I hate it."

As a consultant, researcher, and analyst, I have three major concerns about AI:

  1. Privacy protections – how do we protect individuals' privacy with technology that's so opaque?

  2. Misinformation – it's already so easy to spread false information. Will AI make it harder for me to distinguish between real and fake reports, studies, and news?

  3. Bias built into AI systems – technology designed by white men has historically best served white men. The boards of companies like OpenAI don't exactly inspire confidence.

A brief search on AI reveals even more to worry about. First, there's a risk that we rely too much on AI to make decisions for us. Without any regulation or system of accountability, we could exacerbate social problems rather than solve them. As a 2020 Harvard Gazette article explains,

"With virtually no U.S. government oversight, private companies use AI software to make determinations about health and medicine, employment, creditworthiness, and even criminal justice without having to answer for how they're ensuring that programs aren't encoded, consciously or unconsciously, with structural biases."

There's also an increased risk of scams and financial fraud. According to the Federal Trade Commission, individuals are concerned that AI will make it harder to spot email scams by eliminating telltale signals like poor spelling and grammar. Some reported falling victim to phone scams that used AI to mimic family members' voices, which sounds terrifying.

Finally, there's a growing risk that AI will eventually outsmart us all. A 2023 opinion piece in Scientific American warns that because AI is unregulated, the technology is improving itself rapidly. The consequence?

"Once AI can improve itself, which may be not more than a few years away, and could in fact already be here now, we have no way of knowing what the AI will do or how we can control it."

I have thought about that last point but chalked it up to watching too many disaster movies.

And yet, to paraphrase one of my brilliant clients, AI could become as crucial as the internet: many of us may be unable to do our jobs without it. AI is already poking its nose into my work all the time. It wants to write my social media posts, design pages on my website, and create codes for my qualitative work without me even reading the transcript. As such, it's my responsibility to learn more about how it works and how to use it in a way that doesn't contribute to the end times.

I hope the NTC conference will be a first step toward that education. I'll write a follow-up post in a couple of weeks sharing what I learned. In the meantime, what is your view on using AI in your work? What questions do you have about the technology? I'd love to hear what you think in the comments below.


