Everything you do, someone else is going to do better with AI
Isn’t that what’s being shouted from every rooftop, every TV channel and every newspaper? What’s the hype and what’s the reality? And outside the expensive halls of Silicon Valley and its international peers, what does AI mean for everyone else? What does AI mean for sectors that don’t embrace technology?
What does AI mean for civil society organizations?
AI isn't a flashy gadget; it's a new infrastructure of life, akin to the Industrial Revolution or the invention of the printing press. And by AI we don't mean AI alone, but the computing infrastructure upon which it stands. It's creeping into every aspect of our lives, from self-driving cars to grocery lists, blurring the line between convenience and dependence. This revolution raises big questions: will AI change our jobs, our values, even our understanding of ourselves? The true impact of AI lies not in the technology itself but in how it might reshape our very sense of what it means to be human: our freedoms, our connections, and how we create meaning in a world where machines play an increasingly powerful role.
This week’s Messenger is the first of a four- or five-part series on AI and its impact on wicked problems and their solvers. Today’s essay covers (ever so briefly) the history of AI: both its recent past and its more distant precursors.
And while we are keenly interested in its practical influence on the social sector, we want our coverage to partake of some of the magic dust that AI is sprinkling, and there’s no doubt about it, some of it is black magic. It’s being used to rain bombs in some corners of the world and bomb rain in other corners of the world. AI invites suspicion and adoration normally reserved for religious cults. It is perhaps the most capital-intensive technology ever invented, and such is its expense that almost everyone besides the richest, most well-resourced corporations feels like a spectator rather than a participant. Our job is to see if and how we can grasp (if not change) that situation. We want to give you insight into the big ideas behind AI, as well as the specific opportunities for those who address wicked problems (most of the social sector falls in this category).
BTW, if you are OK with technical writing, this essay by Stephen Wolfram is enlightening.
And if you are already depressed, here’s a reason to be happier (or sadder, if you’re a middle manager).
A Brief Recent History
In the last fifteen years, AI has done amazing things that have changed how technology works. This is mostly thanks to improvements in deep learning, a way for computers to learn from examples using many stacked layers of artificial neurons, loosely (some would say very loosely) inspired by how our brains work. This has led to big steps forward in areas like understanding language, generating images, medical diagnosis, and even self-driving cars.
Computers are now much better at understanding and using human language: they can write stories and answer questions remarkably well. AI can also generate pictures and videos that look startlingly real, which is impressive, but which can also be used to make convincing fakes. It has helped robots act more autonomously, both in factories and in the messier real world. In medicine, AI is helping doctors diagnose diseases, discover new drugs, and do research. Banks and other financial companies use it to make decisions about loans and investments. And whenever you're online, AI is probably working behind the scenes to show you the stuff it thinks you'll like best.
AI has also shown it's really good at playing complicated games, sometimes beating the best human players, as when DeepMind's AlphaGo defeated Go champion Lee Sedol in 2016. This shows how good AI can be at making smart decisions and long-term plans. AI can also be creative, producing art and even realistic fake videos. And some of the most impressive AI systems are remarkably good at understanding and using language, almost like a person. This is a big step for AI.
Imagine a new invention that's amazing but also a little scary, like a super powerful tool. That's what's happening with artificial intelligence (AI) right now. A specific type of AI called generative AI went from the lab to everyone’s screens in just a few years, astonishingly fast, and AI is the first technology since the nuclear bomb to become a widespread object of fear and awe. We lived in fear of nuclear bombs for nearly half a century until the Cold War ended, but here's the difference: nuclear bombs were a big, scary threat way off in the distance, while AI is all around us, seeping into our everyday lives.
Nuclear bombs were a worry for a small group of people like soldiers and scientists, but AI is going to change the jobs of vast numbers of ordinary people. Teachers won't just have to worry about fire drills anymore, but about the whole point of what they teach. Lawyers, drivers, cooks: everyone will see their jobs change because of AI. It's not a sudden, giant disaster movie, but a slow, steady change that affects how we all live. As AI gets better and better, we also need to make sure it's being used responsibly. We need to think through the ethical concerns and maybe even make rules about how AI is used.
We Have Always Been Artificial
Artificial things aren't new; they're a part of how humans have always lived. To understand what artificial intelligence (AI) is and what it might become, we need to look at the long history of humans making artificial things. Think of the painted handprints in ancient caves. Weren't those a kind of AI? They preserved a gesture and a presence beyond the moment it happened, like an echo of the person who made them. Our need to communicate and leave our mark is tied to our need to make tools and express ourselves through technology.
As we keep building tools to change the world around us, we leave behind more and more artificial stuff. These things shape us, just like we shape them. Today's AI might be complicated, and its responses might seem like it's thinking on its own, but it's really just a product of our wants and limits. AI is a mirror, showing us not just how good we are with technology, but also our weaknesses, biases, and that we'll never be perfect.
The growth of AI is closely tied to how we've organized ourselves and made decisions for the past few hundred years. Scholars like Max Weber and Herbert Simon studied how bureaucracies (organized groups with rules) and governments worked. Their ideas about how to make things run smoothly and how people make decisions can help us think more clearly about AI. AI has grown up in a world dominated by modern bureaucracies and corporations, changing and improving as technology does. Now it has reached the point where it can collect huge amounts of information about people and use it to understand, predict, and even influence how we behave. Shoshana Zuboff powerfully named this "surveillance capitalism," and it's a big deal.
The story of AI isn't just about better technology. It's also about how we've always tried to understand, control, and make money off different parts of life. We started with simple rules and procedures and ended up with powerful computer programs that can handle vast amounts of information. Today's AI is a super-powered version of the intelligence that used to live in the offices of bureaucrats, but now it's everywhere, from global networks to the cloud.
The Artificial Babu
While it's clear that companies want to make money from surveillance capitalism, there's something else going on too: a drive to organize and label our thoughts and feelings. It's as if they're turning bureaucracy itself into a business. Companies can make money from our emotions because we share so much of our inner lives online. Those fancy language models that AI uses? They're trained on all that stuff we share.
Companies and bureaucracies might not be as smart as people in every way, but they're good at being efficient. They focus on making things run smoothly, following the rules, and finding patterns in big chunks of information. This kind of "intelligence" is perfect for machines to copy. AI is great at looking at tons of data and figuring out what it means, so it can do a lot of those same things for companies, like guessing what people want, making ads just for them, and even changing their minds.
But what's the cost of all this? Does turning our lives into data points make us less human? Could we create a better kind of AI, one that understands the messy, complicated parts of being human? Maybe the future of AI isn't just about making computers better, but about making AI that gets how complicated our lives are. Instead of just looking at data, this kind of AI would learn from our experiences and feelings. Then, instead of just being a tool for control, it could help us understand each other better and make the world a better place for everyone. We’ll end this first essay on AI with the opening statement from a recent report on why the humanities and the social sciences are relevant to AI:
Successful AI governance requires expertise in the sociotechnical nature of AI systems. Because real-world uses of AI are always embedded within larger social institutions and power dynamics, technical assessments alone are insufficient to govern AI. Technical design, social practices and cultural norms, the context a system is integrated in, and who designed and operates it all impact the performance, failure, benefits, and harms of an AI system.
To which we might add: of course, AI governance needs expertise from the humanities and the social sciences (and not just engineers), but even that falls several steps short, for AI governance needs the knowledge and wisdom of all citizens.