Hello!
It's been a few weeks since I've shared my own thoughts with you all. And for good reason: we’ve had wonderful Q&As to publish. But it feels great to be sat in front of a blank page again. Nothing quite beats the sense of fulfilment that comes from going from nothing, to something.
I’ve been thinking a lot about fulfilment lately. And its relationship to a meaningful life. The word has lingered ever since I listened to Lex Fridman's interview with physicist and AI researcher Max Tegmark. If you've not listened to it, you must.
At the time of recording, Tegmark and his peers were hours away from launching an open letter calling for an immediate 6-month pause to the training of any AI systems more powerful than GPT-4. Here's an excerpt:
"Contemporary AI systems are now becoming human-competitive at general tasks,[3] and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable."
These are big questions.
During the interview, Fridman and Tegmark discuss how AI and AGI invite us to reconsider what it means to be human. What it means to lead a life worth living. Here's Tegmark:
"Maybe it's the struggle that gives us meaning. […] I worry that eliminating too much of the struggle from our existence […] takes away from what it means to be human. […] For me personally, the challenge is one of the things that makes life meaningful. If I go hiking in the mountains with my wife Meia, I don't want just to press a button and be at the top; I want the struggle. […] I want to constantly work on myself to become a better person."
Tegmark's reflections connect to the question of AI-induced automation. If I'm a writer who loves to write and derives meaning from writing, what happens to my sense of self-worth and purpose if I can ask ChatGPT to draft this newsletter instead of creating one from scratch?
Just because we can, Tegmark argues, doesn't mean that we should:
"We need to stop and ask ourselves: why are we doing this? I feel that AI should be built by humanity for humanity […] It would make more sense if […] we think about what are the jobs that people don't want to be doing, and automate them all away. And then we ask: what are the jobs that people really find meaning in […] even if it were possible to automate those tasks away we don't need to do it."
What are the jobs that people don't want to be doing?
Another big (and loaded) question.
If you work in the knowledge sector, chances are you assume that the answer lies in manual labour. "Low-skill service occupations" and "low-income manual occupations" are how one famous Oxford study on the future of employment puts it. Here's the World Economic Forum:
"Regardless of the specific numbers or percentages, the underlying omnipresent message in all of the above-mentioned reports, on the news and on social media, is that automation and artificial intelligence will make low-skilled jobs disappear in favor of specialized, highly knowledgeable workers."
Low vs. high.
Notice the implicit hierarchy here, encoded through spatial metaphor:
Low ⇒ down ⇒ less = inferior
High ⇒ up ⇒ more = superior
If you're not convinced, think about how we refer to mood:
"I'm feeling down today."
Down ⇒ low = sad
"I'm on the up today."
Up ⇒ high = happy
Non-profit readers, does this remind you of anything?
First world vs. Third world
First ⇒ top = best (superior)
Third ⇒ bottom = worst (inferior)
You might think this is a tangent. But as Ettie Bailey-King has told us: language is inherently metaphorical. It affects our thoughts, feelings, and behaviour.
The more we hear a job described as "low-skilled", the more likely we are to assume its inferiority and, thus, condone its obsolescence.
We need new metaphors.
"Humans love separations—we like to separate ourselves from other humans, usually in hierarchies, and we separate ourselves from the rest of biology by believing in our superiority. The upshot is that the planet is in peril, and humans will fight for every last resource.
Connectivity is what the computing revolution has offered us—and if we get it right, we could end the delusion of separate silos of value and existence. We might end our anxiety about intelligence. Human or machine, we need all the intelligence we can get to wrestle the future out of its pact with death—whether war, or climate breakdown, or probably both." — Jeanette Winterson
Let's look at the open letter again:
"should we automate away all the jobs, including the fulfilling ones?"
The fulfilling ones. "The" is the definite article, used to signal that the noun's identity (in this case, 'jobs') is already known to the reader. In other words, the open letter's authors believe we know what these fulfilling jobs are. I'd bet money they assume we think fulfilling jobs aren't low-skilled jobs.
But what is a low-skilled job? I asked ChatGPT:
"Retail sales associate: This job involves assisting customers with their purchases, restocking shelves, and maintaining the cleanliness of the store. It typically requires minimal education or experience.
Food service worker: This job can involve working in fast-food restaurants, cafeterias, or other dining establishments. Duties can include preparing food, cleaning and sanitizing kitchen areas, and serving customers. This job may require some basic training or on-the-job experience, but generally does not require a college degree or specialized skills.
Janitor or cleaner: This job involves cleaning and maintaining buildings, including offices, schools, and public spaces. Duties can include sweeping and mopping floors, emptying trash, and cleaning restrooms. This job typically requires little formal education or training."
Are these jobs people don't want to be doing?
Are these jobs devoid of meaning?
What does meaningful work mean? And who gets to decide?
Please don't mistake these questions for cynicism.
I work for an AI company. I love my job and am proud of what we're building. I'm learning to integrate AI into my daily working life in a way that is making me more productive and creative. Like Tegmark, I see enormous potential:
"We have the power now to harness artificial intelligence to help us really flourish and bring out the best in humanity."
Equally, like Tegmark, I believe this outcome is not a given.
If we want AI to bring out the best in humanity, it should reflect the breadth of humanity. Today, that's not the case.
In 2019, a study reported that 80% of AI researchers were men. In 2021, a Computing Research Association (CRA) survey showed that only 2.4% of US resident AI PhD graduates were African American. (In 2021, 14.2% of the country's population self-identified as Black.)
As for socioeconomic background? I struggled to find data. The 2021 Stanford AI Index report (which cited the CRA survey) openly acknowledged that it had not considered "diversity through an intersectional lens." It's hardly a logical leap to assume that most AI graduates and computer scientists come from middle-class backgrounds or above. This is what computer scientist Dr Joy Buolamwini meant when she coined the phrase "the coded gaze."
So what am I trying to say with all of this?
I'm trying to say that the existential questions concerning AI belong to all of us. Not just policymakers and tech leaders.
All of us.
Taking ownership of these questions starts with reading, talking, informing ourselves, sharing, and discussing.
Some things I’ve found helpful are:
Setting up a Google Alert for keywords like “generative AI.” The latest articles will land straight in your inbox.
Seeking perspectives from folks with and without technical expertise. I’ve learned more from reading novelist Jeanette Winterson’s essays on AI than I could have ever imagined.
Asking friends and peers in different industries what they think. Even if the person hasn’t thought about AI’s impact before, your question might spark interest.
Then there’s you. This wonderful community of curious individuals. What’s your take? What questions and ideas do you have? What new metaphors can we create to ensure that everyone flourishes in an AI-infused world?
Whatever it is, make sure you pass it on.
Lauren