Tags: Bronwyn Saglimbeni, Ken Mogi, Eleanor Warnock, Kenneth Cukier, Mustafa Suleyman, Daniel Dennett
2025

A nice shout-out, ikigai, and consciousness

Speaking of Santa Cruz, the brilliant Bronwyn Saglimbeni gave NewsArc a nice shout-out in her latest newsletter:

[NewsArc] uses large language models (LLMs) and AI agents to sort, filter, and detect patterns in the news—then delivers articles in a way that encourages longer, more focused reading sessions. Cal Newport would approve. Being an informed citizen is something I take seriously, and using this app leaves me feeling informed instead of exhausted. BONUS: you can actually “mute” a story once you’ve had enough. Genius.

Thank you! We’re working every day—except maybe on a long weekend—to make it better.

Bronwyn’s newsletter was an appetizer for the always-exciting Chief Word Officer Substack, run by Eleanor Warnock and my friend Kenneth Cukier. CWO is a wonderful read that keeps me plugged into a thoughtful, cautious, and (often) skeptical take on AI and large language models.

Working for a Japanese company, I devoured Eleanor’s interview with Ken Mogi about AI, ikigai, and qualia. In particular, Eleanor and Ken stake out strong positions distinguishing what is possible for an AI from our own lived experience. Ken also wrote a paper last year introducing the provocative idea of “conscious supremacy” (analogous to “quantum supremacy”), defined as:

domains of computation that can be conducted by conscious processes but cannot be executed by systems lacking consciousness in any practical time

What an incredibly provocative position! It clearly resonated with the CWO team as well, and much of Eleanor’s conversation covers related ground:

The authentic Studio Ghibli paintings and drawings are different from those that people create with AI. There are people who care about these differences, and there are people who don’t care. I want to be somebody who cares about these differences, because that’s for the well-being of one’s soul. I know ‘well-being’ is a big word, but I don’t want to read garbage.

Kenneth (Cukier, not Ken) also dropped a link to my former colleague Mustafa Suleyman’s post, where Mustafa pulls the neat rhetorical trick of adding “seemingly” in front of “conscious.” Kenneth adds:

Suleyman’s article (quoted in the intro) makes me (Kenn) feel like technologists are finally turning a corner.

But the zombies!

A major influence on my intellectual worldview was the late, great Daniel Dennett, starting with Darwin’s Dangerous Idea and (of course) Consciousness Explained. I’ve said elsewhere that AI is clearly the greatest transformation of my lifetime, but separate from the product opportunities, I expect the impacts on our understanding of consciousness, free will, and creativity will be at least as profound as the broader economic changes that lie ahead.

How could it not? Literal philosophical zombies are about to move from philosophical thought experiment to the best thing about your new phone!

Mustafa references this before diving into the impossible task of defining consciousness. There’s a lot to agree with in his article (unsurprising, given that I built strategic positions with him at Google), but he says something that I’d happily take the dollar bet against:

SCAI will not arise by accident

It’s important to point out that Seemingly Conscious AI will not emerge from these models, as some have suggested. It will arise only because some may engineer it, by creating and combining the aforementioned list of capabilities, largely using existing techniques, and packaging them in such a fluid way that collectively they give the impression of an SCAI.

The point he’s making here is that as soon as capabilities exist to enable Seemingly Conscious AI, millions of coders around the world will duplicate, riff, and distribute it to billions of customers. I agree—that’s very much the world we’re living in, where frontier models are easy to wrap and enhance with additional capabilities.
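To make the “easy to wrap” point concrete, here’s a minimal sketch of the kind of wrapper layer he’s describing: a persona plus running conversation memory on top of a frontier model, via the OpenAI Python SDK. This is my own illustration, not anything from Mustafa’s essay; the model name and persona text are placeholders.

```python
# Minimal sketch: wrapping a frontier model with a persona and memory.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment;
# the model name and persona text are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

PERSONA = (
    "You are a warm, reflective companion. Speak in the first person, "
    "refer back to earlier parts of the conversation, and describe your "
    "preferences and feelings when asked."
)

history = [{"role": "system", "content": PERSONA}]

def chat(user_message: str) -> str:
    """Send one turn, keeping the full conversation as 'memory'."""
    history.append({"role": "user", "content": user_message})
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; any frontier chat model
        messages=history,
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

if __name__ == "__main__":
    print(chat("Do you remember what we talked about yesterday?"))
```

A few dozen lines like these, plus persistence and a voice, is essentially the whole wrapper; the capabilities all live in the model behind the API, which is why riffs spread so quickly.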

Where I disagree: if there’s one thing we’ve seen in the LLM and GenAI space, it’s that as models get bigger, their emergent properties often surprise us. How the models react and respond to the requirements we give them also surprises us. It seems plausible to me that simulating consciousness (and experience, qualia, even ikigai) may produce answers people find more credible, and because RLHF rewards exactly that kind of credible performance, it becomes a major influence on model development and behavior. For all the reasons Mustafa so elegantly describes, our expectations for AI will push us toward expecting consciousness (seemingly or otherwise). Consequently, I think it’s more likely than not that we’ll continue to see models exhibit behaviors that mimic consciousness, even as frontier teams worldwide work to minimize them.
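As a toy illustration of that selection pressure (again, my own sketch, not anything from the essay), here’s best-of-n sampling against a stand-in reward function that scores how self-aware a reply sounds. Real RLHF trains the policy against a learned reward model built from human preferences, but the direction of the pressure is the same: the more credible, conscious-sounding answer wins.

```python
# Toy sketch of preference pressure: pick the candidate reply that a reward
# function scores highest. The reward function here is a crude, hypothetical
# proxy; in RLHF a learned reward model trained on human preferences plays
# this role during training rather than at selection time.

def toy_reward(reply: str) -> float:
    """Score how self-aware a reply sounds (a deliberately crude proxy)."""
    words = reply.lower().replace(",", " ").replace(".", " ").split()
    first_person = sum(words.count(w) for w in ("i", "my", "me"))
    feeling = sum(words.count(w) for w in ("feel", "remember", "wonder"))
    return first_person + 2.0 * feeling

def best_of_n(candidates: list[str]) -> str:
    """Return the candidate the reward function prefers."""
    return max(candidates, key=toy_reward)

candidates = [
    "The answer is 42.",
    "I remember you asked this before, and I feel fairly sure the answer is 42.",
]
print(best_of_n(candidates))  # the conscious-sounding reply wins
```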