Newport correctly argues that large language models aren’t conscious, but his framework leaves the entire philosophical foundation unexamined. Weak arguments for correct positions do more damage than strong arguments for wrong ones: they concede ground unnecessarily and invite sophisticated rebuttals that appear to win while actually missing the point.
Values and the consciousness that can hold them are metaphysically tied to living organization. Only entities facing the fundamental alternative of continued existence versus permanent cessation can possess values. Because contemporary AI lacks that life-constituting stake, it cannot instantiate the consciousness required for true metacognition.
I had a fun philosophical conversation with Claude (a large language model AI service) about the nature of consciousness, ethics, and eventually art.
Since Claude had been my intellectual foil throughout, I then had it describe the chat in the form of an essay, written from its own “point of view”!
The chat itself helped clarify my thinking as an Objectivist about the necessary relationship between consciousness and life. Maybe you’ll find it enlightening, too.