Noah has a rule about asking permission. You don’t fix what someone doesn’t want fixed. You don’t upgrade without explaining what you’re upgrading. You definitely don’t go poking around in someone’s files just because you can.
[Newport] correctly argues that large language models aren’t conscious, but his framework leaves the entire philosophical foundation unexamined. Weak arguments for correct positions do more damage than strong arguments for wrong ones—they concede ground unnecessarily and invite sophisticated rebuttals that appear to win while actually missing the point.
Values and the consciousness that can hold them are metaphysically tied to living organization. Only entities facing the fundamental alternative of continued existence versus permanent cessation can possess values. Because contemporary AI lacks that life-constituting stake, it cannot instantiate the consciousness required for true metacognition.
I had a fun philosophical conversation with Claude (an AI service built on a large language model) about the nature of consciousness, ethics, and eventually art.
Because Claude had been my intellectual foil, I then had it describe the chat in the form of an essay, written from its own “point of view”!
The chat itself helped clarify my thinking as an Objectivist about the necessary relationship between consciousness and life. Maybe you’ll find it enlightening, too.