Discussion about this post

Daniel Greco:

When I prompt ChatGPT, I observe the text it outputs. But I don't observe the process that generated that text. For all I know, the process is deterministic, even if *I* can't see why it couldn't have generated some other text.

When I think to myself "Should I cook or order in?" I'm aware of the choice I end up making (I'll cook, let's say), but similarly, I don't observe the underlying neural process that generated that choice. So even if, for all I can tell, in that exact moment I could've made a different choice, that's no more evidence that my brain processes are genuinely indeterministic than my inability to rule out other outputs is evidence that ChatGPT's process is indeterministic.

And the same thing can be said about the "prompt" of asking myself what to have for dinner. I'm not aware of any deterministic processes that caused that to be the question that popped into my head when it did. But it's a total non sequitur to conclude that, therefore, none exist.

Basically, I don't see why you think the evidence of introspection bears one way or the other on the question about the nature of the underlying processes that ultimately determine (or merely influence!) what makes it into consciousness.

Scott Sehon:

You seem to be presupposing that you have simply observed yourself doing one thing when you could have done another, with all conditions being exactly the same; that is, that you have simply observed a modal fact, in the way that one might observe a desk or a drop of rain. I would dispute that you have observed any such thing.

I do agree that you have observed yourself making genuine choices. But to assume that this is evidence for libertarianism is just to assume that compatibilism is wrong. That’s the subject of much debate, with most philosophers thinking that assumption is false, and almost no philosopher thinking it is something that one can just assume without detailed argument.
