Are large language models (LLMs) sentient?
- When David Chalmers uses the term “sentience,” which of the following does he mean?
    - affective consciousness
    - reasoning & agency
    - human-level intelligence
    - goal-directed behavior
    - phenomenal consciousness
- What are some reasons Chalmers presents for thinking that LLMs are sentient?
    - LLMs can say that they are sentient
    - LLMs seem sentient
    - Panpsychism – everything is sentient
    - LLMs have conversational ability
    - LLMs pass the Turing test
    - LLMs show evidence of domain-general intelligence
- What are some reasons Chalmers presents for thinking that LLMs are not sentient?
    - LLMs lack a body
    - LLMs are not biological systems
    - LLMs don’t have human-level reasoning
    - LLMs lack a world model
    - LLMs lack unified agency
- Explain why Chalmers thinks that “unified agency” is a “strongish” critique of LLMs.
- Explain Chalmers’ conclusion about the current status of AI sentience.
- Do you have any questions you want to discuss in class?