
Are large language models (LLMs) sentient?

  1. When David Chalmers says “sentience,” which of the following does he mean?
    • affective consciousness
    • reasoning & agency
    • human-level intelligence
    • goal-directed behavior
    • phenomenal consciousness
  2. What are some reasons Chalmers presents for thinking that LLMs are sentient?
    • LLMs can say that they are sentient
    • LLMs seem sentient
    • Panpsychism – everything is sentient
    • LLMs have conversational ability
    • LLMs pass the Turing test
    • LLMs have evidence of domain-general intelligence
  3. What are some reasons Chalmers presents for thinking that LLMs are not sentient?
    • LLMs lack a body
    • LLMs are not biological systems
    • LLMs don’t have human-level reasoning
    • LLMs lack a world model
    • LLMs don’t have a unified agency
  4. Explain why Chalmers thinks that the lack of “unified agency” is a “strongish” objection to LLM sentience.

  5. Explain Chalmers’ conclusion about the current status of AI sentience.

  6. Do you have any questions that you want to discuss in class?