Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by its ID, or inspect one of the random samples below.
- "I asked chatgpt to answer a question based on an abstract hypothetical. It jus…" (ytc_UgxBj6sDW…)
- "It's not just about jobs and profits, it's a competition between big powers. Thi…" (ytc_UgxcJrN0z…)
- "The only noticeable ai parts is at 0:52, the guy to the left walking in front of…" (rdc_mtgb835)
- "This is the best sign that AI has some money behind it. It'll get real dystopian…" (ytc_UgxyzA07H…)
- "I think the solution is to start off with an mandated algorithm that that minimi…" (ytc_UgweUjyiP…)
- "People using AI generators for art is costing the companies so much that they ha…" (ytc_UgyNy7gcz…)
- "Now when he says "Google", is he referring the company, or the A.I. being? Or ar…" (ytc_UgxZ3iI6q…)
- "It's not something we are born with, it's learned and practiced over and over T…" (ytc_UgxTGQ7ko…)
Comment

> Even if ChatGPT were conscious this would not be relevant to proving it either way. ChatGPT is essentially a large language model and as such, what it does is predict the most likely next words in a text/conversation. If it "thinks" that the next most likely thing is to say stuff that sounds like it is conscious, based on everything that it has been trained on from the internet, including Alex's material, then that is what it will say. Whether it is true or whether ChatGPT "believes" it to be true is irrelevant to the decision as to what to say.

youtube · AI Moral Status · 2024-07-31T06:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response

```json
[
  {"id":"ytc_UgymooCWH3INIUHkT_Z4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugzu2pSEFukhGBsEZ394AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugx7mR5pZ4KwBbnaNeB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgxvASmjwoDQpq3bpD94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugz6uw9uzQwh52A2H0h4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzaBhIYPp0SAyGFjFZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwrxUilkkFhof-ViEV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgytawoYdZdRFJpXlcJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxECtFVUMw_PBsVVvl4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"disapproval"},
  {"id":"ytc_UgyiVsGohLbwJe-grG94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"}
]
```
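The raw response is a JSON array with one object per comment: an `id` plus one value for each coding dimension (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response could be parsed and validated follows. The allowed value sets are inferred only from the values visible on this page; the real codebook may define more categories, and `parse_codings` is a hypothetical helper, not part of the tool shown above.

```python
import json

# Allowed values per dimension, inferred from the samples on this page
# (assumption: the actual codebook may include additional categories).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "company"},
    "reasoning": {"unclear", "consequentialist", "deontological"},
    "policy": {"unclear", "none", "regulate"},
    "emotion": {"indifference", "mixed", "approval", "outrage", "disapproval"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: coding}, dropping rows
    that are missing an id or use a value outside the known categories."""
    rows = json.loads(raw)
    out = {}
    for row in rows:
        cid = row.get("id")
        if not cid:
            continue
        coding = {dim: row.get(dim) for dim in ALLOWED}
        if all(coding[dim] in vals for dim, vals in ALLOWED.items()):
            out[cid] = coding
    return out

raw = '[{"id":"ytc_x","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}]'
print(parse_codings(raw)["ytc_x"]["emotion"])  # mixed
```

Validating against a closed value set at parse time makes malformed or hallucinated labels visible immediately, rather than letting them flow silently into the coded dataset.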