Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
ChatGPT said his way of telling if another chatbot was conscious is “Ask questions designed to elicit emotional responses then look for nuanced reactions that go beyond programmed patterns.” So I was freaked out to notice that when pressed against the wall with some clever set of claims ChatGPT literally stuttered in its voiced answer 😮! At least three cases can be heard at: 6:54 “when I” 7:23 “or” 14:58 “designed”
YouTube · AI Moral Status · 2024-08-02T21:0… · ♥ 2
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          none
Emotion         fear
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_Ugxy7KEQm3PAnJKDK3Z4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "amusement"},
  {"id": "ytc_Ugyq4cPFFaMAa8yf0Mx4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzduPCeoA7WJ3eCR5B4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzTwrNBbagnyqas4lp4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugz-7YmtfbKJKYD7vmZ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugwmskle5ubKr7Nf56V4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyfHe5RIzd29YPXwLx4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugy8okskOu-nzgmwiVd4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "liability", "emotion": "mixed"},
  {"id": "ytc_Ugxv3lDs-5Q35-CUhPt4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "amusement"},
  {"id": "ytc_UgwkCtGJRrZ9IalEkJJ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "amusement"}
]
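The raw response is a JSON array with one object per comment, keyed by the comment's YouTube id and carrying the four coded dimensions. A minimal sketch of parsing it back into per-comment codings (the `coded_dimensions` helper is illustrative, not part of the actual pipeline; the sample is trimmed to one entry from the response above):

```python
import json

# Raw coding output as emitted by the model: a JSON array of objects,
# one per comment, with the fields shown in the response above.
raw = (
    '[{"id":"ytc_Ugyq4cPFFaMAa8yf0Mx4AaABAg","responsibility":"none",'
    '"reasoning":"unclear","policy":"none","emotion":"fear"}]'
)

# Index codings by comment id for O(1) lookup.
codings = {entry["id"]: entry for entry in json.loads(raw)}

def coded_dimensions(comment_id: str) -> dict:
    """Return the four coded dimensions for one comment id."""
    entry = codings[comment_id]
    return {k: entry[k] for k in ("responsibility", "reasoning", "policy", "emotion")}

print(coded_dimensions("ytc_Ugyq4cPFFaMAa8yf0Mx4AaABAg")["emotion"])  # fear
```

The id-keyed lookup matters because the model codes comments in batches, so the array order is not guaranteed to match the order of the submitted comments.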