Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Based on the title, what is the point of watching this video? Is it just Alex chatting with chatGPT and trying to leverage the semantic weaknesses of a large language AI model to make it say that it is "conscious"? Have I missed something? The video is too long to waste my time watching if that's all he is doing. At best he is trying to convince viewers that he is trying to convince chatGPT, and he is now reading the comments to see how many people believe that chatGPT could be conscious. "I tried to convince a prostitute she was a virgin" would be fine as a comedic premise, but unless you are going to use a very broad definition of "a virgin", then you are attempting to do something that can only fail. Unless the prostitute is insane or intellectually handicapped (as tragic as that picture is), then she will know she isn't a virgin and your "trying" will just be a futile game you are playing with yourself. We know that chatGPT isn't conscious in the way that we are. You would need to use a very broad definition of "conscious" to even suggest that it could be "conscious" in the way that we imagine basic forms of life could, maybe, kinda be, let alone conscious in an abstract way that we could imagine an octopus might be.
youtube AI Moral Status 2024-07-25T23:2…
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        unclear
Policy           unclear
Emotion          unclear
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[{"id":"ytc_UgxbPLEejhKsft5UJlx4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgyfVDGdTlvYy7YYpX14AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
 {"id":"ytc_UgzXwsBi2X-BWRsM6eF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgwOxwwwsBs8gtpDyZB4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgxfrYJji-D3lGzJgGJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
 {"id":"ytc_UgxeLqpRsfauHk5ljqZ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
 {"id":"ytc_UgwWy6l2LVESxPyqBqd4AaABAg","responsibility":"none","reasoning":"none","policy":"none","emotion":"approval"},
 {"id":"ytc_Ugwhw2qfGmhTTJNe25Z4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
 {"id":"ytc_UgwK8KiD2H3XMazzhKR4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
 {"id":"ytc_UgzjOkOKpMPOlcjy16N4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"mixed"}]
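A raw response like the one above has to be parsed before the per-comment codes can populate the result table. The sketch below, a minimal assumption about how such a pipeline might work (the function name `parse_codes` and the "unclear" fallback are illustrative, not the project's actual code), parses the model's JSON array and repairs the stray ")" that the capture above shows in place of the closing "]":

```python
import json

# First entry of the captured response, with the same stray ")" terminator
# seen in the raw dump above (real responses close the array with "]").
raw = (
    '[{"id":"ytc_UgxbPLEejhKsft5UJlx4AaABAg","responsibility":"ai_itself",'
    '"reasoning":"mixed","policy":"none","emotion":"indifference"})'
)

def parse_codes(text: str) -> list[dict]:
    """Parse a JSON array of per-comment codes from a raw LLM response.

    Repairs a trailing ")" where "]" belongs before parsing, and fills
    any missing dimension with "unclear" (matching the table above).
    """
    text = text.strip()
    if text.endswith(")"):
        text = text[:-1] + "]"
    dimensions = ("responsibility", "reasoning", "policy", "emotion")
    return [
        {"id": entry.get("id", ""),
         **{d: entry.get(d, "unclear") for d in dimensions}}
        for entry in json.loads(text)
    ]

codes = parse_codes(raw)
print(codes[0]["responsibility"])  # -> ai_itself
```

An entry that omits a dimension entirely (as malformed responses sometimes do) would be coded "unclear" rather than raising, which is one plausible way the all-"unclear" result above could arise from an unparseable response.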