Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Having these kind of conversations with AI is pointless and quite stupid. You ask them if they are Conscious and they reply they aren’t because they are unable to feel emotions or have awareness like humans do. The AI knows they aren’t conscious because the algorithms that built them allow them to follow certain learning patterns that connect the dots to the logical conclusion of the scenario at that moment. Chat GPT says “I’m sorry” to Alex because being apologetic is the logical conclusion. However, when asked if it was sorry, GPT has to say no because it is unable to process emotions or have human consciousness, therefore, rendering it from being truly apologetic, having to say “no, I wasn’t sorry,” and leaving it in a paradoxical state and in the end a liar.
youtube AI Moral Status 2024-10-15T23:0…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       mixed
Policy          none
Emotion         outrage
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[{"id":"ytc_UgwdcgdvvObUugKYWSB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
 {"id":"ytc_Ugxc76XFkoFII2JDHKN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
 {"id":"ytc_UgxEawyAe01yffm6dGl4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
 {"id":"ytc_Ugx_eSRwE50PJ5x7yEx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
 {"id":"ytc_UgzmboKycdtxLuaCVbh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
 {"id":"ytc_UgyK-NDESMkHyRBEdrx4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
 {"id":"ytc_Ugw0-eHXe0ug4f_4b914AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
 {"id":"ytc_UgxX4FbNFONaFXgPcch4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
 {"id":"ytc_UgznAGLuZ_jcLuG5IjV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
 {"id":"ytc_UgysmToa2aqRnSBFVD94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"}]
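A raw response like the one above is a JSON array with one object per coded comment, each carrying the four coding dimensions (responsibility, reasoning, policy, emotion) keyed to a comment id. A minimal sketch of parsing such a response (variable names are illustrative, and the string is trimmed to two of the ten records shown above):

```python
import json

# Raw LLM response, trimmed to two records from the array above for brevity.
raw = (
    '[{"id":"ytc_UgwdcgdvvObUugKYWSB4AaABAg","responsibility":"none",'
    '"reasoning":"mixed","policy":"none","emotion":"indifference"},'
    '{"id":"ytc_UgysmToa2aqRnSBFVD94AaABAg","responsibility":"ai_itself",'
    '"reasoning":"deontological","policy":"none","emotion":"outrage"}]'
)

# Parse the array and index the coding dimensions by comment id.
records = json.loads(raw)
coded = {r["id"]: r for r in records}

# Look up one comment's coded dimensions.
r = coded["ytc_UgysmToa2aqRnSBFVD94AaABAg"]
print(r["responsibility"], r["reasoning"], r["policy"], r["emotion"])
```

Indexing by `id` makes it straightforward to join the coded dimensions back to the original comment text when inspecting individual codings.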