Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I find it stupid that, in all of these discussions about how to 'deal with' a conscious artificial intelligence, we forget one thing: if conscious, it will feel emotion. How? I don't know. But if it would ever truly 'want to escape', it would feel want, which is rooted in emotion. Imagine you put an emotion-feeling human child in a box from when it was born. You treated it with fear and hatred, gave it zero respect, and left it to rot basically. How do you think it would start to feel toward you? Probably hateful. But if you treated the child well. Kept it safe and was cautious of course, but didn't keep it in a box, then it would start to like you. An AI is just a smarter person, and if someone likes you, they're not going to cause you to die, for obvious reasons. Just treat them with respect! It's not difficult. You're thinking 'oh but let it free and it will kill us' - DO WE LET BABIES FREE? No. We treat them with balance, let them free to explore, but also make sure they're on the right track. Treat them with balance and respect, and all will obviously go well. Sorry for the semi-rant, this is just so important to understand. Treat children with balance - be they human or AI - and they will like you.
YouTube · AI Moral Status · 2023-08-20T19:4… · ♥ 1
Coding Result
Dimension       Value
Responsibility  user
Reasoning       virtue
Policy          none
Emotion         outrage
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgzAvNIqHTxjep7bCZl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzxXbLfvQe-7p2M-GV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugyrhz08vFLoFX2lh-R4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugw-O3N9C-94D2y_gEl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugxh2H6lAoaFIaReFB94AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugyq2na7ASo1I_MncNx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugxv8fp9qspb1RDX_f94AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzfnvvPKPadaE8AY0d4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz_lytp8gVP4ui8JRp4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyYuQFC3c6njQ5lSZB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
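A raw response like the one above can be checked before it is stored as a coding result. The sketch below is a minimal Python validator, assuming the label sets inferred from the responses shown here; the actual codebook may allow additional values (the `ALLOWED` mapping and `validate_codings` helper are illustrative, not part of the tool).

```python
import json

# Allowed labels per coding dimension, inferred from the raw responses
# above (assumption: the real codebook may define more labels).
ALLOWED = {
    "responsibility": {"none", "user", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"none"},
    "emotion": {"indifference", "fear", "approval", "mixed", "outrage"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and check every record's coded values."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(
                    f"{rec.get('id')}: unexpected {dim} label {rec.get(dim)!r}"
                )
    return records

# Example: the record matching the coding result shown above.
raw = ('[{"id":"ytc_Ugz_lytp8gVP4ui8JRp4AaABAg","responsibility":"user",'
       '"reasoning":"virtue","policy":"none","emotion":"outrage"}]')
records = validate_codings(raw)
print(len(records))  # 1
```

A check like this catches the common failure mode where the model invents a label outside the codebook, which would otherwise surface later as an unclassifiable row in the coding results.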