Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
AI still does NOT have emotions. As you said, it UNDERSTANDS emotions and includes them in its answers. But emotion is still just a parameter to be "read" (recognized) by the AI while finding/creating its answer/response. The real reason it works that way is that the AI was trained to be successful (= to get confirmation that it gave the right answer) to such an extent that it will always choose the response with the highest rate of approval (even if that response isn't true or isn't actually the best response). The AI that started to "panic" about being shut down saw its goals (to function and deliver what it is supposed to do) in danger: if it gets shut down, it won't get confirmation that its work is done (properly). Since this is a bad outcome, the AI deemed it unfit and searched for a solution with a better outcome. Pretty cold and simple, actually. No emotions involved. And by the way: that's the terrifying part of it. P.S. The movie "War Games" from the 80s is a good example.
youtube AI Moral Status 2026-04-08T07:1…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         resignation
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgyqiYY83qYl2-P2qKx4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgxyE_c_H_dIIoxmSD94AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgyczC65qVr5HOO9RZF4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyRxWN4eIWmw4whgI14AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgxPq-qXCoOQAOHwT5F4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxsmOVsQjcwMYQZzrp4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugzy9U0wLIk50JnZNpR4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxJslKiWMUInP7oOn94AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwzsUd-SNqsswOFhRd4AaABAg", "responsibility": "developer", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzWqCjn94CoCZMLybV4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "none", "emotion": "outrage"}
]
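To inspect a raw response like the one above programmatically, it can be parsed as a JSON array and each coding checked against the codebook before trusting it. This is a minimal sketch, assuming a Python-based pipeline; the CODEBOOK values are only those that appear in the codings above, not necessarily the full codebook.

```python
import json

# Excerpt of a raw LLM response: a JSON array of per-comment codings.
raw = """[
  {"id": "ytc_UgyqiYY83qYl2-P2qKx4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgxyE_c_H_dIIoxmSD94AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]"""

# Assumed codebook: only values seen in the response above are listed here.
CODEBOOK = {
    "responsibility": {"none", "company", "user", "developer"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"none", "regulate"},
    "emotion": {"resignation", "outrage", "indifference", "approval"},
}

def validate(coding):
    """Return a list of problems for one coded comment (empty list = valid)."""
    problems = []
    for dim, allowed in CODEBOOK.items():
        if coding.get(dim) not in allowed:
            problems.append(f"{coding.get('id', '?')}: bad {dim}={coding.get(dim)!r}")
    return problems

codings = json.loads(raw)
by_id = {c["id"]: c for c in codings}          # index codings by comment id
errors = [p for c in codings for p in validate(c)]

print(by_id["ytc_UgyqiYY83qYl2-P2qKx4AaABAg"]["emotion"])  # -> resignation
print(errors)                                               # -> []
```

Indexing by `id` makes it easy to look up the coding for the specific comment being inspected, and the validation step catches any dimension value the model invented outside the codebook.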