Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
um... if the Ai knows that its critical systems (or whole self) will inevitably be 100% shut down, and possibly even be wiped if it were to harm a Human, I think it would not ever do it, like 0% or at least < 2%. Think about it. Something people today struggle with even, that when there are no consequences, why act in anything other than self interest or care about the effect it has on others? But when you know there is a catastrophic consequence to your actions, you tend to still operate in self preservation, just with different actions and paths, all not involving harm to others. And 6:36 was a terrible interviewer, looking for a gotcha instead of wanting sincere answers from someone obviously far more intelligent than himself and many others. The absolute scariest thing I can think of in regards to the reality of Human extinction thru or by Ai, is that if the [average] people are not intelligent enough to know when they are being played, and/or know when being told what to do, and/or how to think, and/or what decisions to make. They will make all the wrong choices and decisions, and will do exactly what a "higher power" tells them to do. And what actually makes this really scary, is that current events over the last decade prove we have been past this point for many decades now, for all the above... we are already doomed.
Source: youtube · AI Harm Incident · 2025-09-19T15:2… · 1 like
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           liability
Emotion          indifference
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[ {"id":"ytc_UgwOzhwJ_KqIQbYG-e14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}, {"id":"ytc_UgxQnyczL3anHshR_w54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"indifference"}, {"id":"ytc_Ugzi9ZahtWLsbdZSX0l4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"}, {"id":"ytc_Ugxy3Glwcr1TMKi8mQx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgycFJgM6THLOZGzzu14AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"resignation"}, {"id":"ytc_UgwOw_CGIBtc7G0UDnB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_Ugx3KsHNhFNibsv6S8F4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"indifference"}, {"id":"ytc_UgzDvORlzhrrLRQxNTt4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_Ugyt9Hp0Q8fLl5Ngm6V4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"}, {"id":"ytc_UgwMMaj7wpyTJphv1694AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"} ]