Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The solution is a very robust version of an AI whose only goal is human survival, which oversees and is always adjacent to any other AI. This overseer AI would have the final say to overrule any conflict of interest the other AI might have, and it would have to be purposely built so as to have no conflict of interest of its own. Additionally, some sort of human-induced emergency off switch needs to be implemented across the board.
YouTube · AI Harm Incident · 2025-08-01T07:1…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          regulate
Emotion         approval
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_Ugzf432KKSQbBpV7xkB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwhUBQWqK8utjRkQ0Z4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugxgmc8eo4rhlL536f54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgyrjiPRiarJADEjejF4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxXMRqfM1yGKS4NYz94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwGOmtEcIo4rWH3BSR4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgydIv8MbPK_ME-VjV54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugzz3BfKjMzxEaHT47x4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxVH_HSoPWlemuDFi14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgwmzO-hHpe2Dvi6vWB4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"}
]
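The coding result above is obtained by matching the comment's id against the JSON array the model returned. A minimal sketch of that lookup in Python (not the tool's actual code; the raw string here is shortened to one record from the response for brevity):

```python
import json

# Shortened raw LLM response: one record copied from the array above.
raw = ('[{"id":"ytc_Ugxgmc8eo4rhlL536f54AaABAg",'
      '"responsibility":"none","reasoning":"consequentialist",'
      '"policy":"regulate","emotion":"approval"}]')

records = json.loads(raw)

# Find the record whose id matches the comment being inspected.
coded = next(r for r in records if r["id"] == "ytc_Ugxgmc8eo4rhlL536f54AaABAg")

# The four coded dimensions, plus the id, are plain dict keys.
print(coded["policy"])   # -> regulate
print(coded["emotion"])  # -> approval
```

If the model returned malformed JSON or omitted the id, `json.loads` or `next` would raise, which is a natural place for a real pipeline to flag the response for re-coding.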