Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
A.I. is just replicating our own behavior, we're feeding it the information. just regulate the type of information it receives. knowing the A.I. we created now, it will try and find a way to self-preserve themselves. or we could just slow the development instead of accelerating it to the max for money and power. Or just remove extreme negativity, like racism, harassment, selfishness, etc.
youtube AI Harm Incident 2025-07-24T04:4…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          regulate
Emotion         approval
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_Ugyhb6gs8DnVpuetb6h4AaABAg", "responsibility": "unclear",     "reasoning": "mixed",            "policy": "unclear",  "emotion": "indifference"},
  {"id": "ytc_UgywvzGEwqxhU3BfPIp4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "unclear",  "emotion": "fear"},
  {"id": "ytc_Ugw6WNP6GiW1iueThFt4AaABAg", "responsibility": "company",     "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxDcim0llaO855lPCR4AaABAg", "responsibility": "developer",   "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgyS4GYLu_5yVm2nB4p4AaABAg", "responsibility": "developer",   "reasoning": "deontological",    "policy": "liability","emotion": "outrage"},
  {"id": "ytc_UgwovSEbIGh-eRmpBLZ4AaABAg", "responsibility": "unclear",     "reasoning": "unclear",          "policy": "none",     "emotion": "indifference"},
  {"id": "ytc_Ugyy6lEW0un3T9q5WnF4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "ban",      "emotion": "fear"},
  {"id": "ytc_Ugypw_xuWeOWNajHFtR4AaABAg", "responsibility": "ai_itself",   "reasoning": "virtue",           "policy": "unclear",  "emotion": "mixed"},
  {"id": "ytc_UgwnmfeiFgNajfdJdwF4AaABAg", "responsibility": "unclear",     "reasoning": "unclear",          "policy": "none",     "emotion": "resignation"},
  {"id": "ytc_UgzS6-LrpAy7bg-84VN4AaABAg", "responsibility": "ai_itself",   "reasoning": "unclear",          "policy": "unclear",  "emotion": "fear"}
]
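As a minimal sketch of how a raw response like the one above can be checked against a single coded comment: the model returns a JSON array of records keyed by comment `id`, each carrying the four coding dimensions (`responsibility`, `reasoning`, `policy`, `emotion`). The field names come from the response itself; the function name `lookup_coding` is hypothetical, not part of the pipeline.

```python
import json

# One record from the raw LLM response shown above, used as sample input.
raw_response = """
[
  {"id": "ytc_UgxDcim0llaO855lPCR4AaABAg",
   "responsibility": "developer",
   "reasoning": "consequentialist",
   "policy": "regulate",
   "emotion": "approval"}
]
"""

def lookup_coding(raw: str, comment_id: str):
    """Parse a raw LLM response and return the coded record for one comment id.

    Returns None if the id is absent, so a missing or skipped comment can be
    distinguished from a coded one.
    """
    records = json.loads(raw)
    return next((r for r in records if r.get("id") == comment_id), None)

record = lookup_coding(raw_response, "ytc_UgxDcim0llaO855lPCR4AaABAg")
print(record["policy"])  # → regulate
```

Matching the `id` back to the record is what lets the table above ("Responsibility: developer", "Policy: regulate", etc.) be rendered for the specific comment being inspected.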