Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Why doesn't he dive into how the tests are structured. The test leave only one one option and then they frame it as AI being evil.
youtube AI Harm Incident 2025-09-15T20:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id": "ytc_UgzsnN8nQrzsROSwKjF4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzaQEe_66-YHlYJKgR4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_Ugw_ZS9-z9G3hV7syKp4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgxgwqiceG4ZosDZBBl4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgyFscoHoSxlL9q4rIl4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgywZ_RkaN6FrbK4fVt4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugw5W3jsquS7rZD2w2p4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugw8tzdit3rk8owlTbB4AaABAg", "responsibility": "government", "reasoning": "contractualist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgxiKYMKH1c8D4Cl7jR4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzOr8XdTOaQNCGLpyp4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]
```
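A batch response like the one above can be parsed and checked against the coding scheme before the per-comment results are stored. The sketch below is a minimal validator; the dimension vocabularies are inferred only from the values visible in this response (the full codebook may allow more), and the `validate_codes` helper name is hypothetical.

```python
import json

# Allowed codes per dimension, inferred from the raw response shown above.
# Hypothetical vocabularies: the real codebook may define additional values.
SCHEMA = {
    "responsibility": {"ai_itself", "user", "developer", "company", "government"},
    "reasoning": {"consequentialist", "deontological", "virtue", "contractualist"},
    "policy": {"none", "ban", "liability", "regulate"},
    "emotion": {"fear", "outrage", "resignation"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject rows with missing or
    out-of-vocabulary codes, returning the validated rows."""
    rows = json.loads(raw)
    for row in rows:
        for dim, allowed in SCHEMA.items():
            if row.get(dim) not in allowed:
                raise ValueError(
                    f"{row.get('id')}: invalid {dim!r} value {row.get(dim)!r}"
                )
    return rows

# Example with a single row matching the coded comment shown above.
raw = (
    '[{"id": "ytc_Ugw5W3jsquS7rZD2w2p4AaABAg", "responsibility": "developer", '
    '"reasoning": "deontological", "policy": "liability", "emotion": "outrage"}]'
)
rows = validate_codes(raw)
print(len(rows))  # 1
```

Failing closed here matters: an LLM coder can drift off-vocabulary (e.g. emitting `"anger"` instead of `"outrage"`), and rejecting such rows at parse time keeps the downstream tallies consistent.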