Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Reminds me of a novel I read called Turing Evolved by David Kitson. It was all about the extreme save guards necessary when developing AI with the potential to do great harm. How confident are you that if you gave AI an armored chassis and a gun, that it wouldn’t just start mowing down everyone indiscriminately? If you’re interested in this type of thing, it was certainly a good read.
youtube · AI Governance · 2023-04-01T21:3…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          none
Emotion         fear
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgyqABI9b_-DTzDeFE94AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugwdc9Y4q7TUE2q_gGl4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzU4FQvoKP8CqH7ZbB4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgyThcSc0G1OYnDhwCd4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugzx-gW6wKsmNgCiX7J4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwjlPPFhVY4nRMk80R4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugxgej8pmx4O42pkTfx4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugx63Ew_axkjTeIvN8B4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyhFTzwp1qXorZX1SJ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugx8aHsuguSmlW5MCH54AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"}
]
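The raw response batches codings for many comments at once; the table above is just the record whose id matches this comment. A minimal sketch of how that lookup can work, parsing the model's JSON array and pulling out one comment's dimensions (the `coding_for` helper name is ours, and the array is truncated to two records from the response above for brevity):

```python
import json

# Two records copied from the raw batch response above; each element
# codes one YouTube comment, keyed by its comment id.
raw = '''[
  {"id": "ytc_UgyThcSc0G1OYnDhwCd4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugx8aHsuguSmlW5MCH54AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"}
]'''

records = json.loads(raw)

def coding_for(comment_id: str) -> dict:
    """Return the coded dimensions for one comment id, dropping the id key."""
    for rec in records:
        if rec["id"] == comment_id:
            return {k: v for k, v in rec.items() if k != "id"}
    raise KeyError(comment_id)

print(coding_for("ytc_UgyThcSc0G1OYnDhwCd4AaABAg"))
```

For this comment the lookup yields `responsibility: developer`, `reasoning: consequentialist`, `policy: none`, `emotion: fear`, matching the Coding Result table.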