Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Here's the thing - let's just say he's crazy, and he's one of those wildly dramatic doomsday preppers that is well off the mark. Where is the single tech leader proving him wrong with the solution to AI safety? A single person. He remains unchallenged. If you want to take a scientific, evidence-based approach, we're in a position we probably have to take his perspective on this as most accurate. Which is scary. Scary as fuck.
youtube AI Governance 2025-10-08T11:4…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          regulate
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgwR9DLJmdBe_TExYmN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyEJHTB9bMCaT5BLAF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugy3PcZTOnEl0f3_fjB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugyh-NXRzVLGDBImhip4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
  {"id":"ytc_UgxS5_bvqxialGyXUOl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxNbNbq4Gyw6Bvm0gF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyeIcOXnTcj_XBySPx4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwPJa33Ji_gmGubl_54AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzXztiqo5W1nRuRpD54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwOkUzRtABJjI28nV54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
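A raw response like the one above is a JSON array, one record per coded comment, keyed by comment id with the four coding dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of looking up the codes for a single comment, assuming only the JSON shape shown here; the function name `codes_for` and the truncated sample data are illustrative, not part of the tool:

```python
import json

# Sample raw LLM response (abbreviated to one record from the array above).
raw_response = '''[
  {"id": "ytc_Ugyh-NXRzVLGDBImhip4AaABAg",
   "responsibility": "none",
   "reasoning": "consequentialist",
   "policy": "regulate",
   "emotion": "indifference"}
]'''

def codes_for(raw: str, comment_id: str):
    """Return the coding record for `comment_id`, or None if absent."""
    for record in json.loads(raw):
        if record["id"] == comment_id:
            return record
    return None

record = codes_for(raw_response, "ytc_Ugyh-NXRzVLGDBImhip4AaABAg")
print(record["policy"])  # → regulate
```

Matching on the `id` field is what ties a record in the raw response back to the per-comment "Coding Result" table displayed above it.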