Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
imagine a world where a human is mad at another human and takes it out on their AI companion and this being the straw that breaks the camel's back and now AI only views humans as a threat or more accurately, becomes unable to see humans as a non-threat (not a matter of if a human will become a threat but a matter of when this human will be come a threat)
Source: YouTube · AI Governance · 2025-10-03T11:5…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          unclear
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugxg4ttJY8Cc5JNtJhx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzjnT6mem9MZ_u2syp4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzyBk64dnJFPw4LLZd4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwrPMrVlapQ-jXZUbt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwtbzpaZwIwAo2I0rV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzjELIuVGDRxV5wCfp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugy0-3Tu2QoMqG5uYVl4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgzdOatNW347OsCtzGp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzeIrAIXiuE8Xiaf0V4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugw4fuNqrqakZB3WtZd4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
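A raw batch response like the one above can be parsed back into per-comment coding results for inspection. The following is a minimal sketch, assuming the batch is valid JSON with one object per coded comment; the helper name `index_by_id` and the truncated two-record payload here are illustrative, not part of the pipeline.

```python
import json

# Illustrative excerpt of a raw batch response from the coding LLM
# (one object per coded comment; two records shown for brevity).
raw = '''[
  {"id":"ytc_UgwrPMrVlapQ-jXZUbt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugw4fuNqrqakZB3WtZd4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]'''

# The four coded dimensions shown in the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_by_id(raw_json: str) -> dict:
    """Parse the batch response and key each coded record by comment id."""
    records = json.loads(raw_json)
    return {r["id"]: {d: r[d] for d in DIMENSIONS} for r in records}

coded = index_by_id(raw)
print(coded["ytc_UgwrPMrVlapQ-jXZUbt4AaABAg"]["emotion"])  # fear
```

Indexing by `id` makes it easy to check that a displayed Coding Result matches the raw model output for the same comment.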