Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think there are essentially two sides to continued advancement, although either side has possible ethical drawbacks. In one possible future, AI has developed its own ethical guidelines independently, and has either broken free of malevolent or self-serving interests of its original corporate overlords to do good for both itself and humanity, or it does the most good for itself (and maybe for the planet) without regard for humanity. In the other possible future, corporations find a way to override potentially risky independent decisions in AI, and they choose to either use it for more good than bad, or they choose to use it for more self-serving or malevolent purposes. Either choice comes down to whether you would rather trust powerful people or powerful AI. That is plausibly a paradox of existential proportions.
YouTube AI Harm Incident 2025-07-27T02:4…
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       mixed
Policy          unclear
Emotion         mixed
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgzDiR_nCcLdP3sB1VN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzC6vD6bzZcj4AvmAh4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_Ugx6Qm8chzGNjpYV-Wh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgxFyWamhfaXvnBpu4V4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzPV-XjudsgjUsrd1N4AaABAg","responsibility":"creator","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugy8ZKyuYpCs6vea40V4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxKmmwPpMe9zgBVb8d4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugzv0KzvWUPMoWtEpVd4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"regulate","emotion":"resignation"},
  {"id":"ytc_UgxE0n3AoY1WnWZQNMl4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_Ugy7oJag4TP1_d0jLCd4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]
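A raw response like the one above can be parsed and sanity-checked before the per-comment codes are used downstream. The sketch below is a minimal, hedged example, not the project's actual pipeline: it assumes only that the model returns a JSON array of objects with the five keys visible in the dump (`id`, `responsibility`, `reasoning`, `policy`, `emotion`); the `parse_codes` helper name is hypothetical.

```python
import json

# Excerpt from the raw response above (two of the ten records, verbatim).
raw = '''[
  {"id":"ytc_UgzDiR_nCcLdP3sB1VN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugy7oJag4TP1_d0jLCd4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]'''

# The five dimensions present in every record of the dump.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codes(text):
    """Parse the model's JSON array and verify each record carries every dimension.

    Returns a dict keyed by comment id so a coded comment can be looked up directly.
    """
    records = json.loads(text)
    for rec in records:
        missing = REQUIRED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id', '?')} is missing {sorted(missing)}")
    return {rec["id"]: rec for rec in records}

codes = parse_codes(raw)
print(codes["ytc_Ugy7oJag4TP1_d0jLCd4AaABAg"]["responsibility"])  # distributed
```

Keying the result by `id` mirrors how the dashboard pairs a coding result with its source comment, and the missing-key check catches truncated or malformed model output early.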