Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Two risks of AI: 1) We could create something that is smarter than us, as discussed here. 2) We could delegate too much control to an AI that isn't as smart as us.
youtube AI Governance 2023-04-18T04:5…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          unclear
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugyw0MVTlfnOuX7MCgJ4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwN2PUKZ056drpKALl4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgytdQ72feUFvJ7na154AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxIO2S2lbqVTtY6NkZ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugw-RiuIskwxLUKeXQ94AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwmUkf8GKrTB8HquBx4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwHximDs3dlIK6KOZh4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxpK7c2OhNki_l1J6J4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugz3zh2DWDHvyOGuVTh4AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgyBHLTEYEeFL2pwUkd4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "none", "emotion": "outrage"}
]
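A raw response like the one above can be parsed and checked before the codes are stored. The sketch below is a minimal example: it loads the JSON array, looks up one comment by id, and flags any value outside the allowed codes. The `ALLOWED` sets here are only inferred from the values visible in this response; the actual codebook may define more categories.

```python
import json

# Two records copied verbatim from the raw LLM response above (truncated for brevity).
raw = '''[
  {"id": "ytc_Ugyw0MVTlfnOuX7MCgJ4AaABAg", "responsibility": "developer",
   "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxpK7c2OhNki_l1J6J4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}
]'''

# Allowed values per dimension -- an assumption inferred from the codes seen
# in this response, not the authoritative codebook.
ALLOWED = {
    "responsibility": {"developer", "company", "government", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "mixed"},
}

def validate(records):
    """Return (record_id, dimension, bad_value) for every out-of-codebook code."""
    errors = []
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                errors.append((rec.get("id"), dim, rec.get(dim)))
    return errors

records = json.loads(raw)
by_id = {rec["id"]: rec for rec in records}

# Look up the coding shown in the result table above.
rec = by_id["ytc_UgxpK7c2OhNki_l1J6J4AaABAg"]
print(rec["reasoning"])   # consequentialist
print(validate(records))  # [] -> every code is within the inferred codebook
```

Validating against an explicit set of allowed codes catches the most common LLM coding failure, an out-of-vocabulary label, before it silently enters the dataset.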