Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The "Gorilla Problem" analogy is actually terrifying when you think about it. If we are building something smarter than us, we have to be sure it's aligned. But honestly, the more immediate problem for me is aligning my budget with all these new models. I canceled my direct OpenAI and Anthropic subs because it was getting too expensive to "keep up with the race" Stuart talks about. Switched to omnely so I can access all the top models in one place without going broke before the singularity hits.
youtube AI Governance 2025-12-07T18:1…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgylOcMtmfYPRLyA_uV4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_Ugz0s_5F0fL7Yc6h9pB4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwCTFAC3tuaqQyd4rJ4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwmHU68lQswaDEmhOd4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugy9c5M8aiFACFvwDkd4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzslRkuK_KSVVjq6CV4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugxqiq20CC4lLEtT6Oh4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzAcj9D3tb7hktFuIJ4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzUI-XmKN6ijviTF6N4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgyW72yvXfzgauYvOVF4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]
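A minimal sketch of how a raw response like the one above could be parsed and matched back to individual comments. The field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the JSON shown here; the `index_codes` helper is hypothetical, not part of the actual coding pipeline, and only two of the ten records are reproduced for brevity.

```python
import json

# Raw LLM response: a JSON array of per-comment codes, abridged to two
# records from the batch shown above (same structure and field names).
raw_response = """[
  {"id": "ytc_UgzslRkuK_KSVVjq6CV4AaABAg",
   "responsibility": "none", "reasoning": "consequentialist",
   "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzAcj9D3tb7hktFuIJ4AaABAg",
   "responsibility": "developer", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "fear"}
]"""

def index_codes(raw: str) -> dict:
    """Parse a raw batch response and index each coding record by comment id."""
    records = json.loads(raw)
    return {rec["id"]: {k: v for k, v in rec.items() if k != "id"}
            for rec in records}

codes = index_codes(raw_response)
# The fear-coded comment displayed on this page:
print(codes["ytc_UgzslRkuK_KSVVjq6CV4AaABAg"])
```

Indexing by `id` makes the lookup robust to the LLM returning records in a different order than the comments were sent, which is why each record carries the comment id rather than relying on position.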