Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Can't we just train an internal AI critic/fact checker so it can spot when it's spewing bullshit ? The biggest problem with AI currently is that they're so confidently wrong and that needs to be beaten out before they get more intelligent and autonomous.
youtube AI Moral Status 2025-11-07T13:1…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          liability
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugxn5ipi2RXqS-OCfyN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwVMuNj0Ht7jJHamMN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"indifference"},
  {"id":"ytc_UgzfjVjUN5_VvtuQxI94AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzrS-iMyGDbBbhq9Wl4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgxAIPAJip9IRsErZkZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"},
  {"id":"ytc_UgxjzvqUWJicorFntxt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzGAGGZIz-5DspMn4J4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugy5l42IHAIaY-kUg_V4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgzEqJedDVi3v8AbWw94AaABAg","responsibility":"user","reasoning":"contractualist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyJ4kM03PZcC6RdIrV4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"}
]
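The raw response is a JSON array of per-comment codings keyed by comment id. A minimal sketch of how such an array can be parsed and indexed for lookup, assuming this array shape; the sample row is copied from the output above, and the variable names (`codes`, `result`) are illustrative only:

```python
import json

# Raw model output: a JSON array of per-comment codes (one sample row shown).
raw = '''[
  {"id": "ytc_Ugy5l42IHAIaY-kUg_V4AaABAg",
   "responsibility": "developer", "reasoning": "deontological",
   "policy": "liability", "emotion": "fear"}
]'''

# Index the rows by comment id for O(1) lookup of any coded comment.
codes = {row["id"]: row for row in json.loads(raw)}

# Fetch the coding result that the table above displays for this comment.
result = codes["ytc_Ugy5l42IHAIaY-kUg_V4AaABAg"]
print(result["responsibility"], result["emotion"])  # developer fear
```

A lookup like this is what backs the "Coding Result" table: each dimension shown there is one field of the matching JSON object.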