Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Because humans have vices and AI learns from fallible humans … How about a “Prime directive” like “ thou shall NEVER harm a human being by any means” I think the 10 commandments were written for such times/machines/artificial operators.
youtube · AI Governance · 2025-12-24T02:1…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          regulate
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_Ugxdp1UFlLOtC6t3ZE94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}, {"id":"ytc_Ugxiw6UTpTTjhT28dWF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugx30prUrKm2LF05I1N4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgxpJd4IJ-K2rGzS4e14AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"}, {"id":"ytc_UgxMcX5VEy1cjFdbheR4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugw7wrfMbK4zqPTR4Zh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgyRbnxTSyyxE6Mz1Jt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgxcTXRAvWFTGAp-b_l4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgyrUocKifAnwfrA9IV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugy3Tz0QJJDvCjxfbOB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"} ]