Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The best analogy I heard so far is, AI agents will be like children....we can raise them the best way we know how, but we can not control them once the grow up and make mistakes on their own. The recent series "Adolescence" is a good example.of that possible outcome...it unfortunately can turn out tragically. I just hope we can shut it down if needed. Isn't that the answer? If we can't we shouldn't.
youtube 2025-06-24T04:1…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          liability
Emotion         fear
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgzfUyDY2xfgxoYw_NF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugxi4SVGbD_WEbDq4GF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugz8D8fI0w7nKGcDYQB4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzKWYq4Flt8aRVZCvF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzOdgue7bKsBo_TfB54AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxvMWYWdbJD98fubiZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_Ugzwpghpkqh5dsGycVx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyNVXW6WuSKwpCuZ9l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugy2P1uHX1pd8ZgmEP54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgzVeOrxlNKrsqqa5Ud4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
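Assuming the raw LLM response is always a JSON array of per-comment objects like the one above, the coding for a single comment can be looked up by its `id`. A minimal sketch (the variable names and the single-row sample payload are illustrative, not part of the pipeline):

```python
import json

# Illustrative one-row sample of the raw LLM response shown above.
raw = (
    '[{"id":"ytc_UgzfUyDY2xfgxoYw_NF4AaABAg",'
    '"responsibility":"developer","reasoning":"consequentialist",'
    '"policy":"liability","emotion":"fear"}]'
)

# Index the coded rows by comment id for O(1) lookup.
codes = {row["id"]: row for row in json.loads(raw)}

row = codes["ytc_UgzfUyDY2xfgxoYw_NF4AaABAg"]
print(row["responsibility"], row["emotion"])  # developer fear
```

This is how the Coding Result table above relates to the raw response: each dimension (responsibility, reasoning, policy, emotion) is a key in the matching JSON object.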