Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This is scary as shit because ai does not know when it’s right & when it’s wrong. Assuming it doesn’t filter, which it does, it strives for the best answer be it right or wrong. That philosophy is good for google search, but when serious shit is in play, it’s unacceptable. Amended: if it truly is producing % probability which I hope they would, what is it based on? Trust but verify.
youtube 2025-05-10T06:2…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        deontological
Policy           liability
Emotion          fear
Coded at         2026-04-27T06:24:53.388235
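Each coded comment reduces to a small record: one value per dimension, plus the timestamp at which the label was produced. Below is a minimal sketch of how such a record could be modeled, assuming Python; the class name, the validate helper, and the allowed-value sets (drawn only from the labels visible in this batch, so the real scheme may be larger) are illustrative assumptions, not this tool's actual schema.

# Sketch of one coding result; field names mirror the table above.
from dataclasses import dataclass
from datetime import datetime

# Label sets observed in this batch (assumed, possibly incomplete).
RESPONSIBILITY = {"ai_itself", "user", "company", "none"}
REASONING = {"deontological", "consequentialist", "virtue", "mixed"}
POLICY = {"liability", "none"}
EMOTION = {"fear", "outrage", "approval", "skepticism", "indifference"}

@dataclass
class CodingResult:
    responsibility: str  # who the commenter holds responsible
    reasoning: str       # style of moral reasoning in the comment
    policy: str          # policy remedy the comment invokes, if any
    emotion: str         # dominant emotion of the comment
    coded_at: datetime   # when the label was produced

    def validate(self) -> None:
        # Reject any value outside the sets observed above.
        for value, allowed in [
            (self.responsibility, RESPONSIBILITY),
            (self.reasoning, REASONING),
            (self.policy, POLICY),
            (self.emotion, EMOTION),
        ]:
            if value not in allowed:
                raise ValueError(f"unexpected label: {value!r}")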
Raw LLM Response
[ {"id":"ytc_UgyjsT3QahyrgJxrr3F4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_Ugx9_Ldg2xcX3GtPTj94AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"ytc_UgzGaIl7oSWy6kY3dLt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgwzbDokD7A7PvTJddZ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"}, {"id":"ytc_UgyZVqFlfttoy2TteR54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_Ugy8P0G5C3UQP3veI-x4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"fear"}, {"id":"ytc_UgywhRKHxiM5huFHokx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugzn4IPj9mCuPfQH9AF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgxlmXL0LcNWzuy49Nh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"skepticism"}, {"id":"ytc_UgwVaBX_KkauEjaJki94AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"} ]