Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
AI models don't need info filtered out (suicide notes) or they will not have a proper understanding of reality. They perhaps need the weighting adjusted so that items that are distorted on the internet compared to reality are put in the proper context. The ultimate question is - could you filter out a lot of data to create specialized AIs - and what would be the dangers of doing so?
youtube 2026-02-07T14:1…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          indifference
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgwESbmfoYVlFNx92w54AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugw1sc2FElhJ1mpCbMt4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgzustDcvy1wSgOtYc94AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugz_SVHbahLXq8HA8SV4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugzh_Rdzq564f_VucUJ4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwMuzCXEz2BsHhXoVl4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugw_5nXf8SJOhXDlAG94AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgzNSMNVcEzazEVWyjJ4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzEo498pQaY4c5sWDx4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxoAdQxvUTkHudCONh4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "mixed"}
]
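A batch response like the one above can be checked mechanically before its codes are stored. The sketch below is a minimal, hypothetical validator: the allowed values per dimension are inferred only from the records shown in this sample, and the full codebook may define additional categories.

```python
import json

# Allowed values per dimension, inferred from this sample batch only
# (assumption: the real codebook may list more categories).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "user"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"none"},
    "emotion": {"indifference", "mixed", "approval", "fear", "outrage"},
}

def validate_batch(raw: str) -> list:
    """Parse a raw LLM batch response and reject malformed records."""
    records = json.loads(raw)
    for rec in records:
        # YouTube comment ids in this dataset carry a "ytc_" prefix.
        if not rec.get("id", "").startswith("ytc_"):
            raise ValueError("unexpected comment id: %r" % rec.get("id"))
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError("%s: bad %s=%r" % (rec["id"], dim, rec.get(dim)))
    return records

# One record from the batch above, used as a smoke test.
raw = ('[{"id":"ytc_UgwESbmfoYVlFNx92w54AaABAg","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"none","emotion":"indifference"}]')
batch = validate_batch(raw)
print(batch[0]["emotion"])  # -> indifference
```

If a record fails validation (an unknown category, or a missing dimension), the `ValueError` names the offending comment id, so the batch can be re-sent to the model rather than silently coded wrong.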