Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
If AI hasn’t been trained on content where people say “I don’t know,” then clearly it hasn’t been trained on much of what Jordan Peterson has to say (although his flavor tends to be that “nobody knows” the answer…). 😅
youtube AI Moral Status 2025-10-31T18:5…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          none
Emotion         mixed
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgyWBa3ZHDwbz_TRHOR4AaABAg", "responsibility": "ai_itself",  "reasoning": "consequentialist", "policy": "none",     "emotion": "fear"},
  {"id": "ytc_UgyNbgCru6frOF9-ROh4AaABAg", "responsibility": "none",       "reasoning": "unclear",          "policy": "none",     "emotion": "mixed"},
  {"id": "ytc_UgyO4sXzepX4g416NsJ4AaABAg", "responsibility": "none",       "reasoning": "consequentialist", "policy": "none",     "emotion": "resignation"},
  {"id": "ytc_UgybbdJWEPoe2zim-2N4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgxW3DXpKt6efbyIVYB4AaABAg", "responsibility": "none",       "reasoning": "unclear",          "policy": "none",     "emotion": "approval"},
  {"id": "ytc_UgxrRoqpV7tc0mZ3gYl4AaABAg", "responsibility": "none",       "reasoning": "consequentialist", "policy": "none",     "emotion": "fear"},
  {"id": "ytc_UgyQYarHiV2n7AAXJIV4AaABAg", "responsibility": "developer",  "reasoning": "deontological",    "policy": "none",     "emotion": "indifference"},
  {"id": "ytc_Ugwwdbh1v_HwUmapK114AaABAg", "responsibility": "ai_itself",  "reasoning": "consequentialist", "policy": "none",     "emotion": "fear"},
  {"id": "ytc_UgzLBpOhTu1anLR5e0h4AaABAg", "responsibility": "none",       "reasoning": "mixed",            "policy": "none",     "emotion": "mixed"},
  {"id": "ytc_UgxaXODyoIy1XtKSU-R4AaABAg", "responsibility": "company",    "reasoning": "consequentialist", "policy": "none",     "emotion": "resignation"}
]
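A response like the one above can be checked before use: each record should carry the comment id plus all four coding dimensions. The sketch below, in Python, parses the JSON array and validates record completeness; the required key names are taken from the response shown, while the one-record sample string and the `parse_codes` helper are illustrative assumptions, not part of the actual pipeline.

```python
import json

# Minimal sample in the same shape as the raw LLM response above
# (one record kept for brevity; ids and values come from the response shown).
raw = '''[
  {"id": "ytc_UgyNbgCru6frOF9-ROh4AaABAg",
   "responsibility": "none", "reasoning": "unclear",
   "policy": "none", "emotion": "mixed"}
]'''

# Every record must carry the comment id and all four coding dimensions.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}


def parse_codes(text):
    """Parse the model output and reject records missing any dimension."""
    records = json.loads(text)
    for rec in records:
        missing = REQUIRED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"{rec.get('id', '?')} missing {sorted(missing)}")
    # Index by comment id for easy lookup against the source comments.
    return {rec["id"]: rec for rec in records}


codes = parse_codes(raw)
print(codes["ytc_UgyNbgCru6frOF9-ROh4AaABAg"]["reasoning"])  # unclear
```

A stricter version could also validate each value against the coding scheme's allowed labels (e.g. `emotion` in {"fear", "mixed", "resignation", ...}), but the full label sets are not shown here.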