Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The main problem is not AI, although it's definitely necessary to regulate AI. The biggest problem in tragedy is when a person is in such a situation, all he could trust to talk with is AI.
Source: YouTube, AI Harm Incident, 2025-11-10T15:5…
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       mixed
Policy          regulate
Emotion         mixed
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgzyRopMBMghCa4dgqB4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwluXfT1f6CXr0nX_F4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugx0p0KQT45Yjz1qQGp4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugw-d5YIZhHeJmtLraV4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgzUU2oZRXNGeLlXDY14AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_UgxqKUbc1_spemQpe8p4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_Ugz8msgUr1LkfWfLDQJ4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgwK_nC8wCUR5uwgyF54AaABAg", "responsibility": "distributed", "reasoning": "virtue", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgzJtrPNJV080zAHGcZ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgxR7Ntp0ZIbghPB5O14AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "industry_self", "emotion": "indifference"}
]
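A raw response like the one above is easiest to audit programmatically: parse the JSON array and reject any record whose codes fall outside the codebook. The sketch below is a minimal validator in Python; the allowed value sets are inferred from this sample response and the result table, so the real codebook may define additional codes.

```python
import json

# Allowed codes per dimension, inferred from the sample response above.
# Assumption: the actual codebook may include values not seen in this sample.
ALLOWED = {
    "responsibility": {"none", "company", "government", "distributed",
                       "developer", "ai_itself", "user"},
    "reasoning": {"mixed", "virtue", "consequentialist", "deontological", "unclear"},
    "policy": {"none", "regulate", "liability", "ban", "industry_self"},
    "emotion": {"indifference", "resignation", "outrage", "mixed", "approval", "fear"},
}

def validate_response(raw: str) -> list[dict]:
    """Parse a raw LLM response and fail loudly on unknown codes."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim!r} value {rec.get(dim)!r}")
    return records

# Hypothetical one-record response for illustration.
raw = ('[{"id":"ytc_x","responsibility":"company","reasoning":"mixed",'
       '"policy":"regulate","emotion":"outrage"}]')
print(len(validate_response(raw)))  # → 1
```

Validating at ingest keeps off-schema model output (a misspelled code, an extra category the prompt never defined) from silently entering the coded dataset.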