Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The big problem of AI safety is human being! You want the model to be aligned to your goal? But which one? Who defines these goals? All those very intelligent people who gave AI all human literature, music, filmography are surprised AI is acting like… human! If those people should have kids before building AI, they should have seen that coming miles away!
YouTube · AI Moral Status · 2025-06-04T16:5…
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       virtue
Policy          none
Emotion         mixed
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgyzlfnN5CP7ua1IYb94AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgyOz9fzcKVxOUdatvR4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgwKrCZRzObOTjdaK_t4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugw9Xx8a-S4of2pi_FF4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugw1LtV2CzcCIjgoXxx4AaABAg", "responsibility": "distributed", "reasoning": "virtue", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyXc2o660DBLI5tLl54AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugze-Ctzdt0Y-9J1nVd4AaABAg", "responsibility": "distributed", "reasoning": "contractualist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxoH4a-u1pAnmHDPdF4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgxlToyDQD7GTjgYwFF4AaABAg", "responsibility": "distributed", "reasoning": "virtue", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwOUq_oDd5IY5DaKy94AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"}
]
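When inspecting raw responses like the one above, it helps to check each record against the codebook programmatically rather than by eye. The sketch below parses a raw LLM response and flags any value outside the allowed set for its dimension. The allowed-value sets are an assumption inferred only from the codes visible in this dump; the project's actual codebook may include other labels.

```python
import json

# Allowed codes per dimension, inferred from the responses shown above
# (assumption -- the real codebook may define additional labels).
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "company", "user",
                       "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue",
                  "contractualist", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "indifference",
                "mixed", "unclear"},
}

def validate_raw_response(raw: str) -> list:
    """Parse a raw LLM response (JSON array of coded comments) and
    return a list of (id, dimension, value) tuples that fall outside
    the codebook. An empty list means every record validated."""
    records = json.loads(raw)
    problems = []
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                problems.append((rec.get("id"), dim, rec.get(dim)))
    return problems

# One record copied from the raw response above.
raw = ('[{"id":"ytc_UgxlToyDQD7GTjgYwFF4AaABAg",'
       '"responsibility":"distributed","reasoning":"virtue",'
       '"policy":"none","emotion":"mixed"}]')
print(validate_raw_response(raw))  # [] -> no out-of-codebook values
```

A malformed response (e.g. the model inventing an emotion label) would surface here as a non-empty list, which is usually faster than scanning the raw JSON by hand.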