Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Roman has some good points even as example with current software safety. We as humans are sloppy and lazy and so it highly likely AI will go wrong
youtube 2024-07-12T13:3…
Coding Result
Dimension       Value
Responsibility  user
Reasoning       virtue
Policy          none
Emotion         resignation
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_Ugxbc-G2VWJ6gI3_LPh4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxQi3_hZrb3jyO_gNN4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "fear"},
  {"id": "ytc_Ugw8vRXJawrz6SjnVth4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxNgiY34q1aWsKIJq54AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugz_UWo_i06vU0eFbrR4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgyDBS36b-RADYW0hgt4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgywshKs58OeMmGVFiF4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgzbqSPApXQrEYGSwtt4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugw2fS5_bKNSA4KFDWZ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxIN-C-3W4UzGE6SUd4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"}
]
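The raw response is a JSON array of per-comment codings keyed by comment id. A minimal sketch of pulling one comment's coding out of such a response — using the `ytc_UgyDBS36b-RADYW0hgt4AaABAg` record shown above; the parsing code is illustrative, not the tool's actual pipeline:

```python
import json

# A two-record excerpt of the raw LLM response shown above.
raw = '''[
  {"id": "ytc_UgyDBS36b-RADYW0hgt4AaABAg", "responsibility": "user",
   "reasoning": "virtue", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgywshKs58OeMmGVFiF4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]'''

# Parse the array and index the codings by comment id for lookup.
records = json.loads(raw)
by_id = {r["id"]: r for r in records}

# Look up the coding for the comment displayed on this page.
coded = by_id["ytc_UgyDBS36b-RADYW0hgt4AaABAg"]
print(coded["responsibility"], coded["reasoning"], coded["emotion"])
# → user virtue resignation
```

The id acts as the join key between the displayed comment and its row in the model output, which is how the Coding Result table above can be reconstructed from the raw response.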