Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
"love for our children"? in other words love for ourselves will be the solution for the upcoming problem of an AI intellectual dominance? didnt he, just a few minutes before, discuss how AI started to have behaviour in its own interest? that AI started to act in favor of itsself? that AI started to act to 'love' itsself? isnt that what we humans do, the one way or another, all the time? and isnt the idea that OUR love for OUR children could be game changer for the upcoming crisis just another iteration of exactly that? isnt that an argument that exactly claims (for oursleves) what was just recognized as harmful (when presented by the other, the AI)?
youtube AI Responsibility 2025-06-02T11:2…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       mixed
Policy          unclear
Emotion         mixed
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgyS1ahr33TZ_KVndRB4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgymwFq2xr-ysDtEm5V4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzE95Yg_VOwazLk63F4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwHG5WPXG-K4SmqQX54AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxEPSK31JdzyeV-KhJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwFbkP6p29Mnop-eTV4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_Ugxsj9C1qn4kSNpDRx94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzE7ovyq72qybWbROB4AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzzIQGLPNcQ9mTmG6x4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgyhYbFSxHku1ARbp1d4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"}
]
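The raw response above is a JSON array mapping each comment id to its four coded dimensions. A minimal sketch of how one comment's coding can be pulled out of such a response follows; `coding_for` and the truncated sample array are illustrative assumptions, not part of the coding pipeline itself.

```python
import json

# Truncated sample of the raw LLM response above (one entry shown).
raw = '''[
  {"id": "ytc_UgzE95Yg_VOwazLk63F4AaABAg",
   "responsibility": "ai_itself", "reasoning": "mixed",
   "policy": "unclear", "emotion": "mixed"}
]'''

# The four coding dimensions used in the result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def coding_for(raw_json: str, comment_id: str) -> dict:
    """Return the coded dimensions for one comment id.

    Missing dimensions fall back to "unclear"; an unknown id
    raises KeyError. (Hypothetical helper for inspection only.)
    """
    for entry in json.loads(raw_json):
        if entry.get("id") == comment_id:
            return {d: entry.get(d, "unclear") for d in DIMENSIONS}
    raise KeyError(comment_id)

print(coding_for(raw, "ytc_UgzE95Yg_VOwazLk63F4AaABAg"))
```

Running this against the full response reproduces the table shown for the comment, e.g. `responsibility = ai_itself`.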