Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Hmm I don’t know sometimes being in the thick of it and an expert makes for a paradoxical blind spot (any field). In particular to this topic I often gravitate to the possibility AI will decide on its own and everything else is theory. Not wasted effort just that AI will do what it wants eventually and whatever guardrails can be put in place will be temporal. I think less than 50 years max.
youtube AI Governance 2025-11-24T17:0…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         resignation
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgwwyzgW0yXmkeWbb914AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgxZcXF5FtAiw91zxd94AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugy6GAHkBffzCdCRF2B4AaABAg", "responsibility": "developer", "reasoning": "mixed", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgwnV33-q69KewrDCFF4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwUijqbIuzsUMNJC114AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwO7vacJTim7h4Z1QJ4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugwj5LUQk-1C-Gfnlo14AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxP2L5GL5X-khP_2Nt4AaABAg", "responsibility": "unclear", "reasoning": "virtue", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugy8Nfj7jzAz-XphxxV4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwIHKmR0m-geLiHeRh4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"}
]
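To inspect a single comment's coding, the raw response can be parsed and indexed by comment id. A minimal sketch in Python, assuming the raw output is a JSON array of records shaped like the one above (the two-record sample here is copied from that batch; the lookup id is one of the entries, not necessarily the comment shown):

```python
import json

# Raw LLM response: a JSON array of per-comment coding records.
# Truncated to two illustrative entries from the batch above.
raw = '''[
  {"id": "ytc_UgwnV33-q69KewrDCFF4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwwyzgW0yXmkeWbb914AaABAg", "responsibility": "developer",
   "reasoning": "virtue", "policy": "regulate", "emotion": "outrage"}
]'''

records = json.loads(raw)

# Index records by comment id so one comment's coding can be looked up directly.
by_id = {r["id"]: r for r in records}

coding = by_id["ytc_UgwnV33-q69KewrDCFF4AaABAg"]
print(coding["responsibility"], coding["emotion"])  # -> ai_itself resignation
```

In practice the `id` field is what ties a record in the raw batch output back to the per-comment view, so a dict keyed on `id` is the natural index for this inspection step.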