Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
one issue I've thought about is how we might give conscious AI morality. that would be one way to at least ameliorate the Big Red Button problem. the key issue is humans themselves aren't aligned, so why do we worry about AI being aligned? with whom do we expect them to be aligned with? the West? China?
YouTube · AI Governance · 2025-11-14T12:0… · ♥ 5
Coding Result
Dimension        Value
Responsibility   none
Reasoning        contractualist
Policy           unclear
Emotion          mixed
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgwDZS306R-7x8bgz0B4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyQB5hgHXaRngAftCd4AaABAg", "responsibility": "none", "reasoning": "contractualist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgymwnkHZ84eWhZI-C54AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwhbugqOMyK9bMXLul4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwFarawg-fVJS0QBdZ4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugzk_tMRNMNQuPxqOEt4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgyzakcORSxZ_C9e7XR4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwbpSYZErdxduo4Se14AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugxbq1iDaUY11XLRwTJ4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "liability", "emotion": "mixed"},
  {"id": "ytc_Ugz8s_MX-jTsuGpeZf14AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "regulate", "emotion": "mixed"}
]
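The raw response is a JSON array of per-comment records, so mapping a coding result back to its comment is a lookup by `id`. A minimal sketch of that lookup (the `raw` string below is a trimmed stand-in for the full batch response shown above; the id is the one whose values match the Coding Result table):

```python
import json

# Trimmed stand-in for the raw LLM batch response: one record per coded comment,
# each with the four coding dimensions.
raw = """[
  {"id": "ytc_UgyQB5hgHXaRngAftCd4AaABAg",
   "responsibility": "none", "reasoning": "contractualist",
   "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgwbpSYZErdxduo4Se14AaABAg",
   "responsibility": "distributed", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "outrage"}
]"""

records = json.loads(raw)

# Index the batch by comment id so any single comment's coding can be inspected.
by_id = {record["id"]: record for record in records}

coded = by_id["ytc_UgyQB5hgHXaRngAftCd4AaABAg"]
print(coded["reasoning"])  # contractualist
print(coded["emotion"])    # mixed
```

The same pattern works for the full ten-record response: parse once, index by `id`, then render any comment's row from the indexed record.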