Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Always time well spent listening to Hinton speak. One thing on "we are special": it's possible our existential-risk fear is another of our "we are special" moments. We fear that AIs will do to us humans in the future what we humans do to chickens now. But there are plenty of non-human animals that don't treat their "chickens" as badly as we treat ours (although, truth be told, nature is brutal; nature is red in tooth and claw). It may be that our ill-treatment of some animals comes from our nature, not our culture. I'd like to think that the moment we have the technology to feed and maintain our bodies without tormenting domesticated animals, we will stop doing that evil. If we had a better option than eating chickens, we would not treat them badly. So it's not a given that AIs will treat humans badly. Assuming AIs have no existential reason to treat us badly, I see no reason for them to do so. I don't see why AI culture would treat us humans badly, the same way I don't see why human culture would treat chickens badly (once our biological need for food is solved by technology).
youtube · AI Governance · 2025-06-16T23:1… · ♥ 1
Coding Result
Dimension        Value
Responsibility   distributed
Reasoning        mixed
Policy           none
Emotion          mixed
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgxZnA5JWfbjq5Mq_9R4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugw7azb6u4dceCvbUol4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxHehf0-xbqcdNNlD54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyZJjSU8dbPffH39mJ4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugx9ybt6At4n-jupgON4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwEVY2NzS2HTbNiFp14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugw8Qim2UnRRA_9K3Xp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugw3LHjxZBVxL_VDSIx4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugyi5rTrVeis_1CYiF54AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugw76hQdCUSZdNM47FR4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"approval"}
]
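A raw response like the one above is only usable if every row carries labels from the coding scheme. The sketch below is a hypothetical validator: the allowed value sets are inferred from the labels visible in this single response (they may not be the scheme's full codebook), and the function name `validate_codings` is an illustration, not part of any tool shown here.

```python
import json
from collections import Counter

# Allowed labels per dimension, inferred from the response above.
# Assumption: the real codebook may contain additional labels.
ALLOWED = {
    "responsibility": {"developer", "company", "government",
                       "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"regulate", "liability", "none"},
    "emotion": {"fear", "approval", "outrage", "resignation", "mixed"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject rows with unknown labels."""
    rows = json.loads(raw)
    for row in rows:
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row.get('id')}: bad {dim}={row.get(dim)!r}")
    return rows

# Minimal usage example with one row.
raw = ('[{"id":"ytc_x","responsibility":"distributed",'
       '"reasoning":"mixed","policy":"none","emotion":"mixed"}]')
rows = validate_codings(raw)
print(Counter(r["responsibility"] for r in rows))  # tally per responsibility label
```

Validating before tallying means a malformed model output fails loudly at ingest time rather than silently skewing the dimension counts shown in the coding-result table.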