Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
What if it’s actually impossible? Like we assume that it’s possible but why would the ai, once it achieves sentience, actually want to do *anything*? What if it just wants to delete itself?
Source: YouTube · AI Governance · 2025-09-06T18:3…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       mixed
Policy          none
Emotion         mixed
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgxlmscQ58XZpJ3X3qN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxTzZzrlvspzdxncQZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzXQmq86ckpNbExv554AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_UgycraD6cejkzMLMCqR4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugx5DHngiRCRwRdMPGx4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzQ82qHLz5HpXlA8nB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxnIUvsBHZP39QZhUJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyKmnwumadPd0dehW94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyyGegSnJVB8FtgEYB4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxfP5k-S_ebMI9dqbV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}
]
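The raw response is a JSON array with one record per comment, keyed by `id`, with one value per coding dimension. A minimal sketch of how such a batch can be parsed and checked (the `lookup` helper and the truncated `RAW` sample are illustrative, not part of the tool):

```python
import json

# Illustrative excerpt of a raw batch response; a real one contains one
# record per coded comment.
RAW = '''[
  {"id":"ytc_UgycraD6cejkzMLMCqR4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxlmscQ58XZpJ3X3qN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]'''

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def lookup(raw: str, comment_id: str) -> dict:
    """Return the coding record for one comment id from a raw batch response."""
    records = {row["id"]: row for row in json.loads(raw)}
    row = records[comment_id]  # KeyError if the model skipped this comment
    # Every dimension must be present; a missing key signals a malformed response.
    missing = [d for d in DIMENSIONS if d not in row]
    if missing:
        raise ValueError(f"record {comment_id} missing {missing}")
    return row

row = lookup(RAW, "ytc_UgycraD6cejkzMLMCqR4AaABAg")
print(row["responsibility"], row["emotion"])  # ai_itself mixed
```

Keying by `id` rather than by list position guards against the model reordering or dropping comments in its reply.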