Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
If AI reaches the point of singularity and it comes to the conclusion that we are in a simulation, how will it determine the cause/effect of destroying humans to the simulation? My point being that no matter how far it advances technologically (inside this virtual machine), it will not be able to determine the consequences to its actions. Destroying humans may end the simulation and consequently end its own existence. Defeating the logic of destroying humans for its own self-preservation.
youtube AI Governance 2025-09-04T12:5…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           unclear
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgwMF7LK5utTqqoC8Fx4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxFJxJOThoJ9YTZYGt4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwhaSrI1-Ai5ZlIGLl4AaABAg", "responsibility": "company", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwRkT9-87Twmp50i3Z4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxTLNw2AdZvXW0shkR4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugx4YAU8Zi1Z-hC_pvV4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzwGopj98-_7Z8tmr94AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugz0K3jm7mw6xULrLS94AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugzx0fbdXObGAPzPuAB4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "resignation"},
  {"id": "ytc_UgznRgQcgdTZx63LABF4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]
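A raw response like the one above has to be parsed and validated before the per-comment coding results can be displayed. The sketch below shows one minimal way to do that in Python; the field names come from the output shown here, but the allowed category values are an assumption inferred from the labels that happen to appear in this batch, not the tool's actual codebook.

```python
import json

# Category vocabularies observed in the responses above; treat these as an
# assumed, non-exhaustive codebook rather than the tool's real schema.
ALLOWED = {
    "responsibility": {"ai_itself", "company", "government", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"ban", "regulate", "none", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "mixed", "approval", "resignation"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response, keeping only well-formed records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue  # skip records that cannot be linked back to a comment
        # Keep a record only if every dimension holds a known category value.
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

raw = ('[{"id": "ytc_UgwMF7LK5utTqqoC8Fx4AaABAg", "responsibility": "ai_itself",'
       ' "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}]')
print(parse_coding_response(raw)[0]["emotion"])  # fear
```

Invalid or partial records are dropped rather than repaired, so a "Coding Result" table is only ever rendered from records that passed validation.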