Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The issue is, once an AGI has a goal, it will do whatever it possibly can to achieve that goal. Thus, if it knows someone wants to deactivate it, it will do anything it can to prevent deactivation, simply because that would mean it would not be able to achieve it's goal. The logic here is very simple. This is precisely why researchers and developers are worried about an AI doomsday.
YouTube · AI Harm Incident · 2025-07-23T18:3… · ♥ 263
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           regulate
Emotion          fear
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[ {"id":"ytc_UgxkM0IHd5vmsuy4a0d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugzj-_RIxLnlmiEIXVN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgzQfGzEvriYzQ992wl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgxFFhbuPg7pzvq2ReN4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugz1g7e4MWS_bnGf1X54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}, {"id":"ytc_Ugx9dFA9oJUsSA3FfZl4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgywGzUlMgJFYEAZz4t4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"none","emotion":"mixed"}, {"id":"ytc_UgwG-aGds1kG4-szlql4AaABAg","responsibility":"company","reasoning":"virtue","policy":"ban","emotion":"outrage"}, {"id":"ytc_UgxlAH1eK1vHw8poWyx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgzAFidDeLFsg4jdmep4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"} ]