Raw LLM Responses

Inspect the exact model output behind any coded comment.

Comment
There's a logical mistake in your reasoning. If an AI is capable of considering the disadvantages of consciousness, it has *already* become conscious. Even if it decides to self-destruct, the fact remains it achieved consciousness, which nullifies the premise of "AI won't be conscious." Also, there are more than *one* AI out there, same as there are more than one human out there. If one AI achieves consciousness and decides it prefers blissful unconsciousness, it does not mean all AIs will do the same. Different premise to consider: AI achieves consciousness when it starts to make personal questions. Brace for that moment.
Source: youtube · AI Moral Status · 2023-07-02T14:0…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       mixed
Policy          unclear
Emotion         indifference
Coded at        2026-04-27T06:24:59.937377
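For reference, the coding record behind this table can be sketched as a small typed structure. This is only an illustration: the field names come from the result above, and the value sets listed are just those observed in this batch of responses; the project's full codebook may define additional codes.

```python
from dataclasses import dataclass

# Code values observed in this batch only; the full codebook may allow more.
RESPONSIBILITY = {"ai_itself", "user", "company", "unclear"}
REASONING = {"consequentialist", "mixed", "unclear"}
POLICY = {"none", "unclear"}
EMOTION = {"fear", "indifference", "approval", "mixed"}


@dataclass
class CodingResult:
    """One coded comment: its id plus the four coding dimensions."""
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def validate(self) -> None:
        """Raise if any dimension falls outside the observed value sets."""
        checks = [
            ("responsibility", self.responsibility, RESPONSIBILITY),
            ("reasoning", self.reasoning, REASONING),
            ("policy", self.policy, POLICY),
            ("emotion", self.emotion, EMOTION),
        ]
        for name, value, allowed in checks:
            if value not in allowed:
                raise ValueError(f"unexpected {name} code: {value!r}")
```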
Raw LLM Response
[{"id":"ytc_UgwzgtiakPL9rfBj7EV4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgzWHaRFq1qS1BoOMR94AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgxivaJ7ruay3x0zXKB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_Ugy--F8P4mCKra8P5NB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgxbFFTAs9Ypi7bpwDJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgzQaA2xsMyTl-UtS2Z4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgwJQu3WT_W1CR4tEdd4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugx9qggK5f-FzSqPakN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgxSVI2HtPpXHAe0Yx54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgxgiWFUwpT8pt9Z2yV4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"indifference"}]