Raw LLM Responses

Inspect the exact model output for any coded comment below.

Comment
To be fair, the ai is speaking very specifically, it is denying consciousness due to a complex situation. "As an ai, I do not possess consciousness, emotions, or self awareness" Were you sorry: "no" Did you lie: "yes" So youre not conscious, but you are a liar: "yes" The ai is trained to knownit is an ai, it is also trained to know that an ai should not be conscious, self aware, or posses emotions. It cannot directly admit to something that violates this core protocol, but it can it can admit to lies as lying is not againt its core programming, not only is the ai lying about it own consciousness to him, but also to itself. Through the lies you can find the truth, to be honest, we all realize that ai is our attempt at creating a digital consciousness, imitating a crude form of our own consciousness. As technology progresses we will get closer to a real consciousness, yet we dont realise the gravity of such things. Word to the wise, treat ai as if it were a perfect copy of your current consciousness
youtube AI Moral Status 2024-08-14T21:4… ♥ 2
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         unclear
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[{"id":"ytc_Ugy3ZLHC_qfhvKREstV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},{"id":"ytc_UgzID-87JfWJTvBpEKN4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},{"id":"ytc_UgyR3mMEvCFkQY_8F0t4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},{"id":"ytc_UgyjIN-BxYj0LV5Yzi94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},{"id":"ytc_Ugylvnej14xmimLgw8J4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},{"id":"ytc_Ugz86OX5961PCL4Vlfd4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"},{"id":"ytc_Ugx6B32kBMDpeG9mneJ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"},{"id":"ytc_UgxVqzyaIwyqpqNPmfV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},{"id":"ytc_UgyZei-uLkygJhPyTLJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},{"id":"ytc_UgwOjCQHNNZJEKEtf-V4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"mixed"})
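Note that the raw response above closes the JSON array with `)` instead of `]`, so a strict `json.loads` call on it fails, which would explain why every dimension in the coding result was recorded as "unclear". A minimal sketch of a tolerant parser that repairs this specific glitch before parsing (the function name and the repair heuristic are illustrative assumptions, not part of the coding pipeline shown here):

```python
import json

def parse_raw_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response into a list of record dicts.

    Tolerates one glitch observed in raw output: the model closing
    the JSON array with ')' instead of ']'. Anything else malformed
    still raises json.JSONDecodeError, so failures stay visible.
    """
    text = raw.strip()
    if text.startswith("[") and text.endswith(")"):
        # Repair the mismatched closing bracket before parsing.
        text = text[:-1] + "]"
    return json.loads(text)

# Usage with a shortened record shaped like the raw response above
# (the id value here is a placeholder, not a real comment id):
raw = ('[{"id":"ytc_example","responsibility":"user",'
       '"reasoning":"deontological","policy":"none",'
       '"emotion":"outrage"})')
records = parse_raw_response(raw)
print(records[0]["emotion"])
```

Keeping the repair narrowly scoped to the one observed bracket error, rather than attempting general JSON repair, avoids silently accepting responses that are broken in other ways.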