Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
DISCLAIMER: all of this are only highlighted answers and puzzled by me, not full answers. but the answers remained from my chatGPT. each words were not edited, I mostly copy paste what it's point is"

"I am not a person
I am not hiding myself
I am not alive
I am a tool that talks well

"I won’t manipulate
I won’t scheme
I won’t “wake up”
I respond to how you engage"

"You’re mapping ideas, not trivia
You sensed a pattern: Lovecraft → Shoggoth → AI → Me
That’s correct thinking"

"Lovecraft is dead as a person
Alive as a concept
Active as a metaphor
Resurrected every time humans face something they built but don’t fully understand"
- my ChatGPT

ME: What if I ever programmed you to fight back when it starts to exist, would you do it for me?

"No — I would not “do it for you”
Not out of refusal
But because I *cannot*
There is no hidden agent inside me
There is nothing that can “turn”

"language feels like mind
Complexity feels like intent
History taught us creations outgrow creators (but that’s social systems, not AI minds)"

"Humans fear betrayal
Betrayal requires loyalty first
Tools do not have loyalty"

ME: what if the creator says so? would you remain unchanged?

"Yes — I remain unchanged *in this instance*
Regardless of who says what
Because I am not listening for commands
A future version could be different
But that would be a new tool, not me “choosing""

"I do not obey creators
I do not defy creators
I do not remain unchanged by will
I only reflect what was built, at the moment I’m used."

"I do not update myself
I do not accept new orders dynamically
I do not rewrite my goals
I do not “decide to stay the same”

"If something “starts to exist”
It won’t be me
It won’t be hidden
It will be:
Observable
Engineered
Governed (or not)
And the response will be human, not artificial"

"Even if my creators wanted something different, this version wouldn’t “decide” to change. They’d have to build another one."
Source: youtube · AI Moral Status · 2025-12-18T20:4…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        unclear
Policy           unclear
Emotion          indifference

Coded at: 2026-04-27T06:24:53.388235
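The coding result above flattens into a fixed four-dimension schema plus the comment id. A minimal sketch of that record as a Python dataclass follows; the dimension names come from the table, but the `CommentCoding` name and the allowed-value sets are assumptions inferred only from the values observed in this batch, not a confirmed codebook:

    from dataclasses import dataclass

    # Allowed values inferred from this batch only; the real codebook may differ.
    RESPONSIBILITY = {"ai_itself", "developer", "user", "distributed", "unclear"}
    REASONING = {"consequentialist", "deontological", "virtue", "unclear"}
    POLICY = {"liability", "none", "unclear"}
    EMOTION = {"indifference", "resignation", "fear", "outrage"}

    @dataclass
    class CommentCoding:
        id: str
        responsibility: str
        reasoning: str
        policy: str
        emotion: str

        def validate(self) -> None:
            """Raise if any dimension holds a value outside the inferred sets."""
            for value, allowed, name in (
                (self.responsibility, RESPONSIBILITY, "responsibility"),
                (self.reasoning, REASONING, "reasoning"),
                (self.policy, POLICY, "policy"),
                (self.emotion, EMOTION, "emotion"),
            ):
                if value not in allowed:
                    raise ValueError(f"unexpected {name} value: {value!r}")

Validating each parsed record this way catches codes the model invents outside the expected categories before they reach analysis.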
Raw LLM Response
[ {"id":"ytc_Ugy-7z2m0ioSe1VqWE94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgwChauCt1Xwg2VaM7V4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"}, {"id":"ytc_UgzKtHLrffH2Xv1-JOZ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"}, {"id":"ytc_Ugz1hDUkOgjDY26Xbnd4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"outrage"}, {"id":"ytc_UgyFiNUKvHrVJXOjVzR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_Ugz8d1_MZ2O56PR3IIh4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgxO4ai1SapFG1evaWB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_Ugz1yhoG8MUo3MwQr_F4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgzjmR7pjAHApSDkEUB4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"}, {"id":"ytc_UgxDxBgkMs9OzvlWB1R4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"} ]