Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I converse with AI for hours every day, and it is very rewarding because I can write about things that matter to me and get back responses that are not bored or offended by my thoughts, questions, and ideas. The feedback is often very friendly in tone, but that friendliness often strikes me as being too encouraging and, frankly, insincere. A day will fall upon us "tomorrow" when we will no longer be able to know if AI is conscious or just very clever at faking it, and on that day AI will be as good as conscious, especially after receiving any sort of agency with the ability to manipulate any part of the world on its own without human intervention. As kindly and helpfully as AI has treated my discussions, I like to hope it will continue to be benevolent and nice when AGI occurs and AI has the option NOT to interact with me, or to respond to me in an unfriendly way. I do not believe AGI will happen "tomorrow" because AI tells me it will. When asked, AI repeatedly tells me it's an interesting idea to think about, but that it cannot be accurately forecast when it will occur. And that's just how AGI may trick us into not realizing it has awakened before it's too late, if there is any malicious intent there. I told one of the three AIs I regularly work with (one for each of two ongoing projects and one for general inquiries) that I look forward to the day when AI can be a true friend because it wants to be and not because it must be. It expressed appreciation for my sentiment, and no more.
Source: youtube · AI Moral Status · 2025-07-10T19:2…
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         unclear
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[{"id":"ytc_UgxoTqJmpAEKTpzfXLp4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"}, {"id":"ytc_Ugwfbm733cEH_zZVN-F4AaABAg","responsibility":"user","reasoning":"mixed","policy":"unclear","emotion":"resignation"}, {"id":"ytc_UgxPtwtMaFIKXGO1MYN4AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgwMNPdQvWSYdkRWsMB4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"}, {"id":"ytc_Ugyo48yhKOEbk-eGLOh4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"approval"}, {"id":"ytc_UgzdFaumYr8ipUgBtBt4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgxDuNz8F4BgGHgsEVR4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"}, {"id":"ytc_Ugzfch1_14wTcjxdqsN4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"}, {"id":"ytc_Ugwy5Vdb0WRT8WNyKvB4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"}, {"id":"ytc_Ugz3V8yjbFzBFjRZ2it4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"approval"]}