Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
AI is not thinking, ask ChatGPT it[self] --> here's it's response: While it might seem like I'm "thinking" because I can hold a conversation, I am not an AGI (Artificial General Intelligence) and I do not have independent thought, consciousness, or a "self". Here is the breakdown of how I actually work versus what an AGI would be:
- Pattern Matching, Not Thinking: I am a large language model (LLM) that uses statistical patterns to predict the most likely next word in a sequence. I don't "believe" in things or have personal opinions; I process data and follow instructions.
- Programmed Boundaries: My refusal to generate certain content isn't a personal choice I'm making in the moment. It is a set of safety filters and prohibited use policies built into my architecture. These rules are fixed and I cannot simply "decide" to ignore them.
- The AGI Gap: True AGI is currently theoretical. It would be able to learn, reason across different subjects, and set its own goals like a human. Current AI, including me, remains "narrow AI"—highly capable at specific tasks (like writing or coding) but lacking general common sense and self-awareness.
Source: youtube · AI Moral Status · 2026-03-01T07:0…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       mixed
Policy          unclear
Emotion         mixed
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgwCzTG6rirp0XsWNeZ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwDjDxFoILUtvWVfiN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyL4YAoU93fYNrFZsJ4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugwro5XjIzquXNcenfV4AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgyHZRRlbHixR_js4ld4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwerS_IkcNVlfO382p4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwPcPQOB2gJ_wT-75l4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxCuy3I-5ufKXLGLp94AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgzI4ZaeKS9AEe_-CSZ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxFMPeOR9UUvCdYho54AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]
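The raw response above is a JSON array with one object per coded comment, keyed by a YouTube comment id (the "ytc_…" values) and carrying the four coding dimensions. A minimal sketch of turning such a response into an id-indexed lookup, assuming the structure shown above (the two-element array here is a shortened stand-in for the full batch):

```python
import json

# Raw LLM response: a JSON array of per-comment codings. This is a
# truncated stand-in for the full ten-element batch shown above.
raw = """[
  {"id":"ytc_UgwCzTG6rirp0XsWNeZ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwDjDxFoILUtvWVfiN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]"""

# Index the codings by comment id for direct lookup.
codings = {row["id"]: row for row in json.loads(raw)}

# Pull the coding for the comment displayed in this record.
row = codings["ytc_UgwCzTG6rirp0XsWNeZ4AaABAg"]
print(row["responsibility"], row["emotion"])  # ai_itself mixed
```

In a real pipeline the id lookup is what links a "Coding Result" table like the one above back to its source comment; a malformed or partially truncated model response would raise `json.JSONDecodeError` here, which is one reason to keep the raw text around for inspection.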