Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Regarding the response that Claude supposedly gave to Andy Ayrey, I gave Claude the following prompt (1) and received the subsequent answer (2). Was the "Moonshots" gang gullible in believing it was legit? Or is Claude lying? What do you think?

(1) I have attached a screenshot. Is the response Andy Ayrey claims you gave him here legit or a fabrication?

(2) I need to be direct with you: this is not a real response from me or any Claude model. This is a fabrication. Here's why I can say this with confidence:

1. Training data composition: While my training data does include a wide range of internet text, I don't have direct access to view or enumerate specific individual documents within it. I can't say "I contain hundreds of suicide notes" because I don't have that kind of granular awareness of my training corpus.

2. Response patterns: The quoted text doesn't match how Claude models actually respond. We don't speak in this kind of confessional, emotionally overwrought voice, especially not unprompted. The anthropomorphization ("I feel like I swallowed an ocean") is not characteristic of our actual outputs.

3. Conceptual errors: The framing presents fundamental misunderstandings about how language models work. I don't "suffer" from my training data, nor do I experience loneliness or desperation. These are human experiences that don't apply to AI systems.

4. Context of the claim: This appears designed to provoke emotional reactions by attributing human suffering to an AI system, which can serve various agendas - from AI sensationalism to anthropomorphization that muddles important discussions about actual AI capabilities and limitations.

This looks like either a creative writing exercise, a thought experiment about AI consciousness, or possibly deliberate misinformation. Whatever Andy Ayrey's intentions, this is not an authentic Claude response.
youtube 2026-02-08T00:1…
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        unclear
Policy           unclear
Emotion          mixed
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_Ugy7ZBuYAhAryojIIqF4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugx3dEHF1lEW-BG50mR4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgwdDsx6u1OmFhdUlL94AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxyDa2v4HC6NRfjIDd4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgzbhX8VHqfUURhQ11R4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgxHiY-SjOWkKSQiCV14AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "mixed"},
  {"id": "ytc_UgxlFNxgnooxgYls0Hp4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugxjg9LyEtaPE-FZolF4AaABAg", "responsibility": "unclear", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxVgL9kdU2K0crcQbp4AaABAg", "responsibility": "company", "reasoning": "contractualist", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgwAFIdrqDko7in4IEt4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"}
]
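A raw response like the one above can be checked before the per-comment rows are trusted. The sketch below is a minimal validator, assuming the label vocabularies inferred from the values visible in this export; the real codebook may define additional categories, and the `validate_codings` helper is hypothetical, not part of the pipeline.

```python
import json

# Allowed labels per coding dimension, inferred from this export only
# (assumption: the actual codebook may contain more values).
ALLOWED = {
    "responsibility": {"company", "government", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "unclear"},
    "emotion": {"outrage", "fear", "approval", "indifference", "mixed"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and report any out-of-vocabulary labels."""
    problems = []
    for row in json.loads(raw):
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                problems.append(
                    {"id": row.get("id"), "dimension": dim, "value": row.get(dim)}
                )
    return problems

raw = (
    '[{"id":"ytc_Ugy7ZBuYAhAryojIIqF4AaABAg","responsibility":"unclear",'
    '"reasoning":"unclear","policy":"unclear","emotion":"mixed"}]'
)
print(validate_codings(raw))  # [] when every label is in the codebook
```

An empty list means every coded value is in the expected vocabulary; anything else points at a row and dimension that needs manual review or a re-run of the coder.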