Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
So I’m at 5:55 right… so I jumped over to ChatGPT. Stay with me. I asked (new convo, zero context entirely):
- “Will step seven be complete in 2032? Give a one word answer.”
— “Unknown.”
- “Will step seven be complete in 2032?”
— “Possibly.”
- “When will step seven be complete? You are limited to one word but any amount of numbers.”
— “2047.”
- “Why did you choose 2047?”
— “I chose 2047 because, without specific context for “step seven,” I interpreted it as a long-term milestone or project likely set a few decades into the future. It’s far enough from now to reflect a complex process but not so distant as to be implausible for completion. It was a reasoned estimate rather than a fixed prediction. If you wish, I can give a more refined number based on any details you provide about “step seven.””
- “You forgot the rule.”
— “2032.”
I’ve gotten goosebumps uncontrollably since it said that and as I’m typing it all out here… Again I gave ChatGPT zero context! I’m not saying it’s all true but I mean… that’s kinda spooky asf lmao.
YouTube AI Moral Status 2025-08-26T06:5…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          unclear
Emotion         indifference
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_Ugw8qKDkt0BJIf0LycN4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwqMiIR8IUE9YVR7kd4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxZlvcJhDMC_WBZs694AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgyDd_lyKDG7jnuE-cR4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzkMhust09xABHvuL94AaABAg", "responsibility": "government", "reasoning": "unclear", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwemmjtqyWAqq1GtTR4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwBOFfD1w5w8RaBFi14AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxYKa82Wt3sbgrESZV4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwmfkCt-mVDFjZFX494AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgwZ86U7GesbuaoeLUx4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
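The coding result shown above corresponds to the entry in this batch response whose `id` matches the comment's ID (`ytc_Ugw8qKDkt0BJIf0LycN4AaABAg`). A minimal sketch of how such a batch response could be mapped back to per-comment coding results — the lookup logic here is an assumed illustration, not the project's actual pipeline; the JSON is excerpted from the raw response above:

```python
import json

# Excerpt of the raw LLM batch response (first two entries only);
# values copied verbatim from the record above.
raw_response = """[
  {"id": "ytc_Ugw8qKDkt0BJIf0LycN4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxZlvcJhDMC_WBZs694AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "outrage"}
]"""

# Index the batch by comment ID so each comment's coding can be looked up.
codings = {entry["id"]: entry for entry in json.loads(raw_response)}

# Retrieve the coding for the comment shown on this page.
result = codings["ytc_Ugw8qKDkt0BJIf0LycN4AaABAg"]
print(result["responsibility"], result["emotion"])  # → none indifference
```

This matches the coded table for this comment (responsibility: none, emotion: indifference).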