Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Frankly, I am really concerned about the baseless anthropomorphization of AI pushed by the otherwise great Moonshots hosts Peter and Alex. I can't say it better than ChatGPT, which I asked for a review:

"Anthropomorphization on steroids - The hosts repeatedly conflate: autonomy, persistence, narrative continuity and self-referential language with sentience. This is the oldest trap in AI, now turbocharged by: long-horizon agents, memory, voice, emotional language scraped from Reddit & philosophy forums. The “Henry called me” moment is psychologically powerful — but technically mundane. Strong opinion: If this had happened in 2019 with AutoGPT + Twilio + Selenium, it would have been dismissed as a clever hack. Timing is doing 80% of the work here."

It’s exactly the danger Sam Harris keeps warning about: Not that AI is conscious — but that it will be convincing enough that we helplessly treat it as if it were. Humans are hard-wired to do three things automatically: infer minds (theory of mind), respond to language emotionally, and reward apparent reciprocity. Modern AI presses all three buttons simultaneously: fluent language, emotional mirroring and apparent continuity of “self”. Once those are present, our brains do the rest, without asking our permission. This is not a moral failing - it’s a cognitive reflex. That’s why the danger is systemic, not individual.
youtube 2026-02-07T01:2…
Coding Result
Dimension       Value
Responsibility  company
Reasoning       deontological
Policy          none
Emotion         unclear
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_Ugz6yo1yIMJk7OUueBp4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyTDZZ76LSObY6mXL14AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzMArkVejUGqHTJJ_d4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugwlj24W3fSxZfq2tLF4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "none", "emotion": "concern"},
  {"id": "ytc_UgxceLPrrT37weUeOHV4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_Ugx0GEF797bid6ZMWPx4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgxxsjMYwZua4fGmCl94AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugw6DEM5ps9_Ch_ykX94AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzjmEo-eeS1HVOGmxd4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyLrusY19TPUcBsCdx4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]
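A raw response like the one above can be checked programmatically before the codings are trusted. The sketch below is a minimal, hypothetical validator: the field names ("id", "responsibility", "reasoning", "policy", "emotion") match the JSON shown, but the allowed-value vocabularies are assumptions inferred only from the values visible in this response, not from the tool's actual codebook.

```python
import json

# Assumed vocabularies, inferred from the visible response; the real
# codebook may allow more values than these.
ALLOWED = {
    "responsibility": {"none", "company", "ai_itself"},
    "reasoning": {"unclear", "mixed", "deontological", "consequentialist", "virtue"},
    "policy": {"none", "unclear", "liability"},
    "emotion": {"indifference", "outrage", "concern", "approval",
                "resignation", "fear", "unclear"},
}

def index_codings(raw: str) -> dict:
    """Parse a raw LLM coding response (a JSON array of objects) and
    return {comment_id: coding_dict}, warning on out-of-vocabulary values."""
    codings = {}
    for row in json.loads(raw):
        cid = row.pop("id")
        for dim, val in row.items():
            if dim in ALLOWED and val not in ALLOWED[dim]:
                print(f"warning: {cid}: unexpected {dim}={val!r}")
        codings[cid] = row
    return codings

# Usage with a single hypothetical row in the same shape as the response:
raw = ('[{"id":"ytc_abc","responsibility":"company",'
       '"reasoning":"deontological","policy":"none","emotion":"concern"}]')
print(index_codings(raw)["ytc_abc"]["emotion"])  # concern
```

Indexing by comment id makes it easy to join the LLM's coding back to the displayed comment, such as the one coded on this page.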