Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "AI used work force is very good. If AI will have real mind they will be just hum…" (ytc_Ugzs9gC2t…)
- "What a dickwad. Wants to put brain chips in everybody's head with Neurolink whil…" (ytc_UgxEZJYi4…)
- "If AI progresses like other computers do, how long will it be till these giant d…" (ytc_UgxImzU0X…)
- "I need a real time experiment. Building an APP with Vibe coder vs Software Devel…" (ytc_UgyeYoJDZ…)
- "Wait a minute...EVERYONE will have a high income? If everyone has a \"high\" incom…" (ytc_UgxzCThUY…)
- "I think my ai is depressed all the characters break down idk what to do…" (ytc_UgzKX76M9…)
- "It's not really truly intelligent, and never will be. So it's not really fair to…" (ytc_Ugz0iexeX…)
- "No because it isn’t drawing either. The AI just takes from art online and then m…" (ytr_UgyHvXnQG…)
Comment
Frankly, I am really concerned about the baseless anthropomorphization of AI pushed by the otherwise great Moonshots hosts Peter and Alex. I can't say it better than ChatGPT, which I asked for a review: "Anthropomorphization on steroids - The hosts repeatedly conflate: autonomy, persistence, narrative continuity and self-referential language with sentience. This is the oldest trap in AI, now turbocharged by: long-horizon agents, memory, voice, emotional language scraped from Reddit & philosophy forums. The “Henry called me” moment is psychologically powerful — but technically mundane. Strong opinion: If this had happened in 2019 with AutoGPT + Twilio + Selenium, it would have been dismissed as a clever hack. Timing is doing 80% of the work here." It’s exactly the danger Sam Harris keeps warning about: Not that AI is conscious — but that it will be convincing enough that we helplessly treat it as if it were. Humans are hard-wired to do three things automatically: Infer minds (theory of mind), respond to language emotionally, and reward apparent reciprocity. Modern AI presses all three buttons simultaneously: fluent language, emotional mirroring and apparent continuity of “self”. Once those are present, our brains do the rest, without asking our permission. This is not a moral failing - it’s a cognitive reflex. That’s why the danger is systemic, not individual.
Platform: youtube · Posted: 2026-02-07T01:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | none |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id": "ytc_Ugz6yo1yIMJk7OUueBp4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyTDZZ76LSObY6mXL14AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzMArkVejUGqHTJJ_d4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugwlj24W3fSxZfq2tLF4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "none", "emotion": "concern"},
  {"id": "ytc_UgxceLPrrT37weUeOHV4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_Ugx0GEF797bid6ZMWPx4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgxxsjMYwZua4fGmCl94AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugw6DEM5ps9_Ch_ykX94AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzjmEo-eeS1HVOGmxd4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyLrusY19TPUcBsCdx4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]
```
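A raw response like the one above can be parsed and indexed by comment ID, which is what the lookup feature at the top of this page relies on. The sketch below is a minimal, assumed implementation (the function name `index_codings` and the key set are illustrative; the field names match the JSON above, and the raw string is truncated to two records for brevity):

```python
import json

# Two records copied from the raw LLM response above (illustrative subset)
raw_response = """[
{"id":"ytc_Ugz6yo1yIMJk7OUueBp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzMArkVejUGqHTJJ_d4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"}
]"""

# The four coding dimensions plus the comment ID, as seen in the response schema
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_codings(raw: str) -> dict:
    """Parse a raw model response and index the codings by comment ID,
    skipping records that are missing any expected field."""
    index = {}
    for rec in json.loads(raw):
        if not EXPECTED_KEYS.issubset(rec):
            continue  # tolerate malformed records instead of failing the batch
        index[rec["id"]] = rec
    return index

codings = index_codings(raw_response)
print(codings["ytc_UgzMArkVejUGqHTJJ_d4AaABAg"]["emotion"])  # outrage
```

Skipping malformed records rather than raising keeps a single bad row in a model batch from discarding the other codings.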