Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a comment by its ID.

Random samples:
- "@J.D.Cname Thank you for your hilarious comment! Don't worry, next time I'll cha…" (ytr_Ugw3hqQrK…)
- "Basic question : we have 1% of people managing 90% of financials. If there is un…" (ytc_UgyoaWPoM…)
- "They are ai from The Creator and I worked in that movie as VFX artist, thanks fo…" (ytc_Ugx435ikt…)
- "how do you know that this is AI? A lot of people say that this is crisis firing …" (ytc_Ugz3oeOrw…)
- "I was just checking. The corner tracking of the computers screen is dubious. It …" (ytr_UgyTaT3gi…)
- "Who cares if it was AI. The point is when anyone agitates, impedes or doesn't pr…" (ytc_UgwsD9odt…)
- "The greatest danger of any AI is that comments on a mass scale and false attenti…" (ytc_Ugy0ycWbs…)
- "Embarking on a new goal:be at rest think think about thinking (Descartes) think …" (ytc_UgyozM_Ly…)
Comment
She clearly has an agenda and her biases are so strong. Perhaps she's spent too much time in Silicon Valley that she's unaware she's become a part of it herself. While I get her underlying point about tech companies using fear tactics to secure government funding, completely brushing off the geopolitical angle is incredibly naive. She acts like the 'China threat' is literally just a marketing myth cooked up by Altman and Musk. But anyone who actually understands dual-use technology knows that an LLM capable of parsing and writing complex banking software is fundamentally capable of identifying zero-day vulnerabilities in national cyber infrastructure. You can't just handwave away basic international security realities just because you don't like Silicon Valley's corporate structure. It’s a massive blind spot in her entire thesis.
But the biggest red flag in her argument is leaning so heavily into this whole 'it's just a statistical engine' narrative. It’s such a reductionist take. Yes, on a foundational level, it predicts the next token based on probabilities, but she completely ignores the current literature on emergent capabilities. When you scale compute and parameters to these massive levels, the models organically develop zero-shot skills they were never explicitly trained for—like advanced logical reasoning and spatial awareness. Boiling AGI research down to 'just fancy auto-complete' is honestly a gross oversimplification that tells me she’s looking at this entirely through a sociological lens rather than an actual computer science one
Platform: youtube · Posted: 2026-04-13T05:4… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgyTp0lYd0Y2tc7Q83B4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugyt5kIMDba5McvdU8N4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz9chuAeg2gmcHVoQV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyJbRT05x-TgtdKmC54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyVU9HcLEaTGRN9wJp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugy4NuhHcdMHS8tszP54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"disapproval"},
{"id":"ytc_UgwEMToE16KHCzLVMml4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwBHCrG4_J-b1fqLxx4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxRZls--KPmTuYaRg94AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgwpnbiMiSp0BiKV1Nx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}
]
```
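The raw response above is a JSON array of per-comment codes, one object per comment, keyed by the comment ID shown in the sample list. A minimal sketch of how such a response could be parsed, validated, and indexed for the "look up by comment ID" view — the `index_codes` helper and `EXPECTED_KEYS` set are illustrative, not part of the tool itself, and the inline data is a two-entry excerpt of the sample response:

```python
import json

# Excerpt of a raw LLM response: a JSON array of per-comment codes
# (schema taken from the sample response above).
raw_response = '''
[
  {"id": "ytc_UgyTp0lYd0Y2tc7Q83B4AaABAg", "responsibility": "ai_itself",
   "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxRZls--KPmTuYaRg94AaABAg", "responsibility": "distributed",
   "reasoning": "contractualist", "policy": "regulate", "emotion": "mixed"}
]
'''

# The five coding dimensions every entry is expected to carry.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_codes(raw: str) -> dict:
    """Parse a raw response and index the codes by comment ID,
    rejecting entries that are missing any expected dimension."""
    indexed = {}
    for entry in json.loads(raw):
        missing = EXPECTED_KEYS - entry.keys()
        if missing:
            raise ValueError(f"entry {entry.get('id')!r} missing keys: {missing}")
        indexed[entry["id"]] = entry
    return indexed

by_id = index_codes(raw_response)
print(by_id["ytc_UgxRZls--KPmTuYaRg94AaABAg"]["policy"])  # prints: regulate
```

Validating the key set up front means a malformed or truncated model response fails loudly at ingest time rather than surfacing later as a blank cell in the coding-result table.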