Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- This doesn’t prove anything at all because unless the child was saying something… (ytc_Ugy9U6F4B…)
- could be much scarier with way less intelligence. AI doesn't need to be self awa… (ytc_Ugy0ifdMy…)
- I suspect investors will desperately *want* there to be a "next big thing" and… (rdc_nkb4yhm)
- 5 hours? i remember googling 'lacking purpose in life' some time back and the to… (ytc_Ugwx_LvOY…)
- This video basically explains my perspective on this whole thing. It's going to … (ytc_UgylR_xof…)
- “People will come to love their oppression, to adore the technologies that undo … (rdc_oi2i5kz)
- Perhaps someday AI can help us elect the politicians than can do us the most goo… (ytc_Ugwgczi9x…)
- i use AI just to help myself to understand how something work or how something i… (ytc_Ugz93gPK2…)
Comment
Great interview, but at times quite disingenuous. Let’s not pretend OpenAI invented the scaling paradigm; they were simply the first to apply it decisively to language models and follow it through to its logical conclusion. And it’s hard to argue that this approach wasn’t spectacularly successful, at least in the early years. After all, the only real difference between GPT-2, which is barely coherent, and GPT-3, which triggered the global race to AGI, is scale, since both ultimately rely on the same transformer architecture.
And scaling works more broadly than just for language models. Richard Sutton called it the bitter lesson: to paraphrase, throwing more compute at a problem tends to outperform hand-crafted, domain-specific approaches over time. The idea that one could achieve comparable performance or range of capabilities by training on small datasets using few chips is simply not credible. It’s not even in the same league as what modern large language models can do.
youtube · Cross-Cultural · 2025-07-11T14:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | mixed |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_UgzdRXsJFV2xDkq4Vg14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxLM9l8Wb5i-_gCB7F4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyxzMuxBGjG7FtszNp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzDcZ_JcGfuHaO_9cZ4AaABAg","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz11tMYtVPjHsToUZN4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyAVVR5KUJ9SNffTZF4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzEurs5anu9OM_iNwd4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzzHU0u0uyfMXlFKpB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugw3F9HXEPjNpqQTSAd4AaABAg","responsibility":"none","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwXkovWOzRFycK7TwF4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"outrage"}
]
```
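The Coding Result table is just the entry in this array whose `id` matches the inspected comment. A minimal sketch of that lookup, using two entries copied from the dump above (the parsing code itself is an assumption for illustration, not the tool's actual implementation):

```python
import json

# Raw batch response as dumped above: a JSON array of per-comment codings.
# Only two entries are reproduced here; IDs and field names match the dump.
raw = """[
  {"id": "ytc_UgzDcZ_JcGfuHaO_9cZ4AaABAg", "responsibility": "company",
   "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugz11tMYtVPjHsToUZN4AaABAg", "responsibility": "government",
   "reasoning": "deontological", "policy": "none", "emotion": "outrage"}
]"""

# Index the codings by comment ID so any coded comment can be looked up directly.
codings = {row["id"]: row for row in json.loads(raw)}

result = codings["ytc_UgzDcZ_JcGfuHaO_9cZ4AaABAg"]
print(result["responsibility"], result["reasoning"], result["emotion"])
# prints: company mixed indifference
```

The printed values correspond to the Responsibility, Reasoning, and Emotion rows of the Coding Result table for this comment.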