Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
Why are you acting like the Chat bot said to take bromide when at the beginning …
ytc_UgzTZZlzH…
How are we going to fight these things? They know kung fu! I say this we keep th…
ytc_Ugyetnr9i…
Wow, the expert‘s conclusion surprised me! I thought she would say „Well, it wil…
ytc_UgyL3ci_G…
CEOS CAN JUST NOT TO ANYTHING NOW. SOME AI PROGRAMMERS SAY AI CAN DO A CEO JOB B…
ytc_UgzUJzNZ5…
Damn I’m in the wrong business! Just think once we get ai agents, it’ll be even …
rdc_mtdo3wb
Predicting things like crime with AI is so stupid. What did the police think wou…
ytc_UgwnzUJLy…
Thanks for telling us it’s a robot I don’t think any actual human would be able …
ytc_Ugxcrt5Qe…
Think of all those poor horses that lost their jobs to cars. Millions of horses …
ytr_UgwDsMaC3…
Comment
As a software engineer who went through a research-focused school not that long ago, and who is now in the industry, I resonate with the anildash article more than Nate's take. And even so, neither of them talk about the apparent scaling limitations / diminishing returns we seem to be hitting with LLMs.
Nate did sort of allude to the last AI winter being ended by an algorithmic change, but didn't then say we're ostensibly hitting the limit of this new algorithm/paradigm, and that the next paradigm shift could be a century away for all we know. LLMs just don't seem to be scaling their way to super intelligence to me. But like you, I don't know for sure and the future is definitely wacky.
youtube
AI Moral Status
2025-11-04T03:3…
♥ 66
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgzdD362N-69jb_GqO54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgwFKzdZ6IS3bSjeDGB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwK8vNHvAAC4qgyPZB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugyi9ZyCrLQY6-3cWCF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzHDlDtpu7Dv0PEtkx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugz8TKA8OgiK9y0qax14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyMJI7gRBEnkFgn6JB4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwcNk_cuVklAe_4VVp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyGrgrKNaUKIJiZ74l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwUMsFWYfQOUsLfRIB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
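Since the raw response is a JSON array of per-comment codings, looking up a coded comment by its ID amounts to parsing the array and keying each record on its `id` field. A minimal sketch (the helper name `index_by_comment_id` is hypothetical; the record shape matches the response above, abbreviated to two entries):

```python
import json

# Abbreviated excerpt of a raw LLM response like the one shown above:
# a JSON array of coding records, one per comment.
raw_response = """
[
  {"id": "ytc_UgzdD362N-69jb_GqO54AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwK8vNHvAAC4qgyPZB4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]
"""

def index_by_comment_id(raw: str) -> dict:
    """Parse a raw coding response and key each record by its comment ID."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

codings = index_by_comment_id(raw_response)
coding = codings["ytc_UgwK8vNHvAAC4qgyPZB4AaABAg"]
print(coding["emotion"])  # indifference
```

In practice an LLM can wrap the array in prose or a code fence, so a production version would first extract the JSON span before calling `json.loads`; the sketch assumes a clean array as displayed here.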