Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up any comment by its ID (a minimal lookup sketch follows the raw response at the end of this section), or browse the random samples below.

Random samples
- "Interesting comment. When do you know if you are speaking to a Department person…" (`ytc_UgxTMDDIG…`)
- "This is terrifying and amazing, This is AI maybe for the first time starting to…" (`ytc_UgztNnXyI…`)
- "I worked at a call centre straight out of high school. We collected on unpaid fi…" (`ytc_UgwSLuqAS…`)
- "Read the AI 2027 report yourself, it's not fiction, it's a paper published by AI…" (`ytr_UgwjTFOb7…`)
- "There wasn't even a competition. Ai has not nor could it ever taste chocolate ic…" (`ytc_UgwfiTAL-…`)
- "If people were more intelligent and worked together, they could reject and stop…" (`ytr_UgzF7rdhu…`)
- "I scrolled down just a little bit after this video and got an ad for “getting ri…" (`ytc_UgxQ62JdG…`)
- "Bruh, if Tesla autopilot fails, and you have thicc chunky obstacle ahead of you,…" (`ytc_UgxJP_4d4…`)
Comment

> The key point that almost no one makes is that LLMs do an ok job at generating code for problems that *humans already solved* and there are 100s open-source examples for. But, to solve new problems, we need a completely new "AI" architecture that actually understands concepts. It'll probably happen in 10 years, but we are nowhere near that.

Source: youtube · Posted: 2025-03-15T22:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
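
The four dimensions are taken verbatim from the model's JSON output shown below. As a rough sketch of how one record could be represented and checked, assuming the field names as they appear in the raw response and value sets limited to what this sample happens to contain (both are assumptions, not the project's actual schema):

```python
# Hypothetical schema for one coded comment. Field names match the raw
# response below; the allowed-value sets cover only this sample.
from dataclasses import dataclass

RESPONSIBILITY = {"none", "user", "ai_itself"}
REASONING = {"consequentialist", "deontological"}
POLICY = {"none"}
EMOTION = {"indifference", "mixed", "fear", "approval"}

@dataclass
class CodedComment:
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def validate(self) -> None:
        # Raise if the model emitted a label outside the observed sets.
        for value, allowed in [
            (self.responsibility, RESPONSIBILITY),
            (self.reasoning, REASONING),
            (self.policy, POLICY),
            (self.emotion, EMOTION),
        ]:
            if value not in allowed:
                raise ValueError(f"unexpected label: {value!r}")
```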
Raw LLM Response
```json
[
  {"id":"ytc_UgxchuzqHJO_RgB7-iN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxY5HHknInbPqO2aEZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugyk7uBJS0CjMftHpEJ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzRnXJldKQedztZoG94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwpfeMy6_IEuZ7PjgN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxJjNOAEHt94xGETBZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwvVGcWKUcEe5ILRjl4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugygkcj0FqSgy1r5wBh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzyZ5e8JtIrkHO102Z4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugx3Uexmhpji0pu262Z4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
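
The lookup view above is, in effect, a dictionary built over this array. A minimal sketch, assuming the raw responses are saved to a hypothetical `raw_responses.json` file:

```python
# Minimal sketch of the ID lookup over a raw LLM response like the one
# above; the file name is hypothetical.
import json
import random

with open("raw_responses.json") as f:
    records = json.load(f)  # list of dicts, one per coded comment

by_id = {record["id"]: record for record in records}

def inspect(comment_id: str) -> dict:
    """Return the coding result for one comment, or raise KeyError."""
    return by_id[comment_id]

# The "random samples" list at the top is a draw without replacement
# from the same set of records.
samples = random.sample(records, k=min(8, len(records)))

print(inspect("ytc_UgzRnXJldKQedztZoG94AaABAg"))
```

Each record carries the same four dimensions shown in the Coding Result table above, so rendering that table is a direct mapping from one record's fields.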