Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- ytc_UgxlxL6Mz… — "A ploy. To distract us from the fact that bug tech are already researching nerfe…"
- ytc_UgzVyfnbh… — "Beware! Scammers can use deepfakes or voice manipulation to sound like your chil…"
- ytr_UgyEVBxyQ… — "On the AITube channel, we encourage all kinds of questions, even those that may …"
- ytc_UgzZ7WksG… — "The only actually useful thing I’ve done with ChatGPT so far is to take the over…"
- rdc_fn5ka8w — "This 100% Didn’t love him but he’s handling this situation very well and it sho…"
- ytc_Ugy5WWnHV… — "I don’t like this. How far are these 💩💩💩💩 people going to go…" (translated from Spanish)
- ytc_UgwiNzBg8… — "this comment gives me the same vibe of the commenter in another anti AI post tha…"
- ytc_UgwbnWYgr… — "So rather than Ai have a compassionate teacher at one of the worlds most prestig…"
Comment
I don't buy this at all. AI is already "smart" enough to do a lot of white-collar jobs and has been since gpt 4. Engineers forget that we are not the center of the universe; most white-collar jobs require much less cognitive labor than being a swe, yet these people still have jobs, and companies are still hiring for roles despite investing billions in AI. Jobs are about people, not just raw output. LLMs are great at creating boilerplate code or fixing common bugs, but they will not replace human collaboration in making architectural decisions or even fundamentally understanding the code changes they're making (which is why they will often do something technically right but make no sense from a human UX perspective). Lastly, assuming AI will just keep advancing at the same rate and all its flaws will be fixed in due time is just intellectually lazy and not scientific at all. This isn't something as simple as Moore's law. LLMs do not just get exponentially better with more compute and more data past a certain point, and we've likely already reached that point.
youtube · AI Jobs · 2026-02-26T16:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzUSY0ewfvQyzduccF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwtZx9kLCtoqPLVSR14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugw6zEok07ly1M3ix_94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx0gYPsIjePbdwgL8x4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxvFuffjl1RsoAd80h4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyoQBjLe-FetZIjirB4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugym4-csoQ2bFrc2zzZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyYLuD6dHoZTwyGOFF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugw-6k_OVhHOjSesd6p4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzkZEUJKG0nwSEU_zx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
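The raw response is a JSON array of per-comment codes, so looking up the codes for one comment ID is a dictionary build over the parsed array. A minimal sketch, assuming only the response format shown above (the two rows inlined here are copied from it; this is not the app's actual lookup code):

```python
import json

# Raw batch response: a JSON array of objects, one per coded comment.
raw = """[
  {"id":"ytc_UgzUSY0ewfvQyzduccF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwtZx9kLCtoqPLVSR14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]"""

# Index the rows by comment ID for O(1) lookup.
codes_by_id = {row["id"]: row for row in json.loads(raw)}

row = codes_by_id["ytc_UgzUSY0ewfvQyzduccF4AaABAg"]
print(row["responsibility"], row["emotion"])  # prints: company indifference
```

The same index supports the table above: each dimension (Responsibility, Reasoning, Policy, Emotion) is just a key on the row for that comment's ID.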