Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Just to clarify this - no decent software engineer thinks this. Because any software engineer worth their salt would at least look at AI-generated code or get a basic understanding of how AI works before coming to a conclusion like that. The people actually saying this are either incompetent software engineers, or managers and execs who are trying to threaten workers and/or tell shareholders that the company's profits are going to go way up as their costs drop by firing programmers.
The truth is that AI as we know it isn't ever going to cross the finish line. AI fundamentally doesn't have the ability to understand what it's writing. It's just imitating code that it has seen, without knowing the why or how of it all. It's inherently very error-prone and unable to handle larger-scale tasks. Once you get past basic boilerplate code, AI's incompetence is undeniable.
Source: youtube
Timestamp: 2025-05-19T14:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgwSsAqOMzbCzLQT8OV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwJGxfHE60KhBeoRKV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugznumg3TeNgF7TQXT54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugz8bLkhMpSNkV8ViCl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwJT-Msyep1Nyhbkf14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugw6oqJ0yV9BbyiO6mJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxkRLjLyPtrFbsGrRV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugx7aaZCFltqUuuLSGh4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwypLWw6YQxrzLvEoN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxJOfzhpitFMZ8hiqV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
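The raw response above is a JSON array of per-comment codings, one object per comment ID. A minimal sketch of how such a response could be parsed and validated before loading it into the coding table (field names come from the response itself; the allowed value sets are assumptions inferred only from the values visible on this page, not a documented schema):

```python
import json

# Allowed values per dimension — an assumption inferred from the codings
# shown above, not an authoritative codebook.
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"none"},
    "emotion": {"indifference", "approval", "resignation", "outrage", "fear"},
}

def parse_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response into coding records, checking each field."""
    records = json.loads(raw)
    for rec in records:
        if "id" not in rec:
            raise ValueError(f"record missing id: {rec}")
        for field, allowed in ALLOWED.items():
            value = rec.get(field)
            if value not in allowed:
                raise ValueError(f"{rec['id']}: unexpected {field} value {value!r}")
    return records

# Hypothetical single-record response in the same shape as the array above.
raw = ('[{"id":"ytc_example","responsibility":"developer",'
       '"reasoning":"virtue","policy":"none","emotion":"outrage"}]')
print(parse_codings(raw)[0]["emotion"])  # outrage
```

Rejecting unknown values at parse time keeps a malformed or drifting model response from silently adding new categories to the coded dataset.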