Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- ytc_UgwAmIK0W…: AI can do that only if you let stupid cimoanies get away with stupid stuff dummy…
- ytr_Ugxu_TIZo…: @philatag we will regulate it with our money dude. We do this already. If it tru…
- ytc_UgwFr_3ek…: “Move it, Third Law Robot, I’m driving in a proper manner, unlike you ancient to…
- ytc_UgykrPHwP…: The "minority report" tier story has been debunked as a hoax hundreds of times a…
- ytc_UgzBHEFKn…: He must have instructed the LLM to respond like that. The tone and choice of wor…
- ytc_UgxmNSMMW…: In future AI will deal with AI do business on our behalf and we will rest scroll…
- ytr_UgxpJ446H…: The environmental concern is real. But energy systems evolves, regulations evol…
- ytc_Ugx38T8oJ…: Something about both sides of these arguments is a little off to me. So, I've go…
Comment
The thing that worries me the most is that the people (engineers, business management) who "vouch" for AI to replace people are not the people who understand people. The promises feel empty and unbased.
Even if AI could build code, it has no concept of human experience or actual world and use cases. It doesn't understand all variables and conditions. There is a reason why we have management, engineers, designers and consultants etc. If you drop any of those, you can't trust the outcome. Even worse, if you drop all of those as a customer and decide to wibe it all yourself.
This is a bubble. The world doesnt work the way that "AI people" think it does. I believe we will see increased productivity of we can mitigate hallucination and keep up with pace on all fronts.
youtube · AI Jobs · 2026-03-09T07:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgxN9z4lQ_Gj8m59ye54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyACrPcM_XdwFcJUE54AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgybFtrkgEDWBjKdQdJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx9v50rRljX5n6PEW14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyGiQuP6e0Wdr-0FXx4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugym3T_kRpaXjyN9cvd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgzvP53oeqnzcyJ27614AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugw5p6lLQUAtFfdXj414AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgzCCG4R3HjxNBp6DKh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyzgAmzoDbNTmDkP854AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
```
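Before a batch like the one above is merged into the coding table, each record can be checked against the dimension vocabulary. A minimal validation sketch, assuming the allowed values are exactly those visible in this export (the project's actual codebook may define more):

```python
import json

# Allowed values per dimension; inferred from the values visible in
# this export, not taken from the project's codebook.
ALLOWED = {
    "responsibility": {"user", "company", "developer", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"none", "liability"},
    "emotion": {"indifference", "fear", "mixed", "approval", "resignation"},
}


def validate_batch(raw: str) -> list[str]:
    """Parse a raw LLM batch response; return a list of problem strings."""
    try:
        records = json.loads(raw)
    except json.JSONDecodeError as e:
        return [f"not valid JSON: {e}"]
    problems = []
    for i, rec in enumerate(records):
        if "id" not in rec:
            problems.append(f"record {i}: missing id")
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                problems.append(f"record {i}: bad {dim}={value!r}")
    return problems
```

Run against the array above, `validate_batch` would return an empty list; a malformed record (a misspelled value, a missing `id`, truncated JSON) yields one message per problem, which is what the coder surfaces when it rejects a batch.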