Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "Tbh, if you desperate enough to use AI as therapy, your chats being used as trai…" (ytc_UgzWoi-h3…)
- "The warnings are dire and AI will wipe humans out but we will keep pushing it ju…" (ytc_Ugyc2dnw-…)
- "Johan R are u joking? We should embrace automation and give people a financial/h…" (ytr_UgwON-wB1…)
- "I have told ChatGPT numerous times that what is going on during complex, well co…" (ytc_UgxbEd1Yk…)
- "If you treat them right, not like pleasure toys inthe futute, we're safe. Just b…" (ytc_UgyoxZeRA…)
- "If the AI feels that humans are bad and should be destroyed, then why would thei…" (ytc_UgzJw85Fo…)
- "The 'secret sauce' is trusting older more basic versions of AI to detect if the …" (ytc_Ugwu4CqZs…)
- "As always it’s not AI thats dangerous - it’s the people in control of it. Human …" (ytc_UgwFCUiJR…)
Comment
> The statement that programmers using AI are 35% more productive is exagerated and misleading. Multiple studies have shown that the big productivity boost happens only during the initial phase/boilerplate but tends to 0% the bigger the project becomes and even causes productivity issues.
> Also, we have studies showing that people using AI constantly face cognitive collapse in the short term. So it will invetibly lead to less productivity because you are becoming more stupid the more you use it.
Platform: youtube · Video: AI Jobs · Posted: 2025-12-16T09:4… · ♥ 271
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_Ugwc5mxAR3U1qu6xp0x4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgzzCW6HcChCc94fQJl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugx5aj_2OVBzxGvbEXp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxCjjHmlfTIXDnKywx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugx1y02mA7YFgmj4HWB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgxD6FyMngs4Tnvc3CR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"resignation"},
{"id":"ytc_UgwvcPh30jtiNQJzBhp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgzpyXgqLyIAI9Gj5id4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwUyrdzcymAcYcxe0Z4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugz7CmPYGtMSU_2Hj4t4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
```
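A response like the one above has to be parsed and validated before its rows become coding results. The following is a minimal sketch of that step, assuming the dimension values visible in this sample (the real codebook may define more categories, and `parse_coding_response` is a hypothetical helper, not part of any shown pipeline):

```python
import json

# Allowed values per dimension -- inferred only from the sample output
# above; the actual codebook may include additional categories.
ALLOWED = {
    "responsibility": {"none", "company", "ai_itself"},
    "reasoning": {"unclear", "consequentialist", "deontological"},
    "policy": {"unclear", "regulate", "liability", "ban"},
    "emotion": {"approval", "fear", "indifference", "outrage",
                "resignation", "mixed"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response, keeping only well-formed rows."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        # Comment IDs in the sample start with ytc_ (top-level) or ytr_ (reply).
        if not str(row.get("id", "")).startswith(("ytc_", "ytr_")):
            continue
        # Every dimension must be present and hold a known value.
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(row)
    return valid
```

For example, feeding it the second row of the response above yields one valid coding with `emotion == "fear"`, while a row with an unknown value in any dimension is dropped rather than stored.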