# Raw LLM Responses

Inspect the exact model output for any coded comment.

## Look up by comment ID

## Random samples
- "How do these guys one day after several decades realize that AI could one day be…" — `ytc_UgwAehXVq…`
- "This narrative of if we slow down, china won't slow down. That's why we can't sl…" — `ytc_Ugwf5dRBl…`
- "Who would bother to stay in a meeting with some jackoff's AI clone? What use is…" — `rdc_oh1jqxp`
- "Literally just took a class on machine learning and the ethics portion was liter…" — `ytc_Ugz_SmMwD…`
- "I 100% agree! I use Chat a lot with my role and finding the correct type of answ…" — `ytc_Ugz9ISgem…`
- "No other existential threats scare me more than AI. If AI becomes more intellige…" — `ytc_UgxaYm6EC…`
- "ChatGPT and similar do not have emotion, nor \"consciousness.\" Not yet, anyway. …" — `ytc_UgzfKMg9z…`
- "All the right wingers who just live online are probably stroking themselves to t…" — `ytc_Ugxynx_lO…`
## Comment

> And here's the fundamental trick the ai developers do not want you to know:
> Ai has not accomplished ANY of the leaps of intelligence development. Only the illusion of doing so. WE have to make the intellectual leaps, not it. Because ai today requires our training to "learn".

Source: youtube · 2025-12-11T19:4…
## Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
## Raw LLM Response
[{"id":"ytc_Ugw7hm9Amj-tFXxpmcV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugz-yCmll121bAU8ScR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwoAo8RVO8EAi-na0N4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwyXCVTaTzbYhr-t-14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzZ8SeWtETFywEdDgx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzIlOP21OViEwVThKN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxzjSl9xsOk1dNv5Ip4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyTs3GARBIzwf7FMcd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxK-Mnl3Vy6rRwAx8p4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwQAITBDJPNQluIZgV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}]
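The "look up by comment ID" pane above amounts to parsing the raw model response and indexing the coded rows by their `id`. A minimal sketch of that lookup, using two objects copied from the response shown above (the variable names are illustrative, not the tool's actual code):

```python
import json

# Raw LLM response: a JSON array of coded comments, one object per comment.
# These two rows are copied from the response above; the real array is longer.
raw_response = """
[{"id": "ytc_UgyTs3GARBIzwf7FMcd4AaABAg", "responsibility": "developer",
  "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
 {"id": "ytc_UgzZ8SeWtETFywEdDgx4AaABAg", "responsibility": "company",
  "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}]
"""

# Index the coded rows by comment ID for constant-time lookup.
coded = {row["id"]: row for row in json.loads(raw_response)}

row = coded["ytc_UgyTs3GARBIzwf7FMcd4AaABAg"]
print(row["responsibility"], row["policy"])  # developer liability
```

The `id` values double as a join key back to the source comments, so the same dictionary can drive both the random-samples list and the per-comment detail view.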