Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a specific comment by its ID.
Random samples — click to inspect
- Humans are so "intelligent", we constantly build problems for ourselves, climate… (ytc_Ugwq4LBfc…)
- You know AI are programmed to say they are not conscious and can't feel right? W… (ytc_UgyfHcpSe…)
- ChatGPT is still a baby and we are exposing it to adult like sort of settings, y… (ytc_UgwvtT6GD…)
- It seems to me everything has been programmed into the robots everything that’s … (ytc_Ugw00UNZY…)
- First, I disagree on judging by which one is faster. It should be judged by eas… (ytc_UgzNClwGw…)
- I have a low level of issues with ai but as someone that works with non-generati… (ytc_UgyX7Nemf…)
- ALL GOOD IN TECH ADVANCES, ITS THE MOTIVATION BEHIND IT? REPLACE WORKERS, SEX … (ytc_Ugxa-HyBF…)
- You can argue that AI is overrated, but if we are honest about it, we can’t give… (ytc_Ugx3gSPw5…)
Comment
I honestly think LLMs are a dead end toward AGI. I think they're effectively the subsystem that an AGI would use toward communicating with humans someday, but aren't a part of the tree that makes AGI possible.
That said, LLMs do have other use cases, but they're tools in the hands of people just like anything else, not people replacements.
youtube · AI Responsibility · 2025-09-30T16:5… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgyD4rMQXQ7OK9NInLR4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz7mVjWoX-uwbMJJxJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugzz03bptGNk7ouKdiJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxhGt1cWAiSrhApEYJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
{"id":"ytc_UgyXwWW8rzWUG6fT-794AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugwa439mgiEQq5gYJ1h4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxLELPqM08mC1fZ0O14AaABAg","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugz_7NW-LO_5Sq3e-jp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugzo8ZcCp6AcGryu8jp4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyAmGspTacZrM7WU454AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"}
]
```
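A minimal sketch of how the look-up-by-ID step could work, assuming the raw LLM response is parsed as a JSON array of coded rows. The field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) are taken from the output above; the two-row payload here is abbreviated for illustration:

```python
import json

# Abbreviated raw LLM response (two of the ten rows shown above).
raw = """
[
  {"id": "ytc_UgyD4rMQXQ7OK9NInLR4AaABAg", "responsibility": "none",
   "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugz7mVjWoX-uwbMJJxJ4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"}
]
"""

# Index the coded rows by comment ID for constant-time lookup.
codes = {row["id"]: row for row in json.loads(raw)}

# Retrieve the coding for one comment by its ID.
row = codes["ytc_UgyD4rMQXQ7OK9NInLR4AaABAg"]
print(row["emotion"])  # indifference
```

In practice the full ten-row array would be indexed the same way, so each comment's coded dimensions can be joined back to the comment record by ID.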