Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "The kids of the CEOs real study, the kids from the people use AI , you know the …" (ytc_Ugzai-RGu…)
- "I have been expecting this. Been in education 30 years and have watched the coll…" (ytc_UgwhmjVEq…)
- "It sounds like you might be feeling pretty frustrated! If you have specific thou…" (ytr_Ugy7ay1Mo…)
- "The income of the future should be based on your health. This is a Don't Die con…" (ytc_UgyYYLacM…)
- "This is why I don’t want ai being our future and I hate the concept of it Becaus…" (ytc_UgxE9ce6f…)
- "OMG don't say that .... AI killt me nearly now... All this imposters wrong peopl…" (ytc_UgzLdLdlQ…)
- "@thecoolbros4868 he's good at getting money into his businesses. But there is no …" (ytr_Ugz8SKCB8…)
- "I don't see the reason to complicate the matter, its moot to think about giving …" (ytc_UgiPbWKgG…)
Comment
Humans too are next token predictors, except the exact scope of the tokens/concepts and architecture aren’t yet modeled. This is probably a big reason why we can squeeze so much IQ from 20w. There’s an argument to be made that true self-awareness is plainly impossible. Modern LLMs can accurately describe what they are, how they function, and the context they’re fed. We can’t even explain our own theoretical architecture so accurately.
youtube
AI Moral Status
2025-10-31T02:3…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[{"id":"ytc_Ugwkf5I1VG9-3QPcsiV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyKImzJEI5bjBdfi4V4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxNSpsc9xXpxxv-FSF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyuML_V0-B5EECqCo94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyU5Jdm4-eoCuE-nIB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy2OLctEGun2J6u1IV4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxaZLBfKqrXIvI_dMt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxKHywUUqGabg76XMF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgyKU94IJV3IOuG9TWl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxwLPDTCf0z62TS02d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}]
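The raw response above is a JSON array with one object per comment, each carrying an `id` plus the four coded dimensions (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of parsing such a batch and indexing it by comment ID, so a lookup like the one in this view becomes a dictionary access; the function name `index_codes` is hypothetical, and the two sample records are copied from the response above:

```python
import json

# Batch response from the coder model: a JSON array of per-comment codes
# (two records copied from the raw response shown above).
raw = '''[
 {"id": "ytc_Ugwkf5I1VG9-3QPcsiV4AaABAg", "responsibility": "none",
  "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
 {"id": "ytc_Ugy2OLctEGun2J6u1IV4AaABAg", "responsibility": "ai_itself",
  "reasoning": "deontological", "policy": "ban", "emotion": "outrage"}
]'''

# The four coding dimensions shown in the result table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codes(payload: str) -> dict:
    """Parse a batch response and index the codes by comment ID.

    Raises ValueError when a record lacks an ID or a dimension, so
    malformed model output fails loudly instead of being stored.
    """
    by_id = {}
    for rec in json.loads(payload):
        cid = rec.get("id")
        if not cid:
            raise ValueError(f"record without id: {rec!r}")
        missing = [d for d in DIMENSIONS if d not in rec]
        if missing:
            raise ValueError(f"{cid}: missing dimensions {missing}")
        by_id[cid] = {d: rec[d] for d in DIMENSIONS}
    return by_id

codes = index_codes(raw)
print(codes["ytc_Ugy2OLctEGun2J6u1IV4AaABAg"]["policy"])  # ban
```

Validating every record before indexing is the main design choice here: LLM batch outputs occasionally drop or rename fields, and rejecting those records at parse time keeps the coded table consistent with its schema.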