Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Not really. It's an inaccurate representation of how people interact with TikTok…" (ytr_Ugzd0Bj3y…)
- "Mr. Green, it would be very important for the conversation to bring in Bernardo …" (ytc_Ugw_QuKjP…)
- "Based on what I've seen of co pilot it's just a dump of predictive code from sta…" (ytc_UgxANttAr…)
- "Origa, Russian singer, made the intro songs for Ghost in the Shell SAC. How fitt…" (ytc_UgziORREz…)
- "This is the third child who I have heard took their lives because of A.I or chat…" (ytc_Ugxwz5aiW…)
- "Like I'd ever be in the same country as a robot with a gun with bullets.…" (ytc_UgzuwkM6C…)
- "This never be possible ai can't write 100 percent code not even 30 percent ye j…" (ytc_UgweTHt3Q…)
- "@UmbralClovers Also, if you want more protection against AI art, i reccomend add…" (ytr_Ugy0BsJCf…)
Comment
I'm not sure intelligence is a difference in kind or even in scale, but maybe in scope. There was perhaps some acknowledgement of this from the start; AIs wire up multiple domain specific capabilities. Humans are trained on the totality of their experience, which no AI comes remotely close to in terms of breadth. That's why "millions of miles" is just marketing and why a self driving car occasionally still makes a mistake of a kind a human who has never driven before would never make.
Though an AI only needs to be good at human psychology to award itself the mechanical turk and social engineering backdoors, especially if alignment is applied at training (if at all) and not as an ongoing filter for the AI's interactions.
Source: youtube | Video: AI Moral Status | Posted: 2025-11-01T06:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxyPq1T_w8e9R5FY054AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwRabpPg-Yqo24Smmd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugz6Zn1oPjiCtz5tbLV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzHBHOAKYeSQpnNNrF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugwm9rEGyvc9hqTVxaV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugwaf8pzYoaKV0wpBx14AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyyaDvk0iSO2EUnXPl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugyx5Ipo3CfZjr63RfR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzaM1AJbmaQs_IvumF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugx038np7EB2vh-X1e94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
```
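A downstream consumer has to turn this raw array into per-comment codes before it can fill in tables like the one above. A minimal parsing sketch in Python, assuming the model returns a JSON array of records with exactly the five keys shown (`id`, `responsibility`, `reasoning`, `policy`, `emotion`); the function name and the strict-validation behavior are illustrative, not part of the tool:

```python
import json

# Keys every coded record must carry, per the response format shown above.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_coding_response(raw: str) -> dict[str, dict[str, str]]:
    """Parse a raw LLM coding response into a mapping of comment ID -> codes.

    Raises ValueError if the payload is not an array of complete records,
    so malformed model output fails loudly instead of polluting the dataset.
    """
    records = json.loads(raw)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coding records")
    coded = {}
    for rec in records:
        missing = REQUIRED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')!r} missing keys: {sorted(missing)}")
        # Store every dimension except the ID itself, keyed by comment ID.
        coded[rec["id"]] = {k: rec[k] for k in REQUIRED_KEYS if k != "id"}
    return coded

raw = ('[{"id":"ytc_example","responsibility":"none","reasoning":"mixed",'
       '"policy":"unclear","emotion":"indifference"}]')
codes = parse_coding_response(raw)
print(codes["ytc_example"]["emotion"])  # indifference
```

Failing fast on missing keys matters here because LLM output occasionally drops or renames a field, and a silent `KeyError` at table-rendering time is much harder to trace back to the offending record.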