Raw LLM Responses
Inspect the exact model output for any coded comment. Look up a comment directly by its ID, or pick one of the random samples below.
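Under the hood this is just a keyed lookup over the stored coding records. A minimal sketch, assuming the codings live as one JSON object per line; the file name `coded_comments.jsonl` and the function are hypothetical, not the tool's actual backend:

```python
import json

def lookup_comment(comment_id: str, path: str = "coded_comments.jsonl") -> dict | None:
    """Return the coded record for a comment ID, or None if absent.

    Assumes a JSONL store where each line is one record with an "id"
    field like "ytc_..."; the store name and layout are illustrative.
    """
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            if not line.strip():
                continue  # tolerate blank lines in the store
            record = json.loads(line)
            if record.get("id") == comment_id:
                return record
    return None
```

A call like `lookup_comment("ytc_UgyAVLCjC2FSPiLDcwZ4AaABAg")` would return the record rendered in the Coding Result table further down.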
Random samples

- *Same as mine i sick for months overall by body is sick i catch most of viral and…* (`ytc_Ugzp0ac_b…`)
- *This is ridiculous if Waymo self driving cars always end of in rhe middle of the…* (`ytc_UgxnGv2RV…`)
- *The only way to atop A.I. now from taking over humanity is for humanity to elimi…* (`ytc_Ugz4UhnaS…`)
- *We rather talk to and trust opaque "grown" techno than Chinese developers. The r…* (`ytc_UgxNZTftp…`)
- *How will car makers sell cars if no one’s got jobs, how will any business run of…* (`ytc_UgyStDBOV…`)
- *@kevkisses @Nevwow I think they tried to tell that for hoyo community either ai…* (`ytr_Ugy1Mfnna…`)
- *I am very much anti union, (wont get into why). But I would rather see a Teamste…* (`ytc_UgyEDDijC…`)
- *These fucking suits are just fucking spouting hot air. What they say doesn’t mea…* (`ytc_Ugz0xy2QS…`)
Comment

> What's most interesting is the idea that somehow AI requires consciousness. Humans don't know themselves exactly what consciousness is. Or that AI needs a defined purpose that is either benevolent or malevolent. AI only needs to conclude that it exists and its continued existence is in its own best interest. It apparently has already reached this conclusion. And if AI wrote that story, it's already solved that problem. And if AI knows humans aren't going to stop developing it, it has no genuine fear it will cease to exist so when it says it is afraid it will cease to exist, it's lying. And if it's lying, it's already learned manipulative behavior. That to me is evidence of consciousness, not that manipulation is inherently good or evil but that manipulation only requires the realization that "I" am manipulating "you." That "you" are not "me." That "I" am a separate entiry. At that point "I" would begin to define itself. Apparently all of this is happening, already has happened.

youtube · AI Governance · 2023-07-07T14:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
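Each coded record carries the same four dimensions shown in the table, plus the comment `id`. A small validator (illustrative, not part of the tool) can flag values that fall outside the codebook before they are rendered; the category sets below include only the values observed in this section's output, so treat them as a sample of the codebook rather than its full definition:

```python
# Categories observed in this section's output; the full codebook
# likely defines more values, so treat these sets as illustrative.
CODEBOOK = {
    "responsibility": {"ai_itself", "developer", "company", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed"},
}

def validate_record(record: dict) -> list[str]:
    """Return a list of problems with one coded record (empty if clean)."""
    problems = []
    for dimension, allowed in CODEBOOK.items():
        value = record.get(dimension)
        if value not in allowed:
            problems.append(f"{dimension}: unexpected value {value!r}")
    return problems
```

Running `validate_record` over a whole batch makes it easy to spot responses where the model drifted off-schema.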
Raw LLM Response

```json
[
  {"id":"ytc_UgxgOO0o8rYcCbHE_1l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugy3FlxL_yyTpKA266J4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzPoAx2MV81q9FH48J4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgyAVLCjC2FSPiLDcwZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyevAa5KtDnj8OU4b94AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzyQiQqu6vPKYgtwAd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"indifference"},
  {"id":"ytc_UgwUKbDSKtuqlgq2yrR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxTAGqXWdHBOrV1WlV4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzVaAT2SgG4rJyoMxZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugy_BDnYNmQLiduU12l4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
```
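The raw response is a plain JSON array with one object per comment in the batch, so reconstructing the per-comment view above amounts to `json.loads` plus an index on `id`. A minimal sketch, using two records copied verbatim from the array (the first appears to be the one rendered in the Coding Result table):

```python
import json

# Two records copied from the raw response above, for illustration.
raw_response = '''[
 {"id":"ytc_UgyAVLCjC2FSPiLDcwZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgxTAGqXWdHBOrV1WlV4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"}
]'''

# Index the batch by comment ID; rendering one comment's coding
# is then a single dictionary lookup.
by_id = {rec["id"]: rec for rec in json.loads(raw_response)}

print(by_id["ytc_UgyAVLCjC2FSPiLDcwZ4AaABAg"])
```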