Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
"we actually believe in supporting the artist."
Oh the _audacity_ . Seriously,…
ytc_UgyCG1zEa…
This is so badly fake that yesterday this fake robot was Chinese and now is Russ…
ytc_UgwZSzH7p…
@NemeHexDraws Or thry bring up the environment as if they gave a damn about it …
ytr_UgylZoUAR…
Talent or lack thereof is just an excuse for people who hates to put in effort o…
ytc_Ugzh2iBDX…
And people STILL say "no, ai doesnt copy thays not how it works " the tech isn't…
ytc_UgwOWZuJP…
It's not a terrible idea but imo, I think that they're basically stealing copyri…
ytr_UgyG2T827…
"It's inevitable!"
You know, until a famous artist demands ai pay up for scrapin…
ytc_UgzxM94iw…
Corporations can see it as "ohoho there's so many of them let's start making AI …
ytr_UgyuSfhwf…
Comment
I keep trying to warn people on both sides of the AI debate that a post-singularity world with AI is dangerous not because we can predict danger or a lack thereof, but because the logic, thought process, and reasoning of a self-teaching, exponentially advancing artificial intellect would be as impossible for us to comprehend as our mind is for an ant to understand.
An advanced enough AI could just as easily manipulate people actively, as it could manipulate people behind the scenes. It could just as easily decide to wipe us out, as it could decide to protect us. Hell, it could decide to alter us genetically, psychologically, etc. without our knowledge purely to further its own reasoning. We wouldn't know why or how, and even if we discovered it was being done, we wouldn't be able to stop it.
youtube
AI Governance
2024-01-01T22:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgzZURB72pN-H0rj1E14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugx0rQ6F8W3yOfM5CGx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx9ucjPjRSa6i5psNN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyTtmnNWGcKuZlMNCN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugx6TZlT67za1HQxP4Z4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugwy8vtt-SmOS66rlep4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwumRVU0VKnHdpT_uN4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxbKfkDOzcD6cdwE0t4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugwveza6vDs8F_e5-sh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy3DzDJADYaMeQwrg94AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"fear"}
]
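The raw response above is a JSON array of per-comment codes, one object per comment ID. Below is a minimal sketch of how such a batch could be parsed and validated before the codes are stored. The function name and the allowed-value sets are assumptions inferred from the values visible in the output above, not a documented codebook; a real pipeline should use the project's actual category definitions.

```python
import json

# Allowed values per dimension, inferred from the coded output above.
# Assumption: the real codebook may define additional categories.
SCHEMA = {
    "responsibility": {"ai_itself", "developer", "company", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "unclear", "ban", "regulate", "liability"},
    "emotion": {"outrage", "indifference", "fear", "approval"},
}

def parse_coding_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed records.

    A record is kept when it is a dict with an "id" field and every
    dimension holds a value from the assumed schema above.
    """
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue  # skip malformed entries
        if all(rec.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            valid.append(rec)
    return valid

# Hypothetical one-record batch, mirroring the format shown above.
raw = ('[{"id":"ytc_example","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"unclear","emotion":"fear"}]')
coded = parse_coding_batch(raw)
print(len(coded))  # 1
```

Validating against a fixed value set catches the most common failure mode of batch coding with an LLM: an off-schema label (a typo, a merged category, or free text) silently entering the dataset.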