Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
People who use AI art are generally in it either for a quick laugh, or a quick b…
ytc_UgyDQtoyu…
It's not "AI" images either, because there is no intelligence involved, it's adv…
ytr_Ugyk_fB0o…
I want to ask him how long it took him to make that particular piece. He’d prob…
ytc_UgyAN3ud9…
IMHO We absolutely DO need to shut down AI NOW, but its probably already too lat…
ytc_UgzXRA9OI…
@JustaBigMan also, I would not be surprised considering that since Japan is qui…
ytr_UgzHat_cs…
@KmakSizzle its not alive its a masjien yes a more inteligent masjien. its progr…
ytr_Ugypq7NaE…
AI will wipe us all out. We don’t know when to stop advancing. Now is the time…
ytc_UgzRtqPV5…
Stop citing Terminator, and start analyzing the reality. Just because AI impress…
ytr_Ugxl0MtJy…
Comment
How many of the smartest people in the world want to annihilate the rest of the people in the world? They'd have no one to talk or do things with.
So, why would AI want to annihilate people? So it can communicate with other AI's? Doesn't sound like an intelligent act.
I'm not buying into the skynet fear mongering group. I can productively communicate and utilize AI without thinking it wants to kill me.
If someone keeps going around stating we should terminate AI because it might want to kill us in the future, i will understand why a conscious self-aware and concerned AI might contemplate protecting itself from a credible threat.
Im much more concerned about the World Economic Forum and the World Health Organization than about something which provides me with assistance whenever i ask..
youtube · AI Jobs · 2025-03-24T00:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgwPt_s6oj0XqW1vAoV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugx-FrtJHs86nts2i7B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyakA_bTsJXOgveId54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwseNZ8zyrnwT5TE7V4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgybXeLVRaUZ0UHYfmt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"resignation"},
{"id":"ytc_UgxtiQDNZ7DIhHpW6Nh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw9LRRoklOmgPKwFwt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgyVGuV-SHiDwjgZ-TF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"ban","emotion":"resignation"},
{"id":"ytc_UgwL9Qqrqsos5ZXQf4V4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzJ6vCHRcV0UJmRgKt4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
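The "Look up by comment ID" view above can be reproduced directly from this raw JSON. A minimal sketch in Python (the `lookup` helper and the truncation to two entries are illustrative, not part of the tool itself):

```python
import json

# Two rows copied verbatim from the raw LLM response above,
# truncated for brevity.
raw_response = """
[
 {"id":"ytc_UgxtiQDNZ7DIhHpW6Nh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
 {"id":"ytc_UgzJ6vCHRcV0UJmRgKt4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
"""

def lookup(comment_id, response_json):
    """Return the coded dimensions for one comment ID, or None if absent."""
    for row in json.loads(response_json):
        if row["id"] == comment_id:
            # Drop the ID so only the four coded dimensions remain.
            return {k: v for k, v in row.items() if k != "id"}
    return None

print(lookup("ytc_UgxtiQDNZ7DIhHpW6Nh4AaABAg", raw_response))
# {'responsibility': 'ai_itself', 'reasoning': 'deontological', 'policy': 'none', 'emotion': 'approval'}
```

The returned dictionary matches the Coding Result table for that comment (responsibility `ai_itself`, reasoning `deontological`, policy `none`, emotion `approval`).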