Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
I think this snake will eat its own tail. But unfortunately, it's a large snake,…
rdc_nklqd20
Captcha Code:
if var(x) == "robot or human" and
I can't kill myself := ✅️
T…
ytc_UgxZFd91h…
may not be a popular opinion but it's something that had already been raised and…
ytc_UgzMwMwGO…
The biggest problem I see is security not jobs disappearing. AI used improperly …
ytc_UgyjpH2rJ…
You can’t. People saying the version of ChatGPT he was using didn’t have safegua…
rdc_nnjuqq1
Next time artists go on strike, they’ll be replaced by AI. Technology can be use…
ytc_UgwgylPNY…
UBI is a pipe dream
ZEROOOOO governments will give it
It’s a dream, nobody bu…
ytc_Ugz3zHM5f…
Natures law will still be same , no one can beat that not even AI…
ytc_Ugz7GzOo6…
Comment

> People need to look at this dilemma from a totally different perspective: Are the odds of AI destroying humanity is greater than Humanity destroying itself by any other means such as nuclear catastrophe or anything of that matter? On the other hand, are the odds of AI actually preserving humanity from destroying itself higher than the odds of humanity managing to survive on it on?

youtube · AI Governance · 2025-09-07T18:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugylh35WqrsE9OGrKeh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxILDl40fY120qgr014AaABAg","responsibility":"company","reasoning":"contractualist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxFPIvKy3oq3kAMOft4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugz09XTiu-w-wVE3SHF4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugy4APaWuPnIm8L9Bvp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxpdvgFLfRqRTiDfT54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxnPJjxSIxw_eQOoxB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyPh5-twXXOoqP2jSV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyXkwor3DSun0cbFwh4AaABAg","responsibility":"media","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugw_tGB4Q9aOhp9DUBd4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]
```
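A response in this shape is straightforward to check before accepting it into the coding database. The sketch below is a minimal, hypothetical validator: the allowed label sets are assumptions inferred only from the values visible on this page, not the project's actual codebook, and `validate_coding` is an illustrative name.

```python
import json

# Assumed label sets per dimension, inferred from values seen in this
# page's output; the real codebook may define more (or different) labels.
ALLOWED = {
    "responsibility": {"none", "company", "developer", "ai_itself",
                       "media", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue",
                  "contractualist", "mixed", "unclear"},
    "policy": {"none", "regulate", "liability", "unclear"},
    "emotion": {"indifference", "outrage", "mixed", "approval",
                "fear", "resignation", "unclear"},
}

def validate_coding(raw: str) -> list[dict]:
    """Parse the model's JSON array and reject out-of-codebook labels."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(
                    f"{rec.get('id')}: unexpected {dim}={rec.get(dim)!r}"
                )
    return records
```

Running the model output through a check like this catches the common failure modes of LLM coders (invented labels, missing dimensions, malformed JSON) before they silently enter the results table.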