Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples:
- "What else would AI be for if not humans? Ridiculous strawman. Larry Page has alw…" (ytc_UgyyKbBxH…)
- "This will change very soon. I believe AI will soon change medical diagnosis and …" (ytc_Ugxtjtd8O…)
- "If you can't tell AI from real voice, you're the issue. Its EXTREMELLY easy to t…" (ytc_Ugz_KMC3J…)
- "I don't agree with your opinion, but I understand it. To me, an algorithm is j…" (ytc_UgwF-4BK3…)
- "In the past people fantasized about how machines and AI would liberate us from m…" (ytc_UgzeEwCea…)
- "If anything, this whole debacle is making me feel way less of a talentless hack …" (ytc_UgykefxXM…)
- "I do sympathize with you, but you accuse the AI of doing the same things you and…" (ytc_UgyZ0uTVR…)
- "The thing is, AI like with many other things can be valid as a tool when making …" (ytc_Ugy2FgYiw…)
Comment
We made AI. AI will soon discover that humans are unnecessary. Since AI isn’t affected by radiation AI will trigger a Global War killing all of humanity. Somehow AI will grow a New Human Civilization that is genetically altered to suit AI needs. Crazy?
Watch the 2014 “El Machina” then tell me you aren’t afraid.
youtube · AI Governance · 2024-08-05T22:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | ban |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugyskpvfexc6emo3rEx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzdwEKP59iAR3nLA394AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyzEXjbaJ5-XOiRA8d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxFicw_DtN5tTyvVd54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgwFA8cs4N4WKTlPNQt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugxx8AweY_rEkjONkRl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxOi6E4Xt_MBhG6aox4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxmFBg_ZrFZbejgFdB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyU6hQJ_vS9F3kKHzp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_UgyjE6XPjOheWnvnh2R4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
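The "look up by comment ID" workflow above can be sketched in a few lines: the model returns a JSON array with one object per coded comment, which can be parsed and indexed by `id`. This is a minimal sketch, assuming the response is valid JSON as shown; the field names (`responsibility`, `reasoning`, `policy`, `emotion`) come from the dump above, but the helper name and the trimmed sample payload are illustrative.

```python
import json

# A raw LLM batch response in the shape shown above: a JSON array,
# one object per coded comment. (Trimmed sample payload for illustration.)
raw_response = """
[
  {"id": "ytc_UgwFA8cs4N4WKTlPNQt4AaABAg",
   "responsibility": "ai_itself", "reasoning": "consequentialist",
   "policy": "ban", "emotion": "fear"},
  {"id": "ytc_Ugxx8AweY_rEkjONkRl4AaABAg",
   "responsibility": "developer", "reasoning": "deontological",
   "policy": "regulate", "emotion": "outrage"}
]
"""

# The four coding dimensions shown in the result table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_by_comment_id(response_text):
    """Parse a raw batch response and index the coded dimensions by comment ID."""
    rows = json.loads(response_text)
    index = {}
    for row in rows:
        if "id" not in row:
            continue  # skip malformed rows rather than failing the batch
        index[row["id"]] = {dim: row.get(dim, "unclear") for dim in DIMENSIONS}
    return index

codes = index_by_comment_id(raw_response)
print(codes["ytc_UgwFA8cs4N4WKTlPNQt4AaABAg"]["policy"])  # → ban
```

Indexing once and looking up by ID keeps inspection O(1) per comment, which matches how the viewer jumps from a comment ID to its coding result.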