Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples:

- "i have NEVER been able to draw, no matter how long I’ve trained or how hard I’ve…" (ytc_UgzExZteE…)
- "Exactly. In house onshore devs with AI are orders of magnitude more valuable tha…" (rdc_mojrhz5)
- "I realize i might have been a too harsh, but i will say this, i have not seen a …" (ytr_Ugzk-gF3T…)
- "Good video, I've been playing with chatGPT lately and have literally killed hour…" (ytc_Ugxg1Jr7d…)
- "Nobody should be worried about self driving cars they should just invest in one …" (ytc_Ugz8DbLUn…)
- "I agree to an extent. I don't really use suno but when I tried to it gave me a r…" (ytr_UgwjPqeG1…)
- "AI is actually more dangerous than realized, me personally have experienced this…" (ytc_UgxACzGR6…)
- "We are already living in a time where there is so much misinformation. AI is goi…" (ytc_UgwAweJXs…)
Comment

> oh so the AI you programmed to do xyz things XYZ...SHOCKER. this is getting ridiculous, Ai is as dangerous as WE make it. AI are not "amoral psychopaths", they are empty vessels awaiting prompting. we need to be careful here because we are attributing human emotions and mental processes to objects and coding, objects and coding do not possess a conscience, nor could they if we wanted them to, thereby making them BLAMELESS. AI can only simulate and obey.

youtube · AI Harm Incident · 2025-08-30T04:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | industry_self |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwxEF4eTNpMcAgubv54AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzTxvOpr5u9hGX4uJ54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugwv0MrUFMec1d5AQMp4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugy29oz7TWkIiF3FSl54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxJEwJHun7w0fP5eZR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_Ugz_AHeJ_Tjojm54Ca54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugxg6tmN-K-1SoDoA3R4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugw-Kn2hpuCbf17NBYh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzNz0FXi8yv8X-vVcl4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyusJo3Cf99txA3YiZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]
```
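Looking a coding up by comment ID amounts to parsing the raw response and indexing the array by its `id` field. A minimal sketch, assuming the raw LLM response is always a JSON array of objects shaped like the sample above (the two rows below are copied from that sample; the variable names are illustrative, not part of the tool):

```python
import json

# Raw LLM response: a JSON array of coding objects, one per comment.
# These two rows are taken verbatim from the sample response above.
raw = '''[
{"id":"ytc_UgwxEF4eTNpMcAgubv54AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxJEwJHun7w0fP5eZR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"approval"}
]'''

# Index the codings by comment ID for constant-time lookup.
codings = {row["id"]: row for row in json.loads(raw)}

coding = codings["ytc_UgxJEwJHun7w0fP5eZR4AaABAg"]
print(coding["responsibility"], coding["emotion"])  # prints "developer approval"
```

The same index would let the "Look up by comment ID" box resolve any of the sample IDs above, provided the full (untruncated) ID is used as the key.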