Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- The MIT paper was flawed in that the companies that "succeeded" in using AI only… (ytc_Ugw4KJrnw…)
- The fact that AI does backend services easier is proof that humans have miscalcu… (ytc_UgwWazDwd…)
- I’d rather look at 1000 pictures of doodles on school papers or homework or note… (ytc_UgxgRYgvV…)
- This is so fricking cool. I always had a dream to work at an animation studio, b… (ytc_UgwyJ2LZM…)
- He have speziel ChatGPT 😂😂😂😂 my ChatGPT says it’s easy possible for humans or AI… (ytc_UgwcmMpsh…)
- these bitches are slow as fuck. if i’m gonna get murdered by a robot, let it be … (ytc_Ugy8hc5Y6…)
- using ai is the biggest roast to your self specifically if youre calling it art.… (ytc_Ugx9tab23…)
- When you say that all jobs will be wiped out, your implying that all people will… (ytc_Ugz6E7tiw…)
Comment
"Do you think we're in danger of that happening yet?"
Yes. We're already able to create deep fakes of other people and use A.I. to replicate their voices and essentially steal their identities and create filters to make us look like entirely different people. We can also use A.I. to look for weaknesses in secure databases and also successfully ask a.i. to write codes to exploit those databases potentially jeopardizing anything from personal privacy and safety to national security. We should have taken steps to ensure that A.I. was used responsibly a long time ago.
youtube · AI Harm Incident · 2024-03-08T18:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugy5W73yNe2ZMnGT4sh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyneNTDAKP7gE9cUyp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwxlQDnrT2NQCLpjix4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxdsBbzM48iXnm-sTJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgxHneIelnn3zUEUBZ54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugz7i_Rj-ItOFafoABF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgzF3_AESDU16d65MRB4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugydgkr-X6DMY4g_xqt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugxc-vr1hR03u7HYYKx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw0FIlSvpdhDxq4dHx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"}
]
```
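The "look up by comment ID" view above can be reproduced from a raw response like this one. Below is a minimal sketch: it parses the JSON array, checks each record's dimension values against the categories observed in this dump (an assumption — the real codebook may define more categories than appear here), and builds a dict keyed by comment ID. The function name `index_by_comment_id` is hypothetical.

```python
import json

# Allowed values per coding dimension, inferred only from the records
# shown above (assumption: the actual codebook may be larger).
DIMENSIONS = {
    "responsibility": {"none", "ai_itself", "developer"},
    "reasoning": {"unclear", "consequentialist", "mixed", "deontological", "virtue"},
    "policy": {"none", "unclear", "ban", "regulate"},
    "emotion": {"approval", "fear", "mixed", "outrage", "indifference", "resignation"},
}

def index_by_comment_id(raw_response: str) -> dict:
    """Parse a raw LLM response (a JSON array of coded comments)
    and return a lookup table keyed by comment ID."""
    records = json.loads(raw_response)
    index = {}
    for rec in records:
        # Reject records whose dimension values fall outside the codebook.
        for dim, allowed in DIMENSIONS.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: unexpected {dim} value {rec.get(dim)!r}")
        index[rec["id"]] = {dim: rec[dim] for dim in DIMENSIONS}
    return index

raw = (
    '[{"id":"ytc_UgyneNTDAKP7gE9cUyp4AaABAg","responsibility":"ai_itself",'
    '"reasoning":"consequentialist","policy":"unclear","emotion":"fear"}]'
)
coded = index_by_comment_id(raw)
print(coded["ytc_UgyneNTDAKP7gE9cUyp4AaABAg"]["emotion"])  # fear
```

Validating against the codebook at parse time surfaces malformed or hallucinated labels immediately, rather than letting them flow into downstream tallies.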