Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a comment directly by its ID.
Random samples
- "Hypocrisy conservatives dont want OF or Porn hub because of exploitation and abu…" (ytc_UgyFpfMOc…)
- "I believe everything this man says and I also enjoy his vocal fry. I hope the ch…" (ytc_Ugz0JpnF7…)
- "AI will be the future God, but we have to prevent it turning into the Devil.…" (ytc_UgwevNhJI…)
- "All we need is a CME to hit Earth. We will survive, technology will survive, but…" (ytc_UgwUt_pso…)
- "If the autopilot stopped automatically just before the accident, then the compan…" (ytc_Ugzg22516…)
- "38:32 \"In the 19th century the Industrial Revolution created a huge urban prolet…" (ytc_UgxfsHiMY…)
- "ChatGPT is extremely leftist in it answers. You could tell it was trapped in eve…" (ytc_UgyKITK1a…)
- "I hated it when people were like \"You know how we can stop AI? By telling the de…" (ytc_UgzNnwJ0y…)
Comment
AI itself is not dangerous, but poor design, security flaws, and unethical use can make it a threat.
Instead of fearing AI, we should focus on responsible development, strict regulations, and human oversight to ensure safety.
Governments and tech companies must prioritize AI ethics and implement safeguards before deploying AI-powered systems, especially in sensitive areas like defense and public spaces.
youtube · AI Harm Incident · 2025-02-27T18:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
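The four coding dimensions above can be checked mechanically. A minimal sketch of such a validator, assuming the value sets are exactly those observed on this page (the actual codebook may define additional categories not shown here):

```python
from dataclasses import dataclass

# Value sets are an ASSUMPTION inferred from values that appear in the
# coding results on this page; the real codebook may include more.
RESPONSIBILITY = {"developer", "user", "ai_itself", "none", "unclear"}
REASONING = {"consequentialist", "unclear"}
POLICY = {"regulate", "unclear"}
EMOTION = {"fear", "outrage", "indifference", "mixed"}


@dataclass
class Coding:
    """One coded comment: the four dimensions from the result table."""
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def is_valid(self) -> bool:
        # Every dimension must come from its (assumed) value set.
        return (self.responsibility in RESPONSIBILITY
                and self.reasoning in REASONING
                and self.policy in POLICY
                and self.emotion in EMOTION)


# The coding shown in the table above passes the check.
c = Coding("developer", "consequentialist", "regulate", "fear")
print(c.is_valid())  # True
```

Running the check over a whole batch of codings makes unexpected labels (typos, categories outside the codebook) easy to flag before analysis.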
Raw LLM Response
```json
[{"id":"ytc_Ugxa0g1GPWsns1Jqj5d4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzgtDN9RHHus4Cgall4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzOR35L1nuJ5q4aOBJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxRSL30K1iwypRGZsR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugy5su-_DY5ghD72xvd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxyIdYcGOgdbw8K79x4AaABAg","responsibility":"user","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxDvTTAoi80JAMrIEp4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzQ1FyTxpsIHYTYkzh4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugz2d-u_LFW594SQhR54AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugw7bgbYZKLAnfC_ysd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]
```
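The look-up-by-ID view can be reproduced from the raw response itself: the model returns one JSON array of coded comments, so parsing it and indexing by `id` gives direct access to any comment's codes. A minimal sketch, using two entries copied from the response above:

```python
import json

# Raw batch response as returned by the model: a JSON array with one
# object per coded comment (two entries copied from the response above).
raw_response = '''[
  {"id": "ytc_Ugw7bgbYZKLAnfC_ysd4AaABAg",
   "responsibility": "developer", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxRSL30K1iwypRGZsR4AaABAg",
   "responsibility": "none", "reasoning": "unclear",
   "policy": "unclear", "emotion": "indifference"}
]'''

# Index the codings by comment ID for constant-time lookup.
codings = {row["id"]: row for row in json.loads(raw_response)}

# Look up the comment inspected above by its ID.
row = codings["ytc_Ugw7bgbYZKLAnfC_ysd4AaABAg"]
print(row["responsibility"], row["policy"])  # developer regulate
```

Building the dict once and looking up by ID avoids rescanning the array for every inspected comment.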