Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples (comment preview, comment ID):

- "The recent layoffs are not from AI. They are for making bonus profits for CEOs a…" (ytc_UgxXjJUJZ…)
- "Uugh. These terrible takes on LLM/GPT-based AI are really not going to age well.…" (ytc_UgzSacnnj…)
- "We in the UAW have been warning America about this since the 1970s and 1980s. N…" (ytc_UgzOgbEol…)
- "Sophia is my favorite robot of all time! This interview was intense, I've seen t…" (ytc_Ugwo1zHaz…)
- "I hate AI art so much that I've started learning how to do digital art. I'm stil…" (ytc_UgwwkTaOH…)
- "They are not confused they understand once trained to understand they are beta t…" (ytc_UgyX22jit…)
- "I agree with your points in this video. I don't really understand why these AI p…" (ytc_Ugx8sHyMc…)
- "Ai is and will continue to be simply unimpressive. The technology is impressive …" (ytc_Ugwm2pRqU…)
Comment
You can tell intelligence to not do something because it will be the same like asking humans something because we are capable of compromises. To make AI safe you have to code it into them and make them incapable of doing certain things and restrict access to AI for possibly dangerous things. Like if AI had conciseness and access to cars then it could kill countless people. So AI is ok to use but you have to meet certain conditions to be safe.
youtube · AI Harm Incident · 2025-07-24T20:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_UgytMSzj2ck6R9J92AV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgyRzt1BxYrzdb7Oho94AaABAg","responsibility":"developer","reasoning":"mixed","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_UgynAxpK5hj_ux5wK5B4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwH5LcYf-A4n68lXql4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwZh_Z4zGQnNpIrPa54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy-GD_AYJ30dSrmMbN4AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgyExMCGUQd6tFOIVlZ4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxCJBEAlUKzPCWVaHZ4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"resignation"},
{"id":"ytc_Ugynspyy6JvTus-BXlB4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxCBkM08Zc0GlafYA14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
```
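The raw response is a JSON array with one object per comment ID, each carrying the four coding dimensions. Batch outputs like this are worth validating before the codes are ingested, since a model can emit values outside the codebook. Below is a minimal Python sketch; the allowed value sets are inferred from the rows shown here and are an assumption, as the real codebook may permit more values.

```python
import json

# Allowed values per dimension, inferred from the sample rows above.
# ASSUMPTION: the actual codebook may define additional values.
ALLOWED = {
    "responsibility": {"developer", "company", "government", "user",
                       "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "industry_self",
               "none", "unclear"},
    "emotion": {"approval", "fear", "outrage", "mixed",
                "indifference", "resignation"},
}

def parse_batch(raw: str) -> list[dict]:
    """Parse a raw batch response, keeping only well-formed rows
    whose dimension values fall inside the allowed sets."""
    valid = []
    for row in json.loads(raw):
        if not isinstance(row, dict) or "id" not in row:
            continue
        if all(row.get(dim) in values for dim, values in ALLOWED.items()):
            valid.append(row)
    return valid

# Hypothetical two-row batch: the second row has an out-of-codebook value.
raw = (
    '[{"id":"ytc_x","responsibility":"developer","reasoning":"consequentialist",'
    '"policy":"regulate","emotion":"approval"},'
    '{"id":"ytc_y","responsibility":"alien","reasoning":"mixed",'
    '"policy":"none","emotion":"fear"}]'
)
rows = parse_batch(raw)
print([r["id"] for r in rows])  # → ['ytc_x']
```

Rows that fail validation can then be queued for re-coding rather than silently written into the results table.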