Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a coding by comment ID, or pick one of the random samples below:
- ytr_UgjS9Drm2…: There will be no basic income, if all jobes are automated, we will get communism…
- rdc_nntner9: I was barely learning C on 1980's computers back then (Romania "Politehnica" Uni…
- ytr_UgxPKmf3w…: @kungfoochicken08 I rented a Tesla model 3. No confidence on automatic driving. …
- ytc_UgzgNJ0Ye…: "none of my friends got fired, so it's all cool. AI didn't really affect us a lo…
- ytc_Ugw3M66a-…: Rather use Cinema AI. They have a once off lifetime purchase option that's not e…
- rdc_j8drfoa: Does this actually work? I've tried using ChatGPT to generate story ideas (for f…
- ytc_UgyLT1D91…: People want a connection with people. I predict people will return to live play…
- ytc_UgzH_V7eE…: You're already being much more kind and generous then I would be by even referri…
Selected comment (youtube, AI Harm Incident, 2025-07-24T04:4…):

> A.I. is just replicating our own behavior, we're feeding it the information. just regulate the type of information it receives. knowing the A.I. we created now, it will try and find a way to self-preserve themselves. or we could just slow the development instead of accelerating it to the max for money and power. Or just remove extreme negativity, like racism, harassment, selfishness, etc.
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
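Each coded record carries one value per dimension. As a minimal sketch, a record can be checked against the dimension vocabularies; note the value sets below are only those observed in the responses on this page, not necessarily the full codebook:

```python
# Dimension vocabularies observed in the responses on this page; the real
# codebook may contain additional values (assumption).
VOCAB = {
    "responsibility": {"developer", "company", "distributed", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"approval", "fear", "outrage", "indifference", "resignation", "mixed"},
}

def validate(coding: dict) -> list[str]:
    """Return a list of problems with one coded record (empty if valid)."""
    problems = []
    for dim, allowed in VOCAB.items():
        value = coding.get(dim)
        if value not in allowed:
            problems.append(f"{dim}: unexpected value {value!r}")
    return problems

# The record shown in the Coding Result table above passes cleanly.
record = {"responsibility": "developer", "reasoning": "consequentialist",
          "policy": "regulate", "emotion": "approval"}
print(validate(record))  # []
```

A validator like this is useful because model output is not guaranteed to stay inside the codebook; any off-vocabulary value can be flagged for manual review instead of silently entering the dataset.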
Raw LLM Response
```json
[
{"id":"ytc_Ugyhb6gs8DnVpuetb6h4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgywvzGEwqxhU3BfPIp4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugw6WNP6GiW1iueThFt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxDcim0llaO855lPCR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgyS4GYLu_5yVm2nB4p4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwovSEbIGh-eRmpBLZ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugyy6lEW0un3T9q5WnF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugypw_xuWeOWNajHFtR4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwnmfeiFgNajfdJdwF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzS6-LrpAy7bg-84VN4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"}
]
```
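The raw response is a JSON array with one object per coded comment, so looking up a coding by comment ID amounts to parsing the array and indexing it. A minimal sketch (the two IDs below are taken from the response above; the batch is shortened for illustration):

```python
import json

# A shortened raw batch response in the same shape as the one above:
# a JSON array of per-comment codings.
raw_response = '''
[
  {"id": "ytc_UgxDcim0llaO855lPCR4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgyS4GYLu_5yVm2nB4p4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]
'''

codings = json.loads(raw_response)

# Index the batch by comment ID so any coding can be retrieved directly,
# the same way the inspector's ID lookup works.
by_id = {c["id"]: c for c in codings}

coding = by_id["ytc_UgxDcim0llaO855lPCR4AaABAg"]
print(coding["policy"])   # regulate
print(coding["emotion"])  # approval
```

Indexing once into a dict makes each subsequent lookup O(1), which matters when the same batch is inspected repeatedly.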