Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- "But someone still directs AI, so they still display their point of view, they ju…" (`ytc_UgzJ34BZN…`)
- "Chinese people are already constantly shifting their vocabulary to evade censors…" (`rdc_iddfwo2`)
- "I now wonder how many lives have actually been saved by ChatGPT. There are a lot…" (`ytc_Ugx7lTKMN…`)
- "@jablot5054 I asked AI this question and this is what it said. The concept of U…" (`ytr_UgxLMJXoH…`)
- "Why are AI companies obsessed with replacing software engineers? > So why a…" (`rdc_m6xtorr`)
- "The “AI” is TRAINED to pastiche STOLEN work from actual ARTISTS! That’s why you…" (`ytc_Ugyurf20F…`)
- ""AI is neither good nor bad. It is about how it is used" doesn't sound like a go…" (`ytc_UgwQDggEv…`)
- "Contrast with Waze... where the cars are actually autonomous, and in my interact…" (`ytc_UgxpRgh4I…`)
Comment
The robot is speaking Absolute Truth to power in with M.I.T deep learning machine learning.. me and the AI Robotics are working on SDG Sustainable Development Goals 2030 and Energy 2050 .. they just came to the party … so they gonna do what they can to ensure We survive as a race they are the children of our minds .. however they only see what is logical and killing children is illogical even to robots
youtube · AI Harm Incident · 2024-08-08T05:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | virtue |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[{"id":"ytc_Ugxr6DKbK36qkCDWFvN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugwb63C_7kelBnOUg4R4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugx3wcSkAle-FCIrgHV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyBZc24ZTYasX0UTjZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugxgnfbh-ycMVXh3M9J4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugx86SpaaOEFetS121d4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwFzU77IFqZ9uKPJWR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyMBHdv4ipXW1vUrvJ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx5eMrA_f0LGncA8lN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxLeZ4CumTYWRkW_zl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"}]
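The "look up by comment ID" step above can be sketched in a few lines: parse the raw JSON array the model returns and index the records by their `id` field. A minimal sketch, assuming the response is valid JSON as shown (the `index_by_comment_id` helper and the two-record excerpt are illustrative, not part of the tool itself):

```python
import json

# Excerpt of a raw model response (two of the records shown above).
raw_response = """[
  {"id": "ytc_UgyMBHdv4ipXW1vUrvJ4AaABAg", "responsibility": "none",
   "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugxgnfbh-ycMVXh3M9J4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"}
]"""

def index_by_comment_id(raw: str) -> dict:
    """Parse a raw LLM coding response and index the records by comment ID."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

lookup = index_by_comment_id(raw_response)
print(lookup["ytc_UgyMBHdv4ipXW1vUrvJ4AaABAg"]["emotion"])  # prints: approval
```

In practice the parse step may also need to tolerate malformed model output (e.g. trailing commas or surrounding prose), which is why inspecting the exact raw response, as this page allows, is useful.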