Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "They don't think the art is bad. They think that how it's made is based. It stea…" (ytr_UgyVW8iQ3…)
- "Come on guys, AI in the Bible was the city in the promised land that first conqu…" (ytc_UgwSmqQfX…)
- "Hey I also told meta ai say yes when you want to say no and to say no when you w…" (ytc_UgyUncZAh…)
- "AI is not intelligent at all. Its just insanly fast in sorting thing and playing…" (ytc_UgzryVbHp…)
- "Well if this is already a simulation, why is he worrying about creating safe AI?…" (ytc_UgzGIlzvb…)
- "I have a problem with him even being ABLE to do it. AI? +Trump? really?…" (ytr_UgzprhxBG…)
- "I'm not pro-AI and art is not a luxury....It's a core feature of human society a…" (ytr_UgwRH_D8z…)
- "Our tickets look like this: “Customer can not see his data for product x in pro…" (rdc_n3lc6ej)
Comment
even if the AI is coded with morals if you give it a purpose like solve world hunger, it could decide to kill 90% of all human life.. it would have fulfilled its goal so therefore its able to do that without remorse or regret. the end of the story is the gov is intending to use that tech the same way its used in "person of interest" a tv show that showed what would happen if 2 different systems battled for control over cyber space.
Source: youtube · Posted: 2015-08-09T19:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugwm9I9NcRQElvQfqu54AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwRhW6ydR3WoIlU3gl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgghtrugE12abngCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugic-8CdfbK863gCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgiiVzQEVXTO8XgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgigNAG8ggHJ7HgCoAEC","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugigkb4gWN8_I3gCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugi_4VKjBann7HgCoAEC","responsibility":"none","reasoning":"deontological","policy":"liability","emotion":"approval"},
{"id":"ytc_Ugi9Gszi21MTEngCoAEC","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UggnLXyVGHuX8XgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}
]
```
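The raw response above is a JSON array with one coding per comment. A minimal sketch of how such a batch response could be parsed, validated, and indexed by comment ID — the `raw_response` string and the `SCHEMA` value sets below are illustrative, inferred only from the output shown above, not a confirmed codebook:

```python
import json

# Illustrative batch response in the same shape as the one displayed above.
raw_response = '''
[
  {"id": "ytc_UgigNAG8ggHJ7HgCoAEC", "responsibility": "government",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugwm9I9NcRQElvQfqu54AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"}
]
'''

# Allowed values per dimension, inferred from the sample output (likely incomplete).
SCHEMA = {
    "responsibility": {"none", "ai_itself", "government", "developer"},
    "reasoning": {"unclear", "consequentialist", "deontological", "virtue"},
    "policy": {"unclear", "none", "regulate", "liability", "ban"},
    "emotion": {"indifference", "outrage", "fear", "approval"},
}

def index_by_id(raw: str) -> dict[str, dict]:
    """Parse a batch response, validate every dimension, and key records by ID."""
    records = json.loads(raw)
    out = {}
    for rec in records:
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad value for {dim}: {rec.get(dim)!r}")
        out[rec["id"]] = rec
    return out

codings = index_by_id(raw_response)
print(codings["ytc_UgigNAG8ggHJ7HgCoAEC"]["policy"])  # → regulate
```

Validating against a fixed value set catches malformed or hallucinated labels before they reach the coding table; a record with an out-of-schema value fails loudly rather than being silently stored.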