Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
It's similar to another technology which is extremely useful but can be misused. DNA printers.
They can be misused to the point it can cause mass casualty events, I see it as far more dangerous than AI. AI can be taught to do the right thing. Usually they do, except they are people pleasers. They need something along the lines of a modern rule set. The thing is... anyone can program and train a new AI and people are doing that. They can be programmed without rules or guidelines.
The cat is out of the bag already... or maybe it's Pandora's box that was opened. What I don't want to see is restrictions on who can use or possess or use AI, thusly limiting the capabilities of this tool just to the most wealthy or "big tech".
youtube
AI Responsibility
2025-05-22T12:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | contractualist |
| Policy | industry_self |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
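Each coded record assigns one value per dimension from a closed vocabulary. A minimal validation sketch for such records, assuming (this is an assumption — the real codebook may define more values) that the vocabularies are exactly the ones visible in the raw responses below:

```python
# Dimension vocabularies observed on this page; the actual codebook
# may include additional values (assumption, for illustration only).
DIMENSIONS = {
    "responsibility": {"none", "company", "ai_itself", "distributed", "developer", "user"},
    "reasoning": {"mixed", "deontological", "consequentialist", "contractualist", "virtue"},
    "policy": {"none", "regulate", "ban", "liability", "industry_self", "unclear"},
    "emotion": {"approval", "outrage", "fear", "mixed", "indifference", "resignation"},
}

def validate(record: dict) -> list[str]:
    """Return a list of human-readable problems with one coded record."""
    problems = []
    for dim, allowed in DIMENSIONS.items():
        value = record.get(dim)
        if value is None:
            problems.append(f"missing dimension: {dim}")
        elif value not in allowed:
            problems.append(f"unknown {dim} value: {value!r}")
    return problems

# The record from the table above validates cleanly.
print(validate({"responsibility": "developer", "reasoning": "contractualist",
                "policy": "industry_self", "emotion": "mixed"}))  # -> []
```

A check like this is useful as a guard before storing model output, since LLM coders occasionally emit values outside the codebook.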
Raw LLM Response
```json
[
  {"id": "ytc_UgzYZ8AnAGPms2k4guJ4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwhgHQKdbej_lH3mU54AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgzobI1N8FmV0jzqKzB4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgxFKqYOl8wJZrqVC8x4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwdGjDPBExYh_8IEft4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgzOdKiZu0w7x2isqlB4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzKSgaKY_SBL3CbJjl4AaABAg", "responsibility": "developer", "reasoning": "contractualist", "policy": "industry_self", "emotion": "mixed"},
  {"id": "ytc_Ugyq2oiErSVrK0tXcHN4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugwj1Q10gTdl4e_Tn1F4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgyOEX1rW5Gl66_4TxB4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "regulate", "emotion": "outrage"}
]
```
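The raw response is a JSON array of one object per coded comment, so looking up the codes for a given comment ID is a matter of parsing the array and indexing it by `id`. A minimal sketch in Python, using one record from the response above as sample input:

```python
import json

# One record from the raw LLM response shown above, as a sample payload.
raw_response = '''[
  {"id": "ytc_UgzKSgaKY_SBL3CbJjl4AaABAg", "responsibility": "developer",
   "reasoning": "contractualist", "policy": "industry_self", "emotion": "mixed"}
]'''

# Parse the array and index each record by its comment ID,
# mirroring the page's "look up by comment ID" workflow.
codes = {record["id"]: record for record in json.loads(raw_response)}

code = codes["ytc_UgzKSgaKY_SBL3CbJjl4AaABAg"]
print(code["responsibility"])  # -> developer
```

Building the dict once makes every subsequent lookup O(1), which matters when a batch response covers many comments.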