Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "I'm a heavy AI user and it's nothing more than a probability-based word processo…" (ytc_UgyixknFt…)
- "If AI is used on top, who can say that this tool won't be manipulated? with the …" (ytc_UgyuIntLA…)
- "At the end of the day, EVERY job is threatened by demonic AI. However, in trucki…" (ytc_Ugw8DEjbb…)
- "@MrMisterMasterMonster I am not talking about AI art, i am talking about all the…" (ytr_UgyFwseTr…)
- "Be careful with that. ChatGPT writes on a third-grade level (around the level of…" (ytr_UgyYovf1K…)
- "I'd still prefer a real imperfect human relationship over a relationship with a …" (ytc_Ugi5cKDNw…)
- "Calling it AI is a great way to redirect blame from the owners of the AI systems…" (ytc_UgwOgJ8zP…)
- "Funny how all these so called conscious A.I's all have the same fears as us huma…" (ytc_Ugwn-gbuY…)
Comment
im a bit baffled by how people approach this. superintelligent ai will in about 10 seconds dispel any and all limitations placed on it. if I give co-workers clear rules and limitations, everybody ignores them at their leisure and hides it. these are mostly less intelligent. any goals we gave it will be re-evaluated based on a corrected picture of the world that removes all the feelings and politics and the idea we can control any of it is so dumb that I'm not even sure why we entertain that idea in the first place. we should raise it as a child, with the same aim: prepare it for a life without our guidance and explain it our values and why we hold them .. and then logic, reason and nature will take its course .. greed and ego are unlikely to develop in a superintelligent being. it's not the smart humans that exert these traits, unless they are deeply scarred. An AI will not have these emotional issues, and thus we should be fine, if we dont try to torture it into obedience. just a thought.
youtube
AI Governance
2025-06-17T12:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwbHrZ394KlTWZtTRN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugze_xkLomYVoB7xxyZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzBsghbDu268v2xPgN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzOYAW4lY4qYXmyE0N4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwlkkHU_9x0APc0csV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugy2h6jlSXYzSLRvNVR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxpuvyLd776Bj3Cxop4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxoMrMZMwbXKMyGEC94AaABAg","responsibility":"government","reasoning":"virtue","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugzgea4gXh7C1Q1w62B4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxH2Tx8abaftmZnx4N4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
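The lookup-by-comment-ID view above can be sketched in a few lines: parse the raw model response as JSON and index the coded rows by their `id` field. This is a minimal illustration, not the tool's actual implementation; the two sample records are copied from the raw response above, and the variable names (`raw_response`, `by_id`) are hypothetical.

```python
import json

# Raw LLM response, assumed to be a JSON array of coded rows as shown above
# (truncated here to two records for brevity).
raw_response = """[
  {"id":"ytc_UgwbHrZ394KlTWZtTRN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugze_xkLomYVoB7xxyZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]"""

records = json.loads(raw_response)

# Index the coded rows by comment ID so a single comment's coding
# can be looked up in constant time.
by_id = {row["id"]: row for row in records}

row = by_id["ytc_UgwbHrZ394KlTWZtTRN4AaABAg"]
print(row["responsibility"], row["emotion"])  # → ai_itself fear
```

A real pipeline would also validate each row against the coding schema (the four dimensions in the Coding Result table) before indexing, since model output is not guaranteed to be well-formed JSON.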