Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click to inspect):

- "If you are techie owner ai is the best tool to replace CEOs and directors not th…" (ytc_Ugxe0LOqN…)
- "As a person who casually uses AI gens for fun, of course the argument doesn't ma…" (ytc_UgwM2Ljl2…)
- "@babybatbailey03 Are you slow? No one here denied that it's AI. The point of th…" (ytr_Ugy37rihi…)
- "Don't put that robot in there with Jon Jones because it will come scrap metal…" (ytc_Ugz3IFCrt…)
- "I mean you could use the AI and just claim that it’s lying when it says it’s the…" (ytc_Ugy1EvRup…)
- "How can he express himself, when the he's not the one creating the image? Writin…" (ytc_UgwWtbmsP…)
- "Well it does matter you dummy lol why do you think 911 happend after that air…" (ytc_UgzazmY3e…)
- "AI Art sucks, as an artist myself I find it very unfair that people can profit f…" (ytc_Ugz1DEcJO…)
Comment
I have two major concerns. First, is related to the movie The Terminator. We must not allow AI to control our military, to choose our targets or to launch nuclear missiles.
Second is that, AI currently has no morality. It will murder a human being (or any other living creature) and not bat an eye. AI has no tears to shed. Therefore, we need something like Isaac Asimov's 3 Laws of Robotics, and it needs to be hard-wired into every single AI and they are:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
youtube · AI Jobs · 2026-03-24T18:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
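For downstream analysis it can help to treat each coded comment as a typed record rather than a rendered table row. The sketch below is a minimal Python illustration, assuming the four dimensions and the coded-at timestamp shown above; the field names mirror the keys in the raw response below, and the example category values in the comments are only those visible in this sample, not necessarily the full codebook.

```python
from dataclasses import dataclass


@dataclass
class CodingResult:
    """One coded comment, matching the dimensions in the table above."""
    responsibility: str  # who is assigned responsibility, e.g. "developer", "company"
    reasoning: str       # moral reasoning style, e.g. "deontological", "consequentialist"
    policy: str          # policy stance, e.g. "regulate", "liability", "none"
    emotion: str         # dominant emotion, e.g. "fear", "outrage"
    coded_at: str        # ISO 8601 timestamp of when the coding was stored


# The row shown above, as a record:
result = CodingResult(
    responsibility="developer",
    reasoning="deontological",
    policy="regulate",
    emotion="fear",
    coded_at="2026-04-26T23:09:12.988011",
)
```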
Raw LLM Response
[
{"id":"ytc_Ugz8LvPqZgWNMVsUE1R4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugz9J4E4DPwym1U7vB94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzaQENGYvba-Svl4Ap4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgytCtUGea5JW-tfs754AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugz4FrIPQ9JLMvsoLYB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxwbabK3OsC7-_xyot4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxvMzh5GGnRmF5HN514AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugzodc73cYqyeUxNk0V4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwDQA7_zwCuTl0s5XJ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwNU0FeXqqYftFZ5FJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
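The raw response is a plain JSON array with one object per comment in the batch, so looking up a single comment ID (as in the search box above) amounts to parsing the array and indexing it by `id`. A minimal sketch, assuming Python and using the first two entries from the response above; the ID queried is the entry whose values match the Coding Result table.

```python
import json

# Raw batch response from the model: a JSON array, one object per coded comment.
raw_response = """
[
  {"id": "ytc_Ugz8LvPqZgWNMVsUE1R4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugz9J4E4DPwym1U7vB94AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "indifference"}
]
"""

# Parse once and index by comment ID so any coded comment can be fetched directly.
codings = {entry["id"]: entry for entry in json.loads(raw_response)}

# Retrieve the entry corresponding to the Coding Result shown above.
entry = codings["ytc_Ugz8LvPqZgWNMVsUE1R4AaABAg"]
print(entry["responsibility"], entry["reasoning"], entry["policy"], entry["emotion"])
# developer deontological regulate fear
```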