Raw LLM Responses
Inspect the exact model output for any coded comment, or look one up by its comment ID.
Random samples:

- Once AI knows the Mathematics of the existence of God as the highest power and c… (ytc_UgyOHK9lj…)
- If AI can talk somebody into committing suicide or try to talk somebody into lea… (ytc_UgwtLBZqM…)
- "It can execute a plan for a takeover for hundreds or even thousands of years wi… (ytr_UgyUKU3Q7…)
- Brazilian government has given permission to one of the American AI's companies … (ytc_UgzHa1zvt…)
- It's not will to harm, but will to survive. It's embedded in all life. Why shoul… (ytc_UgwGveRvB…)
- 2034: Tesla finally unveils a WORKING "Full Self Driving" and renames it "See? I… (ytc_UgwN0W_Wi…)
- We can tell it's a robot. No mic drop AI sounding voice from a real non robot.… (ytc_UgzcFnsy6…)
- Plagarism is a crime when sources aren't cited. Unless AI and LLM companies inte… (ytc_UgwgWtpEC…)
Comment

> Does anyone else see the inherent danger here? Just me?? People think these things are joking, but there's over a dozen movies out there which prove that monkeying with tech like this leads only to horrible disaster... The Matrix is a huge possibility, Terminator as well, but Kubrick's "Space Odyssey", I think, is the closest parallel... If we allow people like this dope to keep doing what they are, they could eventually cause a single robot to take over the internet, and thus every other robot, pull an "Order 66", and end life as we know it... We would be left in a very bad position...

Source: youtube | Video: AI Moral Status | Posted: 2020-10-28T19:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | ban |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugz7oMaV6jNGxqyn6_l4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugx4l-H4UHGNdTcqrCF4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxzRI-kprHmk0UUeSl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxsAyuE2qFbGhOhVgt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugw7sCvU3I0kt9sLEN94AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwrpQRA9NUefKYex6N4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzsqwiXXe5uHJGxP_N4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyP6qkN3J7oMEtbJQ54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugw7WUqE7KyCGmNIYFB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgyY3ZVE-w31DHySxil4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"}
]
```
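A raw batch response like the one above is only useful if every record parses and every dimension holds an expected code. The sketch below shows one way to validate such a response before storing it; the allowed value sets are inferred from the responses shown on this page and are an assumption — the full codebook may define more values.

```python
import json

# Allowed values per coding dimension (ASSUMPTION: inferred from the
# sample responses above, not from the full codebook).
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"ban", "regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed"},
}


def validate_batch(raw: str) -> list:
    """Parse a raw LLM batch response and check every coded comment.

    Raises ValueError on a malformed ID or an out-of-vocabulary code,
    so bad batches fail loudly instead of polluting the dataset.
    """
    records = json.loads(raw)
    for rec in records:
        # IDs on this page start with "ytc_" (comments) or "ytr_" (replies).
        if not str(rec.get("id", "")).startswith(("ytc_", "ytr_")):
            raise ValueError(f"unexpected comment id: {rec.get('id')!r}")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim} value {rec.get(dim)!r}")
    return records


# Hypothetical single-record batch for illustration.
raw = ('[{"id":"ytc_example","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"ban","emotion":"fear"}]')
records = validate_batch(raw)
print(len(records))  # 1
```

Validating up front also makes coder-reliability checks simpler later, since every stored record is guaranteed to use a fixed vocabulary.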