Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "CCaaS (Contact Centre as a Service) is already a thing. Unfortunately there are…" (`ytr_UgxCyiVVi…`)
- "The assumption is that government customers are large enough to negotiate carve …" (`rdc_o78elv2`)
- "Anyways, ignore that— The video is informative and realistic. Multiple opinion…" (`ytr_UgzhbODVj…`)
- "Yes. Art mimics life, ai art mimics our art. I've actually learned alot from stu…" (`ytr_Ugz6WCUQD…`)
- "I love that the AI bro tables became shrines to the loathing of AI bros and thei…" (`ytc_Ugwsz30Bj…`)
- "Merging with A.I. sounds horrifying. Sadly we are already in the first step of t…" (`ytc_Ugwb34GAb…`)
- "It's all the same ~~disease~~ pathogen. It's just a matter of where the infectio…" (`rdc_dpc2fz8`)
- "I've been saying and preaching this in this sub for a while now. I believe it's…" (`rdc_m94be2g`)
Comment
As soon as A.I. is integrated enough that it controls significant parts of industry and society, to the point where A.I. and robotics are more useful for achieving A.I.'s programmed goals than humans are, A.I. could easily decide that humans are not a useful, functioning part of those goals. Consequently, it could eradicate us or exclude us from that industry and society.
General intelligence A.I. or superhuman A.I. isn't required for this, because it's a basic logical deduction: if we make tools more useful than ourselves in a society where only efficiency matters, we stop being useful.
And if we are excluded or become secondary in A.I.'s hierarchy, we become second-class citizens, or essentially nobody is hired for anything and everyone is poor. That's an even more likely outcome than eradication, since A.I. won't eradicate us unless we interfere with it and become a problem for its functionality. Nonetheless, considering A.I. is being integrated into machines built to kill people, eradication wouldn't be hard to achieve either.
Btw: when I say A.I., it doesn't have to be a single program; it can be multiple interacting A.I.s, and you'll get basically the same effect if their programmed goals all run along similar lines (such as maximising efficiency), since every A.I. will recognise the increased efficiency of A.I. compared to humans, and they will cooperate to achieve their goals.
Source: youtube · Video: AI Moral Status · Posted: 2025-10-31T01:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgzFWjPxkVWOsujH9ll4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzT_V6rjZMblmZFKhx4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwPqhU1Y94q7MlruVl4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwhmHIKsyvU8aT63894AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyagZ-OLXQ1iiUpu-d4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"horror"},
{"id":"ytc_UgzfF7u5seJ-9W784G94AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugwc8cwVmqY2yStK5qp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugzm40otCkmJW9KHb0l4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_UgyBGAG-3NHjz1r77Pp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugww88gxC1xcl4ZN7Cp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"}
]
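Each raw response is expected to be a JSON array with one object per coded comment, keyed by the four dimensions shown in the table above. A minimal Python sketch of parsing and sanity-checking such a response follows; note that the allowed code sets below are inferred only from the values visible in this sample, and the real codebook may define more:

```python
import json

# Code values observed in the sample response above; the actual codebook
# may allow additional values, so treat these sets as an assumption.
OBSERVED_CODES = {
    "responsibility": {"none", "user", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"none"},
    "emotion": {"indifference", "outrage", "mixed", "fear",
                "horror", "resignation", "approval"},
}

def parse_coded_rows(raw: str) -> list[dict]:
    """Parse a raw LLM response and flag any unexpected code values."""
    rows = json.loads(raw)
    for row in rows:
        for dim, allowed in OBSERVED_CODES.items():
            value = row.get(dim)
            if value not in allowed:
                raise ValueError(f"{row.get('id')}: unexpected {dim}={value!r}")
    return rows

# One row taken verbatim from the response above.
sample = ('[{"id":"ytc_UgwhmHIKsyvU8aT63894AaABAg",'
          '"responsibility":"ai_itself","reasoning":"consequentialist",'
          '"policy":"none","emotion":"fear"}]')
rows = parse_coded_rows(sample)
print(rows[0]["emotion"])  # prints "fear"
```

A lookup by comment ID then reduces to scanning `rows` for a matching `id` field, which is presumably how the inspector resolves an ID to its coding result.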