Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "dude, in this video yt gave me ads about ai service,i i dont think yt get the au…" (ytc_Ugw69tn1f…)
- "Wow... Of course... You have become one of those people.. And yes.. it is skill …" (ytc_UgzME3Zag…)
- "I have come to the conclusion that there is no deterministic system. There are…" (ytr_UgzrmdAGa…)
- "Not an AI stan, just trying to find an artist wanting to work from the ground up…" (ytc_UgzLIuNVJ…)
- "Imo consciousness is more abstract than that, like talking about parts of a whol…" (ytc_UgyMrze9Y…)
- "Babes. AI gathers pre-existing information and compiles results based on whateve…" (ytr_UgzcKLSMQ…)
- "Those guys are gonna be crawling to your doorstep after watching the AI write "C…" (ytc_UgzpqIIRM…)
- "AI itself is not a risk, asking too much "What's best for companies?" instead of…" (ytc_UgziZsIz3…)
Comment
Professor Hawking,
I was hoping you could clarify an issue related to the ethics of superintelligent machines.
By definition, a superintelligent machine is capable of modeling human behavior at a high level of accuracy -- even better than humans. Doesn't that make it straightforward to bound the AI's behavior? In particular, the AI should easily be able to predict, better than any human could, whether its owner would (morally) approve of a given action. Couldn't we program it to internally use its model of its owner's values to validate both its means and ends? It could ask itself whether its owner would approve of each action, each action's intention, and each action's consequences (intended or not), and eliminate consideration of any actions, goals, etc. that would not yield unequivocal approval.
Using this approach, it would not be necessary to explicitly codify human values, as the superintelligent machine could easily learn to "know it when it sees it" (as with Justice Stewart and pornography), just as humans learn human values. This approach also seems to easily eliminate most ridiculous scenarios, such as an AI committing genocide to free up resources in order to make more paper clips. Indeed, the AI could easily identify any such morally ridiculous actions (just as humans can) and eliminate them from consideration. This would suggest that the bigger concern is that a superintelligent machine gets into the hands of someone with bad intentions.
What are your thoughts on this analysis?
Source: reddit · Topic: AI Bias · Posted: 2015-07-29 (Unix timestamp 1438190026.0) · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
{"id":"rdc_ctlgbn1","responsibility":"developer","reasoning":"contractualist","policy":"unclear","emotion":"mixed"},
{"id":"rdc_ctkgy3m","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"rdc_ctmj6l5","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"rdc_cti0d6c","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"mixed"},
{"id":"rdc_oc8cnoj","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"approval"}
]
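The raw response above is a JSON array with one record per coded comment. As a minimal sketch of how such a batch might be parsed and validated before it is written into a coding table like the one shown: the function and the allowed category sets below are assumptions inferred only from the examples on this page, not the project's actual codebook or pipeline code.

```python
import json

# Allowed values per coding dimension. NOTE: these sets are inferred from the
# sample records on this page; the real codebook may define more categories.
SCHEMA = {
    "responsibility": {"developer", "user", "ai_itself", "distributed", "unclear"},
    "reasoning": {"consequentialist", "contractualist", "mixed", "unclear"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"fear", "approval", "indifference", "mixed", "unclear"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM batch response, keeping only well-formed records.

    A record is kept if it has an "id" and every coding dimension holds
    one of the allowed values; anything else is silently dropped.
    """
    valid = []
    for rec in json.loads(raw):
        if "id" not in rec:
            continue
        if all(rec.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            valid.append(rec)
    return valid

# Usage: the second (hypothetical) record has an out-of-schema value
# for "responsibility" and is discarded.
raw = (
    '[{"id":"rdc_ctlgbn1","responsibility":"developer","reasoning":"contractualist",'
    '"policy":"unclear","emotion":"mixed"},'
    '{"id":"rdc_example","responsibility":"martians","reasoning":"unclear",'
    '"policy":"none","emotion":"fear"}]'
)
print([r["id"] for r in parse_coding_response(raw)])  # → ['rdc_ctlgbn1']
```

Validating against an explicit schema at ingest time is what makes downstream values like "unclear" trustworthy: an out-of-vocabulary label signals a malformed model response rather than a new category.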