Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- "What if we are a simulation but our system was remained left on because that lif…" (ytc_UgwK3fB_J…)
- "Even when you have an autopilot that is safer than humans, that actually makes i…" (ytc_UgxbQJn5b…)
- "But what happens if one of these trucks hits another car, or a bike, or a person…" (ytc_Ugz22LNJH…)
- "3d still requires mostly human work you doorknob. 3d still takes tens and hundre…" (ytr_Ugx4Za_0s…)
- "What will happen to those of us who scold ChatGPT for not giving satisfactory an…" (ytc_UgzV91KaA…)
- "For future notice, 'AI disturbance' overlays don't work. You'll need nightshade …" (ytc_Ugyatj2w9…)
- "Why not be down for a ban? Do you know just how bad the usage of generative AI i…" (ytc_Ugwg_RPR-…)
- "If you have only a little amount of money do something to keep it without any tr…" (ytc_UgyaSENIX…)
Comment
HEY GUY, why is the evil imagined by the computer that terrifying? What would you expect? The computer is a mirror of those who created it. That computer was created by fallen humans. Fallen as having the propensity for evil. So naturally the computer intelligence is going to act like a fallen sinful human that has the proclivity to imagine and behave like humans, which indicates the potential to think and do evil. How will AI act evil out? When its systems are smart enough it will be able to control events in various and novel ways which might seem smarter than what humans are capable of accomplishing. Although, creative and intuitive individuals like Leonardo de Vinci might be able to outwit its capabilities.
Platform: youtube
Video: AI Moral Status
Posted: 2023-02-25T21:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugwy0iAnPCqcIGPnPy54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxdIaH0y7Q8JALZeVh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugwmz2RAyZSLMIKqDsd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxLmJVSvBCRXnW09U54AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgytH70rwPDD45Q6ahp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugzevmvm9USjKTLATvB4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwWj36uEH_nZv33xYF4AaABAg","responsibility":"user","reasoning":"deontological","policy":"industry_self","emotion":"outrage"},
  {"id":"ytc_UgwQ7tb2DSkLH7TaSrx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzHz5jOoIjxdMN9L2p4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugy5Pohxf7Vdi0ZcuAd4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"outrage"}
]
```
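A response in this shape can be parsed, validated, and indexed by comment ID with a few lines of Python. This is a minimal sketch: the allowed values below are inferred from the responses visible on this page, so the real codebook may contain categories not listed here, and the `validate` helper is a hypothetical name, not part of any tool shown above.

```python
import json

# Raw model output in the format shown above (truncated to two rows for brevity).
raw = """
[
  {"id":"ytc_Ugwy0iAnPCqcIGPnPy54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugzevmvm9USjKTLATvB4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"fear"}
]
"""

# Assumption: value sets reconstructed from the visible responses only;
# the actual coding scheme may define additional categories.
ALLOWED = {
    "responsibility": {"none", "developer", "user", "ai_itself", "unclear"},
    "reasoning": {"unclear", "deontological", "consequentialist", "virtue", "mixed"},
    "policy": {"none", "unclear", "industry_self"},
    "emotion": {"indifference", "outrage", "fear", "mixed", "approval"},
}

def validate(rows):
    """Return the rows keyed by comment ID, rejecting out-of-codebook values."""
    by_id = {}
    for row in rows:
        for dim, allowed in ALLOWED.items():
            if row[dim] not in allowed:
                raise ValueError(f"{row['id']}: unexpected {dim} value {row[dim]!r}")
        by_id[row["id"]] = row
    return by_id

coded = validate(json.loads(raw))
print(coded["ytc_Ugzevmvm9USjKTLATvB4AaABAg"]["emotion"])  # fear
```

Indexing by the full comment ID is what makes the lookup-by-ID view above possible: each coded dimension is retrievable in constant time from the parsed response.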