Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
I must be the only person who understands what could make AI work. Besides needing proper hardware, it is really the care and cleverness of the creator that will make it work they way we want it to.
In science fiction, AI gone bad is most entirely on the negligence of the creator. SkyNet anybody? You create an intelligent defense system and don't bother designing it to value life or something? Come on.
And I am getting sick of these idiots fearing the developing of AI. It's uncalled for. Any possible intelligent machine used in the military or law enforcement should be designed to protect people from criminals or any harm.
With that protocol in place, it would be impossible for a system like SkyNet to destroy most of humanity.
Going past that suggest protocol, I think machines like Data from Star Trek should have other goals to strive for (not humanity) like expanding science, exploration, and charity.
Source: youtube · Posted: 2015-08-05T23:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugwm9I9NcRQElvQfqu54AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwRhW6ydR3WoIlU3gl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgghtrugE12abngCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugic-8CdfbK863gCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgiiVzQEVXTO8XgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgigNAG8ggHJ7HgCoAEC","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugigkb4gWN8_I3gCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugi_4VKjBann7HgCoAEC","responsibility":"none","reasoning":"deontological","policy":"liability","emotion":"approval"},
{"id":"ytc_Ugi9Gszi21MTEngCoAEC","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UggnLXyVGHuX8XgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}
]
```
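The raw response above is a JSON array with one object per coded comment, keyed by comment ID, with one value per coding dimension. A minimal sketch of how such a response could be parsed and checked against the codebook is shown below; the allowed-value sets are inferred only from the values visible in this response and are not the project's exhaustive codebook, and `parse_response` is a hypothetical helper name.

```python
import json

# Allowed values per coding dimension, inferred from the response shown
# above (hypothetical reconstruction, not the full codebook).
ALLOWED = {
    "responsibility": {"none", "developer", "government", "ai_itself"},
    "reasoning": {"unclear", "consequentialist", "deontological", "virtue"},
    "policy": {"unclear", "none", "regulate", "liability", "ban"},
    "emotion": {"indifference", "outrage", "fear", "approval"},
}

def parse_response(raw: str) -> dict:
    """Parse one raw LLM response (a JSON array of codings) into a dict
    keyed by comment ID, dropping rows with out-of-codebook values."""
    coded = {}
    for row in json.loads(raw):
        cid = row.get("id")
        values = {dim: row.get(dim) for dim in ALLOWED}
        # Keep the row only if every dimension carries a known value.
        if cid and all(values[d] in ALLOWED[d] for d in ALLOWED):
            coded[cid] = values
    return coded
```

Filtering out-of-codebook rows (rather than raising) reflects one reasonable design choice for LLM output, where occasional malformed or invented labels are expected and can be re-queued for recoding.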