Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "AI "art" getting copyrighted didnt make sense, just by amount and speed of produ…" (ytc_Ugz9Gpgmr…)
- "In all honesty ai should stay out of health, specifically cancer research. Cause…" (ytc_UgzUioWuE…)
- "When AI figures out the profitability of destruction (to it, the intrinsic "supe…" (ytc_UgwxOqyP1…)
- "If you're going to put that much effort into the prompt, why waste it on an algo…" (ytr_UgwjxQhx8…)
- "As someone from the marketing team that was axed (entire team gone), AI did make…" (ytc_UgyetTaPd…)
- "There is always hope even on the darkest thing. We needs to have few people to g…" (ytc_UgxnwPHXq…)
- "This dialogue is a perfect illustration of how intelligent humans in one sphere …" (ytc_UgyjxGZRB…)
- "The Ai hate is so forced lol / Nobodies giving you a medal for hating Ai slop shaw…" (ytc_UgzTLqjWS…)
Comment

> So from what I've seen from your video . The Ai will do any tasks given to it regardless of good nor bad based on the user. So for instance if someone has a bad intention the ai will do it with a bad intention. You asked it about a conflict with humans it basically said if it was instructed too! The Ai has a lot of potential but needs work so it don't do stuff with bad intention is what I'm seeing. It's a master type relationship with the ai .

youtube · AI Moral Status · 2023-03-21T04:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response

```json
[
  {"id":"ytc_UgybSTlIBjOaZtOEI-B4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwsdZjiTw-5PyhKDCt4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxR82UZIfvnXYSM4Dd4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgxDjDCzkYM6ylzkL8h4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgyVImdv0_VLO3wz-pp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxRFACHoiHO8bCq72h4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyuDwIw_s_NXS0R-_14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugyu50FcgzZKhqZTN4B4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzsJ6Q_bJS7YGXb0Hl4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyLOblvcTakln0tqHV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]
```
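The lookup-by-ID flow above can be sketched in Python: parse a raw batch response as JSON and scan it for one comment ID. This is a minimal sketch, not the page's actual implementation; the helper name `lookup_by_id` and the single-entry sample string are illustrative, while the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) match the response format shown above.

```python
import json

# Illustrative single-entry sample in the batch response format shown above.
raw_response = """
[
  {"id": "ytc_UgxR82UZIfvnXYSM4Dd4AaABAg", "responsibility": "user",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]
"""

def lookup_by_id(raw, comment_id):
    """Parse a raw batch response and return the coding dict for one comment ID,
    or None if that ID is not present."""
    codings = json.loads(raw)
    return next((c for c in codings if c["id"] == comment_id), None)

coding = lookup_by_id(raw_response, "ytc_UgxR82UZIfvnXYSM4Dd4AaABAg")
print(coding["policy"])  # liability
```

Because the model returns one JSON array per batch, a linear scan is enough here; a real inspector serving many lookups would index the codings in a dict keyed by comment ID.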