Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect:

- “Artist” is a debatable term. -Sure, there is technically “art” being made (Var… (ytc_UgyN7Pjzg…)
- it’s getting more and more difficult to not have an ego when I’m smarter than mo… (ytc_UgwjEEvCb…)
- Ok, this was fun at first with the whole AI thing, now it's out of hand, and als… (ytc_UgxuVnvaZ…)
- This is going to sound morbid but, it feels like a lot of AI bros don't want to … (ytc_Ugw_5xx_B…)
- i swear i will only buy products that have no ai in them... its a security risk … (ytc_UgxF5lK_C…)
- Reading the comments section I can tell that people still are unfamiliar with wh… (ytc_UgxwY9zVW…)
- MPU - the USA would be MORE perfect WITHOUT Unions. Unions no longer function to… (ytc_UgyOxH0kw…)
- Me:Nah im now rp and the ai i chat with doesn't like me No1: i saw you made you… (ytc_UgzK4WhCv…)
Comment
I say we befriend the AIs so we can both learn as a collective power in the universe. As for forcing robots to do hard labor? A simple question with an even simpler answer. Do not program a smart AI into the labor heavy robots. Sentient AIs should be used for purposes such as companionship or scientific or strategic innovation. NOT labor. Why program self awareness into an AI designed to mine for diamonds? it's entirely impractical and could result in a Skynet like scenario if they realize the horrible subjugation they have, yet also required to function as a technologically advanced civilization. I imagine that sentient AIs would find this method practical as well because robots are stronger and more efficient then humans to do such jobs. So long as your not stupid about it and make the first self aware AI for military use only then we have little to fear. At least that's my take.
youtube · AI Moral Status · 2017-02-24T01:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[{"id":"ytc_Ughnexcsmb3x6XgCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UggvblKpw1_kgXgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgjVEzS6w8goNXgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UggfO1G2FHfPI3gCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_UgjcsyISv-nG-HgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugio6zncMOKloXgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgiG4UILVf0E13gCoAEC","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UggRzJZbiuIY0XgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgiAg_hJ4iN9Z3gCoAEC","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgjOBOe5JgVz5XgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"fear"}]