Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

"I dont really care if anyone uses AI to make art. I am a pencil and paper type a…" (ytc_Ugw-LBgGJ…)
"It seems to me that its this human construct of money as all powerful, and the u…" (ytc_Ugx3EU-_G…)
"The USA corporations started its destruction of USA based software engineers th…" (ytc_Ugwiecei9…)
"@MayankPawar-h2k5uYeah they just made a deal with the government. For the AI wh…" (ytr_UgxzSxj-Z…)
"I’m sorry but you can’t rely on advanced cruise control or “auto pilot” complete…" (ytc_Ugw1zcQQ6…)
"This whole video is AI, the human portrayed here is AI generated having a simula…" (ytc_UgxCgSQ8S…)
"I know this will get me crucified but I don’t think AI is evil & we should try t…" (ytc_Ugx9L6v4t…)
"Ok, if jobs get automated what will people do and how will they buy stuff with n…" (ytc_Ugxq-gF2W…)
Comment
LOL this is always one of the dumbest arguments to me. People are worried that if we give AI things like feelings and such then do we have to give them rights and does that open them to abuse? Here's my solution: DON'T GIVE THEM FEELINGS AND EMOTIONS! Do i really need to state that? There is NO reason to give robots feelings and emotion. There is a very real threat in creating AI's that can build AI's better than themselves, that's just Pandora's box right there.
At this point computers and robots are simply machines, if you're so worried about robots being "abused" then maybe the whole "hey let's make a human robot" idea is not a good one. As Goldblum said, "Your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should."
youtube · AI Moral Status · 2017-02-25T10:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
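A coding like the one in the table above can be checked against the dimension vocabulary before display. A minimal sketch, assuming the allowed values are only those observed in the raw responses on this page (the real codebook may define more categories):

```python
# Allowed values per coding dimension, inferred from the codings visible
# on this page; the actual codebook may include more (assumption).
ALLOWED = {
    "responsibility": {"none", "developer", "company", "ai_itself"},
    "reasoning": {"deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"none", "ban", "regulate", "liability"},
    "emotion": {"approval", "fear", "indifference", "resignation", "mixed", "outrage"},
}

def validate_coding(coding):
    """Return a list of problems; an empty list means the coding is valid."""
    problems = []
    for dim, allowed in ALLOWED.items():
        value = coding.get(dim)
        if value is None:
            problems.append(f"missing dimension: {dim}")
        elif value not in allowed:
            problems.append(f"unknown value for {dim}: {value!r}")
    return problems

example = {"responsibility": "developer", "reasoning": "consequentialist",
           "policy": "none", "emotion": "outrage"}
print(validate_coding(example))  # -> []
```

This mirrors the Coding Result table: the `Coded at` row is metadata, not a coded dimension, so it is not validated here.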
Raw LLM Response
[
{"id":"ytc_UghKoK55MKjPi3gCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgjLk4dwj6E7c3gCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgifpkGSnco6Q3gCoAEC","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugh3oGA9UWKfbXgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ughlf5QYg265ZngCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugjui8lyYzSrvHgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgiuXt-nUv5jbngCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UggvBcByL6n803gCoAEC","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"approval"},
{"id":"ytc_UgjWed3DMfpEnHgCoAEC","responsibility":"none","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UghAqkiQfyzw4HgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"}
]
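The "look up by comment ID" step above can be sketched as follows: parse the raw model output (a JSON array with one coding object per comment, using the field names shown in the response above) and index it by `id`. The `raw_response` value here is a shortened stand-in for the full payload:

```python
import json

# Stand-in for the raw LLM response shown above: a JSON array of
# coding objects, each keyed by the comment "id".
raw_response = '''
[
  {"id": "ytc_UghKoK55MKjPi3gCoAEC", "responsibility": "none",
   "reasoning": "deontological", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UghAqkiQfyzw4HgCoAEC", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"}
]
'''

def lookup_coding(raw, comment_id):
    """Parse the raw model output and return the coding for one comment ID."""
    codings = json.loads(raw)
    index = {entry["id"]: entry for entry in codings}
    return index.get(comment_id)

coding = lookup_coding(raw_response, "ytc_UghAqkiQfyzw4HgCoAEC")
print(coding["emotion"])  # -> outrage
```

Looking up `ytc_UghAqkiQfyzw4HgCoAEC` recovers exactly the values displayed in the Coding Result table (responsibility: developer, reasoning: consequentialist, policy: none, emotion: outrage); an unknown ID returns `None`.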