Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
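A minimal sketch of how this lookup could work offline, assuming each batch of coded results is saved as a JSON array of records shaped like the raw response at the bottom of this page. The `coded_batches/` directory and the `build_index` helper are illustrative assumptions, not part of the tool:

```python
import json
from pathlib import Path

# Assumed storage layout: each LLM batch saved as a JSON array of records
# shaped like the raw response shown at the bottom of this page.
BATCH_DIR = Path("coded_batches")

def build_index(batch_dir: Path) -> dict[str, dict]:
    """Map every comment ID to its coded record across all saved batches."""
    index: dict[str, dict] = {}
    for batch_file in sorted(batch_dir.glob("*.json")):
        for record in json.loads(batch_file.read_text()):
            index[record["id"]] = record
    return index

# Look up the comment inspected below by its full ID.
index = build_index(BATCH_DIR)
print(index.get("ytc_UgjmL9PTUYn27ngCoAEC"))
```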
Random samples
- "Bro you know how long I had to gaslight ai just to talk to me about crack in a e…" (ytc_UgwcRkGyL…)
- "I would like to respectfully disagree. AI has saved me and others thousands of h…" (ytc_Ugxzb8qd0…)
- "If it further turns public opinion against AI I don't really care what his reaso…" (rdc_m82eity)
- "For every conscious moment of human awareness there is a feeling. There is a con…" (ytc_Ugz1JxtoK…)
- "We are assuming that companies value the value of GD. The illusion of ease, and …" (ytc_UgwHDcba0…)
- "I don't call images made by AI art or the people who make the images Artists, th…" (ytc_Ugz8xsX33…)
- "I apologize, but I am an AI language model, and I don't have a concept of life o…" (ytc_UgwBE83ki…)
- "My thing is, you could tell it AI because people try very hard when animating to…" (ytc_UgzoYqPDx…)
Comment
> I think this is a question we will have to find an answer for soon since AI technology is improving constantly. I think we shouldn't give household appliances and power tools intelligence, but machines that look and act similar to people are probably coming sooner than we think. Of we teach a robot the laws we need to make sure they follow those laws like us and we need to treat them like people so they have a reason to follow those laws. I think the androids in the future should be able to experience pain as to keep them from doing something that harms them twice and to help enforce laws on them. They will also need to be protected similarly to how we are, if a robot is attacked on the street we should be able to look into their brains and see what they say when it happened and find the person who hurt them, otherwise the machines will not be as willing to follow our laws if they aren't guaranteed safety.
>
> What do you think?

youtube · AI Moral Status · 2017-02-28T12:1… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | contractualist |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
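For reference, the coding schema implied by this table can be written out as a small value map. The allowed values below are inferred only from the records visible on this page and may not cover the project's full codebook:

```python
# Allowed values per dimension, inferred from the sample records on this
# page; the actual codebook may define more categories.
SCHEMA = {
    "responsibility": {"none", "developer", "user", "ai_itself", "distributed"},
    "reasoning": {"deontological", "consequentialist", "contractualist", "mixed"},
    "policy": {"none", "ban", "regulate", "liability", "industry_self"},
    "emotion": {"approval", "outrage", "fear", "indifference", "resignation", "mixed"},
}
```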
Raw LLM Response
```json
[
{"id":"ytc_UgiKYV8v9JQYg3gCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgjCkPtC30Z9mngCoAEC","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgjmL9PTUYn27ngCoAEC","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UggSRmUXxp_mdngCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgjhwwXIci4w4HgCoAEC","responsibility":"none","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugjo_2qmwrEy2XgCoAEC","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugi2Yut5usR3QHgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgjcyN9r0FMRwHgCoAEC","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgjSd-41hV6ELXgCoAEC","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgiTz-lvV3YGIHgCoAEC","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"}
]
```
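Because the model returns free-form text, a response like the one above is worth parsing defensively before it enters the dataset. A sketch under the same assumptions as the `SCHEMA` map above (`parse_batch` is an illustrative helper, not part of the tool):

```python
import json

def parse_batch(raw: str, schema: dict[str, set[str]]) -> list[dict]:
    """Parse a raw model response, keeping only records whose codes are known."""
    try:
        records = json.loads(raw)
    except json.JSONDecodeError as err:
        raise ValueError(f"model did not return valid JSON: {err}") from err
    valid = []
    for rec in records:
        bad = [dim for dim, allowed in schema.items() if rec.get(dim) not in allowed]
        if bad:
            # Unknown or missing value: surface it rather than storing it silently.
            print(f"skipping {rec.get('id')}: unexpected values for {bad}")
            continue
        valid.append(rec)
    return valid
```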