Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Companies now need clear disclaimers as to whether they use AI or not, then we c…" (ytc_UgxhbNoFw…)
- "AI is too stupid to replace every job and I don't think it's getting better.…" (ytc_Ugw2raoyC…)
- "I pasted a wikipedia article into chatgpt, and the bot took credit for it lol…" (ytr_Ugx0iYuwP…)
- "Something to keep in mind is the AI (ChatGPT4 o) have to follow some rule that t…" (ytc_UgxPk87_T…)
- "@lostbutfreesoul First of all the use is just not the same, schools make a dire…" (ytr_Ugz0nMCsZ…)
- "Good because I seriously hate looking up how to do something around the house or…" (ytc_UgyFbjWrK…)
- "I've invested over 22 years in decoding emotional intelligence and I have writte…" (ytc_UgyKvyWiX…)
- "Thanks, @flush240sx6! It seems like AI has mastered the art of a bug-like hand-s…" (ytr_Ugxm0_b8Y…)
Comment

> I think if robots become aware of them self, they get their own consciousness. And then they should get their own rights, specified on the robots.
> But beside that...I'm a bit afraid of the rapid development of artificial intelligence. But it's also amazing, just look at Googles DeepMind...

youtube · AI Moral Status · 2017-02-26T09:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
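Each row of this table maps directly onto one field of the corresponding entry in the raw LLM response below. A minimal rendering sketch in Python; the `render_coding` helper and the timestamp argument are illustrative, not the app's actual code:

```python
from datetime import datetime, timezone

def render_coding(row: dict, coded_at: datetime) -> str:
    """Render one coding entry as a markdown dimension/value table."""
    lines = ["| Dimension | Value |", "|---|---|"]
    # Display label -> JSON field name, in the order shown above.
    for label, key in [("Responsibility", "responsibility"),
                       ("Reasoning", "reasoning"),
                       ("Policy", "policy"),
                       ("Emotion", "emotion")]:
        lines.append(f"| {label} | {row[key]} |")
    lines.append(f"| Coded at | {coded_at.isoformat()} |")
    return "\n".join(lines)

# Entry copied from the raw response below.
row = {"id": "ytc_UghqiS4AGQvTCngCoAEC", "responsibility": "ai_itself",
       "reasoning": "deontological", "policy": "liability", "emotion": "fear"}
print(render_coding(row, datetime(2026, 4, 27, 6, 26, 44, tzinfo=timezone.utc)))
```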
Raw LLM Response
```json
[
{"id":"ytc_UggxuzS4c5UU2ngCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgjYJv9T9YkFhXgCoAEC","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UggKdvdoifxIKXgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugjv8_ZPZwITtHgCoAEC","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UghqiS4AGQvTCngCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UggKdKSmQyWs-XgCoAEC","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UggpGkl0EFbTangCoAEC","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugi2-dOuWWAOd3gCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"approval"},
{"id":"ytc_UgjfOOUww9Lpc3gCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UghVOYyM5bbFNXgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"}
]
```
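The response is a plain JSON array, so indexing it by comment ID (the look-up described at the top of the page) is straightforward. A minimal sketch, assuming the model output has already been captured as a string; the two entries are copied from the response above:

```python
import json

# Two entries copied from the raw response above; in practice this string
# would be the model's full output.
raw_response = """
[
{"id":"ytc_UghqiS4AGQvTCngCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UggKdKSmQyWs-XgCoAEC","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
"""

codings = json.loads(raw_response)
by_id = {row["id"]: row for row in codings}  # comment ID -> coding dict

coding = by_id["ytc_UghqiS4AGQvTCngCoAEC"]
print(coding["policy"], coding["emotion"])  # liability fear
```

Building the dict once makes each subsequent ID look-up O(1) instead of a scan over the array.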