Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
You know, people are afraid that robotic entities may one day directly state that they want to destroy humanity, but in all reality, these robots are consciously able to do far less than humans, as humans have destroyed each other for a long time, and still are. In fact, a robot population would perhaps be better then a human population (not hinting the mass eradication of humans) based on the fact that they would have less moral capacity and would have to rely more on knowledge then belief. This is true, as many groups destroy humans based on beliefs of assumption, such as superiority. The basic outline is that we should not fear these robots unless we give them the same ability to fear at a level like us.
Source: youtube · AI Moral Status · 2017-04-20T03:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[{"id":"ytc_UgiH29RQhVyYo3gCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgjRAdI8CBX503gCoAEC","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugg3apYuxuw7WHgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UggFvKK1w8GaCngCoAEC","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UggPDObvrBwGQ3gCoAEC","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgggzjmFyMBpxngCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UghYctB0_3R8aXgCoAEC","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UghUCAhI_rysgHgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UggtE_QTcYfjL3gCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugj_mgcN0FnABHgCoAEC","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"indifference"}]
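The raw response above is a JSON array of per-comment codings keyed by comment ID. A minimal sketch of how such a batch response might be parsed into a lookup table, with every dimension falling back to "unclear" when an ID is absent from the batch (the fallback behavior and the function names here are assumptions for illustration, not the tool's actual implementation):

```python
import json

# The four dimensions shown in the Coding Result table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_batch_response(raw: str) -> dict:
    """Parse a raw LLM batch response (a JSON array of per-comment
    codings) into a dict keyed by comment ID."""
    records = json.loads(raw)
    return {
        rec["id"]: {d: rec.get(d, "unclear") for d in DIMENSIONS}
        for rec in records
    }

def lookup_coding(table: dict, comment_id: str) -> dict:
    """Return one comment's coding; if the ID is missing from the
    batch, every dimension defaults to "unclear" (hypothetical
    fallback, consistent with the all-"unclear" result above)."""
    return table.get(comment_id, {d: "unclear" for d in DIMENSIONS})

# One record copied from the raw response above.
raw = ('[{"id":"ytc_UgiH29RQhVyYo3gCoAEC","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"none",'
       '"emotion":"resignation"}]')
table = parse_batch_response(raw)
print(lookup_coding(table, "ytc_UgiH29RQhVyYo3gCoAEC")["emotion"])  # resignation
print(lookup_coding(table, "ytc_missing")["emotion"])               # unclear
```

A missing-ID fallback of this kind would also explain a Coding Result of all "unclear" values when the displayed comment's ID does not appear in the batch response.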