Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- `ytc_UgyEJCPHl…`: What is digital drawing? drawing, but on a pad, with a pen with no ink, its stil…
- `ytc_UgyQ4sSJh…`: Can’t have regular folks using AI to level the playing field—imagine if it helpe…
- `ytr_UgzNh00LG…`: @dakshs9528 The more I dig into concepts like this, it makes more sense to me th…
- `ytc_UgzAFdomn…`: Setting aside ChatGPT for a moment the approach the lawyer seems to be taking is…
- `ytc_UgzRdHy6H…`: Robot 1 : * messed up * / Robot 1 : this is hard! / Robot 2 : yea! Humans are cruel!…
- `ytc_UgwEuLIg8…`: While it is sad that we have to resort to deterrents for art theft like this, I …
- `rdc_ohmyjl7`: Learn low-level tech. Compilers, C++, TCP/IP, algorithms, and Assembly. All t…
- `ytc_UgwBreF_P…`: My province here in Canada has outright banned autonomous and other "self-drivin…
Comment
first of all, being as imperfect as we are, we shouldn't try to achive AI with selfawereness mainly for 2 reasons: N° 1 we should consider trying to solve our problems as a divided species first. when all humans would agreed with each other and petty problems like those which roots are economical benefits disappear, then we, as a species, should try to create AI! N° 2 it's likely that would happen what every movie of cyberpunk showned us, robots would determine that humanity is sthe biggest threat to the planet and try to exterminate us!
thats my opinion! we are greedy, even with the knowlegde and we should be carefull...
youtube · AI Moral Status · 2017-02-23T14:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | ban |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_UgivyiW_ofw5JngCoAEC","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgjA4P2zvsANW3gCoAEC","responsibility":"none","reasoning":"deontological","policy":"liability","emotion":"approval"},
{"id":"ytc_UgjgGblLUS576XgCoAEC","responsibility":"none","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgjKd8TV71HVOngCoAEC","responsibility":"none","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgibNi4ZcKPCQngCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugi3O5ApRuIrVHgCoAEC","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UghEXl3QjOuqa3gCoAEC","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UghvTASVmKa1qHgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UggpYqw-oawjaXgCoAEC","responsibility":"none","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UggeerIRJVSAs3gCoAEC","responsibility":"none","reasoning":"deontological","policy":"regulate","emotion":"approval"}
]
```
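The raw response is a JSON array with one record per comment, each carrying the four coding dimensions shown in the result table. A minimal sketch of how such a response could be parsed and a single comment looked up by ID (the records below are an excerpt of the response shown above; the variable names are illustrative, not part of the tool):

```python
import json

# Excerpt of the raw LLM response displayed on this page: a JSON array of
# coded records, one per comment.
raw_response = """
[
 {"id":"ytc_UgibNi4ZcKPCQngCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
 {"id":"ytc_UgivyiW_ofw5JngCoAEC","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
"""

# Index the records by comment ID so one coded comment can be inspected.
records = {r["id"]: r for r in json.loads(raw_response)}

coded = records["ytc_UgibNi4ZcKPCQngCoAEC"]
print(coded["policy"])   # ban
print(coded["emotion"])  # fear
```

This matches the "Coding Result" table above: the record for `ytc_UgibNi4ZcKPCQngCoAEC` is the one rendered as Responsibility = developer, Reasoning = consequentialist, Policy = ban, Emotion = fear.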