Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- Oh! AI understands the rules, that they are true and can’t manipulate them by no… (ytc_Ugz9zf1-S…)
- Learn the backend of how ai works and research the tools that are creating 3d mo… (ytr_Ugx-Udp5F…)
- As soon as Stuart Russell said "Not Yet" to the question, "If you had a button t… (ytc_UgxxvgP5i…)
- so much of this ai forecasting dialogue is so skewed towards hyper exaggerated c… (ytr_Ugyyix_RF…)
- When AI gets so clever to take control over its own programming that will be sca… (ytc_UgxPSBRfk…)
- They've been using AI to create bombing targets for months now and not a single … (rdc_ku59w9r)
- The AI is very strongly biased during training to be helpful and do what's good … (ytc_Ugyz7-V_F…)
- They could ban algorithmic suggestions and force users to search for a specific … (rdc_nkq0h1r)
Comment
unsupervised AI (UAI) is tricky unlike supervised (constrained) AI. Human intelligence can be seen as an accumelation of unconstrained learning. Machines can be programmed with basic rules to devise trial and error to realise reward and penalty. Imagine a machine is doing this 24/7/365 without the need for breaks, sleeps or holidays! theoretically humans can be extremely inefficient in comparison to an advanced UAI. Another thing, UAI will realise quickly the optimal way for collaboration unlike humans who usually tend to be protective for self interest. The next decades will very exciting, maybe economy will be run by robots, politicians won't be needed and discoveries and Nobel prizes credited to UAI! who knows what will the robotic revolution bring to us :)
youtube · AI Moral Status · 2017-02-23T22:5… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
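
The four dimensions above come from a small controlled vocabulary. A minimal sketch of how one coded record might be represented downstream, assuming field names and value sets inferred only from the samples visible on this page (not a documented codebook):

```python
from dataclasses import dataclass

@dataclass
class CodedComment:
    """One coded comment; the value sets below are inferred from this page, not a full codebook."""
    comment_id: str      # e.g. "ytc_…" (YouTube) or "rdc_…" (Reddit)
    responsibility: str  # observed: none, ai_itself, user, company, developer, unclear
    reasoning: str       # observed: deontological, consequentialist, mixed, unclear
    policy: str          # observed: none, liability, regulate, ban, unclear
    emotion: str         # observed: approval, fear, outrage, indifference, mixed
    coded_at: str        # ISO 8601 timestamp, e.g. "2026-04-27T06:26:44.938723"
```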
Raw LLM Response
[
{"id":"ytc_Ugg6uOok2VP5QHgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgglKdwIP2tvZ3gCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"approval"},
{"id":"ytc_UgiBj8trrN2T_3gCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UghgtcFB4IEzDngCoAEC","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgiWpbxpfu9p6HgCoAEC","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugj2C_TxSi954HgCoAEC","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UghHl86Xngak0XgCoAEC","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgiVMj0Ws70W2HgCoAEC","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgjRmuxIb5d8XHgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UggGny5a5uCQDHgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
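
The raw response is a flat JSON array with one object per comment, keyed by the same IDs shown in the sample list. A minimal parsing sketch, assuming only that the text above is valid JSON; the helper name is illustrative:

```python
import json

def index_codes(raw_response: str) -> dict[str, dict]:
    """Map each comment ID in the raw LLM response to its coded dimensions."""
    return {row["id"]: row for row in json.loads(raw_response)}

# Usage: with raw_response holding the array printed above,
# index_codes(raw_response)["ytc_UghHl86Xngak0XgCoAEC"]["emotion"] == "indifference"
```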