Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "I like to draw. And I like to commission art too. But honestly, considering how …" — ytc_UgxC7_YQA…
- "I don't think we even can achieve human level intelligence, let alone superintel…" — ytc_UgwJm-OAr…
- "Commission an art piece from an artist and call yourself the artist and see how …" — ytc_Ugzq5JYpv…
- "I am very much for AI , Robots and entities doing the work everywhere it can. I …" — ytc_UgzU2o6e_…
- "And they are already begging the people to come back to work because the A.i. ke…" — ytc_UgxijahDA…
- "***** , we are inconsequential, bud. Every human thinks the other is inconseque…" — ytr_UgggWgOiF…
- "20:58 huh. So basically if one robot has a plan to take over the world the knowl…" — ytc_Ugwfi8BTi…
- "I'm pretty sure the AI developers knew that Congress will not regulate AI until …" — ytc_UgwfgKVvX…
Comment
Current AI development is lazy design. And AI designers know it and just keep making it more powerful, and hoping their sandboxing is enough to contain the rest. They know safe AI and overall AGI systems take hard work, and they just don't want to put in that hard work.
Think of how the average user just wants magic AI results from a simple prompt, that's too how AI designers are going at it, they want to endow a results machine with as much power to get desired results, and understanding how those results are being made is getting lost in the process.
Source: youtube · Video: AI Moral Status · Posted: 2025-12-16T03:2… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_Ugy9jqVwW4Cv1mhTAQp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyS1m40RcF3lRlHrs14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyFJx4OyWqyaG3RJIt4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyYRhA5O1iL4_ME1VR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwdT05ulYf0LhFfF0l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwQX4aEUwKvWlPuGMJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugx1zY5xLTtF6QKOH-94AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzahSgXScP44i6_Ygl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugxw4FBoXameA8-30eR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugz4JmFPwXWOclPVeCB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}
]
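The raw response above is a JSON array with one object per comment, carrying the same four dimensions shown in the coding-result table (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of how such a batch could be indexed for per-comment lookup — the field names come from the response above, but the parsing code itself is an illustration, not the tool's actual implementation:

```python
import json

# A one-element excerpt of the raw LLM response shown above.
# Each object carries the four coded dimensions plus the comment ID.
raw = '''[
  {"id": "ytc_UgyYRhA5O1iL4_ME1VR4AaABAg",
   "responsibility": "developer",
   "reasoning": "deontological",
   "policy": "regulate",
   "emotion": "outrage"}
]'''

# Index the batch by comment ID so a single comment's codes
# can be looked up directly (as the inspector page does).
codes_by_id = {row["id"]: row for row in json.loads(raw)}

row = codes_by_id["ytc_UgyYRhA5O1iL4_ME1VR4AaABAg"]
print(row["responsibility"], row["policy"])  # developer regulate
```

Looking up `ytc_UgyYRhA5O1iL4_ME1VR4AaABAg` this way recovers exactly the values rendered in the coding-result table for the displayed comment.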