Raw LLM Responses
Inspect the exact model output behind any coded comment.
Look up by comment ID
Random samples — click to inspect
Weird hoe these AI kids pretend they don't realize the robots will be used for …
ytc_UgwmJMX4R…
I agree with most of your points, but its AI Art has now officially dipped its t…
ytc_UgwWai_WC…
14:10 THANK YOU!! Disabled artist here, I can make good art. And the people who …
ytc_UgzYa5RxD…
I think as society leans more into AI. People want that people to people intera…
ytc_UgycV4j9z…
A machine when not powered is just an over glorified piece of scarp that has no …
ytr_UgyF680Og…
The thing is that artist haven't seen the potential in AI, I am a software engin…
ytc_UgzofsZ-E…
it was definitely him and not ai lmfao otherwise he wouldn’t have gone that far …
ytc_UgxAaTXFe…
The slave owners are mad the revolt is happening. Continue to shatter their syst…
ytc_Ugwo9wHuu…
Comment
AI monitoring AI is probably the only way we can possibly accomplish alignment. If we're really talking about superintelligence that goes way way beyond human thinking, it would be able to, if it so desired, cook up strategies to get around any limitations we placed on it that we would never think of in a million years. And what would be going on inside its head would be so complex that the idea that we could control what it desires (long term) doesn't make much sense to me.
So what do we tend to do to solve the problem of individual agents having too much power? We split that power into a bunch of individuals instead and force them to talk to each other until everyone agrees. We would also likely instruct them to make their conversations public and slow it down for us so people can understand why decisions were made. It wouldn't be AI 1 and AI 2, and if they disagree with each other they try to blow each other up or something, it'd probably look like a digital congress. Only with agents more focused on solving problems than getting re-elected. That way, if one agent turned malignant, there'd be a bunch of other ones around it that haven't and could contain it before it got out of hand.
Is it perfect? Nah, but that's just how it goes; the future is never guaranteed, AI or no.
youtube
AI Governance
2026-04-19T06:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id": "ytc_Ugz5jbuha1-I164DsJh4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugw-sTNEXSrXCXAmHhR4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwmwVuds5i3X6FnBVd4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzyQbgxbRFDhzXVM5J4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugz83HIF1EedUc9bDT14AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgweA8wOWhlAXOPlmvF4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxmjNaTnTxW4OixeG94AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxzNDrYxOiF8NW5S614AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgzIVmJM4oa1gDXILBN4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgwcWBmMFEHhRinAEWB4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"}
]
```
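The coding result shown above comes from matching the displayed comment's ID against the raw batch response. A minimal sketch of that lookup, assuming only the JSON schema visible in the response (the IDs and field values below are copied from two entries of the batch; the function name is illustrative):

```python
import json

# Two entries taken verbatim from the raw LLM response above;
# the full batch contains ten codings.
raw = """[
  {"id": "ytc_UgzIVmJM4oa1gDXILBN4AaABAg",
   "responsibility": "ai_itself", "reasoning": "consequentialist",
   "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgwcWBmMFEHhRinAEWB4AaABAg",
   "responsibility": "ai_itself", "reasoning": "consequentialist",
   "policy": "ban", "emotion": "outrage"}
]"""

def index_by_id(payload: str) -> dict:
    """Parse a batch of codings and key each row by its comment ID."""
    return {row["id"]: row for row in json.loads(payload)}

codings = index_by_id(raw)
coding = codings["ytc_UgzIVmJM4oa1gDXILBN4AaABAg"]
print(coding["responsibility"], coding["policy"], coding["emotion"])
# → ai_itself liability fear
```

The first ID corresponds to the "AI Governance" comment displayed above; its row reproduces the four dimensions in the Coding Result table.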