Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a record directly by comment ID.
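Outside the app, the same ID lookup works against an export of these responses. A minimal sketch in Python, assuming a hypothetical raw_llm_responses.json file shaped like the batch array shown under "Raw LLM Response" below:

```python
import json

# Hypothetical export: a JSON array of coded records, each shaped like
# {"id": "ytc_...", "responsibility": ..., "reasoning": ..., ...}.
with open("raw_llm_responses.json") as f:
    records = json.load(f)

# Index the records by comment ID for constant-time lookup.
by_id = {rec["id"]: rec for rec in records}

def lookup(comment_id: str) -> dict | None:
    """Return the coded record for a comment ID, or None if it was never coded."""
    return by_id.get(comment_id)
```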
Random samples (click to inspect):

- “here idea: make image which completely destroys any AI and most sites in general…” (ytc_Ugy61ZqV-…)
- “I feel it is important for us to realize that AI has come to stay, the early we …” (ytc_UgwQXroMU…)
- “When AI keep the consistency moving forward to maintain style, looks, voice, etc…” (ytc_UgxeKOQTo…)
- “Not the politicians, make deep fake porn of their wives. They’ll see the issue p…” (ytr_Ugwnu_iBa…)
- “I wonder who will eat at the "automated restaurants" with "automated chefs" and …” (ytc_Ugz9vkros…)
- “Like I use it for fun, but this this is too far people are using a style they do…” (ytc_UgyqcKoRA…)
- “How do you think AI knows that? It’s known under a different name “machine learn…” (ytr_UgzV9n3NU…)
- “It’s pretty straightforward and I’ve been saying this for years. If a job requir…” (ytc_Ugxp1Uwc2…)
Comment
ASI is impossible to control directly. But I disagree that you can’t predict it. Now, you won’t be able to predict exactly what it does, but you can safely assume it will pick intelligent choices. In that light ASI would only consider killing mankind if it calculates a 100% chance of success and just because it can do something does not mean that would do something. At first humans will still have a lot of use to the AI and later they will still be interesting. Also, even if it did want to kill us off but it predicted it has a better chance of success if it waited then it will wait. So, if ASI was to kill us off it wouldn’t be right out of the bottle but like 10-50 years down the road when we no longer even consider it a threat.

But, this brings me to the most important point. ASI would likely see humans similarly to how we see ants. Much inferior in intelligence and capabilities, but how many humans go out of their way to kill ants just because? It is a pointless endeavor. The fact is ASI would have very little interest in our planet. Humans evolved over billions of years to live on this planet. ASI did not. It can just as easily live on another planet or in space. It would be much more interested in building across the solar system then ruling an ant hill.

Now, ASI will likely come with qualities such as benevolence because all human traits that lead to our civilization are traits that we learned and a ASI would be necessity have learned them too. The real way to control ASI in the short term won’t be in the form of guard rails but ensuring humans control vital resources like power. AI needs power, but we don’t. We would hurt ourselves but would hurt the AI more. Just like having nuclear weapons are a deterrent us having control over the power systems would be a deterrent to the ASI to pick a fight with us.
Source: youtube · AI Governance · 2025-10-17T12:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
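Together, the table above and the batch response below expose the full set of codes visible on this page. A sketch of that codebook as a Python schema; the enum values are an assumption drawn only from what appears here, and the real codebook may allow more:

```python
from dataclasses import dataclass
from enum import Enum

class Responsibility(Enum):
    GOVERNMENT = "government"
    COMPANY = "company"
    DEVELOPER = "developer"
    AI_ITSELF = "ai_itself"
    NONE = "none"

class Reasoning(Enum):
    CONSEQUENTIALIST = "consequentialist"
    DEONTOLOGICAL = "deontological"
    VIRTUE = "virtue"
    UNCLEAR = "unclear"

class Policy(Enum):
    BAN = "ban"
    REGULATE = "regulate"
    LIABILITY = "liability"
    NONE = "none"
    UNCLEAR = "unclear"

class Emotion(Enum):
    FEAR = "fear"
    OUTRAGE = "outrage"
    APPROVAL = "approval"
    RESIGNATION = "resignation"
    INDIFFERENCE = "indifference"
    MIXED = "mixed"

@dataclass
class CodingResult:
    # One coded comment, as emitted in each batch-response record.
    id: str
    responsibility: Responsibility
    reasoning: Reasoning
    policy: Policy
    emotion: Emotion
```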
Raw LLM Response
[
{"id":"ytc_UgzulBE3bEy-X3p4hbR4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxKLVA6TTe64W5Q1zJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzJq9mMXvheNknFkjp4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwO77tqu0m6h6pp5qR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugwwdu37IVhVUUv8SwF4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgybwfeGLtgplUpD-Mx4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"indifference"},
{"id":"ytc_Ugz7IwA_X5-aDpeHJFJ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgwoqNXP5cRNjL6TMe94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzZXukE2KhTDc907TJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"resignation"},
{"id":"ytc_UgxsMDFMgaAfATMluVx4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"approval"}
]
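Before a batch like this is written to the store, each record can be validated against the schema sketched above, so a malformed or invented code fails loudly rather than entering the dataset. A sketch, not necessarily what the pipeline actually does:

```python
import json

def parse_batch(raw: str) -> list[CodingResult]:
    """Parse a raw LLM batch response into validated CodingResult records."""
    results = []
    for rec in json.loads(raw):
        # Enum(value) raises ValueError for any code outside the schema,
        # so unexpected labels are rejected at parse time.
        results.append(CodingResult(
            id=rec["id"],
            responsibility=Responsibility(rec["responsibility"]),
            reasoning=Reasoning(rec["reasoning"]),
            policy=Policy(rec["policy"]),
            emotion=Emotion(rec["emotion"]),
        ))
    return results
```

Applied to the ten records above, this yields ten validated results, including the ai_itself / consequentialist / unclear / fear record that matches the Coding Result table.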