Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "I don't think we could create an AGI, that would only work in humanity's interes…" (ytc_Ugzm0F6DA…)
- "I agree fsd has a long way to go and I agree, cameras are not enough to do the j…" (ytc_UgyTuAbb0…)
- "If you don't you go in cuffs and get an automatic charge.. no win situation.…" (ytr_UgysclNkG…)
- "And this is not at all a subtle way of advertizing, basically a 'fear appeal' te…" (ytc_UgzIp14Vn…)
- "What a load of hype bollocks for the stock market. Have any of you spent any tim…" (ytc_UgxUO_Q-j…)
- "I don't think the real problem here is it replacing customer service jobs. In fa…" (ytc_UgzSpbgQW…)
- "I’ve always noticed that AI generated woman typically always have their mouth sl…" (ytc_UgwE6rwal…)
- "I can guarantee that food cooked by a AI will look and taste like shit…" (ytc_UgzRQ7wNY…)
Comment
I think I'd have to argue that you overestimate our ability to pull the plug on AI. It's not like nuclear power which has a resource governments can control. It's a computer program that can run on laptops. It's much more easily distributed, duplicated, hidden, grown.
And even if we halt, China & Russia won't. For good or ill, they'll get ahead of us, and once ahead , it could be impossible to catch up as more powerful AIs will build more powerful AIs much more quickly, leaving us behind in the exponentially accelerating dust.
And if the worst case scenario happens, it won't be Hiroshima. It will be a super intelligence that can outthink all of humanity created in a despotic country with the infrastructure to control their populations already in place. Our only defense will be a super intelligence of our own.
Frankly, I think the genie is already out of the bottle. We're going to have to learn how to build guardrails at the same time we're hurtling down a highway without brakes.
youtube · AI Governance · 2023-03-31T15:5… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UgweKnRttT0A3_N4xVZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugzq2lzR2K4bd2AXWJ94AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxVJzYBd4PntY_Q9714AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwKEuY1oPO2Czbr09d4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxRYpkAu3CbG6W0sWJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz7tqqVqDr5pMIhSC54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxCF9JFQRQbu8b_KrV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugy30UJP1X_qhFhXTuN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxOSOHEgsY1BNy1eR94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgytlOUcdWnU1p0IRcB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
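Each raw LLM response is a JSON array whose objects carry an `id` plus the four coding dimensions (`responsibility`, `reasoning`, `policy`, `emotion`). The lookup-by-comment-ID behavior can be sketched as below; this is a minimal illustration of parsing and indexing such a response, not the tool's actual code, and the helper name `index_by_id` and the shortened sample payload are hypothetical.

```python
import json

# Hypothetical sample payload mirroring the field names in the raw
# response above (array shortened to one entry for illustration).
RAW_RESPONSE = """[
  {"id": "ytc_Ugzq2lzR2K4bd2AXWJ94AaABAg", "responsibility": "government",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]"""

# The four coding dimensions plus the comment ID, as shown in the response.
EXPECTED_FIELDS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_by_id(raw: str) -> dict:
    """Parse a raw LLM response and map comment ID -> coded dimensions."""
    rows = json.loads(raw)
    for row in rows:
        missing = EXPECTED_FIELDS - row.keys()
        if missing:
            raise ValueError(f"row {row.get('id')} is missing: {missing}")
    return {row["id"]: row for row in rows}

coded = index_by_id(RAW_RESPONSE)
print(coded["ytc_Ugzq2lzR2K4bd2AXWJ94AaABAg"]["policy"])  # → regulate
```

Indexing by `id` makes a "look up by comment ID" query a constant-time dictionary access, and the field check surfaces malformed model output before it reaches the inspector.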