Raw LLM Responses
Inspect the exact model output for any coded comment. Records can be looked up directly by comment ID; a minimal sketch of such a lookup follows.
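As a rough illustration of a programmatic lookup (the file name `coded_comments.json` and the record layout are assumptions inferred from the raw batch output shown at the end of this section, not a documented interface):

```python
import json

def load_coded_comments(path="coded_comments.json"):
    """Load coded-comment records into a dict keyed by comment ID.

    Assumes a JSON array of objects shaped like the raw LLM response
    below, e.g. {"id": "ytc_...", "responsibility": ..., ...}.
    """
    with open(path, encoding="utf-8") as f:
        records = json.load(f)
    return {rec["id"]: rec for rec in records}

# Usage: look up one comment's codes by its full ID.
coded = load_coded_comments()
record = coded.get("ytc_UgxUpWrqOtfeJUqbHoB4AaABAg")
if record is None:
    print("No coding found for that comment ID")
else:
    print(record["responsibility"], record["policy"])
```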
Random samples from the corpus are listed below, each with its truncated comment ID; the ID prefix apparently encodes the source platform (ytc_ for YouTube comments, rdc_ for Reddit).
- "i always laugh when people say AI will take people's jobs, given that we're stil…" (ytc_Ugz-Tchps…)
- "And then you join a company or institution where they only care whether your lig…" (ytc_UgzUHCjsI…)
- "If you ask Gemini if blackness should be eliminated, it will chastise you for ev…" (rdc_ks2oo4x)
- "It is a very difficult task for humans to properly judge distance in the dark wh…" (ytc_UgxlZO5_A…)
- "Nice try. Classic propaganda video. Imagine the ammount of energy and maintenanc…" (ytc_Ugyt8rb7S…)
- "AI is not intelligent, it has no opinion and no personality. Human can connect t…" (ytc_UgxzHEaMd…)
- "So as an artist idk how but I’ve managed to tell what is ai “art” and what is re…" (ytc_UgziZFRvJ…)
- "If there isn't much difference between using AI and doing art, then why not just…" (ytc_Ugy9s8YJj…)
Comment
Yudkowsky and Wolfram agree they want humans to go on living and not be subjugated or eliminated by an AI. Yudkowsky believes the risk is high enough to warrant heavy government regulation and, if needed, intervention to minimize the risk to the maximum degree possible. Wolfram does not see the risk as being high enough to invoke heavy government regulation or intervention, apparently relatively certain the artifact of his theory of 'computational boundedness' will act as a natural barrier to any act an AI could conduct that would significantly hurt us. I personally am in the first camp but possibly for a different reason than Yudkowsky. I'm in the first camp not because I believe AI is highly likely to 'evolve' to a point where it will initiate actions that will significantly impair or destroy humans, but because I believe humans will develop AIs that they will purposefully enable to initiate actions that will significantly impair or destroy other humans.
youtube · AI Governance · 2024-12-11T05:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
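The four dimensions and their labels can be captured as a small schema. The label sets below are a hedged reconstruction from the values observed in this section's table and batch output; the coder's full codebook may be larger:

```python
from dataclasses import dataclass

# Label sets observed in this section's data; the full codebook may differ.
RESPONSIBILITY = {"developer", "user", "ai_itself", "distributed", "none", "unclear"}
REASONING = {"consequentialist", "deontological", "virtue", "unclear"}
POLICY = {"regulate", "liability", "industry_self", "none", "unclear"}
EMOTION = {"fear", "mixed", "approval", "outrage", "indifference"}

@dataclass
class CodedComment:
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def validate(self) -> bool:
        """Check each dimension against the observed label sets."""
        return (self.responsibility in RESPONSIBILITY
                and self.reasoning in REASONING
                and self.policy in POLICY
                and self.emotion in EMOTION)
```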
Raw LLM Response
```json
[
{"id":"ytc_Ugzyw7P6UIG7qr9orm94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugz5qfO2p5ouopqxF9J4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugw5jx3JN_iJjVdgF-V4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxgabcdIuRhNkDAGoZ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzK0cxdklJv4XjEKQV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugwk38JoiF5nupttEiV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"outrage"},
{"id":"ytc_UgxUpWrqOtfeJUqbHoB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugw9Yn37_qtH16HPxL54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzRiCvRXTjY9wSaOpB4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgxrxwC9GQeGPZSOxHV4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"mixed"}
]
```
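A minimal sketch of turning one such raw batch response into validated records, assuming the `CodedComment` class sketched above and that the model reliably returns a well-formed JSON array (a real pipeline would also need error handling for malformed output):

```python
import json

def parse_batch(raw: str) -> list[CodedComment]:
    """Parse a raw LLM batch response into coded records, dropping invalid rows."""
    rows = json.loads(raw)
    records = [CodedComment(**row) for row in rows]
    return [r for r in records if r.validate()]

# Usage with the batch shown above (abbreviated to one row):
raw = ('[{"id":"ytc_UgxUpWrqOtfeJUqbHoB4AaABAg","responsibility":"distributed",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
for rec in parse_batch(raw):
    print(rec.id, rec.emotion)
```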