Raw LLM Responses
Inspect the exact model output for any coded comment, or look one up directly by comment ID.
Random samples
I have been driving a Tesla Model Y with AutoPilot and a subscription to FSD on …
ytc_UgyRHf6EN…
As a disabled person who's hands REGULARLY flare up to the point of pain and inu…
ytc_UgzZYppGh…
Machine Learning is a sub field within Artificial Intelligence where a machine w…
ytc_Ugzsy3xwY…
AI doesnt do what it does thoughtfully, so its not the same thing with humans.
…
ytc_UgyGXUUfh…
Here’s the thing though.
Hiring a Ai, is not cheaper than hiring a human.
A A…
ytc_UgxvmpTz5…
As far as I'm concerned, the only way to have consciousness is to have feelings …
ytc_UgiwpEgnk…
Ai do not think they function on set of commandments pre-programmed by the human…
ytc_Ugy0Bg7i1…
With how the opinions of the people in the us seem to not have much sway on nati…
ytc_UgxDmiAWh…
Comment
The belief that we as a society, would relinquish the power of an AI or worse, AGI, to be exclusively used by a Worldwide Government, is absolutely ludacris. Why is Geoffrey blaming companies? Because companies and their utility of AGI would exceed that of the Government capability and would be able to, along with society in general, keep the Governments use of it in check. AI is the future "weapon" society will need to protect itself from bad actors that include Governments. Guns will basically be effectively rendered useless against this technology in terms of protecting yourself, your property, and your inaliable rights. If you lose, in effect, the 2nd Amendment by losing access to AI, society will be left defenseless. Geoffrey has substantial benefit to gain from his vision, holding a prominent political and technological role in a world-wide government as its chief AGI architect. I cannot imagine a worse situation. Giving an all powerful government, with absolute global control, absolute power. Geoffrey vision is worse than the modern day Oppenheimer.
youtube
AI Governance
2025-06-16T16:2…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugzbfr58Vi_2kWOEvSN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxSBX2ZWxLgE1SfIAh4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgxnjEJigcgIkcpmEmp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugyb4zVfTw9Z7ez4EIh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxHDJWYmnovNazeDh94AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugxt41MZMXzszpssEPd4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxXA3t6K4KFYSdhdbR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugz4oAsKkKsfWeFlhpJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwT5Sv5doQu2QZnDe14AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgyHjNkMj28YtJmmqLF4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"}
]
```
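The lookup-by-comment-ID workflow can be sketched as follows: parse the model's JSON array and index the coded dimensions by `id`. This is a minimal illustration, not the dashboard's actual implementation; it assumes the response is valid JSON with the field names shown in the raw response above, and uses two of the real IDs from it.

```python
import json

# Raw LLM response: a JSON array with one coding object per comment
# (abbreviated here to two entries from the response shown above).
raw_response = """
[
  {"id": "ytc_Ugzbfr58Vi_2kWOEvSN4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxSBX2ZWxLgE1SfIAh4AaABAg", "responsibility": "unclear",
   "reasoning": "mixed", "policy": "unclear", "emotion": "approval"}
]
"""

# Index every coding object by its comment ID for O(1) lookup.
codes_by_id = {item["id"]: item for item in json.loads(raw_response)}

# Look up the codes assigned to a specific comment.
result = codes_by_id["ytc_Ugzbfr58Vi_2kWOEvSN4AaABAg"]
print(result["responsibility"])  # company
print(result["emotion"])         # approval
```

In practice the same index could back both views on this page: the random-sample list iterates over `codes_by_id.values()`, while the ID search is a single dictionary lookup.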