Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "I feel as though in this scenario, the self driving car should chose the option …" (ytc_Ugz3NDLJm…)
- "pure scare mongering. AI is simulated intelligence not actual intelligence, it h…" (ytc_UgyxMQ4-x…)
- "Good, they need to enforce these arrest and punishment as a deterrence of any ki…" (rdc_oi1mclk)
- "the scary part is, they can create an IA of you doing something horrible that yo…" (ytc_Ugxh6f_RC…)
- "That was a reasonably easy drive on reasonably intact roads. I wonder how it'd …" (ytc_UgzVYDFAs…)
- "Please stop misrepresenting how AI art is generated. You say the AI can take art…" (ytc_Ugw1RwJSy…)
- "Why do I feel like these stories are AI generated. Some of the language used thr…" (ytc_UgxRQQL2k…)
- "Also, to what degree of conscious are we talking about before we give AI rights?…" (ytc_Ugyl7VVrh…)
Comment
Meanwhile Google used its "Gemini AI" to smear Conservative politicians and content creators by fabricating lies about them; implying they are p3dophiles or engaged in other criminal activities when there was nothing of the sort. They were also caught blackwashing historic figures (even the founding fathers of the US) and completely eliminating straight white men with any imagery shown depicting minorities (which was rather hilarious when we suddenly were faced with black people in Nazi uniforms).
So it seems Google doesn't care one bit about AGI ethics and safety; only on how AI may benefit Google's political agenda and power. Only after they got caught redhanded did they claim to do better but I suspect they will only try to hide it better.
edit: If the world will one day be destroyed by AI; we can be almost certain that either Google or Microsoft will be the cause of humanity going extinct.
youtube · AI Governance · 2024-03-02T05:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxD2aHzdcmX_pvWSbZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugw8iXRUXDWSfZPnaHJ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx72F4PC8z_4lnFLAx4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxbyR7GQIOS3bKGCqx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz24bCTlC-ruqvX_N14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxKEUedxDI4mG5lBOx4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyVZzU9a5nINBW7lqp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwQZKf1v6ckoe4oldJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxtUn4q_-kyLSgm2GZ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyYKAsGUZwbEZnECbF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
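The lookup-by-ID flow above can be sketched as follows. This is a minimal illustration, not the tool's actual implementation: `index_codings` and `EXPECTED_KEYS` are hypothetical names, while the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) and the example rows are taken from the raw response shown.

```python
import json

# Two rows copied from the raw LLM response above (hypothetical
# parsing helper; the annotation tool's real code may differ).
raw = '''[
  {"id": "ytc_UgwQZKf1v6ckoe4oldJ4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgxD2aHzdcmX_pvWSbZ4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"}
]'''

# The five coding dimensions every row must carry.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_codings(raw_json: str) -> dict:
    """Parse the model output and index the codings by comment ID,
    rejecting any row with missing or unexpected fields."""
    by_id = {}
    for row in json.loads(raw_json):
        if set(row) != EXPECTED_KEYS:
            raise ValueError(f"malformed row: {row}")
        by_id[row["id"]] = row
    return by_id

codings = index_codings(raw)
print(codings["ytc_UgwQZKf1v6ckoe4oldJ4AaABAg"]["emotion"])  # outrage
```

Validating the key set up front catches truncated or hallucinated rows before they reach the database, which matters when the payload comes straight from a model.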