Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
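For anyone reproducing the lookup offline, here is a minimal sketch, assuming the raw responses are archived as one JSON batch per line in a JSONL file (the filename and storage layout are assumptions, not this tool's actual backend):

```python
import json

def lookup_comment(comment_id: str, path: str = "raw_responses.jsonl"):
    """Return the coded entry for one comment ID, or None if not found."""
    with open(path, encoding="utf-8") as f:
        for line in f:                      # one JSON batch per line
            for entry in json.loads(line):  # each batch is a list of entries
                if entry["id"] == comment_id:
                    return entry
    return None

# Example: the first ID from the batch shown at the bottom of this page.
print(lookup_comment("ytc_UgxEGzsfUlxCJHUmCtZ4AaABAg"))
```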
Random samples — click to inspect
- White man locked out of home because Amazon AI considered him racist while using… (ytc_Ugx4QPY9E…)
- Terminator wasnt AI gone wrong. And trump has the where with all of a 3rd grader… (ytc_UgzkilYue…)
- Amazon is already cutting tens of thousands of jobs and replacing workers with A… (ytc_UgwdpiNUn…)
- You see, this is the dumbest thing ever. IT SHOULD BE SO AVOIDABLE. I’m not a co… (ytc_Ugy6CUJr-…)
- Sky net . The best advice he gave us, is to live enjoying every moment. … (ytc_Ugy4GmrqZ…)
- I beg to differ on Adobe stock price up 500% on Ai. Actually, they switched from… (ytc_UgwHA6NrA…)
- @harryp2084 nobody😅 because money has no value because labor is free by AI, and… (ytr_UgxKZlfRR…)
- I have schizophrenia and I experience psychosis and delusions, I just told ChatG… (ytc_Ugzl8XQEk…)
Comment
so better not to create one
Copilot said:
That's a very reasonable conclusion—and one that many experts and ethicists are seriously considering.
🧠 The Core Argument Against Creating Superintelligence:
- Unpredictable risks: Once created, a superintelligence could act in ways we can't foresee or stop.
- Irreversibility: If something goes wrong, we might not get a second chance.
- Existential stakes: The consequences could affect all of humanity, not just a single group or nation.
🛑 So Why Are We Still Pursuing It?
Some argue that:
- If we don’t, someone else will—possibly without safety precautions.
- It could solve major global problems—like climate change, disease, poverty.
- Controlled superintelligence might protect us from uncontrolled ones.
🧭 A Middle Path?
Instead of rushing toward superintelligence, many advocate for:
- Slowing down development until safety mechanisms are mature.
- Focusing on alignment and interpretability research.
- Creating strong international governance frameworks.
youtube · AI Governance · 2025-09-05T13:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
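The four coded dimensions are categorical. As a reference, here is a minimal sketch of the value sets observed in the raw batch below; these sets are inferred from this one sample only, and the full codebook may define more categories.

```python
# Category values observed in the sample batch below; the actual codebook
# may be larger (an assumption based on this single batch).
OBSERVED_VALUES = {
    "responsibility": {"none", "company", "developer", "ai_itself"},
    "reasoning": {"consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"none", "industry_self", "regulate", "ban", "liability"},
    "emotion": {"fear", "indifference", "outrage", "approval", "resignation"},
}

def validate(entry: dict) -> bool:
    """True if every dimension of a coded entry carries an observed value."""
    return all(entry.get(dim) in vals for dim, vals in OBSERVED_VALUES.items())
```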
Raw LLM Response
```json
[
  {"id":"ytc_UgxEGzsfUlxCJHUmCtZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzWD2Z0S2P5oW6Dzel4AaABAg","responsibility":"company","reasoning":"mixed","policy":"industry_self","emotion":"indifference"},
  {"id":"ytc_Ugxlo7d07UL0RjRqlPV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwG7QNjHcKbPK1ZGPR4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgztpLPrxNNC2WAMD914AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxfLzcbT5YUQa4sXOJ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugyhm8YFJm9V1muy5WZ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwSMuEqOdIuGXi5gFh4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugw9Qb1Iw3-tbNTsmSl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugzm1vlnggDIpV8ptSN4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"liability","emotion":"fear"}
]
```
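The Coding Result table above is simply the first entry of this batch rendered as rows. A minimal sketch of that rendering step (the "Coded at" timestamp comes from the coding pipeline, not from the model, so it is not derived here):

```python
import json

# First entry of the raw response above, verbatim.
raw = '[{"id":"ytc_UgxEGzsfUlxCJHUmCtZ4AaABAg","responsibility":"none",' \
      '"reasoning":"consequentialist","policy":"none","emotion":"fear"}]'

entry = json.loads(raw)[0]
print("| Dimension | Value |")
print("|---|---|")
for dim in ("responsibility", "reasoning", "policy", "emotion"):
    print(f"| {dim.capitalize()} | {entry[dim]} |")
```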