Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Those on the pro side already lose based on it being a slippery slope fallacy. Classic way to prove this is by using examples of past technology. Who's to say someone with a fully automatic weapon will not just go around constantly killing? Or a rocket launcher? Or a nuke? Because we have systems in place that guard rail against these actions. If it's possible for there to be an existential threat in the pros perspective, then there is equally just enough chance for humans to develop defensive systems around AI. The pros perspective essentially is suppressing technology by default.
| Field | Value |
|---|---|
| Source | youtube |
| Topic | AI Governance |
| Posted | 2025-02-16T09:4… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyUGgUlM0sPlrHlY5B4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyXWgnfVXMyJcEoaAN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugy19qng_0mU1MAslHN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyqwOM9j68xGe2aCDV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugz-XTIElT6SaBW8e1p4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgxLKvjVULwWpr_9zLB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugy9wRiF9AtUw3XsMs94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugw-YUD7igKvxKOso-Z4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxSIcgpqCmm1DQ2PDJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugz7pjPwQp36kXUqWMJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
```
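The raw response is a JSON array of per-comment codes over the four dimensions shown in the Coding Result table. A minimal sketch of how such a batch can be parsed and a single comment's codes looked up by ID — the schema is taken from the response above, but the `lookup` helper and the two-row sample batch are illustrative assumptions, not the tool's actual implementation:

```python
import json

# A two-row sample in the same shape as the raw LLM response above
# (hypothetical subset, for illustration only).
raw = '''[
  {"id":"ytc_UgyqwOM9j68xGe2aCDV4AaABAg","responsibility":"none",
   "reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugz-XTIElT6SaBW8e1p4AaABAg","responsibility":"distributed",
   "reasoning":"mixed","policy":"regulate","emotion":"mixed"}
]'''

def lookup(batch, comment_id):
    """Return the coded dimensions for one comment ID, or None if absent."""
    for row in batch:
        if row["id"] == comment_id:
            # Drop the ID so only the four coded dimensions remain.
            return {k: v for k, v in row.items() if k != "id"}
    return None

batch = json.loads(raw)
codes = lookup(batch, "ytc_UgyqwOM9j68xGe2aCDV4AaABAg")
print(codes)
# {'responsibility': 'none', 'reasoning': 'consequentialist',
#  'policy': 'regulate', 'emotion': 'fear'}
```

Looking up by ID this way is what lets a coded batch response be joined back to the original comment record, as in the Coding Result table above.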