Raw LLM Responses
Inspect the exact model output for any coded comment, or look one up directly by its comment ID.
Random samples (click to inspect):

| Comment preview | ID |
|---|---|
| The more we have done for us by AI the dumber we will get over time. It's alread… | ytc_UgzlVbRvN… |
| @ I liked the way one of my professors explained it. In most math, you’re given … | ytr_Ugw3Ty8sG… |
| Love this video because I hate AI and the harm it does. Extra love this video b… | ytc_UgwfoThWK… |
| deepfakes are absolutely not a form of "free speech" since the entire point is t… | ytr_Ugy5vuW3t… |
| How long until Tesla will have to change the name of this feature once again, to… | ytc_Ugxf32tUe… |
| The challenge is to eliminate unfounded bias while retaining the data-driven dif… | ytc_UgzctaOTg… |
| I'm disgusted that we are heading into this frontier completely unprepared. The … | ytc_UgyDrSVx9… |
| @Someone-dv7hw it has no relation whatsoever, but let's be honest. We both agre… | ytr_Ugyzy2seo… |
Comment

> "...with this technology the probability of doom is lower than without this technology..."
>
> Naively delusional or deceitful liar.
>
> Either way, we are being *told* we should "have a say" in how AI is developed, produced, and deployed while truth is that a relatively small handful of people push this technology, control it, and will continue to profit from it.
>
> And, none of us will have any choices about any of that, and this won't change.
>
> The change will be the level of invasive power over all humans, globally, to such an invasive extent that will render us prisoners to AI's controllers - until they themselves lose control.
>
> But sure, tell us again all about how we "have a say" here...

Platform: youtube | Topic: AI Governance | Posted: 2024-01-04T07:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgyJnXmJnUjI4BZw5cd4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwpYg_nreZwNpknQop4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgzjYp7Oaf7u7i6xvml4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugzm3XB7nXqrSR1ypFZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwTPRPTwe106_KghyB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyNfYpULfh_Ir9Ix1R4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgwO_VzN-pF3q4Py3c54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwkgvHDi4UVDVYQ8Ml4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugxm3y0DqVd6ftgejit4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxD2Yiu9gI7Z8nwxY54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
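The raw response above is a JSON array with one object per comment, each carrying the four coded dimensions. A minimal sketch of how such a response could be parsed, sanity-checked, and indexed for lookup by comment ID is below. The allowed value sets are inferred from the samples shown here, not from an official codebook, and the `parse_codes` helper is hypothetical:

```python
import json

# A one-row raw response in the same shape as the model output above.
RAW = '''[
  {"id": "ytc_UgzjYp7Oaf7u7i6xvml4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]'''

# Allowed values inferred from the sampled output; the real codebook may differ.
ALLOWED = {
    "responsibility": {"company", "developer", "distributed", "ai_itself",
                       "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"outrage", "fear", "resignation", "approval",
                "indifference", "mixed", "unclear"},
}

def parse_codes(raw: str) -> dict:
    """Parse a raw response array and index validated rows by comment ID."""
    rows = json.loads(raw)
    coded = {}
    for row in rows:
        # Reject rows whose values fall outside the inferred vocabulary.
        bad = [dim for dim, ok in ALLOWED.items() if row.get(dim) not in ok]
        if bad:
            raise ValueError(f"{row.get('id')}: invalid value for {bad}")
        coded[row["id"]] = {dim: row[dim] for dim in ALLOWED}
    return coded

coded = parse_codes(RAW)
print(coded["ytc_UgzjYp7Oaf7u7i6xvml4AaABAg"]["emotion"])  # outrage
```

Indexing by ID this way is what makes the "look up by comment ID" view cheap: one dictionary hit per inspected comment, with malformed model rows rejected at parse time rather than surfacing in the UI.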