Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
AI learns using evolutionary principles. Humans are not necessary to inculcate …
ytr_Ugyxq-YIK…
Well I'm sure you're right, but don't you find it amusing that Alberta has moved…
ytr_UgzUKa7fv…
AI is just gonna benefit the super rich, and not lead to people having to work l…
ytc_UgzZbzBVs…
Investigate why Google was allowed to buy up Canadian AI technology.
Investiga…
rdc_jfr5fcj
Consciousness - ‘I think we’ll stop using this term,’ it concerns me that someon…
ytc_UgzCc7JEY…
Your experts are incorrect. I use self-driving nearly every day, and it works g…
ytc_UgyUtLVfu…
AI will be purely Satanic. Wait and see. But of course by then it will be too la…
ytc_UgzWJJohP…
The only problem with AI is that AI needs humans more than humans need AI. Witho…
ytc_UgwAF1Nur…
Comment
This whole thing is horrible, but youd better believe that theres a segment of the population, even beyone LE and the government, who think its great, except that there's no oversight on the biggest sovereign citizens of our nation - law enforcement, who have used them to retaliate against citizens, used them to spy on their family and friends (ex wives & girlfriends), and will be sure to use them for any future nefarious reason may convenience them next.
Why not just make every citizen place a 360 degree camera in every room in every house or building? I'm sure that would solve a lot of crime too. Hell, while we're at it, why not have a GPS tracker on every single ankle in America, that also tracks your vitals, and have that connected to LE and the NIH? Then both LE and the government can keep us all healthy, make sure we all get our 8 hours of sleep each night, ensure our TV programming doesn't excite us too much, make sure we eat right.
Well, Hell....if we're going that far, we'd better let the government determine what kind of media we take in, and what kind of conversations we all have. That way, there will be a protective uncle (Sam), to make sure we don't stray from legality, of ever roll of the peak of "moral highground hill."
Well shit! If we've already gone that far, I guess we should also start the real life "future crimes bureau." That way an AI algorithm can figure out which ones of us might commit a future crime, and just incarcerate us before hand, just to keep everyone else safe.
youtube
2025-08-21T02:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgywErIZ7Zf9AWnkqT14AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzvEGpdx6J8o4bR7sl4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxwxAJy6GRukxBu_yZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxakgLdeI7jQCrExed4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgyzIE-i5pMEydHOE1J4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgxeDQHFq-JLQCRMuSt4AaABAg","responsibility":"company","reasoning":"virtue","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugwk89a1PfgKzrS4kvZ4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzX1Twsqd5mhQhrfTl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwHDtBbwcg7KYRaLu54AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxSzj8_hiXMdOfa4RJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
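
As a minimal sketch of how a raw response like the one above could be consumed downstream: the model returns a JSON array of records, each carrying a comment `id` plus one label per coding dimension. The allowed label sets below are inferred only from the values visible in this dump and may be incomplete; the function name and validation rules are illustrative assumptions, not the tool's actual implementation.

```python
import json

# Label vocabularies inferred from this dump; likely incomplete.
ALLOWED = {
    "responsibility": {"government", "company", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue"},
    "policy": {"none", "liability", "ban", "industry_self", "regulate"},
    "emotion": {"resignation", "outrage", "indifference", "approval"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM response (a JSON array of coded comments),
    dropping any record with a missing id or an unknown label."""
    valid = []
    for rec in json.loads(raw):
        if not rec.get("id"):
            continue  # every record must reference a comment ID
        if all(rec.get(dim) in labels for dim, labels in ALLOWED.items()):
            valid.append(rec)
    return valid

# Example with one well-formed record from the response above:
raw = ('[{"id":"ytc_UgyzIE-i5pMEydHOE1J4AaABAg",'
       '"responsibility":"government","reasoning":"deontological",'
       '"policy":"regulate","emotion":"approval"}]')
records = parse_coding_response(raw)
print(records[0]["policy"])  # → regulate
```

Validating against a fixed vocabulary this way catches the common failure mode where the model invents an off-schema label, rather than letting it silently enter the coded dataset.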