Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Posting something in public doesn't automatically mean your letting other people…" (ytr_Ugzw9qSY5…)
- "I'm going back into local touch freight trucking soon. Unless you're going to ha…" (ytc_UgzuMF1h7…)
- "The worst thing about Anthropic is how they want you to think they're the good g…" (ytc_UgzN-VU4v…)
- "All those ai models do is scrape Wikipedia. I'd like to see one that can read th…" (ytc_Ugyvkn1EQ…)
- "God I hope. Why would they regulate AI in videogames? Do they even care about th…" (ytr_Ugw8wbAAR…)
- "> People are absolutely right when they say the planet doesn't give a fuck ab…" (rdc_gtcx22c)
- "Pr signature to ek doctor hi krega kyoki doctor us report responsibility leta ha…" (ytc_Ugyeh-Uhi…)
- "Dude, it's already too late. I'm sure there's a chinese or russian genius alread…" (ytc_Ugy8SFPvg…)
Comment
In regard to all the AI doomsday speculations, I would challenge the idea(s) or thought(s) with this comment: Maybe AI can do it better! I mean, can you stand there with confidence and say we are doing a good job as a whole in human society, managing the planet, natural resources, society, etcetera? I would say we are failing on epic levels, and that maybe a system that takes the whole picture into account would do a better job than a single group of people or a single individual putting their personal interests first. Maybe it is time for us to step aside and see if a system can meet the actual needs of society, the individual, and the planet as a whole, because the system we have in place right now is failing on a catastrophic level, and it is playing out right in front of us without AI influence. The proof is in the news and the chaos, just saying. P.S. No one cares about losing a shitty job, they only care about their needs being met, eye roll, duh! No little kid says I want to be a factory worker when I grow up.
youtube · AI Governance · 2026-01-26T21:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
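Each coded record carries the four dimensions shown in the table. A minimal validation sketch in Python — the allowed-value sets below are inferred only from the records visible on this page, not from a published codebook, so a real schema may include more categories:

```python
# Allowed values per coding dimension, inferred from the records on this page
# (illustrative only; an actual codebook may define additional categories).
SCHEMA = {
    "responsibility": {"none", "user", "company", "developer", "distributed"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "regulate"},
    "emotion": {"approval", "fear", "mixed"},
}

def validate(record: dict) -> list[str]:
    """Return a list of problems with a coded record (empty if it passes)."""
    problems = []
    for dimension, allowed in SCHEMA.items():
        value = record.get(dimension)
        if value not in allowed:
            problems.append(f"{dimension}: unexpected value {value!r}")
    return problems

# The coding result from the table above:
coding = {"responsibility": "none", "reasoning": "consequentialist",
          "policy": "none", "emotion": "approval"}
print(validate(coding))  # -> []
```

A check like this is useful before accepting a raw LLM response, since models occasionally emit values outside the requested category set.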
Raw LLM Response
```json
[
  {"id":"ytc_Ugw92Z1z9d7FJnfxLb94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwPKps9rW_4WQVg49x4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgziUVYtYzI5OpwnMD14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzLF3JXHuj0VKxqCaF4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugz8AdGubC8bLR4SUB94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgytLNZisHUL4tNr1aB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgwGvi9QgcfDSAovTrZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugy3ukK8OORya7kB9XB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxrwuA6Bcst6jibCr14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxxWan8ZXvKMncMwmh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
```
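The "Look up by comment ID" feature amounts to indexing this array by its `id` field. A minimal sketch in Python — the variable names are illustrative, and the two sample records are taken from the response above; only the field names `id`, `responsibility`, `reasoning`, `policy`, and `emotion` come from the actual payload:

```python
import json

# Raw LLM response: a JSON array of coding records, one per comment.
raw_response = """
[
  {"id": "ytc_Ugw92Z1z9d7FJnfxLb94AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgytLNZisHUL4tNr1aB4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "regulate", "emotion": "mixed"}
]
"""

# Index the records by comment ID for constant-time lookup.
codes_by_id = {record["id"]: record for record in json.loads(raw_response)}

record = codes_by_id["ytc_Ugw92Z1z9d7FJnfxLb94AaABAg"]
print(record["reasoning"])  # -> consequentialist
```

Because the model returns one record per comment in a single batch, a dictionary keyed on `id` lets the inspector jump straight from a comment to its coding without rescanning the array.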