Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "some say less than 50 years (kaku, bostrom, and kurtzweil i think). once we have…" (`ytr_Ugis_hrp8…`)
- "There is knowledge and then there is wisdom. Will AI have the wisdom to question…" (`ytc_UgzCHI2j3…`)
- "I always start with a fk-you thinking the AI will feel more like a mate but you'…" (`ytc_Ugw2e9164…`)
- "AI is scary for so many of us average Joe's. I hate how many will lose their job…" (`ytc_Ugx23BARJ…`)
- "@tfmg8223 But his point isn't anything. He's speaking of hypotheticals. Ask your…" (`ytr_UgwgOlLch…`)
- "Why ask for artificial humanity when you want artificial stupidity to repeat the…" (`ytc_UgxAg2PCG…`)
- "Remember Ed Snowden? How he blew the lid on how U.S. intelligence community mis…" (`ytc_UgzZF-l8q…`)
- "TELL ME HOW TO MAKE CHATGPT AGREE WITH ME? 🤔 My chatgpt is a silly it don't agr…" (`ytc_UgywjpDb4…`)
Comment
The danger is we are going to base the AI in space, with limitless solar power, beaming down to earth through massive wifi constellations into millions of robots... what could possibly go wrong? And unlike the film where we blow up ground stations.. these are untouchable... it will be fine.
youtube · AI Governance · 2025-12-30T19:0… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response

```json
[
  {"id":"ytc_UgwftAS0wyIPEcqGwhd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzsWCO4o_spcon71CJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugz629SqRlpm3rCay3B4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugy0vhhwZUyxOqALN3V4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugzxee8ZjMK4GnLC8RF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxKyUcuc1k4DjTSCzB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugw_3Y4AcF08qHEylXt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugw7BqWbx3Z0KcWE3x94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyNLxmKZY2wZHSuWul4AaABAg","responsibility":"user","reasoning":"virtue","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugy2pYbkJVazv8xzWM94AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"}
]
```
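The "look up by comment ID" step above can be sketched in a few lines: parse the raw LLM response as JSON and index the entries by their `id` field. A minimal sketch, assuming the response is a valid JSON array shaped like the one shown (the `raw_response` literal below reuses two entries from it for illustration):

```python
import json

# Raw LLM response: a JSON array of per-comment codes.
# Two entries copied from the response above, inlined as a literal.
raw_response = """
[
  {"id": "ytc_UgwftAS0wyIPEcqGwhd4AaABAg", "responsibility": "none",
   "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzsWCO4o_spcon71CJ4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]
"""

# Index the coded entries by comment ID for constant-time lookup.
codes_by_id = {row["id"]: row for row in json.loads(raw_response)}

# Look up the coding result for one comment.
code = codes_by_id["ytc_UgzsWCO4o_spcon71CJ4AaABAg"]
print(code["reasoning"], code["emotion"])  # consequentialist fear
```

Note this matches the Coding Result table above: the selected comment was coded `consequentialist` / `fear`.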