Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
Simply put ai can't replicate creativity nor can it truely create something with…
ytc_UgyJXG7mt…
Musk! You may shoot my Tesla
Robot! I'm going to try out from u...
Musk/ Hold on…
ytc_UgwAWurUg…
That wasn't a cut-off though. The AI used the available space to get over, measu…
ytc_UgyHEqsNE…
AI will probably destroy humanity by sucking all of the electrical power from th…
ytc_Ugzq9j4v8…
So wait, the guy disabled the automated system because it 'wasn't driving fast e…
ytc_UgxtiWOqF…
hah look at this weirdo not engaging in a disturbing parasocial relationship wit…
rdc_myaq049
been using GPTHuman AI for my posts and it’s been solid so far. easy to use and …
ytc_UgyFfZJiV…
He is a businessman , he is selling expectative, never believe on businessman on…
ytc_UgzVTm4qN…
Comment
The income of the future should be based on your health. This is a Don't Die concept that Bryan Johnson doesn't truly speak on frequently. That we program A.I. (so yes, we pause it and program it appropriately, or guide spiritually whichever...I will digress here because this would take too long,) to focus on making sure the planet and all of it's innards "Don't die," and this also becomes our income. This will most likely go wrong in many different ways, BUT it will take SO much longer to end in total destruction. It will at LEAST give us time to evolve properly, appropriately, and then GTFO. - My thoughts are my OWN y'all... or whatever.
youtube
AI Governance
2025-12-04T16:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyYYLacM0YRJHRXXe54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwSvM64Yp2FM_0zRHF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwKqfrV16YItYINF_l4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwGIzChvluB3KdjLI14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxECJw8Eem0RNQOV9d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugzp0GOiiZaJmCgNpOl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugyd2brReOaKLgZBrxN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugx6d3Ih_7GYFiZTq2Z4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzV2wY2YZMkVFXZHgt4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyOnzx7t5JwmBiDvq14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
```
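A batch response in this shape can be parsed and sanity-checked before the rows are stored. The sketch below is a minimal illustration, not the tool's actual pipeline: the allowed values per dimension are inferred only from the samples shown above (the full codebook may contain more), and the `ytc_`/`rdc_` ID prefixes are likewise assumptions taken from the IDs in this view.

```python
import json

# Allowed values per dimension, inferred from the sample rows above.
# This is an illustrative subset, not an official codebook.
SCHEMA = {
    "responsibility": {"distributed", "company", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"approval", "fear", "outrage", "mixed", "indifference"},
}

def parse_batch(raw: str) -> list[dict]:
    """Parse a raw LLM batch response, keeping only rows that validate."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        # Assumed ID convention: "ytc_" for YouTube, "rdc_" for Reddit.
        if not row.get("id", "").startswith(("ytc_", "rdc_")):
            continue
        if all(row.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            valid.append(row)
    return valid

raw = ('[{"id":"ytc_UgyYYLacM0YRJHRXXe54AaABAg",'
       '"responsibility":"distributed","reasoning":"consequentialist",'
       '"policy":"none","emotion":"approval"}]')
print(len(parse_batch(raw)))  # → 1
```

Rows that fail validation are dropped silently here; a real pipeline would more likely log them and re-queue the comment for recoding.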