Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
I think the prediction has several false assumptions - That there will only be 2 AIs, and that they will secretly work together.
But that's not true. There are thousands and thousands of open source AI models today, with hundreds of millions of downloads.
So in the future, there will be millions of different AIs running independently of each other. With no secret cooperation between them.
Also humans maybe be stupid in compared to an AI intelligence, but humans don't want to let go of power... Meaning even if the AI is smarter, humans will still want to be in control of AI... And we will want to have an off swich for all datacenters...
Source: youtube · Topic: AI Governance · Posted: 2025-08-02T10:2… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_UgxrvlogwJZB58hOB_94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwoUx87KXqIWbwN1Q14AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxGImwA63MuO5spN-F4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzrSXfQ9Bjmmnq2FQt4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugzrhff0H_Xx4-NH7Mh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxMTt3LriXXuv4xDjx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxddJo4Ept2MASBc114AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugwsj7GVSNn7RBI2O614AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"approval"},
{"id":"ytc_Ugy1ISDQceYi74QCqG14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz2IUP-6IzJk8ppuxl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
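A raw response like the one above can be parsed and sanity-checked before the codes are stored. The sketch below is a minimal validator, assuming the category sets are exactly those that appear in this sample (the real codebook may define more values); the `SCHEMA` constant and `validate_coding` helper are illustrative names, not part of the actual pipeline.

```python
import json

# Allowed values per dimension, inferred from the sample response above
# (assumption: the real codebook may include additional categories).
SCHEMA = {
    "responsibility": {"none", "ai_itself", "government", "company", "user", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "ban", "regulate", "liability", "unclear"},
    "emotion": {"fear", "indifference", "mixed", "outrage", "approval"},
}

def validate_coding(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed records.

    A record is kept when it has an "id" and every coded dimension
    takes a value from SCHEMA; anything else is silently dropped.
    """
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue
        if all(rec.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            valid.append(rec)
    return valid

raw = '[{"id":"ytc_x","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}]'
print(validate_coding(raw))
```

Dropping malformed records (rather than raising) keeps a batch usable when the model occasionally emits an out-of-vocabulary label; a stricter pipeline could log or re-prompt on each rejected record instead.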