Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "The cr*ck pipe at 03:58 05/01/25 told me this discourse is dangerous because onc…" (ytc_UgwkG3MfD…)
- "Ai wows technically unskilled people and unfortunately all managers and company …" (ytc_UgxBzljUU…)
- "I see a lot of people giving ChatGPT pronouns like "he" or "she", and talking ab…" (rdc_mlgjgdq)
- "Ideally, ai detectors should target if something is AI by limiting the general-n…" (ytr_UgzlQKBbC…)
- "Do not use AI in any context and call out every single person you know who uses …" (ytc_UgzscHGwG…)
- "I doubt they can become conscious, that's like a toy robot for kids becoming con…" (ytc_Ugio6zncM…)
- "The self-driving car's reaction is faster and will prevent 99.99999999% of accid…" (ytc_UgjIfcNAo…)
- "James has also done variant covers for the Dynamite Disney comics and those clea…" (ytc_UgxXnh3Po…)
Comment
this is hella stupid.
It is so full of holes that you could use each of the holes to the double the brain capacity of any given “LLM’s are totally just 1-3 years away from AGI” proponents. Apart from real world practical stuff, such as people not noticing a data center deciding to build (and somehow power) hundreds of new data center, there are just some theoreticians that need to go and read some neuroscience papers., Eg even if you end up building a skyscraper sized data server in three years and get it to run so that what we would recognize as intelligence manifests as an emergent property of the hardware-software, it is pretty arrogant/stupid/technohyper optimistic to think that *this* form emergent intelligence would somehow know what makes it become a sentient intelligence. There is just no reason to think that an AGI in a huge server would be able to detangle what makes it it tick, anymore than we can explain our own emergent intelligence. Even modern neural networks just run as black boxes and cannot explain their own functions, and an AGI would be millions of time more complex.
Source: youtube · Topic: AI Governance · Posted: 2025-08-14T20:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_Ugw6zQ3tbOI2Y04Cbnx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyyxSQz13FMxlaATM94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgydsWrUaghYf1ElErt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzoKvibx8VavjvHGsd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwguGzCfJs4KwjWZKJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxC7SL1jgHJUwJMkpl4AaABAg","responsibility":"society","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzpqZfwTr4Ya5Z10hN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwVHGVKkjc6axcbzI14AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"mixed"},
{"id":"ytc_Ugx9VMoa4XEQAEZpGcl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzlEhDUifLS8lfcSlB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
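The raw response above is a JSON array with one object per comment ID, each carrying the four coded dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of looking one coding up by ID, assuming the response text is held in a string (the two-entry `raw_response` below is an abbreviated excerpt, and `lookup` is a hypothetical helper, not part of the tool):

```python
import json

# Abbreviated excerpt of the model's raw JSON response shown above.
raw_response = """[
  {"id": "ytc_Ugw6zQ3tbOI2Y04Cbnx4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxC7SL1jgHJUwJMkpl4AaABAg", "responsibility": "society",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]"""

def lookup(raw: str, comment_id: str):
    """Parse the JSON array and return the coding dict for one comment ID,
    or None if that ID was not coded in this response."""
    codings = json.loads(raw)
    return next((c for c in codings if c["id"] == comment_id), None)

coding = lookup(raw_response, "ytc_Ugw6zQ3tbOI2Y04Cbnx4AaABAg")
print(coding["emotion"])  # → outrage
```

Each row of the "Coding Result" table is just one such object rendered dimension by dimension, so the table and the raw response should always agree for a given comment ID.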