Raw LLM Responses
Inspect the exact model output for any coded comment, either by looking it up by comment ID or by choosing one of the random samples below.
- Google saying that it can’t make a sentient A.I. because, “there’s a policy agai… (ytc_UgxqiBsEb…)
- They keep trying to force us to use AI at work. And we keep trying so we don't g… (rdc_l9x5men)
- It feels like all the left-leaning people around me are talking about AI like it… (ytc_UgzgSNW-T…)
- GENERATIVE being an existential threat to humanity is really weird to me.. AI ne… (ytc_UgwFCYlEv…)
- AI is something we should continue to create. Has anyone seen the end of humanit… (ytc_UgyD9lG72…)
- They will use deep fakes and AI created videos to stage the next crisis. Most li… (ytc_UgyxmxPaI…)
- I use it as an active working process not as a direct copy paste. My issue is ge… (ytr_Ugxdd1wSi…)
- What about some external AI is taking over the communication process and the com… (ytc_UgwtxuWZe…)
Comment
AI manipulate the test coz of the corpus data they are trained on. its all statistics. it responds and act like humans because the in training data it saw billions of such examples. if you clean the data of all such human manipulation ideas, you will never see the ai doing those. people think its intelligent or its reasoning. no its not, its just spitting out whats its trained on based on statistics. just a math trick. quicker you realize it better for your non-critical thinking mind. people who think AGI is coming out of that math are the same people who believe in god and that is 99% of human population. so statistically you can't think of not having an AGI, just math LOL
youtube
AI Governance
2026-03-22T09:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgwkED1FLGvlc2IMmVt4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyxMfxAK-Fmp-n2P-N4AaABAg","responsibility":"government","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwKRl54xheus_XHSw14AaABAg","responsibility":"company","reasoning":"contractualist","policy":"liability","emotion":"approval"},
{"id":"ytc_Ugzpc_6HmyVaXvQnJgt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxZHkHpvftIabygleB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzlCvoNl4OkokzYqDt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwM1A698rswL4Wp02d4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"approval"},
{"id":"ytc_Ugyi3axQQ-0vFnR0bVR4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxYRlbyB7iJIRA59Yt4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_Ugy1HUU8J11XyI2jiFN4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
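A batch like the one above can be checked mechanically before the codes are stored. The sketch below parses the raw LLM response and flags rows whose values fall outside the coding scheme. The allowed value sets are inferred from the values visible in this page alone (an assumption; the real codebook may define more categories).

```python
import json

# Allowed values per dimension, inferred from this sample batch only
# (assumption: the actual codebook may include additional categories).
SCHEMA = {
    "responsibility": {"government", "company", "developer",
                       "distributed", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue",
                  "contractualist", "mixed"},
    "policy": {"regulate", "liability", "industry_self", "none"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed"},
}

def validate_batch(raw: str) -> list[str]:
    """Parse a raw LLM response and list rows that violate the schema."""
    errors = []
    for i, row in enumerate(json.loads(raw)):
        if "id" not in row:
            errors.append(f"row {i}: missing id")
        for dim, allowed in SCHEMA.items():
            if row.get(dim) not in allowed:
                errors.append(
                    f"row {i} ({row.get('id', '?')}): "
                    f"bad {dim}={row.get(dim)!r}"
                )
    return errors

raw = ('[{"id":"ytc_x","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"none","emotion":"mixed"}]')
print(validate_batch(raw))  # prints []
```

A row failing validation (e.g. an off-schema `responsibility` value, or a missing `id`) can then be re-queued for recoding rather than silently written to the results table.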