Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "The blueprint is the computer revolution that started in the 80's/90's. Tons of …" (ytc_UgzpNBckY…)
- "Nuclear weapons are a threat weighing international decision down with MAD. AI …" (ytc_UgzmhfLrc…)
- "I'm very worried about AI. I have seen Terminator, I have seen the Borg, and bey…" (ytc_UgyKvN5aX…)
- "Well, as a chicken… My logic can only tell me that it will attempt to eliminate …" (ytc_UgzZSrYeC…)
- "Nothing I hate more than AI. Well there’s one thing… those recipes that has 7 pa…" (rdc_nudhlj1)
- "Don't consult AI for anything: I want nothing to do with it! Down the road and t…" (ytc_UgyyJ8RkP…)
- "All LLM's will end up being skippity toilet on repeat. The internet has become a…" (ytc_UgzEUX_kG…)
- "4:54 THIS. I cannot get the thought out of my head that all of this AI bullshit …" (ytc_UgwrpRxPZ…)
Comment
When algorithms are built to capture our attention by reaching deep into our most instinctive, unconscious drives, they stop being neutral tools. They begin to manipulate, not serve. And when their sole purpose is to keep us engaged—feeding on emotion to fuel endless growth—they risk hollowing out what makes us human.
If we let this continue unchecked, it won’t just damage people—it could unravel the very system it was meant to benefit.
This is where regulation should step in. Not just to measure efficiency, but to ask what these systems are doing to us. And maybe the ones setting those boundaries shouldn’t be politicians or those with something to gain—but people who truly understand the technology, and who still hold a sense of responsibility for protecting what matters most: our minds, our communities, and the world we live in.
youtube · AI Governance · 2025-06-16T18:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
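
Each coded comment reduces to a flat record: four categorical dimensions plus a coding timestamp. The sketch below models such a record in Python; the dimension names come from the table above, while the allowed value sets are only those visible in the raw response below and may be incomplete.

```python
from dataclasses import dataclass
from datetime import datetime

# Value sets inferred from the raw response shown below; the real
# codebook may define additional categories.
RESPONSIBILITY = {"developer", "company", "ai_itself", "distributed", "none", "unclear"}
REASONING = {"deontological", "consequentialist", "mixed", "unclear"}
POLICY = {"regulate", "liability", "industry_self", "none", "unclear"}
EMOTION = {"fear", "outrage", "indifference", "resignation", "mixed"}

@dataclass
class CodingResult:
    comment_id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str
    coded_at: datetime

    def validate(self) -> None:
        """Raise if any dimension holds a value outside the assumed codebook."""
        checks = [
            ("responsibility", self.responsibility, RESPONSIBILITY),
            ("reasoning", self.reasoning, REASONING),
            ("policy", self.policy, POLICY),
            ("emotion", self.emotion, EMOTION),
        ]
        for name, value, allowed in checks:
            if value not in allowed:
                raise ValueError(f"unknown {name} code: {value!r}")
```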
Raw LLM Response
```json
[
{"id":"ytc_UgzDiZU493yEun7ATSB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzIAAizQJRZBPaJIth4AaABAg","responsibility":"company","reasoning":"deontological","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgzekWLeHeiRbQwqyTN4AaABAg","responsibility":"distributed","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzjCy_k7-vjywMvCp14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxatEKB_4tImoezFsp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyowGAVf4v7z_9d6cV4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgymaSC8979G1MjsnGB4AaABAg","responsibility":"distributed","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyIvnGaE9CrNLLshl94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwFQh-P2b2k3VIHIJF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgyNCgv5_tk1CMZER8R4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"}
]
```
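
Because the model codes comments in batches, the raw response is a JSON array, and the coding for a single comment has to be pulled out by its ID. Below is a minimal sketch of that lookup, assuming the array format shown above; the function name is illustrative, and a real pipeline would also need to handle malformed or truncated model output.

```python
import json

def index_raw_response(raw: str) -> dict[str, dict]:
    """Parse one raw batch response and index the codings by comment ID."""
    return {entry["id"]: entry for entry in json.loads(raw)}

# Two entries copied from the raw response above, for illustration.
sample = '''[
  {"id":"ytc_UgxatEKB_4tImoezFsp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzDiZU493yEun7ATSB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]'''

codings = index_raw_response(sample)
print(codings["ytc_UgxatEKB_4tImoezFsp4AaABAg"]["policy"])  # regulate
```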