Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "For all the techbros and CEO's. One moment the AI would be able to take the CEO'…" (ytc_UgwzIV0OB…)
- "Weird theory but bear with me: If Suchir had some underlying reason to commit su…" (ytc_Ugz0bOjPg…)
- "@mattbeisser3932 I could be wrong then, and they could be lying about storing t…" (ytr_UgxgBQWvE…)
- "It's different because fanart is very clearly that, fanart. It's clearly fake. A…" (ytr_UgzXAyNqR…)
- "Personally I think machines will be considered conscious on the day an AI tells …" (ytc_UgzSN3QL9…)
- "One of the reasons I left my last job, you already got my entire handprint now y…" (rdc_iyytbiq)
- "Senile senator is using AI scare to bring in socialism. We see a sample of it in…" (ytc_UgwoQbzTO…)
- "You’d be surprised, this is dripping down to everyday people having their lives …" (ytr_UgxzKQHd_…)
Comment
The video totally overlooks the fact that AI is hitting a massive development wall. It’s not just about "safety" or "scary tech" it's about the economic bubble and scaling stagnation. The tactic of just throwing more powerful hardware to the LLM's has totally stopped scaling. And with an internet being filled with AI, we’re running out of high-quality data, and the cost to eke out even tiny improvements of these models is becoming exponentially higher than any actual ROI.
The real reason researchers might be jumping ship isn't just "panic" over AGI: it's likely the realization that the current AI path is a financial dead end that costs way more than it can ever earn back... A bubble that is gonna pop no matter what.
Platform: youtube · Topic: AI Governance · Posted: 2026-03-17T03:5… · ♥ 12
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgzpZB2TwIDXQwv6Zcd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxWkOCjBPsxNPNil3p4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwwES1E11rMPgx7jlR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwcuJSMfFvBtvURPZV4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgxieUY_SVr4KHRGD_t4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugxll8Gz7SsOcqr3SpB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwY_vzB08zSWLljGGl4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgyEYKV9ah0Y7WJ3eiZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwCbE5lMXvD9rR_Hmp4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
{"id":"ytc_UgyXoPUvHeehIzSBZP54AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
```
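The "look up by comment ID" step above can be sketched in a few lines of Python, assuming the raw model response is valid JSON shaped like the batch shown (a list of objects keyed by `id`). The variable names and the two sample entries below are illustrative, not part of any specific tool:

```python
import json

# Raw batch response from the coding model, truncated here to two entries;
# the real response contains one object per coded comment.
raw_response = '''
[
  {"id": "ytc_UgzpZB2TwIDXQwv6Zcd4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgxWkOCjBPsxNPNil3p4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]
'''

# Index the codings by comment ID so any coded comment can be fetched directly.
codings = {row["id"]: row for row in json.loads(raw_response)}

coding = codings["ytc_UgxWkOCjBPsxNPNil3p4AaABAg"]
print(coding["emotion"])  # indifference
```

The same dictionary makes it easy to cross-check the rendered "Coding Result" table against the raw response for a given ID.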