Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
As long as the AI does not get control of nuclear codes and starts creating robo…
ytc_UgxBJvYq8…
I wish ai art could be cool and an artist aid and ethical but when company what …
ytc_UgyIIzvvI…
using wayback to buckmark this bullshit is the best... Whose job did AI take ove…
ytc_UgxB0VtXc…
This foolishness building AI platforms that are dangerous enough to threaten hum…
ytc_UgwsPveh8…
What an absurd lie ---How can AI create weapons ? With what resources /raw mater…
ytc_UgwwpS7oD…
In my opinion the only aspect that avoids this catastrophic scenario is that:
1-…
ytc_UgzqIGnbk…
@dotdot5906 If you think that human art saved us from evolution, then I know yo…
ytr_UgwlKdf78…
Imagine autonomous, collaborative ropbears. You designate a hot zone, a ranked l…
ytc_UgwkUxLHg…
Comment
Then you got morons like me using several instances of LLMs counting to a million for no reason. Yes I know Gemini and others stop at 10 or 12s unless you word it correctly.
What I leaned from this video? I need animated videos of steaks counting to a million in 4k.
Source: youtube
Topic: AI Governance
Posted: 2025-12-16T19:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
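The dimension table above uses a small fixed vocabulary per dimension. As a minimal sketch, a coded record can be checked against that vocabulary before it is stored; the allowed-value sets here are inferred from the values visible in this view's raw LLM response, not from an official schema.

```python
# Allowed values per coding dimension, inferred from the sample output in this
# view (assumption: the real coding guide may define more values).
ALLOWED = {
    "responsibility": {"company", "user", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "liability", "ban", "none", "unclear"},
    "emotion": {"approval", "outrage", "fear", "mixed",
                "resignation", "indifference"},
}

def validate(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record passes."""
    problems = []
    for dim, allowed in ALLOWED.items():
        value = record.get(dim)
        if value not in allowed:
            problems.append(f"{dim}: unexpected value {value!r}")
    return problems
```

For the record shown above, `validate({"responsibility": "user", "reasoning": "unclear", "policy": "none", "emotion": "indifference"})` returns an empty list.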
Raw LLM Response
[
{"id":"ytc_UgyGyCTRAIbSSX2EmaB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugx1UBPuiXrOYThXSSN4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyyIj3khKoNlIlY1dt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyEARzUFBNTtIzupal4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzVsFfnhGQVekT1aVl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwCT8QBFZnolJnt-UJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugxx1HCkkvf-QOlxR_N4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgyUitp8O2MdK3I-bPh4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwSlgEo1djSvD6YfAp4AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgweQtUnPTPrUXmLPoZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]