Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
> AGI was predicted to be here in 2025. Now it is being predicted to be here at 2027. These predictions are made by companies that earn money by predicting their "glorious" future. There is still a chance that real AGI will actually never happen. There will be alot of "fake" AGI that will steal stuff from humans for other humans. That is for sure. But a world dominating super AI that will secretly control everything? Not so sure...

youtube · AI Governance · 2025-09-05T23:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response

```json
[
  {"id":"ytc_UgxOapG5hHq1Y038BQp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgytlBwHVkcVU7g__lB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugw0JtuCBB7hypN1NV94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwLk1GmVSWgWewv0-N4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugzw-SnDYw55J53RU7N4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyzSXfx31PY3SsjeBh4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwtRaQATfImxZFPgJd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgyCCwwriCBrfrtbj094AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugzks8X_Eu1uvlezuVt4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzFs39hrfbd8Hnq9u94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"mixed"}
]
```
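The raw response is a JSON array of per-comment codings keyed by `id`; the coding for any comment can therefore be recovered by indexing the array on that field. A minimal sketch (the field names come from the response above; the indexing approach is illustrative, not the tool's actual implementation):

```python
import json

# One entry copied verbatim from the raw LLM response above.
raw_response = """[
  {"id": "ytc_UgzFs39hrfbd8Hnq9u94AaABAg",
   "responsibility": "company",
   "reasoning": "consequentialist",
   "policy": "liability",
   "emotion": "mixed"}
]"""

# Index the batch by comment ID so a single coding can be looked up directly.
codings = {row["id"]: row for row in json.loads(raw_response)}

entry = codings["ytc_UgzFs39hrfbd8Hnq9u94AaABAg"]
print(entry["policy"], entry["emotion"])  # → liability mixed
```

The printed values match the Coding Result table for the inspected comment, which is how the inspector ties a rendered table back to the exact model output.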