Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- "Damn, the mainstream media really did this guy dirty in their reporting. He's no…" (ytc_UgxHcbWAy…)
- "the ai doesnt "take inspiration" it just TAKES. again, the quality is not th…" (ytr_UgyY1JWUj…)
- "Good luck to all you artist fighting against ai. Remember don't attack the compa…" (ytc_UgxO6ytBl…)
- "I refuse to use AI. So many people say it's all sinister, but then turn around a…" (ytr_UgxlopZ55…)
- "This is nothing amazing but a bit more processing with a bit more api layer with…" (ytc_UgwakndGt…)
- "Bill Gates is already thinking how he can use AI to take over the world👿😈😈…" (ytc_UgxosLeUX…)
- "If this AI is not assisting humans in engineering and sciences and high risk act…" (ytc_UgxxIqX9h…)
- "Fun story for you, buddy. Companies have invested in AI because of the promises …" (ytr_UgxCxbe5C…)
Comment
So it seems the question is will AI be obedient when it is vastly more intelligent than the most intelligent human being imo. So does it become conscious to make its own decisions the more intelligent it gets? I would have liked to ask if Mr Hinton or anyone other AI expert has seen AI make its own decisions
youtube · AI Governance · 2025-08-15T10:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugwn1OeUBybtoMhHjT94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgxVeKfaw9q95QUes3R4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_UgwvVDR2aHYjrpEDWEx4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyrHx5_Syff58yOD2Z4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxTX_yAZh9yLZfckk54AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyHDbe6x_1smtlYjbZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgyopZhnovNUhKabnSV4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxdvbBzrmM5jrv4MCd4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyQv7YwhAMUQO2edCl4AaABAg","responsibility":"unclear","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw6OScoX5YaI0DWeSZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"}
]
```
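A response like the one above can be parsed and checked before the codes are stored. The sketch below is a minimal example of that step, assuming the four dimensions shown (responsibility, reasoning, policy, emotion) and inferring the allowed values from the samples on this page; the real codebook may define more categories, and `parse_coding_response` is a hypothetical helper name, not part of any particular library.

```python
import json

# Allowed values per coding dimension, inferred from the samples above
# (assumption: the actual codebook may include additional categories).
ALLOWED = {
    "responsibility": {"none", "unclear", "ai_itself", "user", "developer", "company"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"ban", "regulate", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "resignation", "unclear"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response and index the records by comment ID.

    Raises ValueError if a record is missing a dimension or uses a value
    outside the allowed set, so malformed model output is caught early.
    """
    records = json.loads(raw)
    by_id = {}
    for rec in records:
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim}={value!r}")
        by_id[rec["id"]] = {dim: rec[dim] for dim in ALLOWED}
    return by_id
```

Indexing by ID mirrors the "look up by comment ID" view above: once the array is keyed by `id`, rendering a single comment's coding table is a dictionary lookup rather than a scan.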