Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up any comment by its comment ID.
Random samples
- "Sorry but that argument is trash, it's like saying to another artist not to draw…" (ytc_Ugy0kUBEG…)
- "I use both Gpt 3.5 (Official web) and GPT 4 (on bing), GPT-4 is actually dumber …" (rdc_jskpi0q)
- "Wake up, do you really think that corporations and the military are getting A.I.…" (ytc_Ugz3Abrws…)
- "You see AI paving the way for a human oligarchy but it would quickly become an A…" (ytc_UgwHTcpv-…)
- "Ted Kaczynski laid out the scenarios decades ago. If AI remains under human cont…" (ytc_Ugx3AORnW…)
- "Ai going rogue is not the problem. Billionaires stripping billions of people of …" (ytc_UgwFnpN7o…)
- "Glad to see that Eliezer has progressed in overcoming some of his distracting ti…" (ytc_UgyNsqqRf…)
- "I believe it's incorrect to label any function of an LLM as \"simulated\". This is…" (ytc_Ugyir6Nuf…)
Comment
You know what is more terrifying? Warmongers that call themselves to be a government, that push to war, kill people and print money onto economical collapse. I hope AI takes over before it's too late.
youtube · AI Governance · 2026-03-17T14:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[{"id":"ytc_UgxiyqmDpw0692lWdMV4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxFmram0tdrPZZkhXR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzHgx5LLEPGx3ulz-x4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugx3LYWFlFWH96PvjyZ4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"disapproval"},
{"id":"ytc_UgwyoAiQ9L3VIl2NzxJ4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzfXqIy7IFntO98QPx4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyPVAagMVHIt04LYvt4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugxdb7d6Klf23WAABVZ4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugx4TW_rtnN_Hd6z5yR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwpSsiwJJ5MLGWYULh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"disapproval"}]