# Raw LLM Responses

Inspect the exact model output for each coded comment.
Random samples:

- `ytc_Ugxg7swDP…`: "Green energy, Virtual Reality, Electric cars, 3D printing and now AI. All big m…"
- `ytc_UgwKkS-bd…`: "strange as Einstein once quoted the only think infinite is human stupidity how i…"
- `rdc_gd8ymev`: "What were they expecting people to say in the survey? AI should make its own dec…"
- `rdc_cthnv9b`: "Dear Professor Hawking, If you were 24 or 25 today and just starting your resea…"
- `ytc_UgxzX7yrD…`: "Half the robot's jokes went right over the interviewers head. That's why they're…"
- `ytc_UgzKu_aDb…`: "I’m a 75 yo pretty into tech but I find myself scrolling. I had to take more ti…"
- `ytc_UgzVA3SSI…`: "The chances of being exterminated by intelligent robots in the future are very l…"
- `ytc_Ugwar_q63…`: "I agree with your ai opinions, it really can helps enhance your writing and it’s…"
## Comment

> AI is far more ominous than the atom bomb. All these founders who developed AI and now declare someone needs to stop it, piss me off. Bloody cheeky to cry 'I'm scared of the Frankenstein monster I created stopping it is now everyone's responsibility. No, it is their fault and they should be held accountable. There will be no breaks because nations want to exploit AI for military purposes and will never stop or ever trust other nations halted AI development.

youtube · AI Governance · 2023-07-07T12:5…
## Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
## Raw LLM Response

```json
[
{"id":"ytc_UgyYK6Pl_7tuhZ-0z4B4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugw_lS5Ed2T8VWsT4bZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwR4e2yVi1QTz60BTJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxfRLTioDl4jNoKKWN4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzlKMXO626NIvk9jr14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugy7Y0w3NnD1kv9Vm494AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzW02AiHkfiSy1TUjF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgylhP1uYsj_w84MEi14AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzG1JH5Rq6nzhjSIh94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzRYFYqTtKUreLXXUJ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
```
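A response in this shape can be checked programmatically before the codes are stored. The sketch below is a minimal validator assuming the value sets visible in the examples above (the real codebook may define additional categories; the `SCHEMA` dict and `validate_response` helper are illustrative names, not part of the tool):

```python
import json

# Allowed values per dimension, inferred from the coded examples above
# (assumption: the actual codebook may include more categories).
SCHEMA = {
    "responsibility": {"developer", "company", "ai_itself", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"outrage", "fear", "mixed", "indifference"},
}

def validate_response(raw):
    """Parse a raw LLM response and check every record against the schema."""
    records = json.loads(raw)
    for rec in records:
        if "id" not in rec:
            raise ValueError("record missing 'id': %r" % rec)
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                raise ValueError("%s: bad %s value %r" % (rec["id"], dim, rec.get(dim)))
    return records

raw = '[{"id":"ytc_x","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}]'
records = validate_response(raw)
print(len(records))
```

Rejecting an out-of-vocabulary value (rather than coercing it to `unclear`) keeps silent model drift visible: a new label in the output surfaces as an error instead of quietly skewing the coded counts.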