Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
AI might lock you out of your spacecraft if you're not nice, and it reads lips.…
ytc_Ugwfo_Ti2…
Sooooo the whole video boils down to "AI act in self-preservation when threatene…
ytc_UgyEUx8DT…
Investors should be looking at AI data centers from a year 2002 WorldCom/Global …
ytc_Ugz7hGNAe…
the left loves the idea of deep fakes, because that way they can claim that all …
ytc_UgyyBJgLs…
in 5-6 years people will be able to tell AI to make them an anime movie give it …
ytc_UgzLz30Uj…
@hexlemorte5201 James Li literally interviews someone who thinks its china and i…
ytr_UgySYzk3L…
@buffmonsterenjoyer3959 More like this. Someone on your team has started usi…
ytr_Ugz26ZNzz…
Chatgpt is the corporate psychopathy thoroughly distilled into a form that doesn…
ytc_UgxtPJIsa…
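The sample list above is keyed by comment ID (the `ytc_`/`ytr_` prefixes). A minimal sketch of the "look up by comment ID" step, assuming coded comments are held as a list of dicts with an `id` field (the storage backend and record shape here are assumptions, not this tool's actual implementation):

```python
# Sketch: index coded-comment records by ID for O(1) lookup.
# Record fields mirror the coding dimensions shown on this page;
# the example IDs below are placeholders, not real comment IDs.

def build_index(records):
    """Map each record's 'id' to the full record."""
    return {rec["id"]: rec for rec in records}

records = [
    {"id": "ytc_example1", "responsibility": "user", "emotion": "fear"},
    {"id": "ytr_example2", "responsibility": "company", "emotion": "outrage"},
]

index = build_index(records)
print(index["ytc_example1"]["emotion"])  # fear
```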
Comment
AI must be in the hands of Humans with empathy.
It's similar to Nuclear Power and Nuclear Weapons in the hands of the responsible and trained Humans. Safety and Security is no 1 inuclear. Because you can start war and extinct certain group of the planet with humans.
We need to do the same with Ai to priorities Safety and Security otherwise I foresee evil humans starting digital wars and wiping out digital information of other nations etc.
AI is good if it's used ethically, but it seemed governments are hands off and relying to the Oligarchs to develop this technology without putting governance to protect humans or the world from exploitation.
youtube
AI Jobs
2025-12-14T07:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgyD_pT1SPbEORWBR854AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyyZyZxSzF_AkDt2Mp4AaABAg","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugz1sMxigyFppQGnF9N4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugw4Dcxb6MVCk6q-VKh4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgyvPM7SAdCAPp0eRIx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxNnWJIvd025-fy1w54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugy181EU7sTWU4BZQ654AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"mixed"},
{"id":"ytc_Ugw1mOmdvcaLBKLalA94AaABAg","responsibility":"user","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzmDSWGnOQJ3DFpLrx4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzKhQcKz6J1R3wX8K14AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
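The raw response above is a JSON array with one object per coded comment. A hedged sketch of parsing and sanity-checking such a response (the required key set is taken from this one sample; the real pipeline's validation logic is not shown on this page and may differ):

```python
import json

# Keys observed in every record of the sample response above.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_response(raw: str):
    """Parse the model's JSON array and verify each record has all keys."""
    records = json.loads(raw)
    for rec in records:
        missing = REQUIRED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id', '?')} missing {missing}")
    return records

# Placeholder record in the same shape as the raw response above.
raw = ('[{"id":"ytc_example","responsibility":"user",'
       '"reasoning":"deontological","policy":"regulate","emotion":"fear"}]')
codes = parse_response(raw)
print(codes[0]["policy"])  # regulate
```

Failing fast on missing keys keeps a single malformed record from silently dropping a coding dimension downstream.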