Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
"75% of companies said they use AI in some form". I bet that form is internet se…
ytc_Ugyk9v0U1…
There are many out there already in a very fast growing market. Midjourney, Dali…
ytr_UgxGicbci…
But if they did make one you can technically use it and distribute it cause AI i…
ytc_Ugw5KiX8j…
The hype and doom people are either selling AI, or have been sold a dream (or ni…
ytr_UgwzwVy3k…
Us military can't get a Reaper drone to fly with out problems I can't get my new…
ytc_UgzFTDkoa…
2 minutes in and they're already being incredibly disingenuous. Even IF their wo…
ytc_UgwqlsW1Z…
Ive ALWAYS said (years ago) that computers and AI will be the DEATH of HUMANITY!…
ytc_Ugy86Z7o-…
I build enterprise AI software. We test performance for all LLMs and Claude is c…
ytc_UgwYcxcza…
Comment
Please rewrite this " I mean if you consider psychological weapons the internet should be regulated in the sense that no place should jeopardize someone's psychological safety i.e. no person or entity should be allowed to emotionally and mentally harass any other entity or persons. For example, League of Legends has dealt with trolls and player-to-player abuse for years, and over the course of more than a decade, they have implemented stronger and stronger penalties against those illegal behaviors if you consider any form of abuse illegal. Games and websites that don't accurately moderate this should be regulated, for example, in Warcraft: Reforge you can abuse players emotionally and mentally without any penalty. If you plan to regulate AI then you might want to truly address morality. " as it is addressing Congress and clarifying the main point while recommending call to action and methods for solving the overall issue of regulating human technology in a moral sense.
youtube
AI Governance
2023-05-23T22:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgyFAdbHbs0MVQAjds94AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwTsRy-bB49HxDpvdh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgyHVRYtvZZpdK0b2Dt4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwBrVAa4Spdg9oq07B4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugw3HU4tdR1fggThEMt4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwWwEOoUr8kOszriIV4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxMSI8vrFnbtMCXuDJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugz1lszndqdFBQ4w8rF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwsoSmggmLH5wH1j9B4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgyE9Q2PxIeJqtevMk54AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
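The raw response above is a JSON array of per-comment codes. A minimal sketch of how such a batch might be parsed and validated, assuming the category vocabularies visible in this one sample (the full codebook may define additional values):

```python
import json

# Allowed values inferred from the sample response above; the real
# codebook may include more categories (assumption).
VOCAB = {
    "responsibility": {"government", "company", "distributed", "ai_itself", "unclear"},
    "reasoning": {"deontological", "consequentialist", "unclear"},
    "policy": {"regulate", "industry_self", "liability", "none", "unclear"},
    "emotion": {"fear", "approval", "mixed", "outrage", "indifference"},
}

def parse_coded_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response, keeping only well-formed records
    whose codes fall inside the known vocabularies."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Skip anything that is not a dict or lacks a comment ID.
        if not isinstance(rec, dict) or "id" not in rec:
            continue
        # Keep the record only if every dimension is in-vocabulary.
        if all(rec.get(dim) in allowed for dim, allowed in VOCAB.items()):
            valid.append(rec)
    return valid

# Hypothetical single-record batch for illustration.
raw = '[{"id":"ytc_x","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"}]'
print(len(parse_coded_batch(raw)))  # 1
```

Dropping out-of-vocabulary records rather than coercing them keeps the downstream counts honest; rejected records can be logged and re-queued for recoding.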