Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "And when I though I couldn't hate AI more things like this happens. He stole fro…" (ytc_Ugw-NPRGb…)
- "Thank you Bernie, We should have voted you as president when we had the chance. …" (ytc_Ugwho8KQC…)
- "AI will take the easy hardware jobs first, such as office tasks, then ones with …" (ytc_UgyNhZN9r…)
- "What if there was malfunction, the robot would have turned the gun on the human …" (ytc_UgxVMg3US…)
- "@PositiveTradingOfficial500, thank you for your comment! Why would anybody fight…" (ytr_Ugy6kygoi…)
- "what if AI depending on Supervisors/developers is not aware of its own "programm…" (ytc_UgzlQxtOr…)
- "Animation requires a lot of finesse that the current generative AI models can't …" (ytc_UgyojQwj6…)
- "If the prompt was a fart, AI would shit its pants. The bias to validate the inp…" (rdc_ofh8cac)
Comment
Sounds a lot like the ethical dilemmas faced by the members of the Manhattan Project, and likely other similar nuclear weapon development programs. Those programs, devastating as they were, had centralised and international government oversight so as to be less devastating than they could have been. Meanwhile AI is being developed simultaneously by profit-driven entities and citizens in their bedrooms, so even at this very early stage there is a great capacity for nefarious actors to emerge. And you cannot expect governments to suddenly swoop in to establish any form of control - they barely understand how to legislate or enforce laws related to use of the internet, let alone AI. Not that they can do much anyway since the cat is very much out of the bag.
| Field | Value |
|---|---|
| Source | youtube |
| Topic | AI Governance |
| Posted | 2024-01-19T03:0… |
| Likes | 1 |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | contractualist |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[{"id":"ytc_UgxZGUvJvu2RsKMZ0-F4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxCRfvHalsjxXHQhgF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwTUn_pek1VlAJElhR4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwsUgGAQynl9WCuXuN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxprUFuPVJ2jxhuVlJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxYD4VrQ2otFId05BF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugyy5M7_YHuPSTy4Lvt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzhvW4B3waBjgEZR0d4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzW_uDyqRFef7pHp-x4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugy5xgOqcCRnrlQAVr14AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"}]
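The "Look up by comment ID" view above boils down to parsing the raw model output and filtering by `id`. Below is a minimal sketch of that lookup, assuming the raw response is a JSON array of per-comment codings like the one shown; the function name `lookup_coding` is hypothetical, not part of any tool shown here, and the array is abbreviated to the one record matching the Coding Result table.

```python
import json

# Abbreviated raw LLM response: a JSON array of per-comment codings
# (one record copied from the full response above).
raw_response = '''[
  {"id": "ytc_UgzW_uDyqRFef7pHp-x4AaABAg",
   "responsibility": "distributed",
   "reasoning": "contractualist",
   "policy": "regulate",
   "emotion": "mixed"}
]'''

def lookup_coding(raw: str, comment_id: str):
    """Parse the raw model output and return the coding for one comment ID,
    or None if the model did not emit a record for that ID."""
    for record in json.loads(raw):
        if record.get("id") == comment_id:
            return record
    return None

coding = lookup_coding(raw_response, "ytc_UgzW_uDyqRFef7pHp-x4AaABAg")
print(coding["policy"])  # → regulate
```

A linear scan is fine for a single batch response; a tool indexing many batches would build a `{id: record}` dict once instead.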