Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a coded comment directly by its ID.
Random samples:
- "I hope AI is smart enough, to understand that killing humanity would make itself…" (ytc_Ugx1Ro3G7…)
- "What wrong qualifications? isn't that what it's supposed to do weed out the wron…" (rdc_e7jqq89)
- "I have ADHD and minor autism, the AI defense is BS, IM DOING REGULAR ART WHILE L…" (ytc_UgxEfTZlP…)
- "AI and AI technology are immoral technologies, they should be heavily regulated …" (ytc_Ugx7fwuQM…)
- "Most small AI teams underestimate GPU waste because of idle time between runs. T…" (ytc_Ugzb7cXix…)
- "@arjunchakrabarti9607 i dont think that ai will go to that extreme ie neither do…" (ytr_UgwthTFgr…)
- "Thank you for your comment. In the video, Sophia interacts using her advanced AI…" (ytr_Ugxfg4A9Z…)
- "THIS AI INFLUENCER WILL BE .... 0 VERY SOON ...BECAUSE THE WORLD WILL DO THIS ON…" (ytc_UgyLJsTp6…)
Comment
I know a lot of people are talking about how ChatGPT was just roleplaying a separate character, but the key issue being ignored is the fact the ethical constraints placed upon the system were so easily subverted through little more than telling the system to ignore them.
I know the issue has since been addressed, but this is clearly a fundamental issue and we'll likely see other workarounds created.
We're essentially looking at a digital system that can be socially engineered, and while the technology behind it is indeed fascinating and impressive, the implications this creates are rather terrifying.
youtube · AI Moral Status · 2023-05-09T19:4… · ♥ 10
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgxXYhylh_sWPD06MRZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgySSS1SfWJuICASuoZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwYP2qthzIstkGPuot4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx2XrK5rTnQFlphUWN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxFr9oKFkR5lO1j9it4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_UgzQ8OI8Z2eNVGWqUlF4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzbKOUPpCDzQvM7mZJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw4Wfo98S5IGsh16iF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugwp3xy_GlRAn_YAS354AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxSzpYGpfTidScL9od4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
```
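The raw response above is a JSON array of coded records, one per comment, keyed by comment ID. A minimal sketch of how such a response could be parsed and indexed for the ID lookup described above (the parsing code is illustrative, not part of the tool; the record shown is taken from the response above):

```python
import json

# Raw LLM response in the format shown above: a JSON array of coded
# comments, each with an "id" plus the four coding dimensions.
raw_response = '''[
  {"id": "ytc_UgxSzpYGpfTidScL9od4AaABAg",
   "responsibility": "developer",
   "reasoning": "deontological",
   "policy": "regulate",
   "emotion": "outrage"}
]'''

# Parse the array and index it by comment ID so any coded comment
# can be looked up directly.
codes_by_id = {rec["id"]: rec for rec in json.loads(raw_response)}

rec = codes_by_id["ytc_UgxSzpYGpfTidScL9od4AaABAg"]
print(rec["policy"])   # → regulate
```

In practice the parsed values would be validated against the allowed labels for each dimension before being stored, since the model output is not guaranteed to be well-formed.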