Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Comment
It's about time we started taking this seriously.
AI programmed by people with ill intent could literally be weaponized in ways that we, or even the good AI we create, couldn't stop; an "evil," well-programmed AI could do a LOT of destruction.
Think of a hacker that has all of the knowledge in the world available in the blink of an eye. Except it's a computer, and it quite literally has access to all of the information on any and every system connected to the internet.
However, it doesn't have to actually "think" the way that we do. If there were malevolent intent, either programmed or learned(?), it could crash the global economy, put satellites offline, cause wars, end wars, shut off the power grid, among a plethora of other horrible things that could cause human suffering.
youtube · AI Governance · 2023-05-17T01:4… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
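The coded dimensions above are categorical. A minimal validation sketch for a coded record (the allowed values below are inferred only from the codes visible on this page; the actual codebook may contain additional categories):

```python
# Allowed values inferred from codes visible on this page; the real
# codebook may include additional categories.
SCHEMA = {
    "responsibility": {"developer", "company", "government", "user",
                       "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "contractualist",
                  "virtue", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "resignation", "mixed"},
}

def validate(code: dict) -> list[str]:
    """Return the dimension names whose value falls outside the schema."""
    return [dim for dim, allowed in SCHEMA.items()
            if code.get(dim) not in allowed]

record = {"responsibility": "developer", "reasoning": "consequentialist",
          "policy": "regulate", "emotion": "fear"}
print(validate(record))  # -> []
```

An empty list means the record is well-formed; any names returned point at dimensions the model coded with an out-of-schema value.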
Raw LLM Response
```json
[
  {"id": "ytc_UgxAn3VqUXZos_VDjTh4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgysrOVEkG-U3imPKMd4AaABAg", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwPnWLdytQE0Uh71TF4AaABAg", "responsibility": "developer", "reasoning": "contractualist", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgwBzM3I-FgHYzgddTd4AaABAg", "responsibility": "unclear", "reasoning": "virtue", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxJEAbjpSBuwKpepMx4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzfntQxqQsQjRObi8J4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgwCYZ26KQI-YFGkX_V4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugw8QGiSXSxeAmOv-2R4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_UgzPngz8kwKyEnqFvbN4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugydc9rwuRG3eaZvvSt4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "unclear", "emotion": "outrage"}
]
```
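The raw response is a JSON array with one object per comment, keyed by comment ID. A minimal sketch of the "look up by comment ID" step (assuming the raw response text is held in a string `raw`; only two of the rows above are reproduced here for brevity):

```python
import json

# A fragment of the raw LLM response shown above.
raw = """
[
  {"id": "ytc_UgwCYZ26KQI-YFGkX_V4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzPngz8kwKyEnqFvbN4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]
"""

# Index the array by comment ID so each lookup is O(1).
codes = {row["id"]: row for row in json.loads(raw)}

code = codes["ytc_UgwCYZ26KQI-YFGkX_V4AaABAg"]
print(code["policy"])   # -> regulate
print(code["emotion"])  # -> fear
```

Indexing once into a dict, rather than scanning the array per lookup, is the natural shape for an inspection tool that resolves many comment IDs against one batched response.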