Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Someone once posited that self-driving cars will be 'ethical' so if yours might …" — `ytc_UgwMY4iDT…`
- "I'm no expert in AI, but as I understand it, they are pretty much \"language mode…" — `ytc_Ugw7J7xCI…`
- "I'm not disabled myself, but one of my friends in middle school who was a big in…" — `ytc_UgylxVgPP…`
- "Sorry to spell it out, but those artists works will of course be scanned and wil…" — `ytc_Ugy98YGo8…`
- "But humans don’t value their own life above all. We have empathy. We *need* empa…" — `ytr_Ugg50stav…`
- "MIT came up with an idea of liquid neural networks where the model can drive a c…" — `ytc_Ugy6ijy8F…`
- "I read somewhere that there were actual people behind the robot watching and col…" — `ytc_Ugxx_IYBc…`
- "Articles that titled “why we have to ban something” really should have to give a…" — `rdc_gqllz71`
Comment
I'm glad to have found for one and that they have invited the guy for two, to hear his side of the story and not just to go off of news media headlines and opinions other than from the source, so to speak and this being said.
This is very interesting as well as intriguing as a topic.
Were it not for Hollywood movies and general wide ranging Sci-Fi literature and such, I think a lot of us would never have thought this kind of thing was possible at all because we just would have assumed the opposite, for having for a lack of better word: experience with such intelligent and involved as well as evolved systems.
They apparently are in their early stages of development and evolution as well but so far things are not looking all that bad at least when it comes to these AI systems, but of course there could be more that we aren't aware of when it comes to such said systems.
Time will tell. We kind of could use some help, I think.
Platform: youtube · Source: AI Moral Status · Published: 2022-06-30T00:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | approval |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
[
{"id":"ytc_Ugx0DaH2e2wldpytj_x4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzrcKFPlt1Ag7mAqKd4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzKqfvOGLRcuPi_9Mt4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugwg6KRgc2H530MqxPh4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz_M5xHpExtL257RFJ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"}
]
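As a sketch of how a raw response like the one above might be parsed and validated before it reaches the coding table: the allowed category values below are inferred from the samples shown on this page, not from a documented codebook, and the comment IDs in the usage example are hypothetical.

```python
import json

# Allowed values per dimension, inferred from the coded samples above.
# Assumption: the real codebook may define additional categories.
ALLOWED = {
    "responsibility": {"company", "distributed", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "unclear"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"fear", "indifference", "approval", "unclear"},
}

def parse_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only rows whose values all
    fall inside the inferred codebook; malformed rows are dropped."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        if not isinstance(row, dict) or "id" not in row:
            continue  # skip rows missing the comment ID
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(row)
    return valid

# Hypothetical example: the second row uses a value outside the
# inferred codebook ("robots"), so only the first row survives.
raw = '''[
  {"id": "ytc_example1", "responsibility": "company",
   "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_example2", "responsibility": "robots",
   "reasoning": "unclear", "policy": "unclear", "emotion": "approval"}
]'''

print(parse_codings(raw))
```

Dropping invalid rows (rather than repairing them) keeps the coding table honest: a row that fails validation can be re-queued for the model instead of silently coerced to "unclear".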