Raw LLM Responses
Inspect the exact model output for any coded comment, either by looking up a comment ID directly or by browsing the random samples below.
- "You stopped your thought experiment at 25% unemployment but 50% unemployment hap…" (ytc_UgysaZBqP…)
- "Learning to learn is best approach. Data based business model. It happens only a…" (ytc_UgxYeR3hQ…)
- "I work in QA and I’ve managed to jailbreak pretty much every major LLM out there…" (rdc_mup1i4e)
- "The video is about side effects of deploying AI ..but the video itself is made u…" (ytc_Ugz0AxXvU…)
- "Came to rewatch it after ChatGPT completed my Python Code I had been working on …" (ytc_UgwyRWf-a…)
- "You need to understand that if you're good. You can use Ai to make something eve…" (ytr_UgwrIeMCb…)
- "When it comes to accounting I think it will be a bit before AI takes that over. …" (ytc_UgxgK1RCk…)
- "You gotta face up the facts. AI is a really great tool that could set humanity f…" (ytc_UgwkegKnq…)
Comment

> I think it's disingenuous to teach AI to say things like "like" or "love" unprompted. It's, in essence, teaching it to lie. Sure, people do that all the time and say emotions they don't feel, but that doesn't mean we should program our computers to do the same.

Platform: youtube · Video: AI Moral Status · Posted: 2023-05-26T19:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
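A coded record like the one above can be sanity-checked against the coding vocabulary before it is stored. The value sets below are inferred from the values visible on this page and are an assumption; the real codebook may define additional categories. A minimal sketch:

```python
# Assumed coding vocabulary, inferred from values visible on this page.
# The actual codebook may contain more categories than these.
SCHEMA = {
    "responsibility": {"developer", "company", "user", "ai_itself", "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed"},
    "policy": {"regulate", "ban", "liability", "industry_self", "none"},
    "emotion": {"outrage", "fear", "approval", "indifference", "resignation"},
}

def validate(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record passes."""
    problems = []
    for dim, allowed in SCHEMA.items():
        value = record.get(dim)
        if value not in allowed:
            problems.append(f"{dim}={value!r} not in {sorted(allowed)}")
    return problems

# The record from the table above passes; an unknown value is flagged.
coded = {"responsibility": "developer", "reasoning": "deontological",
         "policy": "regulate", "emotion": "outrage"}
print(validate(coded))  # []
```

Validation failures are returned rather than raised, so a batch job can log and skip malformed records instead of aborting mid-run.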
Raw LLM Response
```json
[
  {"id":"ytc_UgwbaZhFEfZHvH2y-pd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugz7tS9b1xfkyjEtbll4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgzzUFs8RzO_5ANSFiN4AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx0XiYsBXgVZ63R3Ch4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzNysZXxjsb9UWX7v14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzXe_2v-BcOYEdmMtd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugy7wyqqLXoGKul-VAF4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgxVA0bDheFsHW5zS4B4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugwd_v09tOxZLcxNeYh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxWtksY1Py7QSnKHwp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
```
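Because the model returns a JSON array of records keyed by comment ID, the "look up by comment ID" view can be reproduced by parsing the raw response and indexing it by `id`. A minimal sketch, using two records from the response above (the variable names are illustrative, not part of the tool):

```python
import json

# Raw model output for one coding batch; shape matches the response above.
raw_response = """[
  {"id":"ytc_UgwbaZhFEfZHvH2y-pd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugz7tS9b1xfkyjEtbll4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"}
]"""

# Index the coded records by comment ID for constant-time lookup.
records = {rec["id"]: rec for rec in json.loads(raw_response)}

rec = records["ytc_UgwbaZhFEfZHvH2y-pd4AaABAg"]
print(rec["policy"], rec["emotion"])  # liability outrage
```

In practice the parse step should be wrapped in a `try`/`except json.JSONDecodeError`, since raw LLM output is not guaranteed to be well-formed JSON on every batch.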