Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- A pod cast about AI ending humanity running promos for AI software. It seems we… (`ytc_UgyJUt8fn…`)
- The absolute big threat would be to suggest adding India to the list of places t… (`rdc_luci43f`)
- So AI is going to take all the jobs and we will become a welfare state? What the… (`ytc_UgxBmHdX1…`)
- This is honestly the best case scenario. We'd be screwed if AI was conscious an… (`ytc_UgywL0EUE…`)
- I tried to tell ChatGPT to drop the pleasantries, stop complimenting me and givi… (`ytc_UgwGqMs9v…`)
- It is and nonsense. India will be unable to replace the manual labour as anyone … (`ytr_UgzPWqPgE…`)
- It’s more of a philosophical question I think… would A.I want to destroy us? Wou… (`ytc_UgzTu1lac…`)
- Are you saying all this to prove a point or you're just bad at art in general? J… (`ytr_UgxniQBZe…`)
Comment
I’ll give you an example. We know and understand that cutting off our arm will be painful very painful and may loose our lives. So we do everything we can nit to let that happen because of our emotional feelings of pain. So if someone was going to attack us by swinging a sword to cut our arm off we will automatically take a stance to protect ourselves because we know how painful this would be. If you were an AI you would not take a self defense stance because AI has no clue what pain feels like.
youtube · AI Governance · 2025-06-16T11:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_Ugz3zzyEG5V68b3yGjh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxCRgWpo1KFa49Zaj14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgxLmHOY9xh-ckjFXrF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxSUpib1hcRSwdrVQ54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxmoRo7KfUvI8YKBQ14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwbkmUCxPIA6RSM2OJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwocjrzEwsqLjn51814AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwQ9iocCvn77xmtO3V4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugwp3htsGjG1Y9fKDph4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy_tRF9MjH7Kx8szIJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
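The raw response is a JSON array with one record per comment, each carrying the four dimensions shown in the Coding Result table (responsibility, reasoning, policy, emotion). A minimal sketch of the "look up by comment ID" step, assuming only that field layout; the two sample records below are copied from the response above, and `index_by_id` is a hypothetical helper, not part of the tool:

```python
import json

# Two records copied from the raw LLM response above.
raw = """[
{"id":"ytc_UgxLmHOY9xh-ckjFXrF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwQ9iocCvn77xmtO3V4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]"""

# The four coding dimensions shown in the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_by_id(raw_json: str) -> dict:
    """Parse the model output and index each coding record by comment ID,
    rejecting records that are missing any expected dimension."""
    index = {}
    for rec in json.loads(raw_json):
        missing = [d for d in DIMENSIONS if d not in rec]
        if missing:
            raise ValueError(f"{rec.get('id')}: missing {missing}")
        index[rec["id"]] = {d: rec[d] for d in DIMENSIONS}
    return index

codes = index_by_id(raw)
print(codes["ytc_UgwQ9iocCvn77xmtO3V4AaABAg"]["emotion"])  # fear
```

Indexing by ID also makes it easy to cross-check a displayed Coding Result card against the exact record the model returned for that comment.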