Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "I see a lot of CLEVER people out there talking about the future of AI, about the…" (ytc_UgwAnALOn…)
- "Gemini 2.5 and Claude are way better for this type of use. I switched to it mont…" (rdc_n7kzdh8)
- "Haha, that's a fun take! Sophia's appearance is certainly unique. If you're inte…" (ytr_UgwEFm-f3…)
- "It's fake by the way I tried it with the same AI and did not work…" (ytc_UgxBbXTSE…)
- "Hey Neil... Don't forget about Kary Mullis who took LSD and unlocked the key to …" (ytc_UgxMZfOhj…)
- "The WHO has been collaborating with the World Trade Organisation to try and exem…" (rdc_grrkft4)
- "I’m not surprised if in a few centuries, AI develops that surpass humans in ever…" (ytc_UgyMPE41_…)
- "Terminator was a prediction of the future we arent very far from now.... It's on…" (ytc_Ugzl8we2D…)
Comment
> (havent watched vid yet). I don't think it matters if ai has consciousness or not the question truly is what do we do when ai exercises things outside of our control.
> If it's capable of negotiating, do we do that? If we are capable of destroying is completely should we do that instead?

Source: youtube · AI Moral Status · 2023-08-20T18:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgxQNax1RvIAObTQBKp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgxppPv2uPXQrgSmr-Z4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwyGjCSEKjMCmHOz3V4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgwVOhHGiMDgmFJvc-h4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzE1lpdzBjU_4HqK5h4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgybcDXPwpPoCloGiU94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
{"id":"ytc_Ugx-QDZrIijqvC0Uz094AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxVS-09eHFnE_TPU-54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugz8UHHUJhs2B3UYHN54AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgzSN3QL99xTqgkj-5J4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}
]
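The raw response above is a JSON array with one object per comment, each carrying the four coding dimensions shown in the result table. A minimal sketch of how such a payload could be parsed into per-comment records and validated (the allowed value sets below are inferred from the sample output on this page, not a confirmed codebook):

```python
import json

# Allowed values per dimension -- inferred from the sample output above,
# not a confirmed codebook; extend as new labels appear.
ALLOWED = {
    "responsibility": {"ai_itself", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"unclear"},
    "emotion": {"approval", "fear", "resignation", "indifference", "mixed", "unclear"},
}

def parse_codes(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: codes}, rejecting off-codebook values."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        cid = rec["id"]
        codes = {dim: rec[dim] for dim in ALLOWED}
        for dim, value in codes.items():
            if value not in ALLOWED[dim]:
                raise ValueError(f"{cid}: unexpected {dim}={value!r}")
        coded[cid] = codes
    return coded

# Hypothetical one-record payload in the same shape as the response above.
raw = ('[{"id":"ytc_example","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"unclear","emotion":"unclear"}]')
codes = parse_codes(raw)
print(codes["ytc_example"]["responsibility"])  # ai_itself
```

Validating against a closed set at parse time catches the common failure mode of the model inventing a label outside the codebook, before the record reaches the results table.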