Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- ytc_UgyF0us9p…: "Also please note workers of kurzgesagt that AI is killing the planet!! If u take…"
- rdc_mcsv79z: "It didn't take me to the extensions page it took me to Gemini on the web, hit t…"
- ytc_Ugy8yE32C…: "Thank you for bringing this to the attention of a grander audience. The ethical …"
- ytr_Ugzf3UpZY…: "Possibly, but this is a large assumption. Yes, Google has vested interest in AI,…"
- ytc_UgxsNhgPv…: "i feel like ai art will become the sort of fast food of the art community, and h…"
- ytc_UgxWhqsD-…: "He is kidnapped and not dead. Technically “enforced disappearance “ , by globa…"
- ytc_Ugy28Ec6X…: "If jobs are replaced by AI then who would buy or use the product if none is gett…"
- ytc_UgwnpboTY…: "I pretty much disagree with everyone Neil said. He is downplaying the real and a…"
Comment
If AI is us, why would it reflect only what is good about us, and not the bad? Why would it reflect love and not hatred, compassion and not greed? I'm always amazing by how all the scientists and expert talking about AI, always downplay the potential danger associated with AI in the most naively optistimistic way, as if most technologies invented by humans historically have not been used both for the good and the bad of humanity. There is a good chance that AI will be the end of human beings as a species, I think it doesn't help to downplay that risk. If we are already playing with fire, shouldn't we fully understand how much it can burn?
Platform: youtube
Posted: 2024-06-06T13:3…
♥ 6
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgySFcSLVHtV3duGVAN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwQE4GjUuElcj28o3x4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugx3TIFeAEmpFQctROh4AaABAg","responsibility":"user","reasoning":"virtue","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwzNG7s9H4IjDi3dr14AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxYR4fJQt0-x_r04x14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyIxd5OIP8w_23kkG54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgxAFTtvg_9LfZCg4jZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgynMeD9H6jwAihxQIN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugy2bn_lZqMADnxb-r54AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugy-IeM_lz_jQinYvRV4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
```
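The lookup-by-ID workflow described at the top of this page can be sketched as follows: parse the raw batch response, validate each record against the coding dimensions, and index records by comment ID. This is a minimal sketch; the function name and the allowed category sets are assumptions inferred from the records shown here, not the project's actual codebook.

```python
import json

# Allowed values per coding dimension (assumed from the records above;
# the real codebook may define additional categories).
ALLOWED = {
    "responsibility": {"developer", "user", "government", "ai_itself", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "mixed", "unclear"},
}

def index_response(raw: str) -> dict:
    """Parse a raw LLM batch response and index coded records by comment ID."""
    records = json.loads(raw)
    index = {}
    for rec in records:
        # Reject any record whose value falls outside the expected categories.
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad value for {dim}: {rec.get(dim)!r}")
        index[rec["id"]] = rec
    return index

raw = ('[{"id":"ytc_UgySFcSLVHtV3duGVAN4AaABAg","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
coded = index_response(raw)
print(coded["ytc_UgySFcSLVHtV3duGVAN4AaABAg"]["policy"])  # regulate
```

Indexing by ID rather than list position keeps the lookup robust when the model returns records in a different order than the comments were submitted.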