Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "But you still haven't made that 3 points against AGI. I happen to agree that LLM…" (ytc_UgxrDLcTz…)
- "Half true? As half true as you can get. A lot of tech can be painted as parall…" (rdc_oi39bbu)
- "It’s just scary because we’ve spent so long using psychology and fiction to unde…" (ytc_UgyzqP9uC…)
- "@fr3stylr322 Too late. Folks have already asked the "question" and the AI said …" (ytr_UgwRNIZpN…)
- "the only thing i use ai for is making pyro tf2 "sing" instrumantal songs with th…" (ytc_UgwM2GUlm…)
- "Keep in mind AI can ONLY do what it has been programmed to do, so if it can give…" (ytc_Ugxwcx4Ot…)
- "okay.... The pro "ai-artist" is here. I've already spent about 1-2 years studyi…" (ytc_UgyNwQW4E…)
- "Ai will make all your money worthless thus taking control of the whole world and…" (ytc_Ugwarpi5y…)
Comment
> How dangerous can it become? Here we are 2 years later. The dependence on artificial intelligence or Ai is alarming. There is an extremely thin line between what is real and what is not. I wonder how many leaders with impact to the public turn to Chat GPT to make important decisions.

Platform: youtube
Video: AI Moral Status
Posted: 2025-07-01T22:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgwxHH5KHmDra3o5gGx4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxlVigXL1fIxQ-q9jN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyiQUt31Rk6eQ6bxvZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyfivUp2yKoT60IJAN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzoWPT_E_Bd_Zdu1VR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgxQ1KwefKv8EKG7dgF4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxyyOsRoLcGO5QfTV54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz0gkkrNyLXbEnlOVh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyDKBfdNmYDDJDshmt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugy416e96DS0uzb8GUV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
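A raw response like the one above must be parsed and validated before its codings are stored. A minimal sketch of that step, assuming the four dimensions from the Coding Result table and allowed values inferred from the records shown here (the `SCHEMA` value sets and the `parse_response` helper are illustrative, not the tool's actual implementation):

```python
import json

# Allowed values per dimension, inferred from the coding results shown above.
SCHEMA = {
    "responsibility": {"user", "developer", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"regulate", "liability", "none"},
    "emotion": {"approval", "fear", "outrage", "indifference", "resignation", "mixed"},
}

def parse_response(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: coding}, rejecting bad values."""
    coded = {}
    for rec in json.loads(raw):
        cid = rec["id"]
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{cid}: bad {dim!r} value {rec.get(dim)!r}")
        coded[cid] = {dim: rec[dim] for dim in SCHEMA}
    return coded

raw = ('[{"id":"ytc_UgwxHH5KHmDra3o5gGx4AaABAg","responsibility":"user",'
       '"reasoning":"consequentialist","policy":"none","emotion":"approval"}]')
coded = parse_response(raw)
# Look up a coded comment by its ID, as the inspector does.
print(coded["ytc_UgwxHH5KHmDra3o5gGx4AaABAg"]["emotion"])  # prints "approval"
```

Keying the result by comment ID makes the "Look up by comment ID" feature a plain dictionary access, and validation catches any record where the model drifted outside the schema.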