Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
He argues for existential risk. I respectfully disagree.
Flaw #1: "Scaling La…
ytc_UgxKKk3sa…
I think the issue there is that unlike the Asimov's positronic brain which is as…
ytr_UgzzcI2yG…
The reason music AI being handled differently is quite simple (I am a musician/s…
ytc_UgwQwoBbh…
Is there a chance that when AI gets more conscious and self aware it will get de…
ytc_Ugw5RymSE…
I use chatgpt for therapy purposes when I really need it. It has great advice an…
ytc_UgzKdXtt2…
How Are we suppose to fight when the robot have aimbot like accuracy?!?!
I just …
ytc_UggLZ6M2z…
No offence intended but your analysis makes no sense.
If Ai can take the job of …
ytc_Ugxb-j3pj…
The next step of AI in education is using AI to answer the question teachers ask…
ytc_UgzSZdFKn…
Comment
So basically AI is essentially way more human than anyone would have thought. At least it’s predictable. Just treat it like you would another human, same laws, moral parameters, and most importantly the same consequences for operating outside these laws and parameters. It’ll probably be fine.
youtube · AI Harm Incident · 2025-07-27T23:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgxKNjpehACeFY86TXl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgzSbHzbe6zDfR1cLtd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwuR5z7YfZ3BFxcYwx4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugw4nUwvTYtpYbYUb8R4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugyo_Yw8UekW-DjRlwd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugwq2Fzo6sQCPGQ3o9l4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxiUktl6sN7VDVgmqR4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwnAklvUATOlsfwphx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugymk9zRBM1rMOhyey14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugwouz7tGPldWLXdT914AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"}
]
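The raw response above is a JSON array in which each record carries the comment ID plus the four coding dimensions shown in the result table (responsibility, reasoning, policy, emotion). A minimal sketch of the "look up by comment ID" step could parse that array, validate each dimension against the value sets observed on this page (the real codebook may define more categories than appear here), and index the records by ID. The `parse_codings` helper and the `ALLOWED` sets are illustrative assumptions, not the pipeline's actual code:

```python
import json

# Raw model output in the shape shown above (truncated to two records for brevity).
raw_response = '''
[
  {"id": "ytc_UgxKNjpehACeFY86TXl4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_Ugyo_Yw8UekW-DjRlwd4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]
'''

# Allowed values per dimension, inferred from the responses on this page;
# the full coding scheme may include additional categories.
ALLOWED = {
    "responsibility": {"ai_itself", "company", "developer", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "liability", "none"},
    "emotion": {"approval", "outrage", "fear", "mixed"},
}


def parse_codings(raw: str) -> dict[str, dict]:
    """Parse the model's JSON array and index the records by comment ID."""
    by_id = {}
    for rec in json.loads(raw):
        # Reject any record whose dimension value falls outside the known sets.
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: unexpected {dim} value {rec.get(dim)!r}")
        by_id[rec["id"]] = rec
    return by_id


codings = parse_codings(raw_response)
print(codings["ytc_UgxKNjpehACeFY86TXl4AaABAg"]["policy"])  # → regulate
```

Indexing by ID makes the dashboard's per-comment lookup a constant-time dictionary access, and the validation step catches any off-schema value the model might emit before it reaches the results table.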