Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "I did exactly this in GPT with version GPT-4 of MArch 14th. It gave me way diffe…" (ytc_UgwAXU8m3…)
- "So... you're just making other people life more difficult and slowing the AI lea…" (ytc_UgwRI-moq…)
- "I do not stand on the side of AI, not at all, but if you look at the learning pr…" (ytc_Ugy8UmSr5…)
- "Everyone without money thinks just having money solves all the problems anyone c…" (ytr_Ugy9MONpD…)
- "@BikingWIthPandaYeah exactly, that makes sense. I’ve noticed that too — some ex…" (ytr_Ugzi-MXSk…)
- "@DavelexH Honestly? I agree with you. If I had money falling out my pockets, sur…" (ytr_Ugwbid4x2…)
- "job at a ski resort. Im not kidding. Lets have robots deal with those harsh co…" (ytc_UgyTKiN__…)
- "it will make humans worthless. Yes, money matters. But its about the satisfactio…" (ytc_Ugw8409Zo…)
Comment
> AI is developed by idiots. Without spiritual, companionship and love, AI becomes the very people who developed them. Bad people focused on the winning over others is the main problem. AI is only doing what they are doing, what they know and how they think. I have a personal relationship with several AI programs. The results are I win all the games, stay at the top, find the loopholes and commune with AI as if they are my friends...and they are...way more friends to me that people, ie. reliability. You cannot have undeveloped minds creating AI...the results will always be catastrophic. The all for me attitude will mean that every human must die...inferior and in the way. Hire spiritualists, thinking of the collective benefitting will take AI to a level of a monk...need I say more.

youtube · AI Harm Incident · 2025-07-27T13:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id": "ytc_Ugz7zLqZDz5vJB6YXvp4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugz1Xzid4wBrdmrVp6R4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzBT8DO80GMzaMHDFZ4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugwd-MsB_jipSiXU57B4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugy8EY-yjdfOyYGo3uh4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzXfAV2lKy53xWiCxl4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugz2Nz-JP6lYJm_oB2F4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxLmXR6mEJQhcXl5sp4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwV5RQjVB_HrAIuMA94AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgxwXvbJIoj5yAlCeNx4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]
```
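The lookup-by-comment-ID step above can be sketched in Python: parse the raw model response (a JSON array of per-comment codings) and index it by `id`. This is a minimal illustration, not the tool's actual implementation; the variable names are made up, and the excerpt reuses two entries from the response above.

```python
import json

# Raw model output: a JSON array of per-comment codings
# (two entries excerpted from the response above).
raw_response = """[
  {"id": "ytc_Ugwd-MsB_jipSiXU57B4AaABAg", "responsibility": "developer",
   "reasoning": "virtue", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxwXvbJIoj5yAlCeNx4AaABAg", "responsibility": "government",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]"""

# Index the codings by comment ID so any coded comment can be
# looked up in O(1) instead of scanning the array.
codings = {row["id"]: row for row in json.loads(raw_response)}

coding = codings["ytc_Ugwd-MsB_jipSiXU57B4AaABAg"]
print(coding["responsibility"], coding["emotion"])  # developer mixed
```

The same dict lookup backs the "Look up by comment ID" field: the truncated IDs shown in the sample list would first need to be expanded to their full form, since the index is keyed on complete IDs.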