Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "Don't get'em twisted just trust the process I say serves you right getting jacke…" (ytc_Ugx2xXmtx…)
- "I appreciate the tutorial on your style hate that this happened to you, I don't …" (ytc_UgxEOFmpv…)
- "The modern Socrates drives a chatbot crazy with lame questions, watch online for…" (ytc_UgzTU70Qd…)
- "the nearest future artificial intelligence is a grinder machine for digital iden…" (ytc_Ugw_l3mCP…)
- "I know AI very well I'm doing solo gamedev and I've tried to use ai over and ove…" (ytc_UgzZc5kH_…)
- "Every time they try this ai experiment, they always have to shut it down for the…" (ytc_UgwNtYOTM…)
- "Ppl laugh, scoff and mock but it's all laid out for us, right there, and has bee…" (ytc_UgyiAhnIt…)
- "Serious question, why is an MMR titer test, measuring antibodies, appropriate fo…" (rdc_g9u3ad1)
Comment
My husband looked further into those tests made to check AI ethics. Firstly, there was a lot of fear mongering surrounding it. The people doing the testing are a company literally built to test the dangers of AI and find solutions. This wasn't a random test with no end goal. They wanted to push the AI the their limits to see whether they have survival instincts that will make them break some of their rules.
Also keep in mind, it was all a simulation, and nobody was actually harmed. I recommend looking up the experiment yourself. It made me feel a lot better.
youtube
AI Harm Incident
2025-12-10T22:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | mixed |
| Policy | industry_self |
| Emotion | resignation |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_UgyFc0wM3xBNIEc0XcB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwEjg-hZ8ld-nMqr2R4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgyZcT4rtB5toxFGiDJ4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugz9TFnOQUdHUyh7red4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz_d7pGCnihOXlKScV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxW5TYTP6OhOF_Yh_V4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzMAqFVscowFB82HYl4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxDUERPPkTvk196LcB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"approval"},
{"id":"ytc_UgxbmuzbsTZpVRLkvmN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxBsaLiCYdlV-CqbDV4AaABAg","responsibility":"company","reasoning":"mixed","policy":"industry_self","emotion":"resignation"}
]
```
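A batch response like the one above can be parsed and sanity-checked before the codings are stored. The sketch below is a minimal example, not the tool's actual pipeline code; the allowed value sets are inferred from the categories visible in this sample output (the real codebook may define more), and the `validate_codings` name is hypothetical.

```python
import json

# Allowed values per dimension, inferred from the sample response above.
# The actual codebook may include additional categories.
ALLOWED = {
    "responsibility": {"company", "developer", "government", "ai_itself", "user", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"ban", "regulate", "liability", "industry_self", "none"},
    "emotion": {"fear", "outrage", "approval", "indifference", "resignation", "mixed"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed coding records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # A record must be an object with a comment ID.
        if not isinstance(rec, dict) or "id" not in rec:
            continue
        # Every dimension must be present with an allowed value.
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

sample = ('[{"id":"ytc_example","responsibility":"company","reasoning":"mixed",'
          '"policy":"industry_self","emotion":"resignation"}]')
print(len(validate_codings(sample)))  # → 1
```

Filtering rather than raising keeps one malformed record (a common LLM failure mode) from discarding the whole batch; rejected IDs can be re-queued for coding.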