Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
I really hope this extreme restriction issue gets resolved before December becau…
rdc_njh91ri
Humans can't completely replicate an artstyle
If you think about it, its very d…
ytc_UgzSBYYe5…
Another expert commenting on DW said we should limit its proliferation in the sa…
ytc_UgxXITTu9…
Idk if this counts but I’m partially blind due to my premature birth , and I’ve …
ytc_UgxNeFPIR…
@carbonfootprint3635 but it may also be a lot more complex than they are telling…
ytr_Ugy1VLefd…
People with massive fortunes to buy and integrate expensive technologies that di…
ytc_Ugwihutyi…
NO countries or person should be propagating this nonsense. They don’t even unde…
ytc_Ugw6tp9DS…
We have concluded that the only way to defeat a hostile AI, would be to upload y…
ytc_UgxFYUi0O…
Comment
"OpenAI faces more turmoil as another employee announces she quit over safety concerns.
It comes after the resignations of high-profile executives Ilya Sutskever and Jan Leike, who ran its now-dissolved safety research team Superalignment.
Leike [accused OpenAI](https://www.businessinsider.com/openai-exec-jan-leike-calls-out-sam-altman-ai-safety-2024-5) of putting "shiny products" ahead of safety and claimed that the Superalignment team was "struggling for compute" resources and that "it was getting harder and harder to get this crucial research done."
The exits this and last week followed those of two other safety researchers, Daniel Kokotajlo and William Saunders, who quit in recent months over similar reasons. [Kokotajlo said he left](https://www.businessinsider.com/openai-safety-researchers-quit-superalignment-sam-altman-chatgpt-2024-5) after "losing confidence that it \[OpenAI\] would behave responsibly around the time of AGI."
Krueger wrote, "I resigned a few hours before hearing the news about Ilya Sutskever and Jan Leike, and I made my decision independently. I share their concerns. I also have additional and overlapping concerns."
She added that more needs to be done to improve "decision-making processes; accountability; transparency; documentation; policy enforcement; the care with which we use our own technology; and mitigations for impacts on inequality, rights, and the environment."
reddit
AI Responsibility
2024-05-25 (1716610556.0)
♥ 64
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_l5ml63o","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
  {"id":"rdc_l5kupjr","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"rdc_l5mjn7g","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"rdc_l5kz5sx","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"fear"},
  {"id":"rdc_l5kktle","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}
]
```
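A batch response in this shape can be parsed and sanity-checked with a short sketch. The dimension names (`responsibility`, `reasoning`, `policy`, `emotion`) come from the response shown above; the allowed value sets below are assumptions inferred only from the codes visible on this page, not the full codebook.

```python
import json

# Allowed codes per dimension: inferred from the sample responses on this
# page (hypothetical -- the real codebook may define more values).
ALLOWED = {
    "responsibility": {"company", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"liability", "regulate", "none", "unclear"},
    "emotion": {"mixed", "indifference", "outrage", "fear"},
}

def parse_codes(raw: str) -> list[dict]:
    """Parse a raw batch coding response and flag out-of-codebook values."""
    rows = json.loads(raw)
    for row in rows:
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row.get('id')}: unexpected {dim}={row.get(dim)!r}")
    return rows

raw = ('[{"id":"rdc_l5ml63o","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"liability","emotion":"mixed"}]')
codes = parse_codes(raw)
# codes[0]["policy"] == "liability"
```

Validating against an explicit allow-list catches the common failure mode where the model invents a code outside the schema, which would otherwise silently pollute the coded dataset.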