Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Im ok with agi happening but as soon as it is put into a robot im building an Ap…" (ytc_Ugyg8yzfa…)
- "Ask Claus Schwan what he thinks about AI especially with his dreams to decrease …" (ytc_Ugxq73mWL…)
- "The problem isn't AI art per se. It's an interesting way to use machine learnin…" (ytc_UgxbTc-Ff…)
- "Petition to stop calling them 'ai BROs'. I call people I like 'bro'. These are a…" (ytc_Ugz64zc6H…)
- "AI will be just like the internet when it peaks. It's just as good as the person…" (ytc_UgwztBAjb…)
- "You can literally make any llm say things like this just by engaging with it lon…" (ytr_UgzQZqlbU…)
- "For those ignorant people - in Australia, we are primarily a service based count…" (ytc_UgwUCb-dk…)
- "If from childhood to adult, it takes 30 years for a man to become an intelligent…" (ytc_UgydbfMpo…)
Comment
As I said, yes, we should prepare and invest in AI safety/interpretability research, but claiming that superintelligent AI is almost guaranteed without any evidence is just baseless fearmongering.
reddit
AI Governance
1708160838.0 (Unix epoch timestamp)
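The numeric timestamp above is a Unix epoch value (seconds since 1970-01-01 UTC). A minimal sketch of the conversion, using only the standard library:

```python
from datetime import datetime, timezone

# 1708160838.0 is the epoch timestamp shown for this comment.
posted = datetime.fromtimestamp(1708160838.0, tz=timezone.utc)
print(posted.isoformat())  # 2024-02-17T09:07:18+00:00
```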
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
{"id":"rdc_kqt6tx4","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"rdc_kqtbjiw","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"rdc_kqsry3d","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"rdc_kqsvtqq","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"rdc_kqswznd","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
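Since the raw response is a JSON array of per-comment codings, the "look up by comment ID" step can be sketched by indexing the array on the `id` field. The JSON below is copied from the response above; the lookup helper itself is an assumption, and `rdc_kqtbjiw` is assumed to be the comment shown in the table (its `policy`/`emotion` values match):

```python
import json

# Raw LLM response: a JSON array of per-comment codings (copied from above).
raw = """[
{"id":"rdc_kqt6tx4","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"rdc_kqtbjiw","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"rdc_kqsry3d","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"rdc_kqsvtqq","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"rdc_kqswznd","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]"""

# Index the batch by comment ID so a single coding can be retrieved directly.
by_id = {row["id"]: row for row in json.loads(raw)}

coding = by_id["rdc_kqtbjiw"]
print(coding["policy"], coding["emotion"])  # regulate fear
```

Building the dict once makes repeated lookups O(1), which matters when a batch response covers many comments.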