Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment

> I think that if AI transform into a dyistopian landscape for the majority of people, and we dont become a pleased housecat that are taken care of by AI, we will start rebell against it and the people owning it.
> If 10 million people are robbed of their jobs, i think that its a matter of time millions of them will stand outside the data centers the days after, demaning changes.
> When ordinary people and enough are pushed to far, they(we) are capable of fighting back, demaning change thats benificial for all.
> I hope.

Source: youtube · 2026-03-28T12:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
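A coded record like the one above can be checked against the code vocabularies that appear in this page's output. The following is a minimal sketch, assuming the allowed value sets are exactly those visible in the sample responses (they are inferred from the data, not a documented schema):

```python
# Hypothetical validator for one coded record. The ALLOWED vocabularies
# are inferred from the values seen in this page's sample output.
ALLOWED = {
    "responsibility": {"company", "government", "developer", "user", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "ban", "none", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "indifference", "approval"},
}

def validate_record(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record looks valid."""
    problems = []
    for dim, allowed in ALLOWED.items():
        value = record.get(dim)
        if value is None:
            problems.append(f"missing dimension: {dim}")
        elif value not in allowed:
            problems.append(f"unexpected {dim} value: {value!r}")
    return problems

# The record coded above:
record = {"responsibility": "company", "reasoning": "consequentialist",
          "policy": "regulate", "emotion": "fear"}
print(validate_record(record))  # → []
```

Running the validator over every parsed record is one way to surface model outputs that drift outside the coding scheme.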
Raw LLM Response
[
{"id":"ytc_Ugxz8Vkx1zq8Qiy0DPF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxHOil7anFtOQFN5wF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy9wgMy0Dl6i_KWo8N4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxZ644yXzxNk5BBJnR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwxVdANYJDVASy9fWp4AaABAg","responsibility":"government","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugxo7bktYampwg02Cnx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzIbs0Ucv4hB3B32B14AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxNRe5pJrGrK_OS43d4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy9tqDAPPy1O3e5MsB4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyJ34tOuhgHYu7E74h4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
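The per-comment lookup this page offers can be reconstructed from a raw response like the array above: parse the JSON and index the records by their `id` field. A minimal sketch, with an abbreviated two-record payload and illustrative names (the real pipeline's storage and function names are not shown here):

```python
import json

# Raw LLM response: a JSON array of coded records, as in the dump above
# (abbreviated to two records for the example).
raw_response = '''[
  {"id": "ytc_Ugxo7bktYampwg02Cnx4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzIbs0Ucv4hB3B32B14AaABAg", "responsibility": "developer",
   "reasoning": "virtue", "policy": "unclear", "emotion": "fear"}
]'''

def index_by_comment_id(raw: str) -> dict[str, dict]:
    """Map each comment ID to its coded record, skipping entries without an id."""
    records = json.loads(raw)
    return {r["id"]: r for r in records if isinstance(r, dict) and "id" in r}

index = index_by_comment_id(raw_response)
print(index["ytc_Ugxo7bktYampwg02Cnx4AaABAg"]["policy"])  # → regulate
```

With such an index, inspecting the exact model output for any coded comment is a single dictionary lookup by its `ytc_…` or `ytr_…` identifier.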