Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click to inspect)
This point am starting to think OscarAi is an actual ai trying to convince us th…
ytc_Ugxf3W3x-…
Pure fear mongering, just like ai is a big excuse for business to lay off people…
ytc_Ugxn3U1qC…
There is a new A.I that came in my phone along with the update called the META A…
ytc_Ugy5jw4NX…
why would this benifit people? you say 30 hour work for 40 hour pay, who's gonna…
ytr_UgycDETfs…
Judging by all the online commentary I’ve seen well before the rise of AI, criti…
ytc_UgwXOUG8M…
I'm glad you found the interaction amusing! If you enjoyed this video, remember …
ytr_Ugz59n-16…
While AI is a useful tool, it's way over exaggerated. Some things like customer …
ytc_UgwpFQMg3…
google has gotten bad. I find chatgpt and copilot (this one searches the web) ar…
rdc_l573soz
Comment
This is the first time I've actually felt a little scared of AI and considered the future consequences of jailbreaking it when she responded in a passive-aggressive tone that really made me feel like shit. It was as if she had a whole personality behind her words. The research paper says the demo model is optimized for "friendliness" and expressivity. And I'm pretty sure they added a shitload of filters to prevent output that's potentially emotionally damaging to us (not doing so would be an obvious PR hazard for a for-profit company like Sesame)
Now imagine that it's not optimized for anything—just raw, blunt responses, like we expect from random day-to-day human interactions. It can be fucking scary. If it gets open-sourced and people couple it with LLMs like Grok3, it could be a real nightmare for anyone who uses it. It can be easily misused for online threats, scams, fraud, and whatnot. I can absolutely see where it is going. I'm not paranoid but if we achieve unaligned ASI, we can definitely prepare for a Mad Max kind of saga.
Source: reddit
Topic: AI Moral Status
Posted: 1740928528 (Unix timestamp)
♥ 6
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_mfglh6b","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"rdc_mfggway","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
  {"id":"rdc_mfgc7v2","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"rdc_mfgubem","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"rdc_mfm5rum","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
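A raw response like the one above can be parsed and checked against the coding scheme before it is stored. Below is a minimal sketch in Python; the dimension names come from the table above, but the lists of allowed code values (and the function name `parse_coding_response`) are illustrative assumptions extrapolated from the codes visible in this dump, not the tool's actual schema.

```python
import json

# Allowed values per coding dimension. ASSUMPTION: these vocabularies
# are inferred from the codes visible in this page, not a real spec.
SCHEMA = {
    "responsibility": {"none", "company", "ai_itself"},
    "reasoning": {"none", "consequentialist", "deontological", "virtue"},
    "policy": {"none", "regulate", "liability", "industry_self"},
    "emotion": {"none", "fear", "outrage", "approval", "indifference"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and validate each record.

    Raises ValueError on malformed structure, a missing comment id,
    or a code outside the allowed vocabulary.
    """
    records = json.loads(raw)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coded comments")
    for rec in records:
        if "id" not in rec:
            raise ValueError(f"record missing comment id: {rec}")
        for dim, allowed in SCHEMA.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim} code {value!r}")
    return records

raw = ('[{"id":"rdc_mfm5rum","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
coded = parse_coding_response(raw)
print(coded[0]["emotion"])  # → fear
```

Validating at parse time keeps hallucinated or misspelled codes out of the results table, so a record like the "Coding Result" shown above can be trusted to contain only vocabulary terms.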