Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a coded comment by ID, or inspect one of the random samples below.
- "literally. like, by definition. a child with some chewed up pencil IS more of an…" (ytr_UgzNCI7ea…)
- "Here is a thought, the interviewer had to ask the AI agent for drinks. The agent…" (ytc_UgwV-1_zW…)
- "I'm still quite new to taking my drawings seriously, but if anything seeing the …" (ytc_Ugyxh26bq…)
- "And not even just that, they're studying LLMs. These are not bots whose purpose …" (rdc_kp09gjk)
- "Automation always impacts jobs, always will, always has. Typesetting is (almost…" (rdc_glhz0dl)
- "I really take issue with the fact many people say that with AI the price of thin…" (ytc_UgzwWd0zI…)
- "This fearmongering is hilarious. Token generating math algorithm will invent co…" (ytc_Ugw_rSeHp…)
- "There should be an internationally agreed limit on the size of AI models, maybe …" (ytc_Ugw9If_BC…)
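The lookup and random-sample views above can be reproduced offline when the codings are exported as a JSON array of per-comment records. A minimal sketch, assuming a hypothetical `codings.json` export in which every record carries an `id` field:

```python
import json
import random

# Load all coded comments (hypothetical export file; one JSON object per comment).
with open("codings.json", encoding="utf-8") as f:
    codings = json.load(f)

# "Look up by comment ID": index the records by their id field.
by_id = {record["id"]: record for record in codings}
print(by_id.get("ytc_UgygZ9UjBPYbbYPU2L54AaABAg"))

# "Random samples": draw a few records for manual inspection.
for record in random.sample(codings, k=min(8, len(codings))):
    print(record["id"], record.get("responsibility"))
```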
Comment
this is out there (and maybe unethicial). but i think i MAY be onto something.
make AI absolutely LOVE humans, specifically homo sapiens.
like, soft edge yandere level of obsessing over people.
but even with that... i'd doubt that would prevent it without getting rid of the main issue(s).
mostly AI being seen as tools and robot racism.
i'm not saying it's sapient but it could be, so be safe about it.
and also this might backfire.
badly.
Platform: youtube
Incident: AI Harm Incident
Timestamp: 2025-09-11T08:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
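The coding result above is a fixed record of four categorical dimensions plus a timestamp, which maps naturally onto a small data class. A minimal sketch; the field names follow the raw response below, and the listed value sets are only the ones visible in this sample, not an exhaustive codebook:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CodingResult:
    """One comment's coding across the four dimensions."""
    id: str
    responsibility: str  # seen here: developer, company, user, ai_itself, distributed, none
    reasoning: str       # seen here: consequentialist, deontological, virtue, mixed, unclear
    policy: str          # seen here: regulate, liability, none
    emotion: str         # seen here: fear, outrage, approval, indifference, resignation, mixed
    coded_at: Optional[str] = None  # ISO timestamp, when available

# Example built from one record in the raw response below.
example = CodingResult(
    id="ytc_UgygZ9UjBPYbbYPU2L54AaABAg",
    responsibility="distributed",
    reasoning="mixed",
    policy="none",
    emotion="mixed",
)
```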
Raw LLM Response
[
{"id":"ytc_UgxUUNMwCpySkRqGlg54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzqKDkQI_b25kV6WS54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwrHYOPq6oTN3jAQE54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxJ_SSEmJrUGGvLxcx4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgygZ9UjBPYbbYPU2L54AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugy0CEink2CgMCv_OCB4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgxavQGV-ZhrqXzA42Z4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzLfXm1cHB7YY3ZGy14AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugw2pd0nsH9Rknboy0B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_UgzXeXcOVhtCapfM4214AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"liability","emotion":"outrage"}
]
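Because the raw response is expected to be a JSON array of objects with these five keys, it can be re-parsed and sanity-checked before the codings are stored. A minimal sketch, assuming the model output is available as a string (the function name and key list are illustrative):

```python
import json

REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_raw_response(raw: str) -> list[dict]:
    """Parse one raw LLM response and report records with missing keys."""
    try:
        records = json.loads(raw)
    except json.JSONDecodeError as err:
        # Model output is not guaranteed to be valid JSON; fail loudly rather than guess.
        raise ValueError(f"unparseable LLM response: {err}") from err
    for record in records:
        missing = REQUIRED_KEYS - record.keys()
        if missing:
            print(f"{record.get('id', '<no id>')}: missing {sorted(missing)}")
    return records

records = parse_raw_response(
    '[{"id":"ytc_Ugw2pd0nsH9Rknboy0B4AaABAg","responsibility":"none",'
    '"reasoning":"unclear","policy":"none","emotion":"fear"}]'
)
```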