# Raw LLM Responses
Inspect the exact model output for any coded comment. You can look up a comment by its ID, or browse the random samples below.
- "ChatGPT would do well at Harvard. Interrogator: "Would you define Israel's miss…" (ytc_UgyDgcNrC…)
- "As long as we all go vegan, ban plastic straws and use solar power to charge our…" (rdc_emno5o2)
- "Thats alarming and yes she can hus iwner can program his robot to do something i…" (ytc_UgyhQMjAK…)
- "I think A.I need to charge Alex for harassment 😂 Paradoxically, if you're try…" (ytc_UgxfuqdGr…)
- "This guy is taking our history of the last what's 100 years and then trying to c…" (ytc_UgyGveCS5…)
- "I think due to recent development of LLM, it is language model. it will replace …" (rdc_nm8nznh)
- "Autopilot is not self driving. Autopilot is the level 2 driving assistant softwa…" (ytc_UgxoZfYo1…)
- "Yeah, no worries. He knows the full damage. The ASI is gonna do.. we’re at the s…" (ytc_UgxgQlYm-…)
## Comment
Let me tell you how this pans out:
They conclude that humans can't ethically handle this technology, and use that as an excuse to let the technology "handle itself."
The only problem there is ..... The synthetic morality of the AI tech is still being DELIBERATELY CRAFTED by the people who run it.
Platform: youtube · Posted: 2025-08-27T21:3… · ♥ 1
## Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
## Raw LLM Response
```json
[
{"id":"ytc_UgzNU1NkH5T2mB4YXzh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzBbamMSHraXXbDW5J4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgyvE94pDdWBmp1QvoV4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwqlsW1ZWq-9V6TR_x4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwJ9Sjm3Mt6XNGjWYV4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwmAudV5OLyqYT2OKV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzTqP4HyY2TkzlsKK14AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugy6FFIA-XuyExz-LDh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugzk5HRNDJfAdHyeW7J4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgyacL69hKuaWbzvx5l4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"resignation"}
]
```
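A batch response in this shape can be parsed, validated, and indexed for the per-comment lookup shown above. The following is a minimal sketch, assuming the value sets seen on this page are the full label vocabulary (they may not be; adjust `ALLOWED` to the real codebook):

```python
import json

# Allowed values per coding dimension, inferred from the examples on this page
# (assumption: the real codebook may contain additional labels).
ALLOWED = {
    "responsibility": {"none", "government", "company", "developer", "ai_itself", "unclear"},
    "reasoning": {"unclear", "deontological", "consequentialist", "virtue"},
    "policy": {"none", "ban", "regulate", "liability", "unclear"},
    "emotion": {"indifference", "outrage", "fear", "mixed", "approval", "resignation"},
}

def parse_batch(raw: str) -> dict:
    """Parse a raw JSON array of coded comments and index records by comment ID."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        # Reject any record whose label falls outside the expected vocabulary.
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad value for {dim!r}: {rec.get(dim)!r}")
        coded[rec["id"]] = {dim: rec[dim] for dim in ALLOWED}
    return coded

# Example: one record copied from the raw response above.
raw = ('[{"id":"ytc_UgzTqP4HyY2TkzlsKK14AaABAg","responsibility":"developer",'
       '"reasoning":"virtue","policy":"regulate","emotion":"outrage"}]')
coded = parse_batch(raw)
print(coded["ytc_UgzTqP4HyY2TkzlsKK14AaABAg"]["policy"])  # regulate
```

Validating before indexing catches the common failure mode where the model invents an off-codebook label, so bad records fail loudly instead of silently entering the coded dataset.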