Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID, or browse the random samples below.
- "I'm more worried Israel is going to cause world war III and get us all nuked to …" (ytc_UgwRHnXY6…)
- "The bubble will pop. Smart companies will be quick to reorient back to being tra…" (rdc_n7yignb)
- "I think the “integration” of AI happened way too quickly. Should have done it mo…" (ytc_UgxexldGB…)
- "I was brow beaten out of pursuing my dream of being an artist from a very young …" (ytc_Ugxc_If0q…)
- "Google's Gemini Pro 2.5 is currently ranked as the most intelligent AI on huggin…" (ytc_UgxCaJ0jW…)
- "I dont know why, But this video makes me feel unconfortable, deeply confused an…" (ytc_UghyAFRx7…)
- "I’m still encouraged by the life saving, cancer curing type benefits of AI in th…" (ytc_UgxaXwAeH…)
- "The problem I see is that AI is the new 10 factory workers being replaced by one…" (ytc_UgzIeI6d-…)
Comment
using racial minorities' difficulties in traditional publishing as a defense for using large language models to generate your novel is so insidious to me because I very vividly remember when Microsoft introduced a chatbot with a Twitter account you could talk to and within an hour she was racist. you don't think those language models have racial biases encoded within them? they do. using that as a defense is so ugly to me.
edit: my other thought is that if this leads to a bunch of spinoff writing challenges in november it will be just like inktober
Platform: youtube · Posted: 2024-09-06T20:0… · ♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
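A coded row like the one above can be sanity-checked against the coding scheme before it is stored. The category sets below are a hypothetical sketch inferred from values visible in this section, not the full codebook:

```python
# Hypothetical validation sketch: flag any dimension whose coded value falls
# outside the category set. ALLOWED is an assumption inferred from the values
# visible in this section, not the project's actual codebook.
ALLOWED = {
    "responsibility": {"company", "user", "ai_itself"},
    "reasoning": {"deontological", "consequentialist", "virtue"},
    "policy": {"ban", "liability", "none", "unclear"},
    "emotion": {"outrage", "fear", "mixed", "resignation"},
}

def invalid_fields(record: dict) -> list:
    """Return the names of any dimensions whose coded value is off-scheme."""
    return [dim for dim, allowed in ALLOWED.items()
            if record.get(dim) not in allowed]

# The coding result shown in the table above:
row = {"responsibility": "company", "reasoning": "deontological",
       "policy": "liability", "emotion": "outrage"}
print(invalid_fields(row))  # → []
```

A record with an unknown value (or a missing dimension) would come back with that dimension name listed, so off-scheme model output can be caught rather than silently tabulated.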
Raw LLM Response
[
{"id":"ytc_UgyjHxL2mfRznCSGVUF4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxWB-2_DPCQW7gm0Vh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwPA5-GpzBRslIapmF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxC0sD5PXXiBNYGaGF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyHU-TCRNzK8eO50Z14AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzwQ1BYYZ_gOaoUvf14AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzRyJZUUbNtLNlFZo54AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugxct7S2eqoMZ_QSM7J4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwTSxGE07tBdO1l0Rd4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxFyDtkncsDPgSA3z14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
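As a minimal sketch of the lookup-by-comment-ID step (assuming the raw response is always a valid JSON array with exactly the field names shown above), the batch response can be parsed and keyed by `id`. The two records below are copied from the response above; `index_codes` is a hypothetical helper name:

```python
import json

# Raw LLM response: a JSON array of per-comment coding records, as above.
# (Two records excerpted from the batch shown in this section.)
raw_response = '''
[
  {"id": "ytc_UgxC0sD5PXXiBNYGaGF4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgzRyJZUUbNtLNlFZo54AaABAg", "responsibility": "user",
   "reasoning": "virtue", "policy": "none", "emotion": "resignation"}
]
'''

def index_codes(raw: str) -> dict:
    """Parse the model output and key each coding record by its comment ID."""
    records = json.loads(raw)
    return {rec["id"]: {k: v for k, v in rec.items() if k != "id"}
            for rec in records}

codes = index_codes(raw_response)
print(codes["ytc_UgxC0sD5PXXiBNYGaGF4AaABAg"]["policy"])  # → liability
```

With the whole batch indexed this way, displaying the "Coding Result" table for a clicked comment is a single dictionary lookup.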