Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples (click to inspect):

- `ytr_UgxAafv6R…`: @Furiends GPT 3 was trained on 45 terabytes. In 2021, the overall amount of data…
- `rdc_d0f88v3`: I can't take the greed anymore. I hope they all rot in hell or become a seagull …
- `ytc_Ugy4rLV_w…`: I've seen AI drawings at my school / Which deeply disturbs me because we have an a…
- `ytc_Ugzzf_nRG…`: Regulations and taxes are the only solutions that could hold back the speedy ai …
- `ytc_Ugyn_PG51…`: lower cost... bruh i bought a 15pc coloured pencil set (with pencil sharpener), …
- `ytc_Ugz6hdfFQ…`: Currently watching the video so I'm not sure if this topic was mentioned but som…
- `ytc_Ugyb8F4br…`: It gets more sinister, I predict some governments no longer care about the peopl…
- `ytc_UgyHPWwsb…`: The only time I would say "Please" and "Thank you" to an AI is the day they beco…
Comment
Letting tech giants in the A.I. industry - who all have dubious safety/ethical records with A.I. and all want the same end goal of A.I. profit - "safety check" each other's models sounds like a recipe for Skynet. Even if their safety standards and biases weren't an issue, the problem is that even if these companies are competitors, they can only succeed if A.I. is accepted and used by the public on a huge scale, so helping A.I. seem safer benefits both sides. It's a Nash Equilibrium where they both benefit the most if they don't try to drag each other down with bad safety checks.
...who thought this up, the FAA?
Source: youtube · Posted: 2026-04-10T15:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
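The four coded dimensions above take values from a fixed vocabulary. As a minimal sketch, a coding result can be checked for out-of-vocabulary values before it is stored; note the allowed sets below are inferred only from the values visible in this batch, and the actual codebook may define more:

```python
# Allowed values per dimension, inferred from the codes that appear in this
# sample batch (an assumption -- the full codebook may include other values).
ALLOWED = {
    "responsibility": {"none", "company", "distributed"},
    "reasoning": {"unclear", "consequentialist", "deontological",
                  "contractualist", "mixed"},
    "policy": {"unclear", "none", "liability", "regulate"},
    "emotion": {"indifference", "approval", "outrage", "mixed", "fear"},
}

def validate(code: dict) -> list[str]:
    """Return the dimensions whose value is missing or out of vocabulary."""
    return [dim for dim, ok in ALLOWED.items() if code.get(dim) not in ok]

# The coding result shown in the table above passes cleanly.
result = {"responsibility": "company", "reasoning": "deontological",
          "policy": "regulate", "emotion": "fear"}
print(validate(result))  # [] -- all four dimensions are in-vocabulary
```

A non-empty return value flags the comment for manual re-coding rather than silently accepting a malformed model output.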
Raw LLM Response
```json
[
  {"id":"ytc_UgydsObKgWJzks654EN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugz5MnejrGSbsruraBp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzSKJ26vaEOaA8Jw0d4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugx4SXVsDWX27-pPXkR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyNR7QnmSlDJXiQn6h4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzxuugFx3yJzEWXRwF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyX5qqxJAtYwmi2N4R4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugz3uAY5bKrr9M91PB14AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzgpaoBeFt8SveWsF14AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxYNiQ3TxczR-tT8Kt4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"}
]
```
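The raw model response is a JSON array with one code object per comment, so retrieving the code for a single comment amounts to indexing the array by its `id` field. A minimal sketch, assuming the array shape shown above (the two entries below are copied from this batch):

```python
import json

# Raw batch response: a JSON array of per-comment codes, mirroring the
# structure of the "Raw LLM Response" payload above (trimmed to two entries).
raw_response = """
[
  {"id": "ytc_UgzgpaoBeFt8SveWsF14AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugz3uAY5bKrr9M91PB14AaABAg", "responsibility": "distributed",
   "reasoning": "contractualist", "policy": "regulate", "emotion": "fear"}
]
"""

# Index the parsed array by comment ID for O(1) lookup of any single code.
codes_by_id = {row["id"]: row for row in json.loads(raw_response)}

code = codes_by_id["ytc_UgzgpaoBeFt8SveWsF14AaABAg"]
print(code["emotion"])  # fear
```

Building the index once per batch avoids re-scanning the whole array for every inspected comment.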