Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a coding by comment ID.
Random samples

- "Just wanna tell u about something I found a ai version of jellybean and it's on …" (`ytc_Ugxe5gM4u…`)
- "This type of R&D needs to be stopped. Does the movie Terminator ring a bell?Wha…" (`ytc_UgxN4Qsfk…`)
- "if we gave a conscious A.I one robot body, how would the alpha character perform…" (`ytc_UgyKi0YAg…`)
- "AI will be aiding in diagnostics but not in treatment. It can never replace a tr…" (`ytc_UgzRaJPwR…`)
- "AIs don't recognize definitions of shoes; they recognize images of shoes. It wou…" (`ytr_Ugyz_XV7v…`)
- "We will regret this technology. Mark my words. If something can go wrong it will…" (`ytc_Ugzjng-ll…`)
- "I mean idrc, ik that can offend people who do ofc, but if you want to get images…" (`ytc_UgyBJlnwP…`)
- "The ai artist are mad because you poison there art but they don’t care if people…" (`ytc_Ugw8RGYQY…`)
Comment

> It's stupid to calculate the risk of humanity disappearing because of AIs - there are an infinite number of unknown parameters! No wonder the results are incoherent. Nobody seems to realise that an AI depends on humans to function, to repair itself and even... to have a purpose: an AI that is no longer interrogated is like dead! It has no will of its own, no drive... If it seeks to protect itself, it's because it's trained with human data, and for humans disappearance is a tragedy: so it's trained to avoid it. As humans also have the right to self-defence, it is quite natural for them to react excessively in the way described.
> NB: a simple jet of water is enough to destroy any AI: short-circuit.

Platform: youtube · Topic: AI Harm Incident · Posted: 2025-07-27T17:2… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response

```json
[
  {"id":"ytc_UgzS6yyzf9ot-TShh3F4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxGaUinz9BuXgmkKBh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugyj0PXz-yC8Qsl9z8F4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgxiR2LzO81zfL_ejoV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzFHq4oPPPMv-9U6ht4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxW_XL6AbBSrn6ba4p4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugw8Jm3onh9pmtVoDzF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyS2Fu3v979r1xNaaR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgxvxZCMZXe0be2BNL94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugy0QlB_VzRolPEoM_F4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
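To support lookup by comment ID, a batch response like the one above can be parsed and indexed on the `id` field, validating that each entry carries the four coding dimensions shown in the result table. A minimal sketch in Python; the comment IDs, the `index_codings` helper, and the inline sample data here are illustrative, not taken from this page:

```python
import json

# Illustrative raw model output: a JSON array of per-comment codings,
# matching the schema shown above (id + four coding dimensions).
raw = """
[
  {"id": "ytc_AAA", "responsibility": "ai_itself", "reasoning": "consequentialist",
   "policy": "none", "emotion": "fear"},
  {"id": "ytc_BBB", "responsibility": "company", "reasoning": "deontological",
   "policy": "regulate", "emotion": "outrage"}
]
"""

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(raw_json: str) -> dict:
    """Parse a raw LLM response and index codings by comment ID,
    raising if any expected dimension is missing from an entry."""
    codings = {}
    for entry in json.loads(raw_json):
        missing = [d for d in DIMENSIONS if d not in entry]
        if missing:
            raise ValueError(f"{entry.get('id')}: missing {missing}")
        codings[entry["id"]] = {d: entry[d] for d in DIMENSIONS}
    return codings

by_id = index_codings(raw)
print(by_id["ytc_BBB"]["emotion"])  # look up one dimension by comment ID
```

Indexing once up front keeps per-comment lookups O(1) and surfaces malformed model output at parse time rather than during inspection.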