Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- "There is too little information for us to speculate what really happened. I'm su…" (`rdc_cjowxg2`)
- "Be afraid of AI? I have a higher probability of being unalived by fellow huma…" (`ytc_UgxfElQUU…`)
- "Imagine if they called her Apple Siri instead. I ask you is Siri not a form of A…" (`ytc_UgyOn41KH…`)
- "Some ai detectors are also reliable for me so far. Just like for images, truths…" (`ytc_UgzZeiywX…`)
- "Luckily, I live in a country where we have a social security system, like unempl…" (`ytc_UgyAUZszu…`)
- "I don't like AI, I'm not gonna argue for it. I WILL say, however, that I'm bad 🤣…" (`ytc_UgwyEx3s5…`)
- "I dont know anybody who scared of AI saying wall-e shit. they know the reality &…" (`ytc_UgySBtWut…`)
- "Just came here from your video about ai poisoning. Left big comment there alread…" (`ytc_Ugxk9toM4…`)
Comment

> The problem could be fixed by filtering training decisions through another AI that is trained on being good. The more complex the neural net becomes, the more those ethics will be intrinsically built into the modal.

youtube · AI Governance · 2025-06-17T11:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response

```json
[
  {"id":"ytc_Ugz52Uu5d7jdZooGen94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwNiI46-lZ_xn1zthF4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzAFia9aLqXWDKGZ-Z4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxD3IU8IumHHh6Q1r54AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugzel7LjO1le8JN-J414AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgxGxKSFfYUmqZQ4rIp4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzlsLRRX7faQVr0svh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxjyG5mRbMiTbV2cTF4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_Ugw0JgkbVkBZknwAaUV4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugzh0brdg4DNh490M054AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
```
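A batch like the one above can be checked before ingestion. Below is a minimal sketch of such a validator; the category sets are inferred only from the values visible on this page and the four dimensions in the Coding Result table, not from the tool's actual codebook, so `VALID` and `validate_batch` are hypothetical names.

```python
import json

# Hypothetical codebook, inferred from values visible in this sample output.
# The real coding scheme may define additional categories per dimension.
VALID = {
    "responsibility": {"ai_itself", "developer", "company", "user",
                       "government", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue",
                  "contractualist", "unclear"},
    "policy": {"none", "ban", "regulate", "industry_self", "unclear"},
    "emotion": {"approval", "outrage", "fear", "indifference", "mixed"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and check every record against the codebook."""
    records = json.loads(raw)
    for rec in records:
        missing = {"id", *VALID} - rec.keys()
        if missing:
            raise ValueError(f"{rec.get('id', '?')}: missing fields {missing}")
        for dim, allowed in VALID.items():
            if rec[dim] not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim} value {rec[dim]!r}")
    return records

# One record from the batch above, used as a smoke test.
raw = ('[{"id":"ytc_Ugz52Uu5d7jdZooGen94AaABAg","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"none","emotion":"approval"}]')
records = validate_batch(raw)
print(records[0]["emotion"])  # approval
```

Rejecting a whole batch on the first bad value is deliberate here: a malformed LLM response is usually easier to re-prompt than to patch record by record.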