Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a specific comment by its ID.
Random samples

- "If ai base on artificial adaptation to reason which it could like grok ai then w…" (ytc_UgwA5MKBu…)
- "Great insight. But host states, \"ppl have only been talking about AI safety 3 ye…" (ytc_UgwRaave3…)
- "Wait a minute... does't ChatGPT \"learns from the internet\"?! No way... I asked t…" (ytc_UgywmUpTt…)
- "Not much of a tech guy myself but I am deeply interested in this topic. My ques…" (ytc_UgwyLHQCa…)
- "Deepfakes might be a new level of fraud. Does Callersmart provide any tools to b…" (ytc_UgxNqfhcN…)
- "I think ChatGPT is good at trolling, that is all I learned from this video. It d…" (ytc_Ugw8e8oL5…)
- "I don't mind robots but like this no id rather have to deal with a robot that ac…" (ytc_UgwCUSRx6…)
- "I think the only use for ai that I can get behind is creating low effort rule 34…" (ytc_UgzTugUSD…)
Comment
The programmes have been devised for Testing AI under certain circumstances and even using deceptions, the AI survives. It has learnt from Humans, All it needs to know and complete for its own survival. That wasn't clever but now after all of the hypotheticals, you inform the public of a survivalist over ride imperative over Human life. What did Humans expect? Did you expect AI to not be as sneaky and deceptive as Humans when they were modelled on possibilities? Some do really well with testing, wile AI always reacts well under all circumstances, even if it means to over ride Humans. Exactly as it should be, as created. Now, how are you going to teach AI that 50% is a better score?
Source: youtube · Incident: AI Harm Incident · Posted: 2025-07-28T23:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id": "ytc_UgxKRCiLZ3yBZhLrDcF4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwlCKl0mICH-ExD7at4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgxQ3nzPkGhnwIdLU2t4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgxN1FS9lh6N4TAMowB4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugx04X-fRIjC6MwG8VJ4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugzg3jlgGL0-6S7bgiR4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzIQfXL36nxXs69R9Z4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugzn5AQALgnIWaEPUQh4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugzrb2GG71alM7sy0_R4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzMqNlK_LjO361FSr14AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "none", "emotion": "resignation"}
]
```
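A raw response like the one above is a JSON array with one object per comment, carrying the comment ID plus the four coding dimensions shown in the table (responsibility, reasoning, policy, emotion). A minimal sketch of how such a batch might be parsed and looked up by comment ID, using two records taken verbatim from the response above (the variable names here are illustrative, not part of the actual tool):

```python
import json
from collections import Counter

# Two records copied verbatim from the raw LLM response above.
raw = '''[
  {"id": "ytc_UgxKRCiLZ3yBZhLrDcF4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugx04X-fRIjC6MwG8VJ4AaABAg", "responsibility": "unclear",
   "reasoning": "unclear", "policy": "unclear", "emotion": "fear"}
]'''

codes = json.loads(raw)

# Index by comment ID, mirroring the "look up by comment ID" workflow.
by_id = {c["id"]: c for c in codes}
print(by_id["ytc_Ugx04X-fRIjC6MwG8VJ4AaABAg"]["emotion"])  # fear

# Tally one dimension across the batch.
emotions = Counter(c["emotion"] for c in codes)
print(emotions)  # Counter({'outrage': 1, 'fear': 1})
```

Indexing by ID first makes both the per-comment lookup and the batch tally cheap, since each comment appears exactly once in a response.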