Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples:

- "Those who can prevent that take-over are the already existing 4 Billion years 9…" (ytc_UgzBPsqlo…)
- "@colspiracy8326 Everything in Aber is closed pretty much. Went back there last y…" (ytr_Ugx-4M5X4…)
- "I'm so excited for the future that I don't even care what happens as long as we …" (ytc_Ugglgq8eI…)
- "thats how a.i agents work or a.i in generall but think of it in a deeper meaning…" (ytc_UgwJrEMVd…)
- "Its sad but true that more safety features a car have, the more ignorant the dri…" (ytc_UgxdXEscn…)
- "Exactly. What will happen is the super rich will have everything automated and w…" (ytr_UgxAX5T2n…)
- "The thing is that if we do create AI that is conscious we could just ask it what…" (ytc_Uggvb1v6x…)
- "It sounds like you found the dialogue between the presenter and the AI pretty su…" (ytr_Ugzkwjaor…)
Comment
History has taught us that humans are flawed, reactive and not predictive, we perceive, we suffer, we seek we make mistakes, we solve problems, only then we learn, only then connections form in our mind.
AI is predictive, it could measure, calculate what can go wrong in a very objective way and then act based on the goal it's aligned to do. That is scary, it's cold, calculative, highly logical, practical, devoid of emotions. AGI could change that.
youtube · AI Governance · 2026-03-04T04:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugykm1z19BMTkv2bThR4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwXNjALOmb78ZeqsoZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgwAGYENBOZMjvRIx894AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugx0R9estGbLEjlSPwt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwxBgT3wH6uVTT_q9p4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxV3MPIbgeIxir_yN94AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgxNRgld261l3ucViTF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxDYlj4Y8jUOACU1Hx4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugz1oSGh7vCCt-dJrYB4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgxlZ8Fp2GyMw8jMFT94AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"}
]
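A minimal sketch of how a response like the one above could be parsed and indexed for lookup by comment ID. The allowed values per dimension are inferred only from the visible sample; the real coding scheme may define additional values, and `parse_response` is a hypothetical helper, not part of the tool shown here.

```python
import json

# Dimension vocabularies inferred from the sample response above;
# assumption: the actual coding scheme may allow more values.
ALLOWED = {
    "responsibility": {"developer", "company", "government", "distributed", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "resignation", "mixed"},
}

def parse_response(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: codes}, dropping invalid rows."""
    coded = {}
    for row in json.loads(raw):
        codes = {dim: row.get(dim) for dim in ALLOWED}
        # Keep a row only if every dimension carries a known value.
        if all(codes[dim] in ALLOWED[dim] for dim in ALLOWED):
            coded[row["id"]] = codes
    return coded

# One row copied from the sample response above.
raw = ('[{"id":"ytc_UgwAGYENBOZMjvRIx894AaABAg","responsibility":"ai_itself",'
       '"reasoning":"mixed","policy":"unclear","emotion":"fear"}]')
coded = parse_response(raw)
print(coded["ytc_UgwAGYENBOZMjvRIx894AaABAg"]["emotion"])  # fear
```

Validating against a fixed vocabulary before indexing is what makes the per-comment table (Responsibility, Reasoning, Policy, Emotion) safe to render: malformed or off-schema rows are dropped rather than displayed.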