Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "If you want an AI like ChatGPT to be conscious, let it talk to itself, at its ow…" (ytc_UgwM94IpK…)
- "So what happens when everyone is a plumber? Taking into account a.i. and inform…" (ytc_UgzSh1y1A…)
- "There is a simple solution. DONT programm robots with feelings! Feelings make ev…" (ytc_UgxaMVNny…)
- "It mind control dangerous technology AI weaponise believers only 1% see them peo…" (ytc_UgyRTMIUy…)
- "So speculative. Yes a crisis is needed because nobody understands why a non-liv…" (ytc_UgzPP67Rf…)
- "Won't it mean that so many things will be so much cheaper? Therefore people will…" (ytc_Ugz39VeVv…)
- "They don't understand art nor AI. Like, they don't understand how an LLM is fund…" (ytc_Ugy261PRC…)
- "Absolutely NOT. Cities voted FOR the candidates that wanted A.I. regulations. Lo…" (ytr_Ugwh_e1m-…)
Comment
Yes lets ask an astrophysicist about the workings and dangers of AI. Same thing right. He does not seem to understand that it does not matter if people want to use AGI or not. Geoffrey Hinton, one of the founders of deeplearning, has outlined why a development like this is so dangerous. At some point there is no option not to use it, governments will rely on it and you are basically bumping humans down a step in the food chain
youtube · AI Moral Status · 2025-08-10T21:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | ban |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugx4xk6sitWbQt_oU-h4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxFUAurxBBABkLBXhV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgykfZH_ODyA0wKTd-V4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxJU-DKv0ylNVRO4B14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy5XcWHLVzawvqPbSV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugy6lJebl7626QoONc94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwBXQWUN6fxWDWOaCN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgykDaCCgUiyURTgUpZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwGIaumVBNSuuOeYDZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugywmk47QRlSP6nqSst4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}
]
```
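A raw batch response like the one above can be turned into per-comment codes with a small parser. The sketch below is a minimal, hypothetical example: the allowed values per dimension are inferred only from the codes visible in this sample (the full codebook may include more), and `parse_batch` is not part of any tool shown here.

```python
import json

# Allowed values per dimension, inferred from the sample response above
# (assumption: the real codebook may define additional values).
ALLOWED = {
    "responsibility": {"none", "developer", "company", "ai_itself"},
    "reasoning": {"unclear", "consequentialist", "deontological"},
    "policy": {"none", "unclear", "regulate", "ban"},
    "emotion": {"indifference", "fear", "approval", "mixed", "outrage"},
}

def parse_batch(raw: str) -> dict:
    """Parse a raw LLM batch response into {comment_id: codes}.

    Records with a missing/unrecognized id prefix or an
    out-of-vocabulary value are skipped rather than failing the batch.
    """
    coded = {}
    for rec in json.loads(raw):
        cid = rec.get("id", "")
        # Comment ids in this dataset start with ytc_ (top-level) or ytr_ (reply).
        if not cid.startswith(("ytc_", "ytr_")):
            continue
        codes = {dim: rec.get(dim) for dim in ALLOWED}
        if all(codes[dim] in ALLOWED[dim] for dim in ALLOWED):
            coded[cid] = codes
    return coded
```

Skipping malformed records instead of raising keeps one bad line in a model response from discarding the whole batch; the dropped ids can then be re-queued for recoding.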