Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- "What I noticed that whoever programmed these robots, must have been a male becua…" (ytc_UgwRGmNbg…)
- "Yep it does :) checked. So if you have it turned off ChatGPT won't train with yo…" (ytr_Ugz43wkf8…)
- "I'm not against holding OpenAI accountable, but why did you stay silent on this …" (ytc_UgwhLfdbO…)
- "One day a big reveal/surprise will be mentioned to the whole world .............…" (ytc_UgzdWpRyl…)
- "The problem with a lot of AI models is that theyre trained on billions of pictur…" (ytc_UgyUlA-RN…)
- "One of the best AI tools to humanize your work is Clever AI Humanizer. Its not l…" (ytc_UgwpNl7Sp…)
- "That's an interesting observation! The evolution of AI, like Sophia in the video…" (ytr_UgxoamrYN…)
- "Counterpoint: AI is good actually and this moment in time may be humans last cha…" (ytc_UgzYl9F4P…)
Comment
@13:39 that is where Hinton is being just dishonest.
The abstraction and generalisation a human mind does is nothing at all like the *simulation* an AI model does. The AI uses just a sequence of functions and weights, learned in training, to *compute* a result. Sometimes the computation results in something sensible, sometimes it does not. When it does not, *we* call that a hallucination, however the AI always computes results the very same way.
However we have no idea what the mind does.
youtube · AI Moral Status · 2026-03-01T00:4… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
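The table above is a single record from the batch response, with each dimension taking a categorical value. A minimal sketch of validating one coded record against the vocabulary observed in this sample — note the allowed sets here are inferred from the data shown on this page, not a confirmed codebook:

```python
# Category vocabularies inferred from the sample responses on this page;
# the real codebook may permit more values (assumption, not a confirmed schema).
VOCAB = {
    "responsibility": {"ai_itself", "government", "none", "distributed",
                       "company", "developer"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"unclear", "ban", "none", "regulate", "liability"},
    "emotion": {"indifference", "outrage", "approval", "fear", "mixed"},
}

def validate(record: dict) -> list[str]:
    """Return the dimension names whose value falls outside the vocabulary."""
    return [dim for dim, allowed in VOCAB.items()
            if record.get(dim) not in allowed]

# The row shown in the Coding Result table above:
row = {"responsibility": "ai_itself", "reasoning": "consequentialist",
       "policy": "unclear", "emotion": "indifference"}
print(validate(row))  # [] — every dimension is in-vocabulary
```

A non-empty return value flags dimensions the model coded with an out-of-vocabulary label, which is worth surfacing in the inspection view.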
Raw LLM Response
```json
[
{"id":"ytc_UgwB30v4Evlt6koIOP94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyL0TsUpw5qsIXLCX14AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgyjDRmYHp7ay0a7ynl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgyfkxNlKwTXLLcw8FR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgyUSRsUcIzfJIpDMAp4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugy2092F2Jd9thGayCh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugz8xBXqnat6J0_fTKZ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxiYP3DVbg2i8z4bqd4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugzt2bvEZVzHpdjOo2t4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugw_RDpmZmod1ZbJkwh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]
```
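The lookup-by-comment-ID view corresponds to parsing this batch response and indexing records by `id`. A minimal sketch in Python, assuming only what the response above shows (the five field names); the required-field check is an assumption about what a sensible inspection tool would enforce, not a documented behaviour:

```python
import json

# Raw LLM batch response, as shown above (truncated to two records here).
raw = '''[
{"id":"ytc_UgwB30v4Evlt6koIOP94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyL0TsUpw5qsIXLCX14AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]'''

REQUIRED_FIELDS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codings(raw_response: str) -> dict:
    """Parse the raw LLM response and index codings by comment ID."""
    records = json.loads(raw_response)
    by_id = {}
    for rec in records:
        missing = REQUIRED_FIELDS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')} is missing {missing}")
        by_id[rec["id"]] = rec
    return by_id

codings = parse_codings(raw)
print(codings["ytc_UgwB30v4Evlt6koIOP94AaABAg"]["responsibility"])  # ai_itself
```

Indexing by `id` makes the "Look up by comment ID" operation a constant-time dictionary access, and the missing-field check catches malformed model output before it reaches the Coding Result table.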