Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click to inspect):

- "If AI Was only purpose to research and tools of writers and i need a humen to …" (ytc_UgzWchkR5…)
- "Firstly ai is an existential risk and beneficence we should focus on that and re…" (ytc_Ugzy0RS8r…)
- "Lots of people claiming AI has a 'soul'. But it hasn't been shown to me that suc…" (ytc_Ugzx43xSg…)
- "i’ve seen ai write pretty well but Winston AI still catches the little signs tha…" (ytc_Ugyn76pag…)
- "Make a video on moloch as this is what is being used and will be the underlying …" (ytc_Ugyry6zFU…)
- "I am not scared of AI itself but of people who were lazy already becoming too du…" (ytc_UgwazBdYb…)
- "Can someone who programs AI explain to me why robots struggle so much with inton…" (ytc_Ugx1aPNfx…)
- "You want AI to create your art and then do what exactly? Go and consume and cons…" (ytr_Ugy6VD_Mp…)
Comment
I think developing AI is the only way to preserve sentience on earth. Humans are developing technology at an exponential rate yet we have unstable minds. When tech like synthetic biology is as available as cellphones, every human being becomes an existential threat to the entire human race. This spells disaster for earth-born sentience UNLESS we either improve ourselves or develop a better version of ourselves (i.e. a more stable mind that's not prone to diseases like schizophrenia)
| Field | Value |
|---|---|
| Source | youtube |
| Posted | 2013-08-18T02:2… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
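For reference, a coded record can be represented as a small typed structure. This is only a sketch: the field names mirror the dimensions in the table above, and the example values in the comments are just the codes visible on this page, not an authoritative codebook.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CodedComment:
    """One coded comment, mirroring the Coding Result table above."""
    comment_id: str      # e.g. "ytc_UgzrOf6t6aLbReca1AJ4AaABAg"
    responsibility: str  # observed: "none", "ai_itself", "developer"
    reasoning: str       # observed: "consequentialist", "deontological", "virtue", "unclear"
    policy: str          # observed: "none", "unclear"
    emotion: str         # observed: "fear", "approval", "mixed", "outrage", ...
    coded_at: datetime   # when the coding was stored
```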
Raw LLM Response
[
{"id":"ytc_UgzrOf6t6aLbReca1AJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw2S_xu9G5dFHpcYAl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugz5wlVdvxY_T2Ag3vF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgySzmVymDiha7QN81J4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwL2XiUPA7dkQyK36t4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxHnxkSaZnrZ3l_69h4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx7_gEnvWQqt_-j0lR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyHZ_VypyfLGRHFUH14AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugz5JX3dK_Cy3OSHByp4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"unclear"},
{"id":"ytc_UgzBoXUJBG9lDLN3KS94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"}
]
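The raw response is a JSON array with one object per comment in the batch. Below is a minimal sketch of how such a response could be parsed and keyed by comment ID to support the lookup box above; the function name, schema check, and error handling are illustrative assumptions, not the actual pipeline code.

```python
import json

def index_raw_response(raw: str) -> dict[str, dict]:
    """Parse a raw LLM batch response and key each coding by comment ID.

    Skips malformed entries instead of failing the whole batch, since
    model output is not guaranteed to match the expected schema exactly.
    """
    expected = {"id", "responsibility", "reasoning", "policy", "emotion"}
    by_id: dict[str, dict] = {}
    for entry in json.loads(raw):
        if not isinstance(entry, dict) or not expected <= entry.keys():
            continue  # drop entries missing required fields
        by_id[entry["id"]] = entry
    return by_id

# Hypothetical usage, assuming the response above is stored in `raw`:
# coding = index_raw_response(raw).get("ytc_Ugw2S_xu9G5dFHpcYAl4AaABAg")
# -> {"id": "...", "responsibility": "none", "reasoning": "consequentialist", ...}
```

Indexing by ID turns the "Look up by comment ID" feature into a dictionary lookup rather than a scan over the whole batch.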