Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- And then the rat are like "the AI tried to train me to eat humans but what we re… (ytr_Ugz957vNq…)
- demand your govt put in a law you can only have 1 robot for every 100 workers...… (ytc_Ugy3lolkb…)
- what surprises me in all this ai hype - despite wide adoption in business proces… (ytc_Ugw6IXnuv…)
- Collecting data from humans and using it while you call it artificial intelligen… (ytc_UgxLlWTro…)
- These bananas don't care about disabled people. They just don't have any read d… (ytr_Ugwe_pYeZ…)
- I usually don’t watch this girl (quite frankly, I’ve never really been a fan at … (ytc_Ugy2vmcm4…)
- Sorry, but your Ai robots can kiss and lick my taisty brown star. Your all f#54i… (ytc_UgyVuw9wB…)
- "if ai art has no soul, then what is this? 😏" *proceeds to post the most soulles… (ytc_UgxoTsJZ5…)

Comment
@finnycairns6127 It seems so, but most of the people in the field that build machine learning models haven't got a clue how to make AI aligned with our interests. It is an open problem on how to do it -- we are far behind on the understanding of how these models work vs capability of how powerful they get. We don't even know how to ensure that the AI understands our goals (technical term: Inner Alignment) -- but even if we were, what would be the goal we would give to an AGI to make it act in our interests?
youtube · AI Governance · 2023-05-17T04:4… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytr_UgxCcb86nnkWt4pW-kF4AaABAg.9oY-SbehkMZA5tZrjAjS_h","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytr_Ugz_4uL68XEflzgsmqp4AaABAg.AOOgVTlWzF_AVT1dcVMYFy","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytr_UgzY3g868RP0sUhVc9t4AaABAg.ANXkw-RgebbAVT1HK-WmUU","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytr_Ugyxlb6OCotXBHPw7rN4AaABAg.ANE2uD72vldAVT0a8H-SxY","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytr_Ugyxlb6OCotXBHPw7rN4AaABAg.ANE2uD72vldAVT0nYC3Luw","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytr_Ugz4exGNsbg8Xwuu7hp4AaABAg.AN9GrtnEQIUANG1p_F6Xh1","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytr_UgwWkqbHIEzsp5dv6at4AaABAg.9poTbhxQYI_9prl_C5QS55","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_UgyaW4tT9gJkU9kk7Y14AaABAg.9pnH7lNDcvg9pnMCaVvC9H","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytr_Ugw9QOB0a6S3_ROMy_V4AaABAg.9pn3_3MntjT9pn4wZNU7r_","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytr_Ugw9QOB0a6S3_ROMy_V4AaABAg.9pn3_3MntjT9pnhwrZkWP3","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
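The raw response above is a JSON array of coding records, one per comment ID, each assigning one value on the four dimensions shown in the Coding Result table (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response could be parsed and validated before ingesting it — note the allowed category sets are inferred from the coded samples on this page and may be incomplete relative to the full codebook:

```python
import json

# Allowed values per dimension, inferred from the sample records above.
# The actual codebook may define additional categories (assumption).
ALLOWED = {
    "responsibility": {"developer", "company", "government", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "ban", "none", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "resignation", "approval", "unclear"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed coding records.

    A record is kept when it is a dict with an "id" field and every
    coded dimension holds a value from its allowed set.
    """
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

# Usage: one well-formed record passes validation.
raw = ('[{"id":"ytr_example","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"unclear","emotion":"fear"}]')
print(len(validate_codings(raw)))  # 1
```

Validating against the category sets catches the common failure mode where the model invents an off-codebook label; rejected records can then be re-queued for recoding rather than silently stored.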