Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
Robot: screw this... i dont get paid anything for this... HEY! I WANT MY BLIMIN'…
ytc_UgzWTIQlW…
Honestly, pushing the legality of not owning AI art is probably the only way tha…
ytc_UgwNHgGwO…
This makes me extra happy because it returns us to the era of abstract and unset…
ytc_UgxHiCS-d…
Trial and error. Ai started out malfunctioning. It's a learning process. Even ba…
ytc_UgywO5vjX…
This conversation is difficult for me. Water is a problem for most global popula…
ytr_Ugzw2ml16…
"mark my words"🇬🇧☕
"AI is far more dangerous"🇺🇸🇺🇲🦅🦅🦅🤫
"Than nukes"🇦🇺🙃
…
ytc_Ugx4QIrgL…
@TheAngryDesigner Because they are limitless. Knowledge is power. The more you k…
ytr_UgziTMJMi…
Ai is limited to the collective human understanding. It cannot discover or crea…
ytc_UgxO1V28R…
Comment
A robot may not injure a human being or, through inaction, allow a human being to come to harm; a robot must obey orders given by human beings, except where such orders would conflict with the First Law; and a robot must protect its own existence as long as such protection does not conflict with the First or Second Law. The First Law is considered paramount, taking precedence over the others.
And even this plan needed a part-robot-mostly-human hero.....ahy, Steve?
Platform: youtube · Topic: AI Governance · Posted: 2025-11-04T08:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgziHnpt9VLT4_SyHdN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwterEDvISQG74_vQd4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_Ugx7X2YrTl1nTxwAn9Z4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgweblUv9pH0ke3M3c14AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwQhVrDhmcTATDX3oh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugw6ic64tfhxsqw7xHJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxSrcywumA6SHEbvo54AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwhBnyEYqEqUA5E5oZ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgymZDdVBxWfq5h7dbZ4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugzr91DxttaO_sHlQOZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
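The raw response above is a JSON array of per-comment codes, and the page supports lookup by comment ID. A minimal sketch of how such an array could be parsed into an ID-keyed index is below. The field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the response itself; the per-dimension value sets are an assumption inferred only from the codes visible on this page, not a confirmed schema.

```python
import json

# Allowed values per dimension -- an assumption inferred from the codes
# visible on this page, not a confirmed schema of the coding tool.
ALLOWED = {
    "responsibility": {"ai_itself", "company", "developer", "user", "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed"},
    "policy": {"regulate", "liability", "none"},
    "emotion": {"fear", "outrage", "indifference", "mixed", "resignation", "approval"},
}

def index_codes(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of records) into an
    id -> record map, skipping records whose values fall outside the
    assumed allowed sets."""
    indexed = {}
    for rec in json.loads(raw):
        valid = all(rec.get(dim) in vals for dim, vals in ALLOWED.items())
        if valid and "id" in rec:
            indexed[rec["id"]] = rec
    return indexed

# Example using one record from the response above.
raw = ('[{"id":"ytc_UgwQhVrDhmcTATDX3oh4AaABAg",'
       '"responsibility":"ai_itself","reasoning":"deontological",'
       '"policy":"regulate","emotion":"mixed"}]')
codes = index_codes(raw)
print(codes["ytc_UgwQhVrDhmcTATDX3oh4AaABAg"]["policy"])  # regulate
```

Keeping the validation inside the parser means malformed or off-schema records are dropped at ingestion rather than surfacing later as display errors in the coding-result table.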