Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by ID, or inspect one of the random samples below.

Random samples
- “The very first question in cases we have to choose is, whose safety is the most …” (ytc_Ugzslwt9Q…)
- “Can people stop making comments like this? As long as there are still dumb peopl…” (ytr_Ugz9dfrxc…)
- “This. Climate change has real ecological concerns whereas AI doomsdaying is so o…” (rdc_kvetmew)
- “I disagree with the idea of Elan Musk of creating policies behind closed doors o…” (ytc_UgyzC39Pn…)
- “It’s not surprising seeing Big Corpo using a false narrative to justify a survei…” (ytc_UgyQ6iDkH…)
- “Idk but google ai studio can do it better with actual file management.And can ev…” (ytc_UgzWXaj9H…)
- “TLDR - slapping an AI sticker on your product by bean counters bandwagoners blin…” (ytc_UgwIlrFJU…)
- “If AI is actually taking jobs why don't we work to make the planet a better plac…” (ytc_UgwwhQbwr…)
Comment
- Video: "Hot Robot At SXSW Says She Wants To Destroy Humans | The Pulse | CNBC"
- Text: “Hot.”
- Platform: youtube
- Category: AI Moral Status
- Posted: 2016-11-09T23:3…
- ♥ 126
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[{"id":"ytc_Ugh5F38PcjWDh3gCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgiooCLnnS39EngCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UggI9Gb7W5JeS3gCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UghriRFYhsJrrHgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UghgHDGbmSbNOXgCoAEC","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugi7GZAMMDtaI3gCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgiFG1PO1xAOBngCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgjFHqon-Nyyl3gCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugjz4FSe5ADcx3gCoAEC","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgjMJJzseEAKG3gCoAEC","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"}]
```
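A batch like the one above can be parsed and spot-checked programmatically before the codings are trusted. Below is a minimal sketch in Python, assuming the raw response is a JSON array of objects with the five keys shown; the allowed value sets are inferred only from the labels visible on this page and are likely incomplete, so treat them as placeholders for the real codebook.

```python
import json

# Value sets inferred from labels seen in this page's output; assumed, not exhaustive.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "distributed", "unclear"},
    "reasoning": {"unclear", "consequentialist", "deontological"},
    "policy": {"none", "ban", "regulate", "unclear"},
    "emotion": {"fear", "approval", "indifference", "unclear"},
}

def parse_batch(raw: str) -> dict:
    """Parse a raw LLM response and index the codings by comment ID.

    Raises ValueError if any dimension holds a value outside the codebook,
    which usually signals the model drifted from the prompt's label set.
    """
    records = json.loads(raw)
    coded = {}
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim} value {rec.get(dim)!r}")
        coded[rec["id"]] = rec
    return coded

# Hypothetical single-record batch for illustration.
raw = '[{"id":"ytc_x","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}]'
coded = parse_batch(raw)
print(coded["ytc_x"]["emotion"])  # indifference
```

Indexing by `id` also gives the "look up by comment ID" behavior directly: `coded["ytc_x"]` returns that comment's full coding.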