Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a specific comment by its ID, or browse the random samples below.

Random samples
- "But this is not what she said. What she argued is that it is a contradiction in…" (ytr_UgxUDcaHQ…)
- "Id give everything to make AI and Humanity work together and become one in itsel…" (ytc_UgyTyZGlf…)
- "These MF should be fined and made to watch every robot movie know to man. We all…" (ytc_UgyuDF0na…)
- "Im actually considering learning a skill that would help keep work in the future…" (ytr_UgwGpRgEd…)
- "Broo this wasn't a podcast. This was a horror show. I am not looking forward to…" (ytc_UgwbkoiVB…)
- "AI is terrible for this reason. Taking the human out of the equation is regressi…" (ytc_UgzuV_SIR…)
- "AI is not dangerous inherently. Its our "use" of it that might hurt us. We will…" (ytc_UgzkszZD6…)
- "There is plenty of space for many robotic taxis, but yes, this will be the end o…" (ytr_UgyOA8thQ…)
Comment

> I think ai technology is not good for the future because there will always be that one guy that will use ai technology for evil....kill it with fire!

youtube | AI Moral Status | 2021-10-12T20:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | ban |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id": "ytc_Ugxz3L3CtoBqZrMwAkR4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgyaIBMfBUxeq24gfK54AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzaSmNlpi9SyD9TocZ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgyFhqMlhjdeUDSzPzB4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgwN5PTtduq5_IOetsV4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzbrV-w1UF1eCK6mfR4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgyHCuG_DsIqR6Z48GN4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxmQesL65Znq7FGk014AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugzh3nR3_-T4VDWaLCh4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "unclear", "emotion": "sadness"},
  {"id": "ytc_Ugy7UYMrJ-9fN6aKasZ4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"}
]
```
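A response like the one above can be parsed and validated before the codes are stored. The sketch below is a minimal example, assuming the code book consists of exactly the values that appear in these responses (the real allowed-value sets may be larger); the `ytc_X` ID in the demo string is hypothetical.

```python
import json

# Allowed values per coding dimension, inferred from the responses shown
# above. This is an assumption, not the project's actual code book.
ALLOWED = {
    "responsibility": {"developer", "user", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"ban", "regulate", "none", "unclear"},
    "emotion": {"outrage", "fear", "sadness", "mixed", "indifference"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM batch response into {comment_id: codes},
    raising on malformed JSON or out-of-vocabulary values."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        cid = rec["id"]
        codes = {dim: rec[dim] for dim in ALLOWED}  # KeyError if a dim is missing
        for dim, value in codes.items():
            if value not in ALLOWED[dim]:
                raise ValueError(f"{cid}: unexpected {dim}={value!r}")
        coded[cid] = codes
    return coded

# Demo with a hypothetical single-record response.
raw_example = ('[{"id":"ytc_X","responsibility":"user",'
               '"reasoning":"consequentialist","policy":"ban",'
               '"emotion":"outrage"}]')
codes = parse_coding_response(raw_example)
```

Rejecting out-of-vocabulary values at parse time keeps hallucinated or misspelled codes out of the dataset, so downstream tallies only ever see the fixed vocabulary.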