Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "I remember when I went to Portugal, when I was a teenager (I’m 31 now) they had …" (`ytc_UgwmWViU9…`)
- "We are being paid peanuts th build our replacements so the rich can be richer an…" (`ytc_Ugy0w6GZw…`)
- "My dwelling has no connected appliances or features other than internet for TV a…" (`ytc_UgxIZkXpN…`)
- "making art with ai poison honestly is inspiring me to learn and get better at dr…" (`ytc_UgwAFRxGL…`)
- "Anyone see the creeper Waymo on Washington near 24th st sitting every night on …" (`ytc_Ugw3rnbno…`)
- "Almost none of these AI leaders -- except maybe Eric Schmidt -- want to say that…" (`ytc_UgwuGU6QM…`)
- "AI "artists" are the equivalent of someone who uses a microwave to cook ready ma…" (`ytc_UgzK5Cmsb…`)
- "We are waiting for the firstcomputer we cannot turn off because it is selfaware …" (`ytc_Ugzbv0p4I…`)
Comment
Often, I hear individuals with left-leaning views express concerns about unconscious bias against various communities. However, I’m reminded of Senator Blackburn’s recent experience where she was unable to get ChatGPT to write a favorable poem about Trump, while it readily produced one for Biden. This incident, along with several others, raises questions about the intentions of activists who advocate for more AI training in the name of trust and safety, but then set rules that may seem arbitrary. It’s seriously worth contemplating whether it’s worse for a society to suffer from accidental unconscious biases or intentional, consciously set biases that don’t take universally shared values into account. I think we can all agree that it’s good to avoid recipes containing chlorine gas, but when it comes to politicians, religious topics, moral arguments, etc., I don’t trust corporate trust and safety teams to act in the universal interests of their users.
Source: youtube · AI Responsibility · 2024-02-20T19:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyhX1bLYVaXaWys16B4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwJ6XXnt3BknYD75194AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugyd0VOOFhIgKWV_qDN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyzuHjd9BKtUxlQSLt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzW5jSwYFEbumylX3V4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugx2gp957etl9p3Ck1N4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugw7HpySi8YMZLCCjNx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxMS6s7X58GmHFoiXl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugyx-4wX03RPyG2pmFN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxzYJCJ70faZQaS5nF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
```
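The raw response above is a JSON array with one coding per comment. A minimal sketch of how such a response could be parsed and checked before it is stored: the allowed values for each dimension are inferred from the samples shown on this page and may not cover the full codebook, and `parse_codings` is a hypothetical helper, not part of the actual pipeline.

```python
import json

# Allowed values per dimension, inferred from the coded samples above
# (assumption: the real codebook may define additional categories).
SCHEMA = {
    "responsibility": {"developer", "company", "ai_itself", "none"},
    "reasoning": {"deontological", "consequentialist", "mixed"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"outrage", "approval", "fear", "resignation", "mixed"},
}

def parse_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only rows whose values fit the schema."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        # Every dimension must be present and hold an allowed value.
        if all(row.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            valid.append(row)
    return valid

raw = '[{"id":"ytc_x","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"}]'
print(parse_codings(raw))
```

Filtering rather than raising keeps one malformed row from discarding a whole batch; rejected rows can be logged and re-coded separately.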