# Raw LLM Responses

Inspect the exact model output for any coded comment. Look up a comment directly by its ID, or pick one of the random samples below.
- After seeing all this trash talk on AI art I decided to jump on use it and actua… (`ytc_UgyASARe4…`)
- This video to me relates directly to the MBTI and proves that we cannot predict … (`ytc_UgycOfrfx…`)
- I think it's a misplaced hatred/fear, people shouldn't be afraid of AI or robots… (`ytc_UgzPjAWlt…`)
- Say “ai artist” slowly and in a deep voice to really let it sync in how utterly … (`ytc_Ugz4Vio-o…`)
- The dumbing down of society , I think this is a grim stupid future indeed. Nex… (`ytr_UgxqRvqM2…`)
- What a stupid idea to put AI into focus as a replacement for human input in the … (`rdc_i2vf2la`)
- I will forever die on the hill that art isn't for AI. this includes music, actin… (`ytc_UgyKVaDxc…`)
- Wait, so 19% of personal care jobs could be replaced? Barbers? How can a robot r… (`ytc_UgwP50iby…`)
## Comment
Love your stuff. Quickly I’m responding to a question where you ask (I’m paraphrasing) “basically can’t we wait and react to something that AI does if and when it finally shows that it can cause a lot of damage and then we cumulatively react and modify our behavior “
My response would be that take a look at global warming trends? We have sufficient information to make the connection between human activity and the destruction of our environment yet we continue down this dangerous path. Seems to me that humans fail to respond even when something slaps them in the face. We can have our house wiped out by a “100” year storm in Florida rebuild in the exact same plot of land and act surprised when it gets blown away 2 years later. So why would we even gamble with something that has so much potential for unrealized danger?
Source: youtube · 2024-12-26T13:1…
## Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
## Raw LLM Response
```json
[
{"id":"ytc_Ugz--w5v9NLFuI0HLNR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyK9PqiG93vC5Z5qgh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugz5PNBsAB935H-5Fh94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwHfOPWI1WN22d0AvR4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugyxhagj0nv-U8MyCTt4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzFrqnZ7sdgoG3bF4R4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzwqblHF83JZ5MCWvR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgxIiyW4_QdAqW7rdNV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwdEojchf_Bj2bqH9V4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzawtYWlWT1Z9p0sYV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"}
]
```
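The raw response is a JSON array of per-comment codes, one object per comment ID, with one value for each of the four dimensions shown in the coding-result table. A minimal sketch of how such a response might be parsed, validated, and indexed for ID lookup — the allowed value sets here are inferred only from the sample above (the real codebook may define more categories), and `validate_response` is a hypothetical helper, not part of the tool:

```python
import json

# Allowed values per dimension, inferred from the sample response above;
# the actual codebook (assumption) may include additional categories.
ALLOWED = {
    "responsibility": {"none", "developer", "ai_itself", "distributed"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "regulate", "industry_self"},
    "emotion": {"fear", "approval", "indifference", "mixed", "outrage", "resignation"},
}

def validate_response(raw: str) -> dict:
    """Parse a raw LLM response and index valid records by comment ID."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        # Collect any dimension whose value is missing or outside the codebook.
        bad = [dim for dim, ok in ALLOWED.items() if rec.get(dim) not in ok]
        if bad:
            raise ValueError(f"{rec.get('id')}: invalid value(s) for {bad}")
        coded[rec["id"]] = {dim: rec[dim] for dim in ALLOWED}
    return coded
```

With the records indexed by ID, the "look up by comment ID" view reduces to a single dictionary access, and a `ValueError` flags any response where the model drifted outside the codebook.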