Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
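The ID lookup above can be sketched as an in-memory index keyed by comment ID. This is a minimal illustration, not the tool's actual implementation; the two records and their fields are taken from codes that appear later in this dump:

```python
# Minimal in-memory index for looking up a coded comment by its ID.
# Records mirror two rows from the raw LLM response shown below in this dump.
records = [
    {"id": "ytc_Ugw-gjCTHWI9SFuSQP54AaABAg", "policy": "regulate", "emotion": "fear"},
    {"id": "ytc_UgzAulysMiaHjzJWvs14AaABAg", "policy": "liability", "emotion": "outrage"},
]

# Build a dict index once; lookups are then O(1) per comment ID.
index = {r["id"]: r for r in records}

def lookup(comment_id):
    """Return the coded record for a comment ID, or None if unknown."""
    return index.get(comment_id)

print(lookup("ytc_Ugw-gjCTHWI9SFuSQP54AaABAg")["emotion"])  # fear
```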
Random samples

- I draw, I use ai. I like putting my art through ai programs seeing what comes ou… (ytc_UgxekHw_r…)
- I like how the moment A.I. doesn't just totally suck and actually does what A.I.… (ytc_UgzbvwmPN…)
- Right now we have Trump and his adminstration to deal with. That is enough for m… (ytr_UgxSFDbAL…)
- AI is good for menial tasks and sorting through data. NFTs would be useful to at… (ytc_UgwHm2IZ7…)
- Yes, finding a balance between regulating AI to prevent potential harm while als… (ytc_UgyUhHgGR…)
- Out of interest I asked DeepSeek one morning, "How are you today?" Imagine my de… (ytc_UgwD__Zv1…)
- I did this and it is very real. It is scary how real it is. You can do this with… (ytc_Ugwi39RmJ…)
- @Mister33JC so if in the near future ai gets so good that it doesnt need human y… (ytr_UgylEVtmi…)
Comment

> Why does nobody think about security... AI is nothing else than learning from examples ... lots of examples... and then do some statistics about possibilities.... but what when somebody starts to feed the AI with false information ... more false information than formerly trained..... Then suddently a mouse becomes a dog... at least that would be the least problem...
>
> Yes human make mistakes ... but they cannot be manipulated (ok, maybe with a lot of money 🫣)
youtube · AI Jobs · 2026-02-09T14:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgwB_FujTtz1FJ_QKAd4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"approval"},
{"id":"ytc_UgzjCKBQnwNHxavIvAB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgzAulysMiaHjzJWvs14AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugw-gjCTHWI9SFuSQP54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzO1IhC0SkmRH2nhJt4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugxg-98Fx6y2XP9-WiR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgwfojKoYnAZqIEy8TJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugx7cOBsZRQGMQJRmFR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugyiekd1swAeKVlELYR4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx1rNAbtJGZDpLB93d4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"}
]
```
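A raw response like the one above can be machine-checked before its codes are ingested. Below is a minimal validation sketch; the allowed values per dimension are inferred from the codes that actually appear in this dump, not from an official codebook, so the real schema may contain additional categories:

```python
import json

# Allowed values per coding dimension, inferred from the codes visible in
# this dump (assumption: the real codebook may define more categories).
CODEBOOK = {
    "responsibility": {"user", "company", "developer", "ai_itself", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "ban", "liability", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "resignation", "mixed"},
}

def validate_response(raw):
    """Parse a raw LLM response and reject rows with unknown IDs or codes."""
    rows = json.loads(raw)
    for row in rows:
        # Comment IDs in this dump start with ytc_ (comment) or ytr_ (reply).
        if not row.get("id", "").startswith(("ytc_", "ytr_")):
            raise ValueError(f"bad comment id: {row.get('id')!r}")
        for dim, allowed in CODEBOOK.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row['id']}: invalid {dim}={row.get(dim)!r}")
    return rows

# One row copied from the response above.
raw = ('[{"id":"ytc_Ugw-gjCTHWI9SFuSQP54AaABAg","responsibility":"user",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
rows = validate_response(raw)
print(rows[0]["policy"])  # regulate
```

Validating at ingestion time catches the common failure mode of LLM coders drifting outside the codebook (e.g. inventing a new `emotion` label), so malformed rows surface immediately instead of polluting downstream tallies.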