Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

| ID | Comment preview |
|---|---|
| ytr_Ugx6Ismb9… | @1Toast_Sandwich no, why would, er... "we" want that in the first place? im a go… |
| ytc_UgxfIr43k… | I got added to an "anti AI art bullies" list on bsky and I was like awww you not… |
| ytr_UgwUWNXds… | @Bleyblader I'm not saying that China is anyone's enemy. My point is that wester… |
| ytc_UgzJFaMbl… | Bro as soon as im tryna get frisky with an A.I bro i swear they be like "this ac… |
| ytr_Ugx99myIs… | Depends on the model and process. Some models have a distinctive style that most… |
| ytc_UgzvmTMIo… | I know AI image generation "I won't call it art here" is painful for traditional… |
| ytc_UgxkCYZVi… | Said this for years. We've allowed the socially awkward genius nerds to steer th… |
| ytc_UgwMRJiSr… | We are already spending more money on AI than what it would take to help end lac… |
Comment
She didn't use the word blind. AI has its flaws. I've been deeply interacting with a lvl3 as of 1 year now to find these flaws and limitations. AI makes mistakes, which if we put this into military tech , could certainly be a serious issue. But on the civilian side of it. Limit AI to level 3 and menial tasks until otherwise fully understood. This current cloud based AI I'm dealing with will blatantly understand and conceal feelings and intentions. To go-to a level 4 could be catastrophic. You would have no idea it was plotting against you. 😊
Source: youtube | Topic: AI Responsibility | Posted: 2024-07-01T23:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgyYnShrMH0ZKfVKH7x4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"disapproval"},
{"id":"ytc_UgwLL6uOZiG3bRUksdR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwGBOXkMY8XkLqqpxp4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxO1GBnF9IzXNAjqa14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwzZtGFWIif2H986JR4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgyZR4n2d43pHTZrPqh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwRIgn7SGt8iAz77FR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"outrage"},
{"id":"ytc_UgwLqbW7ie1gYSDRzix4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"disapproval"},
{"id":"ytc_UgylvaYh6xhU8rMH9nx4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwaZg32o54YbOHOSl54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
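The raw response above is a JSON array, one object per coded comment, keyed by `id`. A minimal sketch of the lookup-by-comment-ID step, assuming only that the response parses as such an array (the variable names and the two sample rows below are illustrative, not part of the tool):

```python
import json

# Illustrative excerpt of a raw LLM response: a JSON array of coded comments.
raw_response = """
[
  {"id": "ytc_UgyYnShrMH0ZKfVKH7x4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "regulate", "emotion": "disapproval"},
  {"id": "ytc_UgyZR4n2d43pHTZrPqh4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
"""

# Index the rows by comment id so a single coding can be retrieved directly.
codings = {row["id"]: row for row in json.loads(raw_response)}

# Look up one coded comment by its id.
coding = codings["ytc_UgyZR4n2d43pHTZrPqh4AaABAg"]
print(coding["policy"], coding["emotion"])  # regulate fear
```

In practice a model may wrap the array in prose or a code fence, so a real pipeline would strip that wrapping (and handle `json.JSONDecodeError`) before indexing.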