Raw LLM Responses
Inspect the exact model output for any coded comment, or look one up directly by its comment ID.
Random samples

- "A certain amount of moral code plus empathy needs to be written into AI. .. not …" (ytc_Ugw0EyTyG…)
- "yo these comments are bitchy, he's just showing off how well it handles one aspe…" (ytc_UgxKA6rKl…)
- "Why dont u just ask like "Are u a robot or a real person with makeup" And the…" (ytc_UgzvxFgVC…)
- "Trust that Sam Altman does not give AF about you or your kids, so long as the us…" (ytc_UgxfL6jFE…)
- "16:51 I've said this before: AI isn't a tool, it's a service. Otherwise, my loca…" (ytc_Ugwt1tSGx…)
- "Been using AICarma for tracking brand mentions; its insights on AI hallucination…" (ytc_UgwTY_ojO…)
- "@negativezero8174 you being salty over it just proves that ai arts are not low…" (ytr_Ugxj31Ly2…)
- "We need to make robots feel empathy like us. If they didn't, they would kill us.…" (ytc_Ugjp4atLR…)
Comment
This is practically a new technology still under development It shouldn't be used to determine nobody's feature without rechecking in the results.
It's like throwing a beta needed to be tested for production.
The app might have be tested on enough white subjects, but not enough Black subjects.
Or probably the algorithm for the face doesn't understand Black faces yet. It might be working inefficiently with deep shades.
youtube · AI Surveillance · 2021-06-26T11:5… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[{"id":"ytc_UgyBCRn2IMzSnm26o6t4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugw_YLyqmyO8jrFbq3t4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxTNDuFrvpXaO2J7K54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyT9erG4D0vG72BgEl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzE0Q-sD4etuwQMMod4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgxJYz7fniZehiT1Klp4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwYDdnL4FjzfKW8QDp4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzfEAy_t5LjQcK1nB14AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgyVpmKUOg-LEZ1EzMt4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgyCnkacnYHmMddOB454AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"}]
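The raw response is a JSON array with one object per coded comment, keyed by comment ID. A minimal sketch of how such a response could be parsed and looked up by ID (the schema is inferred from the response above; the function name `index_by_comment_id` is hypothetical, not part of the tool):

```python
import json

# One record from the raw LLM response shown above
# (schema: id, responsibility, reasoning, policy, emotion).
raw_response = (
    '[{"id":"ytc_UgzE0Q-sD4etuwQMMod4AaABAg",'
    '"responsibility":"developer","reasoning":"deontological",'
    '"policy":"regulate","emotion":"mixed"}]'
)

def index_by_comment_id(raw: str) -> dict:
    """Parse a raw LLM response and index the coded records by comment ID."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

codes = index_by_comment_id(raw_response)
rec = codes["ytc_UgzE0Q-sD4etuwQMMod4AaABAg"]
print(rec["responsibility"], rec["policy"])  # developer regulate
```

This matches the coding result above for that comment: responsibility `developer`, policy `regulate`.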