Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "ive seriously been seeing malicious ai scams everywhere on instagram, with convi…" (ytc_UgwhGVark…)
- "IF JOBS ARE REPLACED BY AI, THE SAME AMOUNT OF GOODS IS STILL PRODUCED! YOU JUST…" (ytc_UgwhJdbrj…)
- "All AI has shown is that humans rejoice in rehashing and recreating everything t…" (ytc_UgxFnOWzt…)
- "As someone who is writing a book. I lost my job mid way through and couldn't aff…" (ytc_UgxsA8f9T…)
- "Want to emphasize saying "maybe another paper will solve hallucinations in AI" i…" (ytc_UgwwD3Mq2…)
- "@WysteriousWhims the ai is the art you idiot. how much time do you think it took…" (ytr_UgyM9fHkx…)
- "Is it legitimate real AI....or a human devolped program given choices and pre-re…" (ytc_UgyV9u4RS…)
- "I don't think calling small AI startups selling AI crap ware to big companies co…" (ytc_UgwsZXEYl…)
Comment
I also think gen AI is fair use. Sure, it's "automated" fair use on steroids, but that doesn't make it not fair use.
On the "lying" thing, people seem to think it is "taking initiative to protect itself". I don't think so. What is the dataset of web data of human responses to such threats going to look like? People will lie, cheat, and steal to protect themselves. So "lying" to protect oneself is just what's in the dataset, and that's how predictive modeling would be expected to respond, I would think.
I'm not sure what "copy itself to another server" means or how that supposedly works.
Source: youtube
Posted: 2024-12-16T00:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[{"id":"ytc_UgymSciS9-4kOGe8DB94AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"fear"},
 {"id":"ytc_Ugxvgirs5dDdgts0iKB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
 {"id":"ytc_UgwH8vxiYbI7QZM52Nh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"fear"},
 {"id":"ytc_UgxDF3aqTeiSw2_j33l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
 {"id":"ytc_Ugxe_EuwLuNMIEukIB94AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
 {"id":"ytc_Ugw6YTviQ9iI91qN4s54AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
 {"id":"ytc_Ugw2Fb4PoVBLBju5ufJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
 {"id":"ytc_Ugwn95Sno3IE0-HJI354AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgxG8e8KV2CqJHIHpDR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
 {"id":"ytc_Ugzs6NGPlFBW8eGDs5p4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"}]
```
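A raw response like the one above can be parsed and sanity-checked before it is written into the coding table. The sketch below is a minimal example, assuming the four dimensions shown here (responsibility, reasoning, policy, emotion); the allowed value sets are inferred only from the responses visible on this page, not from the project's actual codebook, so treat them as placeholders.

```python
import json

# Allowed values per coding dimension. NOTE: inferred from the sample
# responses above; the real codebook may define additional categories.
DIMENSIONS = {
    "responsibility": {"none", "developer", "ai_itself"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"none", "liability"},
    "emotion": {"fear", "approval", "indifference", "outrage", "resignation"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and validate every record."""
    records = json.loads(raw)
    for rec in records:
        if "id" not in rec:
            raise ValueError(f"record missing comment id: {rec}")
        for dim, allowed in DIMENSIONS.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec['id']}: unexpected {dim}={value!r}")
    return records

# One record from the raw response shown above.
raw = ('[{"id":"ytc_UgymSciS9-4kOGe8DB94AaABAg","responsibility":"developer",'
       '"reasoning":"virtue","policy":"none","emotion":"fear"}]')
records = parse_coding_response(raw)
print(records[0]["emotion"])  # fear
```

Validating at parse time means a model response that drifts outside the codebook (a new emotion label, a missing dimension) fails loudly instead of silently entering the coded dataset.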