Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click to inspect)
Use AI at your own risk. I have been for years. I do not own any of it and I cla…
ytc_UgxAVRmht…
The bastards need to use sea water and pay the upfront costs to use titanium and…
ytc_UgzX1YmvO…
I see a short future with a blend of the two movies i Robot and the termanator f…
ytc_UgwUGYrkY…
The racist AI he talks about was actually programmed exclusively using 4Chan's /…
ytr_UgyOHHTEw…
I'm so lucky to have managed to get a job after looking for 7 months, this shit …
rdc_gkquipa
AI is the next generational disruptor. This needs to be stopped before it brings…
ytc_UgxelLJL1…
Alex it doesn't work like that....if you walked up to a stranger on the street a…
ytc_UgyfXceDu…
Ask Ai what will happen down the road because Gates already told you but you cho…
ytc_UgyggsZW-…
Comment
One major problem with AI's in our current society is the fact that they learn everything from preexisting information. Humans are not perfect. We are prejudiced and sometimes have rather uneducated world views. It has been shown that some AI's take over these traits. This can lead to racist or harmful views these AI's aquire.
This can be a major problem that we need to be aware of in the future
youtube
AI Responsibility
2023-01-10T00:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugyow5nwQ_tw-h3_Twt4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw-qKx9AEz9SO1nEVp4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugz4PtxJdqXPHgQM3ip4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx3ky_GRXdgpgZWGjV4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxDWAurlcyK7UeYyzp4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxmvOyf7KVDJvJ-fKp4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxuMaFCj4Naj5zTZPB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwDx_PYQsie7mSmzyR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugz5d4-8TSdBPBhq94l4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzNheKzByeDE7FmUv14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"}
]
```
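The raw response above is a JSON array with one record per comment, keyed by comment ID, with one value per coding dimension (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response could be parsed and indexed for the per-comment lookup shown on this page; the dimension names come from the Coding Result table, while the function name and the fallback value `"unclear"` are illustrative assumptions, not the tool's actual implementation:

```python
import json

# Dimensions taken from the Coding Result table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codes(raw: str) -> dict:
    """Parse a raw LLM coding response (a JSON array of records) and
    index the coded dimensions by comment ID."""
    records = json.loads(raw)
    by_id = {}
    for rec in records:
        # Keep only the expected dimensions; default missing ones to
        # "unclear" (an assumed fallback, mirroring the codes above).
        by_id[rec["id"]] = {d: rec.get(d, "unclear") for d in DIMENSIONS}
    return by_id

# One record excerpted from the raw response shown above.
raw_response = """
[
  {"id": "ytc_UgwDx_PYQsie7mSmzyR4AaABAg",
   "responsibility": "developer",
   "reasoning": "consequentialist",
   "policy": "regulate",
   "emotion": "fear"}
]
"""

codes = index_codes(raw_response)
print(codes["ytc_UgwDx_PYQsie7mSmzyR4AaABAg"]["policy"])  # regulate
```

Indexing by ID is what makes the "Look up by comment ID" view cheap: one parse of the batch response, then constant-time retrieval per comment.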