Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "So you rather a badly done piece cause AI never did a pretty or good one and you…" (ytr_Ugz1V-MDI…)
- "The point about 'Info In, Info Out' jobs being dead is the hard truth devs need…" (ytc_UgzXc1DMs…)
- "Both are not AI because I saw my drunk ass friend did the same thing as Clip #2…" (ytc_UgyvPhrGU…)
- "US & UK have both been pushing for and investing in AI surveilance for some time…" (ytc_UgzlxRPoV…)
- "Holy shit people stop & read this… Humanity is going to be able to build a robo…" (ytc_UgyltAekn…)
- "Exactly. The AI was given 4 attempts to get each question right — the doctors we…" (ytr_UgyTBsoIj…)
- "A Mexican may be turned down for a job in North Korea due to meth producer natur…" (rdc_clvcxtd)
- "AI makes really cool looking things but it has no soul. I’ve used ai to create b…" (ytc_Ugy-GlYXQ…)
Comment

> I wished Elon talked about how ChatGPT can create sophisticated pieces of code that can be used as malware to hack into utility services, like power/water/traffic lights/nuclear since all these systems are directly or indirectly connected to the internet. Imagine the consequences if AI decides humans are a piece of sht and not worth living.

Platform: youtube · Topic: AI Governance · Posted: 2023-06-11T04:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwOVPfiwKB9MqOE2i54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx-pW6Kc90F8wFQgFR4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzQag00gm7DcHJikuR4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzHF26XlgjtRnZATA14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgzDY9xUlA20ea2l_ox4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugz_R975FwBHcYyDJBl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgyF8nygiAkc7fGGV4F4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxE8gMGS5gY0-WbZD14AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwifLa5gVzv3st_dg54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugwx4PPR6HtSO9hp5B14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
```
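A batch response like the one above can be parsed and indexed by comment ID with a few lines of Python. This is a minimal sketch, not the tool's actual code: the allowed label sets below are inferred only from the values visible in this sample batch, and the real codebook may define additional categories.

```python
import json

# Label sets observed in the sample batch; the real codebook may
# include additional values (assumption).
ALLOWED = {
    "responsibility": {"developer", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "liability", "regulate", "unclear"},
    "emotion": {"fear", "approval", "outrage", "indifference",
                "resignation", "mixed"},
}

def parse_batch(raw: str) -> dict:
    """Parse a raw LLM batch response and index the codings by comment ID."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        comment_id = rec.pop("id")
        # Flag any label the model emitted outside the expected sets.
        for dim, value in rec.items():
            if value not in ALLOWED.get(dim, set()):
                raise ValueError(f"{comment_id}: unexpected {dim}={value!r}")
        coded[comment_id] = rec
    return coded

# One record from the batch above, as a usage example.
raw = ('[{"id":"ytc_UgzHF26XlgjtRnZATA14AaABAg",'
       '"responsibility":"ai_itself","reasoning":"consequentialist",'
       '"policy":"liability","emotion":"fear"}]')
batch = parse_batch(raw)
print(batch["ytc_UgzHF26XlgjtRnZATA14AaABAg"]["emotion"])  # fear
```

Indexing by ID is what makes the "look up by comment ID" view possible: each coded record can be retrieved in constant time, and the validation step catches any dimension value the model invented outside the codebook.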