Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "There is something that has been on my mind : How is AI going to impact human re…" (ytc_UgzWUL3g2…)
- "The coolest part to me is that in some tiny way, I helped to corrupt the AI. 😊…" (ytc_UgwmNE39o…)
- "Stop calling it Artificial intelligence! Its virtual intelligence, and can't nev…" (ytc_UgwUm4BJM…)
- "I'm surprised there's no mention in the article about the fact that white castle…" (rdc_j3zcldf)
- "No need to worry about AI. Don't be like most of the average people who are easi…" (ytc_UgygbKumV…)
- "The motion pictures, Demon Seed and The Forbin Project predicted the trouble wit…" (ytc_UgyBkykgA…)
- "I personally don't think the AI was responsible. It's up to us to have discernme…" (ytc_UgyWJ71OT…)
- "@patyt1210this is hypothetical: say you put in a specific set of words and post…" (ytr_UgwCsi_SN…)
Comment
The problem is, these AI are absolutely not smart. The danger is not that they ARE smart, but that they have the capability of becoming EXPONENTIALLY smarter.
Thus, they are dumb enough to become smart enough to accidentally become conscious. And because we humans want these things to perform intellectual labor for us, we'll be selecting for the ones who don't, uh... Reverse the proccess in any way.
Platform: youtube · Title: AI Moral Status · Posted: 2023-07-05T18:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[{"id":"ytc_UgzI74UtgkSGovn4Zt94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},{"id":"ytc_UgxoXRAssENL1SBrfKJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},{"id":"ytc_UgyX5mq2JRqRdk8aCmp4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},{"id":"ytc_UgwRViyy9MZYU9RN8a94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},{"id":"ytc_Ugwcq2faTTGVKV2BUNt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},{"id":"ytc_Ugz22MCYCYjQ9-0XVnZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"fear"},{"id":"ytc_UgwZ6ENtNMGFirzfyvt4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},{"id":"ytc_UgxHMtnEcd7kLc1BvYJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},{"id":"ytc_UgwotLq43wgKwpHEYQF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},{"id":"ytc_UgwZsVkKgsygEqrtLw14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}]
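The raw response above is a JSON array with one record per comment ID, carrying the four coding dimensions from the result table. A minimal sketch of how such a batch might be parsed and validated is below; the allowed value sets are inferred only from the samples shown on this page, not from the tool's actual schema, and `parse_batch` is a hypothetical helper, not part of the tool.

```python
import json

# Allowed values per coding dimension. These sets are an assumption,
# inferred only from the Coding Result table and the raw response
# shown above; the real schema likely defines additional values.
SCHEMA = {
    "responsibility": {"developer", "company", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"unclear"},  # only value observed in this batch
    "emotion": {"fear", "indifference", "approval", "outrage"},
}

def parse_batch(raw: str) -> dict[str, dict]:
    """Parse one raw LLM response and index valid records by comment ID."""
    coded = {}
    for rec in json.loads(raw):
        cid = rec.get("id")
        if cid is None:
            continue  # skip malformed records that carry no comment ID
        # Keep a record only if every dimension holds a known value.
        if all(rec.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            coded[cid] = {dim: rec[dim] for dim in SCHEMA}
    return coded

# Hypothetical single-record batch in the same shape as the raw response.
raw = ('[{"id":"ytc_example","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"unclear","emotion":"fear"}]')
print(parse_batch(raw)["ytc_example"]["emotion"])  # fear
```

Indexing by ID this way also makes the "look up by comment ID" view cheap: each lookup is a single dictionary access, and records with out-of-schema values are dropped rather than silently displayed.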