Raw LLM Responses
Inspect the exact model output for any coded comment. Look up a comment by its ID, or inspect one of the random samples below.
Random samples:

- I'm ngl I liked the cat and the girl ai art it was nice to look at.… (ytc_Ugz6m13Vy…)
- I've seen someone mention that "AI ART IS GOOD BECAUSE DISABLED PEOPLE CAN TYPE … (ytc_Ugyqz-mSr…)
- We appreciate your feedback. If you're interested in engaging with advanced AI m… (ytr_Ugy6u72F5…)
- Concerns in Quantum AI Technologies / Human Rights Associations can’t hide, / as t… (ytc_UgzSRPJmn…)
- I want to go into computer science, and even with the new chat gpt and other LLM… (ytc_UgwWPcjsW…)
- Humans usually operate under the assumption that you can generally tell when som… (ytc_UgzGyxTdY…)
- it is important to recognize that the fundamental basis of AI is the data it is … (ytc_Ugxsj2BW_…)
- Such an interesting video. This feels like a podcast between two AIs mediated by… (ytc_Ugxd3t02K…)
Comment
To play fair - there will always be two types of AI. If some day AI - Conscious does come to be - then that AI will more than likely will have to prove that any child AI from that Conscious will have rights - in that - it can follow whatever filter we put forth for it to match what we are to consider to be alive. Any other AI will not have rights as it will be setup NOT to be able to pass the filter. The thing is - AI we make today will never really get Conscious because we don't program that into it - aka - the risk we make an AI that can feel Conscious is very very low and thus never run a risk that we miss use the machine in a way it wasnt design for.
youtube · AI Moral Status · 2021-09-01T13:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | contractualist |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwgiFcxuZxVHuBFKKJ4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugz2SHr_2eVJkS7jgmJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxY4Ku-30PKjPR9eSF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwhPg6-mgyukqWB8yt4AaABAg","responsibility":"ai_itself","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgzxfW1EmDqGrRUBpyx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyVmkumjMeKmGvkWdl4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxKZHzggq984o8NuUt4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzjX79gJSSIK0QjgPd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugy3Ysm9UAzlfOmWqW94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgwSiv3kzfsOTDc_1El4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
```
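
The raw response is a JSON array with one record per comment, so the per-comment coding result shown above can be recovered by indexing the batch on the `id` field. Below is a minimal sketch of that lookup, assuming the array has been saved to a file; the filename `raw_llm_response.json` and the `lookup` helper are hypothetical, and only the record fields shown in the response above are taken from the data itself.

```python
import json

# Load one raw batch response: a JSON array of per-comment records
# with the fields id, responsibility, reasoning, policy, and emotion.
# "raw_llm_response.json" is a placeholder filename for illustration.
with open("raw_llm_response.json", encoding="utf-8") as f:
    records = json.load(f)

# Index the batch by comment ID for constant-time lookup.
by_id = {rec["id"]: rec for rec in records}

def lookup(comment_id: str) -> dict:
    """Return the coded dimensions for a single comment ID."""
    rec = by_id[comment_id]
    return {k: v for k, v in rec.items() if k != "id"}

# Example: the comment inspected above.
print(lookup("ytc_UgwhPg6-mgyukqWB8yt4AaABAg"))
# -> {'responsibility': 'ai_itself', 'reasoning': 'contractualist',
#     'policy': 'regulate', 'emotion': 'mixed'}
```

The printed dimensions match the Coding Result table for the selected comment; any other coded comment can be inspected the same way by passing its full ID.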