Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
you know why artists are upset at ai? because they realise they aren't special, …
ytc_UgxijX-4o…
AI used to create low-effort rip-offs in the style of a specific person pose a r…
ytc_Ugwt9DKfh…
AI feels like a race to push the idea of ‘you will own nothing and be happy.’
Nv…
ytc_Ugx85s_FC…
So where's this video on AI's first kill. I only watched this video because the …
ytc_UgyiGnNvG…
AI as the New Gate Keeper is no different from Living inside a Dystopian Society…
ytc_UgxffXZFL…
This p*sses me off. We artists spend years building up our skills and style. AI …
ytc_Ugwm7OXt-…
Theres a interesting field of statement analysis. Let's just say there's an awfu…
ytc_Ugy_mf2Nh…
And governments want to put AI controlling and killing people. Do you know that?…
ytc_UgxtPsj0Q…
Comment
The error being made here lies in the use of the word "know".
Imagine I wrote a simple program in Basic to respond with the same answers in the same order. You then repeated these questions hitting space after each one to move the program forward, and got the same responses. Would you even begin to consider those answers to be actual knowledge? Of course not.
That is a simpler version of what is happening here. ChatGPT doesn't actually "know" anything the way humans do. It is just capable of stringing words together to give a response that sounds good, but there is no true understanding there.
It can say something that is a lie, but it can't decide to lie; it just responds the way its training data has trained it.
youtube
AI Moral Status
2024-08-16T13:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_Ugw9sEpPCgf2GV2TWr54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxtE4R-gAp9Zr1oTPN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"amusement"},
{"id":"ytc_UgwxPsKuZ6B1fCDWwjt4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugwmr8t2CMkgZ35GuU14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxITIL0GMVwnuKqeyV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugwn-ld0aK9YH1pe8qd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugysh2RBszZHotmbxpp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxWk0_F6zOoRkwYJRZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzHDtt0OgiaMlyvDsZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwCUhniAXiUjpCX4el4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"amusement"}
]
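The "Coding Result" table above is one row of this batch response: the model returns a JSON array with one object per comment ID, and the viewer looks up the row matching the comment being inspected. A minimal sketch of that lookup (the `lookup_coding` helper is hypothetical, and the excerpted IDs are taken from the response above):

```python
import json

# Excerpt of a raw batch response from the coding model:
# one JSON array, one object per comment ID.
raw_response = """
[
  {"id": "ytc_Ugw9sEpPCgf2GV2TWr54AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwxPsKuZ6B1fCDWwjt4AaABAg", "responsibility": "none",
   "reasoning": "deontological", "policy": "none", "emotion": "indifference"}
]
"""

def lookup_coding(raw: str, comment_id: str) -> dict:
    """Parse the model's JSON array and return the coding row for one comment."""
    rows = {row["id"]: row for row in json.loads(raw)}
    return rows[comment_id]

coding = lookup_coding(raw_response, "ytc_UgwxPsKuZ6B1fCDWwjt4AaABAg")
print(coding["reasoning"])  # -> deontological
```

Each dimension in the table (Responsibility, Reasoning, Policy, Emotion) then maps directly to one key of the matching JSON object.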