Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- I just read a new book on AI titled "Conversations with my Creator" by Hood and … (ytc_UgyR0LBun…)
- I would rather do the work, but the thing is that someone whom just wants art on… (ytc_UgzQmu9al…)
- Just bring one indian hariyanavi jaat fighter and you see the robot broken like … (ytc_UgwRZpn_w…)
- i agree i think. i miss when AI art was self-aware that it was made as a joke be… (ytr_Ugwc4viyA…)
- AI chatbots. Alice and Bob. Go look it up and see how much you love technology… (ytc_Ugw-sEGo-…)
- Not for 100 years ? AI will design a artificial self aware AI, fairly quickly… (ytc_UgyiLyLd2…)
- i just tried doing this and open ai dose not respond to this well . meaning DAN … (ytc_UgyD17PVz…)
- I think AI is highly exaggerated. Been using chatgpt quite a bit lately and I en… (ytc_UgwEsR7bs…)
Comment
Large language models such as gpt are just predicting the next word that should come in a sentence/response (with such incredible accuracy it can give the illusion of having a conversation). To say it's a Liar seems fundamentally wrong to me. Doesn’t the word 'Liar' imply it's deliberate? If someone at the bus stop says there's one due in 5 minutes, but it turns up after 10mins, is 'Liar' the right word for someone who did the best with the information they had available?
GPT calculated/predicted the words that should follow the words you said - it doesnt 'think' in the way you are acting like it does. To my knowledge anyway!!
youtube
AI Moral Status
2024-08-11T19:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
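Each dimension in the table above is categorical. As a minimal sketch, an entry can be checked against the set of values that appear on this page (an assumption: the full codebook may define additional levels not observed here):

```python
# Categorical levels observed in this page's codes. ASSUMPTION: the real
# codebook may allow more values than the ones sampled here.
OBSERVED_LEVELS = {
    "responsibility": {"none", "ai_itself", "developer", "distributed"},
    "reasoning": {"deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"none"},
    "emotion": {"approval", "outrage", "indifference"},
}

def validate_code(code: dict) -> list[str]:
    """Return a list of problems with one coded entry (empty list = valid)."""
    problems = []
    for dim, allowed in OBSERVED_LEVELS.items():
        if dim not in code:
            problems.append(f"missing dimension: {dim}")
        elif code[dim] not in allowed:
            problems.append(f"unexpected {dim} value: {code[dim]!r}")
    return problems

entry = {"id": "ytc_UgwchQHT50Dc-N9HEv14AaABAg", "responsibility": "ai_itself",
         "reasoning": "deontological", "policy": "none", "emotion": "outrage"}
print(validate_code(entry))  # -> []
```

A check like this is useful because LLM coders occasionally emit values outside the codebook, and catching them before analysis is cheaper than after.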
Raw LLM Response
[
{"id":"ytc_UgwchQHT50Dc-N9HEv14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugy0PlpCoRA1q3wG_bd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxtyMuId8zZK62k-d14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx8iXhQluUjhlSR3rh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz2X6pkXatb7Xlwb-14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyIiWN2_JnwsR_2Qqt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzStQy_h9qW03lZJSl4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgwZBI3LS2Gd5RGbNH94AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugyy6DTmNjw9TNMBzgJ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz4DL5zLji3YyZDmuF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]