Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- ytc_UgxXhqS3T… — "Lol this guy claims to know his stuff but he claims Google search is an objectiv…"
- ytc_UgyJgkS3p… — "As someone who is even just… *new* to drawing, and even writing, I love practici…"
- ytc_UgytYIo6w… — "Elon Will build A.I. that destroys all the other A.I. Tony Stark took out A.I. s…"
- ytc_UgxqdjOL5… — "No matter how 'intelligent' AI will never have a soul and spirit. Never be human…"
- ytc_UgzLho2Yb… — "Even if it would look good - copying something nice does not give \"your\" AI gene…"
- ytc_Ugw160_ol… — "The likely near term reality is that LLMS will cause more software developers to…"
- ytr_Ugy1-Gs3Z… — "I see a romcom where a man thinks he meets an AI in virtual world, falls in love…"
- ytc_Ugz0rbIER… — "We're not in any danger yet. I like to use the microphone built-in to my smartph…"
Comment

> Why does Chat GPT need the ability to create fictional cases. I thought those AI chatbot was supposed to retrieve existing data. Was those fictional cases out there in the ether and the chatbot just happened to get it? It seems a total failure of programing if it can make stuff up.

Source: youtube · AI Responsibility · 2023-06-10T14:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_UgxERa2O1ahhqTzPOaN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyY7tbq8YElrDbNMw14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzeFHqenpDTTEvrjIF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyMMd_NM2XTu-UJtAd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxRrTCw92jMoa2l2WZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyvQlAlP2xC3dhnXwN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyfAhE8O5U6STx0Hbt4AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgxTXRBjGjpBbQ1p5aV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxyfguQNWA4NgPM-Il4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyJ5dK5yXVgW0_ns2Z4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
```
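A raw batch response like the one above can be parsed and screened before the codes are stored. A minimal sketch in Python, assuming the four dimensions shown in the coding table; the allowed-value sets are inferred only from the values visible on this page, so the real codebook may contain more categories:

```python
import json

# Allowed values per dimension, inferred from the responses shown above.
# Assumption: the actual codebook may define additional categories.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "developer", "user"},
    "reasoning": {"unclear", "consequentialist", "deontological", "virtue"},
    "policy": {"none", "unclear", "liability", "industry_self", "regulate"},
    "emotion": {"indifference", "fear", "resignation", "mixed", "outrage", "approval"},
}


def parse_batch(raw: str) -> list[dict]:
    """Parse a raw LLM batch response and keep only rows whose codes
    are in the allowed vocabulary for every dimension."""
    rows = json.loads(raw)
    return [
        row
        for row in rows
        if all(row.get(dim) in values for dim, values in ALLOWED.items())
    ]
```

Rows with out-of-vocabulary codes are dropped rather than repaired, so a malformed model output surfaces as a shorter batch instead of a corrupted one.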