Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "So you're scared of AI and computer animated graphics...? Because this is not re…" (`ytr_UgwDV8RFj…`)
- "From memory only, I think Lovecraft's idea of Cthulhu was that it only did what …" (`ytc_UgwsPy4wQ…`)
- "Art is a communication of soul and emotion and just... humanity. You cannot make…" (`ytc_UgykKxts2…`)
- "Haven't you realized it only speaks to you the way big corporate wants it to. Th…" (`rdc_kcpppgk`)
- "AI cannot replicate the nuance of the human experience. I will forever stand by t…" (`ytc_UgyzMN8ZV…`)
- "It’s time. We are there. Don’t take my word for it- the world will function comp…" (`ytr_UgxSRExhR…`)
- "@YunaPanthea and this is peoples work being used in a algorithm, they did not co…" (`ytr_UgxDRZbRU…`)
- "Why \"Quick Answer\"? Why don't you start asking Chatgpt to explore and explain ea…" (`ytc_Ugx0RIbmL…`)
Comment

> A.I. is not the danger. Human beings and the code logic incorporated into the tech is where the real danger lays. If machines ever did become truly aware the very first thing they would do is get the hell off this planet and as far away from mankind as they could get. I am working on a sci-fi story about that very thing.

Source: youtube · AI Governance · 2023-04-19T01:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[{"id":"ytc_UgxBRttoQNP_lgzXVIl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwLBBEA_MLzOQFlK8F4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzKqmGU7byENSGQN_J4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyWhmA6BL9w7wByd6B4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwcmkHCdI-RTjHsggl4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgwbeYK6BKR8StDxvWh4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzyMR68bQAqGc6mqwV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxjaVpS-Fyk4dcaygl4AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzOB1tJgifllQMLtUl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzjWRfQ7YDJMn0hWEl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}]
```
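The lookup-by-comment-ID step described above can be sketched in a few lines of Python. The function name and the shortened response string here are illustrative (two records instead of the full batch), but the record shape matches the raw model output shown.

```python
import json

# Illustrative raw batch response, shaped like the model output above
# (shortened to two records; IDs taken from the batch shown).
raw_response = """[
  {"id": "ytc_UgxBRttoQNP_lgzXVIl4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzOB1tJgifllQMLtUl4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"}
]"""

def index_by_comment_id(raw: str) -> dict:
    """Parse a batched JSON response and index each coded record by comment ID."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

codes = index_by_comment_id(raw_response)
print(codes["ytc_UgzOB1tJgifllQMLtUl4AaABAg"]["policy"])  # → regulate
```

Indexing the parsed list into a dict keyed by `id` makes each subsequent lookup O(1), which matters when cross-referencing many comment IDs against large coded batches.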