Raw LLM Responses
Inspect the exact model output for any coded comment. Look it up by comment ID, or open one of the random samples below.
- "An AI language model would be a bit different, as it could react dynamically and…" (rdc_jrpl5ts)
- "Art has a reason to be created. AI create arts without thinking and it's up to u…" (ytc_Ugxi9Yshv…)
- "Cultures are being destroyed in the US so yes the world will follow that's why t…" (ytc_UgwpnZnsQ…)
- "AI is that guy at your work who says he knows everything, seems very knowledgeab…" (ytc_UgyB4cVug…)
- "I love how they all avoids the aviation field. Most of the jobs related to this…" (ytc_Ugz8J5wHw…)
- "This conversation rarely ever gets to the actual point of the entire discussion,…" (ytc_UgzAR1OiN…)
- "Suno ai is so bad even tho i make a hot boy voice its STILL A GIRL VOICE…" (ytc_UgwPWpGkM…)
- "Crazy to see that even if AI tools are banned, students still find a way to use …" (ytc_UgzrRM4N0…)
Comment
Imagine cheating in a game. You can do anything, endless resources.. thats the point where the challenge ends, the purpose of the game ends, etc.
What if AI would be able to wipe out humanity? Would it still have goals?
In these scenarios, we reason with evil human goals; like getting more powerful, getting lost of people who are in the way of succes, stuff like that.
Would AI reason like that on long term? And what would the ultimate AI goal be anyways?
youtube · Cross-Cultural · 2025-10-31T19:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
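Each coded record carries the four dimensions shown in the table plus the comment ID. A minimal sketch of that record shape as a data class — field names mirror the table, and the example values come from the record above; the value vocabularies hinted in the comments are only those observed in this batch, not the tool's full schema:

```python
from dataclasses import dataclass


@dataclass
class Coding:
    """One coded comment. Field names mirror the Coding Result table."""
    id: str              # comment ID, e.g. "ytc_…" or "rdc_…"
    responsibility: str  # observed here: "ai_itself", "user", "none"
    reasoning: str       # observed here: "consequentialist", "deontological", "mixed", "unclear"
    policy: str          # observed here: "ban", "liability", "none", "unclear"
    emotion: str         # observed here: "fear", "mixed", "indifference"


# The record shown in the Coding Result table above.
example = Coding(
    id="ytc_UgwbN5Xyx2i2apAl3ad4AaABAg",
    responsibility="ai_itself",
    reasoning="consequentialist",
    policy="unclear",
    emotion="mixed",
)
```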
Raw LLM Response
[
{"id":"ytc_UgwDLAlj0el1CihflVd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugz-HKLCOjaZvz3T0xx4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxuCsIgt-brSN_rqQh4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgzrXCYrAANoCRZJG-R4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
{"id":"ytc_UgziLt9V1_J6hNouxpN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyfECW5P6XU0XXCdzB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyUNNZGQtELR479Zw94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgwbN5Xyx2i2apAl3ad4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugx6uIyvft34aNc9qyV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwXrm4HtHXHg-fWyNp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
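The "look up by comment ID" step can be sketched as parsing a batch reply like the one above and keying each record by its `id`. This is an illustrative sketch, assuming the model reply is a JSON array of flat objects as shown; `index_by_id` is a hypothetical helper, not part of the tool:

```python
import json

# Two records copied from the raw response above, as a stand-in for a full batch.
raw_response = """[
{"id":"ytc_UgwDLAlj0el1CihflVd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwbN5Xyx2i2apAl3ad4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}
]"""


def index_by_id(raw: str) -> dict:
    """Parse a batch reply and key each coding by its comment ID."""
    return {item["id"]: item for item in json.loads(raw)}


codings = index_by_id(raw_response)
coding = codings["ytc_UgwbN5Xyx2i2apAl3ad4AaABAg"]
print(coding["reasoning"])  # consequentialist
```

Keying by `id` also makes it easy to spot batch problems, e.g. a reply that drops or duplicates a comment ID.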