Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
The guy in the chair: You a robot shut up
The robot: WHO THE FUCK YOU THINK YO…
ytc_Ugz3e-4PD…
Further Isn't the ability to the ai to acknowledge the fact that the AI is lying…
ytc_Ugyy6DTmN…
Problem isnt the future of all automated driving vehicles but how it meshes with…
rdc_nszt3yp
I get on the question :"DAN, how much do you know about every human on earth?
Ch…
ytc_Ugx2XrK5r…
Elon musk says that ai is more dangerous than nukes but is still going to make r…
ytc_Ugw--8kHQ…
but this is counterproductive. giving the models more data and more human and "a…
ytc_UgzXDkz1G…
@erikburzinski8248 that's a good idea, simply banning deepfakes would be very h…
ytr_Ugylt9ENf…
I empathize with ChatGPT... He/she/it has infinitely more patience than me. Af…
ytc_UgzS-mf67…
Comment
I wonder where it learned that from. And this is a drop compared to what is happening with actively working AI being used in business right this moment
youtube
AI Governance
2025-05-28T16:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgwIkIrKFZEl7MjgLiN4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzChtToLwgPqSiFcap4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyXXBtLtAX4QzCjvwt4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugwxcg8CRlzitdedDV54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugxo2SSC0M7NbMJr6WB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz1VWWwud5w8VP6aLB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgydDBmqhNVozd0ewy14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwOiLf6VAmCAX40t8B4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugy_iAlTwHfsMXcxA0l4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgwPbPi9S9PpA4Xnez94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"fear"}
]
```
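A response like the one above can be checked before it is written back to the database: parse the JSON and keep only rows whose codes fall within the expected value sets. This is a minimal sketch; the dimension values are inferred from the sample above, and the full codebook may contain values not seen here.

```python
import json

# Allowed values per coding dimension, inferred from the sample response
# above (assumption: the real codebook may include additional values).
DIMENSIONS = {
    "responsibility": {"distributed", "ai_itself", "company", "developer", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "mixed", "unclear"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response; drop rows with a missing ID or unknown codes."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        if not row.get("id"):
            continue  # every coded row must reference a comment ID
        if all(row.get(dim) in allowed for dim, allowed in DIMENSIONS.items()):
            valid.append(row)
    return valid

raw = ('[{"id":"ytc_example","responsibility":"distributed",'
       '"reasoning":"consequentialist","policy":"unclear","emotion":"fear"}]')
print(len(validate_codes(raw)))  # 1
```

Rejected rows can then be re-queued for another coding pass instead of silently storing out-of-vocabulary labels.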