Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
I don't know if i can consider myself disabled, but I have a bit of tremor in my…
ytc_UgwAPzGmV…
I love art. Hate Ai. Picasso's works are nonsense
Also if I had to guess, I wou…
ytc_UgxgbL8Xd…
The person arguing with you about your art being ai stressed me out so much I am…
ytc_UgzYniFgf…
Even if AI bot friends are wildly successful at capturing our minds in the early…
ytc_Ugzx0mo4h…
Skilled trades are not going to be as easy to automate as everybody seems to thi…
ytc_Ugwfm4Hru…
I'm sorry but you're convincing no one. If you have the ability to realize and c…
ytc_UgwpVYPUI…
The car should have a plaque on it saying Self driving car to let Drivers know …
ytc_UgzkrF14r…
> We are aware at the moment that consciousness requires one to perceive envi…
rdc_j8vtcc6
Comment
Absolutely, I completely agree with Dr. Roman about the importance of AI Safety. The AGI forecast for 2027-2030 means we only have a very short window of time to build a strong security foundation. As CS students, I feel we need to shift from solely pursuing 'model accuracy' to 'model safety and transparency'. Research on algorithm stabilization whether in the NISQ simulator or regular AI must always be accompanied by responsible ethics. Thank you so much for the insight! 😊😊
youtube · AI Governance · 2025-12-30T22:5… · ♥ 9
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
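Each coded record assigns one value per dimension. A record can be checked against the dimension vocabulary before it is stored; the allowed values below are inferred from the coded samples on this page, not an official schema:

```python
# Allowed values per dimension, inferred from the coded samples on this page.
VOCAB = {
    "responsibility": {"developer", "government", "user", "society",
                       "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "liability", "ban", "none", "unclear"},
    "emotion": {"approval", "fear", "outrage", "mixed", "resignation",
                "indifference"},
}

def validate(record: dict) -> list:
    """Return the dimension names whose value falls outside the vocabulary."""
    return [dim for dim, allowed in VOCAB.items()
            if record.get(dim) not in allowed]

record = {"responsibility": "developer", "reasoning": "consequentialist",
          "policy": "regulate", "emotion": "approval"}
print(validate(record))  # []
```

A non-empty return value flags a record that needs manual review or re-coding.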
Raw LLM Response
```json
[
  {"id":"ytc_Ugx2ky4V-2SYjI_xALp4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugy2yFQu_CGpSKKTydl4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxELu1VmoG4s-mZRFR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgxsCBQoCbiJhZm2oKZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugwo3quGNysKQ7VjoDx4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyFNCxaKj60TJKPhal4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugzoq_28VWSRAlrR0G54AaABAg","responsibility":"society","reasoning":"mixed","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgybYkKrhYMp3uaF0Px4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgwIHj5BhnxfA2JMdWp4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxN3clOKQPEASpX21d4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"}
]
```
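The raw response is a JSON array of per-comment codes, so looking a record up by comment ID reduces to parsing the array and indexing on the `id` field. A minimal sketch, assuming the response arrives as a string (`raw` below is a stand-in truncated to two of the records shown above):

```python
import json

# Stand-in for the raw model output above, truncated to two records.
raw = """[
  {"id": "ytc_Ugx2ky4V-2SYjI_xALp4AaABAg", "responsibility": "ai_itself",
   "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgybYkKrhYMp3uaF0Px4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "liability", "emotion": "fear"}
]"""

# Index the coded records by comment ID for constant-time lookup.
records = {rec["id"]: rec for rec in json.loads(raw)}

code = records["ytc_UgybYkKrhYMp3uaF0Px4AaABAg"]
print(code["policy"])   # liability
print(code["emotion"])  # fear
```

The same index serves both the ID lookup box and the random-sample inspector, since each sample card carries its comment ID.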