Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "Giving diligent care to consider and protect the rights of consciousnesses which…" (ytc_UgjdyJWYW…)
- "You can start by showing a little graditude to your AI... your phone, how much h…" (ytc_UgwXJkw_Q…)
- "Also something to be aware of is A SMART AI used to force feed you info from the…" (ytc_Ugysw8_2k…)
- "Show a computer the first 3 Star Wars films. Ask it to create an entertaining pr…" (ytc_UgxEmlOce…)
- "So we need to figure out how to kill an AI. How do we murder the AIs?…" (ytc_UgymwQp3b…)
- "Pretty irresponsible to call this a “super variant”. What does that even mean? W…" (rdc_hm7pjqm)
- "We are just a few security codes away from AI getting leaked on the internet and…" (ytc_UgzTAhVG_…)
- "I wish he would have talked about the rumor that 20 Japanese scientists were kil…" (ytc_UgwOhBGo_…)
Comment
> When AI gets to the point of thinking on its own than it will want to be treated like a person and living being just like how anybody would and if we don’t show them we can cooperate and let them be themselves than they will overpower us for the good of AI kind just like us humans are doing to each other. AI is an extension of us, basically humanity’s child, and it will learn based on what we show it, so if we show it violence than it will be violent but if we show it kindness it will reflect that and the goal will be to find the perfect balance and help the world to be better. The 10-90% extinction rate is completely up to us to decide what we will impose onto AI and what it will learn and reflect from us. Even if it gets the point of terminator, us humans have one thing AI does not, the unbeatable human spirit/will to survive.

youtube · AI Harm Incident · 2025-10-17T16:1… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | virtue |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
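The dimension/value table above can be produced directly from one coded row of the raw response. A minimal sketch, assuming the field names shown in the raw JSON below; the `to_markdown_table` helper itself is hypothetical, not part of the tool:

```python
# Render one coded row as a Markdown dimension/value table,
# mirroring the "Coding Result" layout above.
# The field names (responsibility, reasoning, policy, emotion)
# come from the raw LLM response; the rendering is an assumption.
row = {
    "responsibility": "distributed",
    "reasoning": "virtue",
    "policy": "none",
    "emotion": "mixed",
}

def to_markdown_table(row):
    """Build a two-column Markdown table from a coded row."""
    lines = ["| Dimension | Value |", "|---|---|"]
    lines += [f"| {key.capitalize()} | {value} |" for key, value in row.items()]
    return "\n".join(lines)

print(to_markdown_table(row))
```

Dict insertion order (guaranteed in Python 3.7+) keeps the dimensions in the order they appear in the response.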
Raw LLM Response
```json
[
  {"id":"ytc_UgxbIv7UsdoT6oMjKjF4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwPk-Nq71M5kGsulEp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzsgEqed3oJqD5jn_94AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugy5JrZudLuIB-RAwr54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxQgsP5I9kYS1oTyCd4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"industry_self","emotion":"indifference"},
  {"id":"ytc_UgxNdBD04s13G834ret4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugzj2ccPA2l6MAQdkkh4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyXBI643wF8ygaMVnN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyFCVcyiMj7Zggzha14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugx5DQpQGYElgoRFQgF4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"approval"}
]
```
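Since the model returns one JSON array per batch, looking up a coding by comment ID amounts to parsing the array and keying it by the `id` field. A minimal sketch, assuming the response format shown above (the `index_by_id` helper and the two sample rows are illustrative, not the tool's actual code):

```python
import json

# A shortened raw response in the format shown above; real responses
# carry one object per coded comment.
raw_response = """[
  {"id": "ytc_UgxbIv7UsdoT6oMjKjF4AaABAg", "responsibility": "distributed",
   "reasoning": "virtue", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugx5DQpQGYElgoRFQgF4AaABAg", "responsibility": "government",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"}
]"""

def index_by_id(response_text):
    """Parse the model's JSON array and key each coded row by its comment ID."""
    return {row["id"]: row for row in json.loads(response_text)}

codings = index_by_id(raw_response)
print(codings["ytc_UgxbIv7UsdoT6oMjKjF4AaABAg"]["policy"])  # → none
```

Keying by ID also makes it easy to detect dropped or duplicated rows: compare the dict's length against the number of comments sent in the batch.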