Raw LLM Responses
Inspect the exact model output for any coded comment, or look one up directly by comment ID.
Random samples — click to inspect:

- "Considering we don't have the battery tech to power fully autonomous robots for …" (ytr_UgzTi-nys…)
- "Both. I'm smarter than my toaster, but I sure as shit can't toast bread as well.…" (rdc_n011908)
- "The late great Stephen Hawking said AI would be the most dangerous thing to mank…" (ytc_UgyVAtHkm…)
- "At this point. But tomorrow it will improve and the day after. Before long it wi…" (ytr_UgwL-EE_h…)
- "Hmmm. I was tensioned by Elon musks question. He showed traits, of not sitting s…" (ytc_Ugw7cSV2N…)
- ">training any kind of model with data like this is almost trivial / Are you sa…" (rdc_fcsugvl)
- "14:20 i'm disable, and i feel offended by theese lazy ai kids... But i still lea…" (ytc_UgzgYEQiG…)
- "You're my ruthless mentor don't sugarcoat anything, if my idea is weak call it t…" (ytc_UgwEqFPMU…)
Comment
I hope that Superintelligence creators don't forget that human intelligence is not dissociated from emotions and values. AI is clearly agnostic from emotions and values, and that is precisely the big risk. Now the real challenge here remains, in how to settle these values and emotions as restrictions to LLM responses.
youtube · AI Governance · 2025-08-18T15:4… · ♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | mixed |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
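A coded record like the one above can be checked mechanically before it is stored. The sketch below is a minimal validator; the allowed label sets are inferred from the values observed in this sample and in the raw response below, not from an official codebook, so the real scheme may include more labels.

```python
# Hypothetical validator for coded records. ALLOWED is inferred from the
# sample output on this page, not from an authoritative codebook.
ALLOWED = {
    "responsibility": {"developer", "government", "user", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"regulate", "liability", "ban", "unclear"},
    "emotion": {"fear", "outrage", "approval", "resignation", "mixed"},
}

def validate(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record looks valid."""
    problems = []
    for dim, allowed in ALLOWED.items():
        value = record.get(dim)
        if value not in allowed:
            problems.append(f"{dim}: unexpected value {value!r}")
    return problems

# The record from the Coding Result table above passes cleanly.
print(validate({"responsibility": "developer", "reasoning": "mixed",
                "policy": "regulate", "emotion": "approval"}))  # []
```

Running the validator on every row of a batch response catches labels the model invented outside the scheme, which is a common failure mode in LLM-based coding pipelines.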
Raw LLM Response
```json
[
{"id":"ytc_UgyL64usiN99E6JPVS14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwHH0LJZF4N_EWR2TB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugyw880POh1kBGFWb_l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgxNxdBWtJ6luEcLpyZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugw5S1aTr8iJjiw_Tx94AaABAg","responsibility":"developer","reasoning":"mixed","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgwjqLpKh1Lvyjdkn_F4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgynvUxxQDfM5oept2x4AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgwWHDx6RvNzXAMYj_d4AaABAg","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgyX05EYqasQB3yKqzl4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgyG2xDiVFgfKBEmDx14AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"}
]
```
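A batch response in this shape can be indexed by comment ID to support the lookup described at the top of the page. A minimal sketch, with the JSON string abbreviated to two rows copied from the response above:

```python
import json

# Two rows copied verbatim from the raw batch response above.
raw = '''[
  {"id": "ytc_UgyL64usiN99E6JPVS14AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugw5S1aTr8iJjiw_Tx94AaABAg", "responsibility": "developer",
   "reasoning": "mixed", "policy": "regulate", "emotion": "approval"}
]'''

rows = json.loads(raw)
by_id = {row["id"]: row for row in rows}  # index once, then O(1) lookup by ID

record = by_id["ytc_Ugw5S1aTr8iJjiw_Tx94AaABAg"]
print(record["policy"])  # regulate
```

Building the index once per batch avoids a linear scan on every lookup, which matters when the tool serves many "look up by comment ID" requests against large coding runs.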