Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:
- "I saw a ufc type robot destroy its opponent. Now I have seen this boxing robot w…" (ytc_UgzpeTPVY…)
- "Some people will lose their jobs. Others will lose their savings. There will be …" (rdc_nk6gb4n)
- "I wish I could give you a briefing on the Directed energy systems I know about, …" (ytc_UgymRNy4V…)
- "Is this backed by any actual research comparing the answers one gets from variou…" (ytc_Ugx1DSf7L…)
- "Ikr, it only happens because of the vector embeddings of different emotions bein…" (ytr_UgwfiUDJy…)
- "If you use AI you will go to hell. It is a bottomless pit of deception.…" (ytc_Ugy4BRDwU…)
- "its really funny how these AI bros act so cocky when it comes to them making AI …" (ytc_Ugxy1_PL_…)
- "I'm extremely vocal about the downsides of AI development to my company's leader…" (rdc_o8cllbt)
Comment
i always find this argument a bit stupid. it is like comparing a nation firing nukes and killing million to a single human throwing a handgrenate. condeming the one guy throwing the handgrenade without taking the nukes into consideration. applied to this example; self driving cars will lowers accidents and deaths so significantly that talking about such chances and giving them this much thought is, in my oppinion a waste of time.
Platform: youtube · Topic: AI Harm Incident · Posted: 2015-12-08T23:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[{"id":"ytc_Ugj-WH6OpZhDSHgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgjQavChndvc5ngCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UghZ2CeGeDq4y3gCoAEC","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_Ugg8HfmGm2p6hngCoAEC","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"indifference"},
{"id":"ytc_UggesFpy1EznlngCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgiRW9mWll7FTHgCoAEC","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgifUAfLDoDb23gCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugg_yjdSah1yH3gCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgjkwzfB0yQ1NngCoAEC","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugh_9XnDJVggxngCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"}]
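A response like the one above can be parsed and validated before it is stored as a coding result. The sketch below is a minimal, hypothetical example: the allowed vocabularies for each dimension are inferred only from the values visible in this record, not from the project's actual codebook, and `parse_codings` is an illustrative helper name.

```python
import json

# Two entries excerpted from the raw response above; the real output is a
# longer JSON array in the same shape.
RAW = '''[
 {"id": "ytc_UgjQavChndvc5ngCoAEC", "responsibility": "ai_itself",
  "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
 {"id": "ytc_UgiRW9mWll7FTHgCoAEC", "responsibility": "unclear",
  "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"}
]'''

# Assumed vocabularies, reconstructed from the values seen in this dump.
ALLOWED = {
    "responsibility": {"ai_itself", "user", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "liability", "unclear"},
    "emotion": {"indifference", "approval", "fear", "outrage",
                "resignation", "mixed", "unclear"},
}

def parse_codings(raw: str) -> dict:
    """Parse the model's JSON array and index codings by comment ID,
    rejecting any value outside the expected vocabulary."""
    by_id = {}
    for entry in json.loads(raw):
        for dim, vocab in ALLOWED.items():
            if entry[dim] not in vocab:
                raise ValueError(f"{entry['id']}: bad {dim}={entry[dim]!r}")
        by_id[entry["id"]] = entry
    return by_id

codings = parse_codings(RAW)
print(codings["ytc_UgiRW9mWll7FTHgCoAEC"]["reasoning"])  # mixed
```

Validating against a closed vocabulary also catches malformed output, such as the stray `)` that can appear where the model should have closed the JSON array with `]`.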