Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- ytc_UgwgBDpQj…: Been saying this for the last 2 years and been watching it play out in the last …
- ytr_UgyCLLm0F…: @Sarappreciateswrong it means it doesn’t have control… I’m pretty sure this will…
- ytc_Ugx6zf_Ls…: Definitely not scared of the fact that the robot had a machine gun that was load…
- ytr_UgxLUDent…: @handgun559 "And they didn't pay for everything they trained their AI on. That …
- ytc_UgwxCEQnj…: What I fear the most is this will not affect those who are intelligent. But the …
- ytc_Ugx4y63Ub…: If you use ChatGPT you are feeding the beast - you are teaching it how we work a…
- ytc_UgycjcFYO…: If the future superintelligence works for the benefit of humanity, which is like…
- ytc_UgzyQzASM…: That robot can’t out pick me I haven’t seen him lift a single 50 pound cat liter…
Comment
All of these scenarios never talk about the possibility of empowering the human brain. Through micro cip, epigenetics, pharmacology, etc. AI has great potential but not yet the efficiency of the human brain. As well as flexibility and adaptability. The question is, will we have the other man? The fact that AI does all the work is great for automation, but it has to be in whose hands for a utopia to happen instead of a dystopia. Man can always seek and try to do more and at his best. It doesn't have to become a pirgo in front of the car, but it has to challenge it and look for where you can be better. Indeed, a machine in power is devastating, but in other contexts is this always optimal?
youtube · AI Governance · 2025-12-25T12:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
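A coding result like the table above can be carried around as a simple record. The sketch below is a minimal representation, not the tool's actual data model: the field names come from the table, and the `id` field is assumed from the raw-response format shown further down.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class CodedComment:
    """One coded comment; field names mirror the Coding Result table."""
    id: str              # comment ID, e.g. "ytc_..." (assumed from the raw response)
    responsibility: str  # who is held responsible, e.g. "none"
    reasoning: str       # moral-reasoning style, e.g. "mixed"
    policy: str          # policy preference, e.g. "none"
    emotion: str         # dominant emotion, e.g. "approval"
    coded_at: str        # ISO-8601 timestamp of the coding run


# Hypothetical example mirroring the table values above
row = CodedComment(
    id="ytc_example",
    responsibility="none",
    reasoning="mixed",
    policy="none",
    emotion="approval",
    coded_at="2026-04-26T23:09:12.988011",
)
print(row.emotion)  # → approval
```

Freezing the dataclass keeps a coded record immutable once stored, so later inspection views cannot silently mutate it.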
Raw LLM Response
[
{"id":"ytc_Ugz2_CJvDO7avNVQU_14AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
{"id":"ytc_UgwXivdZvM8hADyWarR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugzkv_1wFzZTqMlBJiJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgyB1mlBpu7djDLwkmV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugx0N6aTRY_RJoNzFxV4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzmzH7WNug8aMMgp-V4AaABAg","responsibility":"company","reasoning":"virtue","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugy5elZlPMJ9FUwCl0Z4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugys6WZ1rS3IJJK_0jh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxDH42lQKsR8eb0du14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"disapproval"},
{"id":"ytc_UgxcX471cqunnmL-TbJ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"}
]
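A raw response like the array above can be checked before it is stored. The sketch below parses the JSON and verifies that every record carries the expected keys and that each value comes from the label vocabulary seen in this sample; the vocabulary sets are assumptions inferred from this one output, not the tool's definitive codebook.

```python
import json

# Label vocabularies inferred from the sample response above; the real
# codebook may allow more values than these.
SCHEMA = {
    "responsibility": {"government", "developer", "company", "user", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"liability", "regulate", "ban", "none"},
    "emotion": {"indifference", "fear", "approval", "outrage",
                "resignation", "disapproval", "mixed"},
}


def validate_response(raw: str) -> list:
    """Return a list of problems found in a raw LLM coding response."""
    problems = []
    try:
        records = json.loads(raw)
    except json.JSONDecodeError as exc:
        return ["not valid JSON: %s" % exc]
    if not isinstance(records, list):
        return ["top-level value is not a JSON array"]
    for i, rec in enumerate(records):
        rec_id = rec.get("id", "<record %d>" % i)
        # Comment IDs in the sample start with ytc_ (top-level) or ytr_ (reply)
        if not str(rec_id).startswith(("ytc_", "ytr_")):
            problems.append("%s: unexpected id prefix" % rec_id)
        for key, allowed in SCHEMA.items():
            value = rec.get(key)
            if value is None:
                problems.append("%s: missing key %r" % (rec_id, key))
            elif value not in allowed:
                problems.append("%s: unknown %s label %r" % (rec_id, key, value))
    return problems


# Hypothetical one-record response in the same shape as the output above
raw = ('[{"id":"ytc_abc","responsibility":"none","reasoning":"mixed",'
       '"policy":"none","emotion":"approval"}]')
print(validate_response(raw))  # → []
```

Returning a list of problems rather than raising on the first one lets a batch run report every malformed record in a response at once.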