Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples — click to inspect
Can we now look at ALL THE &**(*& tech-news world that was doing 100 000…
rdc_ocqkqyv
public schools should be no more than an oasis for making friends and learning t…
ytc_UgzF3B0nu…
If they wipe out jobs with AI technology, how is America going to collect taxes…
ytc_UgzW73bKp…
three type artist in the grand scheme of the ai plauge
1: stay calm. and be chi…
ytc_Ugzg29KT0…
I think bigger picture the internet is a large neronetwork, add AI and it become…
ytc_Ugw7qmwro…
then it is you are the one giving the prompt, a human. AI would take more time t…
ytr_Ugx3_xF76…
I experienced this recently. I asked chat-gpt to build me a simple calendar web …
ytc_UgxxIqSvS…
Next step: target people so that their chatbot talks them into killing themselve…
ytc_UgxcZfcQ8…
Comment
Thing is: self-driving vehicles will not be safer. This claim is not supported by any proof, it's pure wishfull thinking. In reality safety will degrade as one invoice item on a long list of some manufacturer. Because after all one engineer will have to sign that his software is safe to do Trillions of miles...and this is not possible. There will be failures and fatalities, and if you have to find one responsible it will be the guy who's signature is under the expertise. So there will be no responsibility and therefore not safety above the level that is accepted in society. And *this* comes down to PR.
youtube
AI Jobs
2025-08-27T12:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgygB3QH1a4McS7zyMt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgyJ9rozsr0U7zM4X3p4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
{"id":"ytc_UgzkCu31Naz8vltkRCx4AaABAg","responsibility":"company","reasoning":"mixed","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgxzNjkvWtvd_N9hlzB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugy0NHDdfGnWACC4y1l4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwNFnOa5a2B1gLpNBJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugzq55e9cWDQZ8xoxEh4AaABAg","responsibility":"government","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugzvkvk_kOOLmcH_Xld4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgxKRf2DAAMsfO1FCqJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwB8tFgnMirD_6cCZ94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
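Because the raw response is free-form model output, it is worth checking that each record parses as JSON and that every dimension falls inside the codebook's allowed values before it is stored. The sketch below shows one minimal way to do that; the value sets are compiled only from the examples shown above (the real codebook may define additional categories), and the function name is hypothetical.

```python
import json

# Allowed values per coding dimension, as observed in the sample batch above.
# Assumption: the actual codebook may include categories not seen here.
SCHEMA = {
    "responsibility": {"developer", "company", "government", "ai_itself", "distributed"},
    "reasoning": {"consequentialist", "virtue", "mixed"},
    "policy": {"ban", "regulate", "liability", "none"},
    "emotion": {"fear", "outrage", "indifference", "mixed"},
}

def validate_coded_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only records with an id and
    with every dimension inside the allowed value sets."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not rec.get("id"):
            continue  # drop records the model emitted without a comment ID
        if all(rec.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            valid.append(rec)
    return valid

# Hypothetical two-record batch: the second record uses an out-of-schema value.
raw = (
    '[{"id":"ytc_x","responsibility":"developer","reasoning":"consequentialist",'
    '"policy":"liability","emotion":"fear"},'
    '{"id":"ytc_y","responsibility":"nobody","reasoning":"mixed",'
    '"policy":"ban","emotion":"fear"}]'
)
print([r["id"] for r in validate_coded_batch(raw)])  # → ['ytc_x']
```

Records that fail validation could instead be routed back for re-coding rather than silently dropped; the filter above just makes the schema check explicit.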