Raw LLM Responses
Inspect the exact model output for any coded comment. Look up a comment by its ID, or pick one of the random samples below; a minimal lookup sketch follows the list.
Random samples — click to inspect
- "Basically it's history repeating itself. AI is a technology that was implemented…" (ytc_Ugx8q265J…)
- "@Myemnhk Also banning it federally WILL cause a regulatory vacuum. Meaning compa…" (ytr_UgxvLEqQ9…)
- "I stopped using chai and started using character ai because every five seconds i…" (ytr_UgwdFBUM8…)
- "Definitely one of the best WhyFiles, and probably one of the best YouTube videos…" (ytc_Ugyi5Eduh…)
- "but.. sora is for hyper realistic videos.. how does this impact digital art in a…" (ytc_UgwxQZ6Dy…)
- "Oh also, I was hanging out with my friend's friends. And one of them is an artis…" (ytc_UgyoJurGx…)
- "The whole point of AI is to remove the last part of the entertainment industry t…" (ytc_UgwJQz_G0…)
- "If you ask me, that's all ai generators deserve after what the companies behind …" (ytr_UgxxMWu5K…)
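For reference, here is a minimal sketch of what the ID lookup does, assuming the raw batch responses are stored one JSON array per line in a file called raw_responses.jsonl (the file name and layout are assumptions, not something this page confirms):

```python
import json

def find_raw_coding(comment_id: str, path: str = "raw_responses.jsonl") -> dict | None:
    """Scan batched raw LLM responses for the record that codes `comment_id`.

    Assumes each line of the file holds one raw model response: a JSON
    array of coded records like the batch shown under "Raw LLM Response".
    """
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            for record in json.loads(line):
                if record.get("id") == comment_id:
                    return record
    return None

# Full ID taken from the batch displayed below on this page.
print(find_raw_coding("ytc_Ugx8cb6_yHjoojyOhM14AaABAg"))
```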
Comment
The main conflict between AI Safety and Capitalism is the level of risk. Capitalism demands moving fast, and anything that isn't necessary is a barrier to launching a product. AI represents a lot of unknowns, and the AI Safety field is still in its infancy. The risks AI Safety warns about are extreme, potentially extinction-level in the worst case. Furthermore, for many of AI Safety's concerns, once the genie is out of the bottle it can't be put back in. Thus, we only get one shot at this.
Imagine that all of humanity is on board a rocket ship and all the AI companies are racing to see who can push the button first, due to the potential economic rewards of getting to be the captain of the ship. The AI Safety community is the group on board yelling "Wait! Wait! Is this thing safe?"
Source: youtube · 2024-06-18T12:1… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
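The four dimensions in this table come from the project's coding scheme, which is not documented on this page. A minimal validation sketch, assuming a codebook reconstructed purely from the labels visible in the batch below (the value sets are assumptions and may be incomplete):

```python
# Allowed values per coding dimension, inferred only from labels visible in
# the batch shown below. These sets are assumptions and likely incomplete;
# the real codebook may define more categories.
CODEBOOK = {
    "responsibility": {"company", "government", "developer", "user",
                       "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"regulate", "ban", "liability", "none"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed"},
}

def validate_record(record: dict) -> list[str]:
    """Return a list of problems with one coded record (empty if it passes)."""
    problems = []
    for dimension, allowed in CODEBOOK.items():
        value = record.get(dimension)
        if value is None:
            problems.append(f"missing dimension: {dimension}")
        elif value not in allowed:
            problems.append(f"unexpected {dimension} value: {value!r}")
    if "id" not in record:
        problems.append("missing comment id")
    return problems
```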
Raw LLM Response
```json
[
{"id":"ytc_Ugx5m1ixYD5cnMzZHmd4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwR311Inj8OASVwxDh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_Ugz2ZYAH8RmTN0SRzm54AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugxz0g5VdGfsiV_j1T54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwkmhBicUM3k9wGxZ94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugy6xDRR8E_5-iDGhCV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzI1pW-qLt2QkY-vnh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgzmCb1yMgZ2oH34VNx4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugx8cb6_yHjoojyOhM14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugw91Ue4DZWTegv8_gN4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}
]
```
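Because the raw response is free-form model output, it is not guaranteed to parse. A minimal parsing sketch, assuming the response body is exactly a JSON array like the one above; the strict checks are a design choice so that malformed batches surface as errors rather than as silently dropped records:

```python
import json

def parse_batch(raw_response: str) -> dict[str, dict]:
    """Parse one raw LLM response into a mapping of comment ID -> coding.

    Raises ValueError if the output is not the expected JSON array of
    objects, so malformed model output is caught instead of ignored.
    """
    try:
        records = json.loads(raw_response)
    except json.JSONDecodeError as exc:
        raise ValueError(f"model output is not valid JSON: {exc}") from exc
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coded records")
    batch = {}
    for record in records:
        if not isinstance(record, dict) or "id" not in record:
            raise ValueError(f"malformed record: {record!r}")
        batch[record["id"]] = record
    return batch
```

Indexing by ID makes it easy to join a raw record back to the displayed coding result. Note that a duplicate ID within one batch would silently overwrite the earlier record, so a stricter version could check for that as well.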