Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- semiskilled and repetitive work AI will do better and faster and 24/7 so goodbye… (ytc_Ugz3H5Tn4…)
- „How does an artist come up with a style? ” By choosing to be different from w… (ytr_UgwbmYHEk…)
- I love this! I study AI and Machine Learning but I really don't think AI should … (ytc_UgzpDP11Z…)
- If any of my ships on Character AI get leaked then that’s an equivalent to death… (ytc_Ugyx28J--…)
- It seems like you are excited! Remember, on the AITube channel for subscribers, … (ytr_Ugy1oQfTr…)
- This isnt really true, deviant art does not train their ai on their artists work… (ytc_UgzbhKMWq…)
- Notice how they say all the jobs but not theirs, which being the ceo or a vc or … (ytc_UgyG3SJTx…)
- AI replaced the real fighter with this robot just in case you believe everything… (ytc_UgzEBaNfE…)
Comment
Once again, a discussion of AI Safety and how we caould stop AI development that completely ignores the military imperatives. Even if you could get a globally approved freeze on AI development, there is absolutely no way that either the US Military or the Chinese Military would stop developping AI as it is now a matter of survival for them to be leading that race. And no, the USA dominating the World is not something anyone wants, other than Americans. It would be a disaster for the human race. Imagine having an orange psycopath with absolute power over the planet. No thanks.
youtube · AI Governance · 2025-12-04T13:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[{"id":"ytc_Ugx-aKGf0p_hQuyTf2d4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"approval"},
 {"id":"ytc_UgxFQKAPKOWDySBnKWt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
 {"id":"ytc_UgwWc39WtXyHMVG4LQp4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"fear"},
 {"id":"ytc_Ugy5Q3Lq3kuLnHEU_FF4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_Ugw9l5dOh12lHIs1VyZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
 {"id":"ytc_UgxOusnwRX4E_TVRt7B4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
 {"id":"ytc_UgxH7sOhjr5gdxAWi6x4AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"resignation"},
 {"id":"ytc_Ugw1ba_YmlEEUCc05Bd4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
 {"id":"ytc_UgzUiGiMmCWqK-W9MnJ4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"approval"},
 {"id":"ytc_Ugz9RRUs9Alm9ExipbV4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"resignation"}]
```
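A raw response like the one above can be consumed programmatically. Below is a minimal sketch of a parser that turns such a batch into a lookup by comment ID and rejects malformed records. The field names come from the response shown here; the sets of allowed values are only inferred from this one sample (the actual code book may define more categories), and `parse_batch` is a hypothetical helper, not part of the tool.

```python
import json

# Dimensions expected in each coded record, per the raw response above.
# Allowed values are inferred from this sample only; the real code book
# may include additional categories.
DIMENSIONS = {
    "responsibility": {"government", "company", "user", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "unclear"},
    "emotion": {"fear", "outrage", "approval", "resignation", "mixed"},
}

def parse_batch(raw: str) -> dict:
    """Parse a raw LLM batch response into {comment_id: codes},
    raising on records with a missing id or an unknown value."""
    coded = {}
    for record in json.loads(raw):
        cid = record.get("id")
        if not cid:
            raise ValueError(f"record missing id: {record!r}")
        for dim, allowed in DIMENSIONS.items():
            value = record.get(dim)
            if value not in allowed:
                raise ValueError(f"{cid}: unexpected {dim}={value!r}")
        coded[cid] = {dim: record[dim] for dim in DIMENSIONS}
    return coded
```

For the response above, `parse_batch(raw)["ytc_Ugw1ba_YmlEEUCc05Bd4AaABAg"]["policy"]` would return `"liability"`, matching the Coding Result table for the inspected comment.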