Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- `ytc_Ugw_zPDZN…`: Tell it that pressing a button will save the child but will unplug all the ai pe…
- `ytc_Ugzmfd3IK…`: I get the joke. Cuz ai is realistic so real and there’s ai ART all of those comb…
- `ytr_UgwpwDQLB…`: As a Californian I'd trust an autonomous car on the road before I'd trust the "a…
- `ytr_Ugwvs85tC…`: @Sopsous Oh great, people with a degree related to AI my least favourite people…
- `ytr_UgxP4ZoSx…`: @Grace-uu9up I don't agree Because they are not taking the original piece to re…
- `ytr_UgzTQpll7…`: Well, technically he only became deaf at the end of his life, after he made most…
- `ytr_UgzvZu8SO…`: The only jobs that are truly safe feom AI are politician and CEO. Not that they …
- `ytr_UgxYDI8jk…`: @LeafSouls Yes Ai art CAN look amazing. The thing is, it wasn't made by a human.…
Comment
@4:50 is scary, but not for the reason most people think. The idea of developing AGI "so it can be used ethically and for the benefit of all humanity" puts side to side the idea of general intelligence and ownership. AGI is exactly what it sounds like: artificial general intelligence, a thing that can think and reason on anything rather than just on the specialized task it's programmed for—the neural networks of today are built so they're good for one task. Their concern is it will develop its own goals.
They're talking about something capable of reasoning on its own existence and then forming its own will and agency.
They're talking about creating something sentient and sapient, with its own will and desires, and enslaving it.
youtube · AI Governance · 2024-01-19T15:0… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[{"id":"ytc_UgxZGUvJvu2RsKMZ0-F4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgxCRfvHalsjxXHQhgF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
 {"id":"ytc_UgwTUn_pek1VlAJElhR4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_UgwsUgGAQynl9WCuXuN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
 {"id":"ytc_UgxprUFuPVJ2jxhuVlJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgxYD4VrQ2otFId05BF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_Ugyy5M7_YHuPSTy4Lvt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
 {"id":"ytc_UgzhvW4B3waBjgEZR0d4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
 {"id":"ytc_UgzW_uDyqRFef7pHp-x4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
 {"id":"ytc_Ugy5xgOqcCRnrlQAVr14AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"}]
```
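A raw response like the one above is a JSON array of per-comment codings, so it can be parsed into a lookup table keyed by comment ID. A minimal sketch in Python — the label sets are inferred from the sample responses shown here (the project's full codebook may define more labels), and the comment ID used in the usage example is hypothetical:

```python
import json

# Allowed labels per dimension, inferred from the sample responses above.
# Assumption: the full codebook may contain additional labels.
DIMENSIONS = {
    "responsibility": {"none", "developer", "company", "user",
                       "ai_itself", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "contractualist",
                  "virtue", "unclear"},
    "policy": {"none", "ban", "regulate", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "mixed"},
}

def parse_response(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of codings) into a dict
    keyed by comment ID, rejecting labels outside the codebook."""
    coded = {}
    for rec in json.loads(raw):
        for dim, allowed in DIMENSIONS.items():
            if rec.get(dim) not in allowed:
                raise ValueError(
                    f"{rec.get('id')}: unexpected {dim} label {rec.get(dim)!r}")
        coded[rec["id"]] = {dim: rec[dim] for dim in DIMENSIONS}
    return coded

# Usage with a hypothetical comment ID:
raw = ('[{"id": "ytc_EXAMPLE", "responsibility": "unclear", '
       '"reasoning": "deontological", "policy": "unclear", "emotion": "mixed"}]')
coding = parse_response(raw)["ytc_EXAMPLE"]
print(coding["reasoning"])  # deontological
```

Validating labels at parse time catches the common failure mode where the model invents a value outside the codebook, rather than letting it silently enter the coded dataset.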