Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "the ai in everything i have almost broke my phone and many other things of mine …" (`ytc_UgyhwJYcw…`)
- "AI in 2030:- in robotic voice "cost of living is too high, we are being worked 2…" (`ytc_UgyyHbTnW…`)
- "If robots manage to do all our work. Would we still be useful? Will robots still…" (`ytc_UgxJ36o1s…`)
- "Hello! Thanks for joining us on AITube. If you have any questions about artifici…" (`ytr_Ugx-3W0zj…`)
- "If a robot is self aware, has interests, feelings, emotions, and goals in life, …" (`ytc_UgiK-S247…`)
- "Not true,AI is not very smart on human movement ,AI generated speaking persons r…" (`ytc_Ugz59SwpH…`)
- ""Repeal most of the regulatory controls across the board." Care to be a little m…" (`ytr_UgwsHlnzQ…`)
- ""robot" is not a greek word. Automatonophobia. However do you really have an …" (`ytr_UgiIKg3qK…`)
Comment

> Max kept asking to quantify the probability of AI becoming an existential threat. Aging is an existential threat. Without AGI, the probability of dying is 100%. With AGI at least there's a non-zero probability of stopping aging. To me the number we should be trying to guess is (likelihood of AI killing everyone / likelihood of AI stopping aging).

youtube · AI Governance · 2023-07-01T16:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgwB3du30RGqEcCfiqR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxGaW9p18AEp5IotE94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyTGAyJNDRV4NCT8_l4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgymYtKkvEojeCBNPM14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugz-I_5z2MH1F-xN_bt4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugwldkp-xVfE4OgJvBt4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxVWR6IKqbF38JBXhF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgxkUDV7V45fai2Dgtx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwCC1bMcxEIEk2suRF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzUXE2d9iCAiRPKfyN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"}
]
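A raw response like the one above has to be parsed and checked before its labels can be stored as coding results. Below is a minimal Python sketch of that step; the allowed label vocabulary is an assumption inferred from the values visible in this sample, and the real codebook may define more categories.

```python
import json

# Assumed label vocabulary, inferred from the sample response above;
# the actual codebook may include additional categories.
ALLOWED = {
    "responsibility": {"none", "developer", "company", "user"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"none", "ban", "regulate", "liability"},
    "emotion": {"approval", "outrage", "fear", "resignation", "indifference"},
}


def validate_codings(raw: str) -> dict:
    """Parse a raw LLM response and return codings keyed by comment ID.

    Raises ValueError on out-of-vocabulary labels (and json.JSONDecodeError
    on malformed JSON), so bad model output is caught before storage.
    """
    coded = {}
    for row in json.loads(raw):
        cid = row["id"]
        for dim, allowed in ALLOWED.items():
            if row[dim] not in allowed:
                raise ValueError(f"{cid}: bad {dim} value {row[dim]!r}")
        coded[cid] = {dim: row[dim] for dim in ALLOWED}
    return coded
```

With a response in hand, `validate_codings(raw)["ytc_…"]["emotion"]` would give the coded emotion for a single comment, matching the "Coding Result" table shown above.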