Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:
- "When OpenAi started it had a mission, to make money, starting with, making up an…" (ytc_UgzlwloQd…)
- "Just wait til AI decides to build a better body to move around in and then destr…" (ytc_UgxGdBSsf…)
- "Any argument about how we should \"regulate\" AI has but one purpose. They have no…" (ytc_UgzsFHCCF…)
- "Ah yes, conflating wild AI conspiracy theories with some nifty BTC shilling on t…" (ytc_UgwqyyAg7…)
- "Anyone who really wants to banay I should have to face the consequences of peopl…" (ytc_UgzUrNYnV…)
- "If the Ai will be taking over Most of our human jobs, then what the heck are the…" (ytc_UgwDdUdZy…)
- "AI isn’t going to take over the remaining jobs. They’ll use it as a cover so the…" (ytc_Ugx72WKcg…)
- "Maybe I'm an optimist, but while India may be flirting with some elements of tri…" (rdc_eos37lm)
Selected comment (youtube · AI Governance · 2025-07-14T19:5…):

> We are at least 100 years away from artificial intelligence. What we have now are much faster computers that still depend on the people who program them. Artificial intelligence will include part of our physical brain and a superfast computer combined. What we have is a joke, very far from artificial intelligence.
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyPG2vDXaihuJ7VefZ4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugws2l-WckR1OPMB22Z4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyYSiTuQhJNRN5sZnV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwflwvMMqrQUNdYcc94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugw0YaZRxXrg3gy9-dF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzU8DqHK7hBVDhHrNl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugwy8CK-hJ4YFUP3wsN4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxDfWvyQXJs3-4Dgnh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugz3ERBoMO7PKDOhPX94AaABAg","responsibility":"government","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugz5uVbhM38PrICYre14AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"mixed"}
]
```
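A raw batch response like the one above can be parsed and sanity-checked before the codes are stored. Below is a minimal Python sketch; the allowed values per dimension are inferred from the codes visible in this sample, and the real codebook may define additional categories, so treat `ALLOWED` as an assumption:

```python
import json

# Allowed values per coding dimension, inferred from the sample output
# shown above (assumption: the actual codebook may list more categories).
ALLOWED = {
    "responsibility": {"government", "company", "developer", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"outrage", "fear", "indifference", "mixed"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed entries.

    An entry is kept when it is a dict with an "id" field and every
    coding dimension holds one of the allowed values.
    """
    entries = json.loads(raw)
    valid = []
    for entry in entries:
        if not isinstance(entry, dict) or "id" not in entry:
            continue
        if all(entry.get(dim) in values for dim, values in ALLOWED.items()):
            valid.append(entry)
    return valid

# Example: one well-formed entry passes the check (hypothetical ID).
raw = '[{"id":"ytc_x","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}]'
print(len(validate_codes(raw)))  # 1
```

Entries that fail validation are silently dropped here; in practice one might instead log them and re-queue the corresponding comments for recoding.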