Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- ytc_UgyYwnebe…: "iT dOsNeT WoRk!" - They're saying that to try to make you question using it. Th…
- ytc_UgxHfjosJ…: I find it interesting that ChatGPT cannot talk about itself. It cannot see insid…
- ytr_Ugw9WTAWx…: Thanks for your comment! Sophia's name indeed carries a rich meaning, and it's f…
- ytc_UgyCm9b6a…: Yessssss, i've always said that AI is a great tool, it's just that our society i…
- ytr_Ugw7TLCIg…: Our team has a channel where people post their ai success stories. Mostly it's p…
- ytc_UgytwjDds…: I actually use character ai, so this is terrifying. By some miracle, I haven't g…
- ytr_UgwDULgcx…: These videos target core instincts. Using the same on both sides would further d…
- rdc_nclty7r: Actually I think it's because the demand for it is there. Not joking. I think …
Comment
I'm taken aback by how weak a showing LeCun has made. I am on his side but the arguments he made are not at all helpful. AI will certainly be weaponized, the question is how effective will the countermeasures be and what the destructive yield will be. My own view is that AI will function as basic infrastructure far more ubiquitous than human labor is today. Of all the AI that will exist many will be super-intelligent and many humans empowered by AI will themselves be akin to superintelligences. In this setting is higher intelligence an asymmetric advantage? no, intelligence is generic. Might an intelligent someone or something discover a blackball technology? Yes but that is no different from the scenario we exist in today.
Source: youtube | AI Governance | 2023-06-26T21:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgwH-6hm87UtoueFPWt4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugxpdou8J-Mw29x-Zrd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxSn61F8CnsZATGdjd4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugx-fWVIjvGigcWWvcx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxRNSUq3g4j9m2Xu7t4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugz6VJdTx_854kKoTah4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwfDe1MsjPlNh2yMkZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugwzbk-4P9eZqRv4nad4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzhWFRsnJNk4XwOKl54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"resignation"},
{"id":"ytc_UgwpXS7IEJKGUTfTjjZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
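The batch response is a JSON array with one object per comment, carrying the four coded dimensions from the table above. A minimal sketch of parsing and validating such a batch, then indexing it by comment ID for lookup (the allowed category sets are inferred from the values seen in this response, not from the project's actual codebook, which may define more categories; the helper name `validate_batch` is illustrative):

```python
import json

# Allowed values per dimension, inferred from the responses shown above.
# The real codebook may define additional categories (this is an assumption).
SCHEMA = {
    "responsibility": {"distributed", "developer", "ai_itself"},
    "reasoning": {"consequentialist", "deontological"},
    "policy": {"regulate", "liability", "ban", "none"},
    "emotion": {"outrage", "fear", "resignation"},
}

def validate_batch(raw: str) -> dict:
    """Parse a raw LLM batch response and index valid rows by comment ID."""
    rows = json.loads(raw)
    coded = {}
    for row in rows:
        for dim, allowed in SCHEMA.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row.get('id')}: bad {dim}={row.get(dim)!r}")
        coded[row["id"]] = row
    return coded

# Two sample rows with shortened, hypothetical IDs for demonstration.
sample = '''[
  {"id":"ytc_A","responsibility":"distributed","reasoning":"consequentialist",
   "policy":"liability","emotion":"resignation"},
  {"id":"ytc_B","responsibility":"developer","reasoning":"deontological",
   "policy":"none","emotion":"outrage"}
]'''

coded = validate_batch(sample)
print(coded["ytc_A"]["policy"])  # liability
```

Indexing by ID mirrors the "look up by comment ID" workflow of this page: once the batch is validated, any coded comment can be retrieved in constant time.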