Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
99% of unemployment in five years because all mechanical labor task will be automated? I don't get it. So only one percent will be building this whole 'matrix'? How to make safe AI in poker or chess? It's easy - Don't use it or get caught using it playing with people - you will not be bitten. ;-) This thesis about poor safety and 99% unemployment are so much contrary. To make all jobs automated in five years isn't possible even in dreams because you need so much SAFE programs, robots and infrastructure that it will take at least decades to build it. Beside safety superintelligence will struggle with one more problem - errors. Complicity often comes with some terrible to avoid errors break point and system crushes. So even advanced LLM and some dancing robots are not convincing that we are close to AGI like in 2014's 'Ex Machina' - still not there yet and not so soon will be. Marketing and visions are always much ahead in front of real technology.
youtube · AI Governance · 2025-12-16T01:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyluWTYMMi4XULxC7x4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxDb_FiMApxM-dIDnF4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgyhQwnmV2-C6NRsOTh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugxo6z21fHP4E78q0Bx4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzPYOJnqea916fe0kJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugz8xq0EGnSlLASxFnt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgweF3a_sPxSmxsyQrN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwiciwtGvaf1Ublf2Z4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzR0a3XLsg7mfdaNrB4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzcMFWpf0CT13UT83N4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"}
]
```