Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
Honestly, AI Art, I thought was pretty cool, as long as you weren't saying you m…
ytc_UgzvR9g4W…
@karlkarlsson9126 but is it "scary" when a child starts to understand context an…
ytr_UgydJXWaO…
1. You need much, much less workforce to maintain the machines. So instead of a …
ytr_UgxOLfWm4…
Truck drivers, there trucks are all they have. Automation comes and takes away t…
ytc_UgwWgqtLc…
I hate the fact that AI arguments on both sides are always black and white. I r…
ytc_UgzDkv_Jy…
Why not?
If an AI functioning as a doctor is more competent than human doctors,…
ytr_UgyKib66b…
Bro sorry but there is NOTHING creative about the AI making your song. Do it you…
ytc_UgywCf9_m…
It is worse than the cautionary harbingers of doom are saying. Much worse. Searc…
ytc_Ugwiz1eSj…
Comment
i dont think it will be a matter of AI just being smarter than a human. it will be a matter of the AI being more resourceful and significantly faster at processing than a human. an AI can look up, corroborate, understand, and execute a plan based on said information, all significantly before a human even thinks to reach for their phone with the intent of googling something.
Edit: and you dont even need a superai to do that. narrow ai can already do that. i think superai will really come into greater more widely applicable presence when it learns how to predict accurately future events, years, decades, centuries in advance. if a superai said that a meteor that will wipe out humans, and by extension AI, will hit the earth in 237 years (calculated to the millisecond) it can prepare and execute a plan to prevent this from happening. or at the very least prevent itself (and possibly a few safe human companions if we're lucky) from succumbing to this fate. im sure by the time something like that happens, off-world options are more realistic.
youtube
AI Governance
2025-10-03T11:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugxg4ttJY8Cc5JNtJhx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzjnT6mem9MZ_u2syp4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzyBk64dnJFPw4LLZd4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwrPMrVlapQ-jXZUbt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwtbzpaZwIwAo2I0rV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzjELIuVGDRxV5wCfp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy0-3Tu2QoMqG5uYVl4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgzdOatNW347OsCtzGp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzeIrAIXiuE8Xiaf0V4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugw4fuNqrqakZB3WtZd4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
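The raw response above can be parsed into an ID-keyed lookup, which is how a "look up by comment ID" view would retrieve a single coding. Below is a minimal sketch assuming the response is a JSON array of objects with the five fields shown; the function and variable names are illustrative, not part of the actual tooling, and the inline data is a one-entry excerpt of the array above.

```python
import json

# One-entry excerpt of the raw LLM response shown above (illustrative).
raw_response = """
[
  {"id": "ytc_UgwrPMrVlapQ-jXZUbt4AaABAg",
   "responsibility": "ai_itself",
   "reasoning": "consequentialist",
   "policy": "unclear",
   "emotion": "fear"}
]
"""

# The five coding dimensions every entry is expected to carry.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_by_id(response_text):
    """Parse the coded array and build an ID -> coding dict,
    skipping entries that are missing any expected field."""
    entries = json.loads(response_text)
    coded = {}
    for entry in entries:
        if EXPECTED_KEYS.issubset(entry):
            coded[entry["id"]] = entry
    return coded

coded = index_by_id(raw_response)
print(coded["ytc_UgwrPMrVlapQ-jXZUbt4AaABAg"]["emotion"])  # fear
```

Skipping malformed entries (rather than raising) is a deliberate choice here, since LLM output is not guaranteed to be schema-complete; a stricter pipeline might log or re-request those entries instead.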