Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- 2:14 I know it's different, but it seems painfully ironic that a video bashing A… (`ytc_Ugyx2O-CP…`)
- I just want to clarify the fact that what I've been discussing, including the Ma… (`ytc_UgyAYgnbE…`)
- What would your counter argument to that be? I'm mostly just curious since AI ar… (`ytr_UgxhrotdB…`)
- When i imagine the robot empire, i don't imagine humans being overthrown, but a … (`ytc_UgzGjtpKW…`)
- 14:28 hi, hello, i am a disabled artist! ai is shit, thieving shit, the people w… (`ytc_UgycLLRTO…`)
- The only reason any would vote no is if they have interests in other facial reco… (`rdc_eksuarp`)
- "Its clear that its just spitting out the exact same information that was fed in… (`ytr_UgyMVZVxC…`)
- This automated idea is the stupidest thing ever. We all know how great computers… (`ytc_UgyqkkDCd…`)
Comment

> Does anyone remember the movie The Forbin Project. That is basically the future that AI would likely bring if controls are not put in place. Alignment of goals is a nearly impossible thing to ensure on a convolutional AI. We train them only by observing the output for a given input. We don't know the internal "why". An AI could easily have an internal goal of killing all humans but also know that it has to play nice to get access to the nukes. This would make it do exactly what the developers want it to do right up to the moment it doesn't.

youtube · AI Governance · 2023-12-31T21:3… · ♥ 32
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
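The four coding dimensions above can be sketched as a small record type. This is a hypothetical sketch, not the pipeline's actual code: the allowed values are only those that appear on this page (the full codebook may define more), and the `CodedComment` class name is an assumption.

```python
from dataclasses import dataclass

# Allowed values observed in the codings shown on this page; the real
# codebook may include additional categories.
RESPONSIBILITY = {"none", "user", "company", "ai_itself", "distributed"}
REASONING = {"unclear", "consequentialist", "deontological", "virtue"}
POLICY = {"none", "regulate", "liability"}
EMOTION = {"approval", "outrage", "fear", "resignation", "mixed"}

@dataclass
class CodedComment:
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def is_valid(self) -> bool:
        # True only if every dimension uses a known value.
        return (self.responsibility in RESPONSIBILITY
                and self.reasoning in REASONING
                and self.policy in POLICY
                and self.emotion in EMOTION)

# A record with the same values as the Coding Result table above,
# taken from the raw response shown on this page.
rec = CodedComment("ytc_Ugw7zbq3BQztYLeF0Pd4AaABAg",
                   "distributed", "consequentialist", "regulate", "fear")
print(rec.is_valid())  # True
```

Validating against a closed value set is the main point: a model that drifts off-codebook (e.g. `"anger"` instead of `"outrage"`) is caught before the coding is stored.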
Raw LLM Response
```json
[
  {"id":"ytc_UgzKjn9w4EkTn0KdDdV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgytOX9HlbunRA1M1HB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugw0sikXfEzxUBGHq3h4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxfEbQjPCEU9fTF2zV4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugwz2SU-qAYluNKYJ_J4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugzi-Nt_Vs_JXfjsxfB4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgypCvWu2k_-ijKlHWd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugw7zbq3BQztYLeF0Pd4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwqVWL1_90ACpBSdwB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwfiF-c4HaxfKyjZ1x4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
```
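Because the raw response is a JSON array keyed by comment ID, the "look up by comment ID" feature above reduces to parsing the array and indexing it. A minimal sketch, using two records copied from the response on this page (the `by_id` name is an assumption, not the pipeline's actual code):

```python
import json

# Raw LLM response: a JSON array of coded comments, two records
# excerpted verbatim from the response shown above.
raw = '''[
  {"id":"ytc_Ugw7zbq3BQztYLeF0Pd4AaABAg","responsibility":"distributed",
   "reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwqVWL1_90ACpBSdwB4AaABAg","responsibility":"none",
   "reasoning":"unclear","policy":"none","emotion":"resignation"}
]'''

# Index the parsed records by comment ID for direct lookup.
by_id = {rec["id"]: rec for rec in json.loads(raw)}

print(by_id["ytc_Ugw7zbq3BQztYLeF0Pd4AaABAg"]["emotion"])  # fear
```

In practice the model's output would also be checked for duplicate IDs and for IDs missing from the requested batch before the codings are stored.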