Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Sams thing happened to workers when robots replaced them in factories but I gues…" (ytr_Ugy_LKvwU…)
- "I have noticed in the past couple months that they’re getting aggressive at lane…" (ytc_Ugx1H08yq…)
- "Nope thats correct, its a probabilistic LLM but it still understands what 2 + 2 …" (ytr_Ugw7obBBX…)
- "Waymo is now sending all their old iPace jags to Atlanta. I wonder if they see …" (ytc_UgzELA98V…)
- "My take is that people that support AI don't actually have any creative hobbies,…" (ytc_UgyACL7kp…)
- "Recently , Waymo and Nvidia CEO praise vision only system by Tesla. Meanwhile, W…" (ytc_Ugx2ZUtG5…)
- "They busy making robotic intelligence until the super intelligent robot going to…" (ytc_UgxHnj7Fs…)
- "Alright so I'm seeing a lot of the comments saying the driver is at fault, sure …" (ytc_UgyjDUPgQ…)
Comment
Perhaps we should consider an aspect of AI that may not be considered yet: Building a trust relationship with AI rather than controlling it. Here is why I say that: you have a child and you raise it the best way you know how. But the child gets suspended in school or expelled from school. You did everything you could to raise your child the right way. But the child is it's own independant AI, isn't it? If we build a trust relationship with the child it will come to us frist and say "I stole an X-Box game from the store" before the police arrive. Controlling AI seems like a fantasy. Get AI to trust you and you stand a better chance of influencing it when you say "Please don't do that."
youtube
AI Governance
2022-07-30T17:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | virtue |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgyYR9zZo1DNf-tif5d4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyvFIioMBpFL6nTV0V4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugzmlu7TopL9odpS0b14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxEROAORQkYvm1mRWN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwQiuamqwXK1xDZEUZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzsLOEq6uB0TXhn4xF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzgZ3Gppw9JAv85KQ54AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugw4aeFdlStwlhj6mwp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyK8d5gSsekKlBXbul4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugyz95QsCIC_6r9kV014AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"}
]
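The raw response above is a JSON array of per-comment codings along the four dimensions shown in the table (responsibility, reasoning, policy, emotion). A minimal sketch of the "look up by comment ID" step, assuming the response is valid JSON in exactly this shape (the helper name `lookup_coding` is illustrative, not the tool's actual API):

```python
import json

# Two records excerpted verbatim from the raw LLM response shown above.
raw_response = """
[
  {"id": "ytc_UgzgZ3Gppw9JAv85KQ54AaABAg", "responsibility": "distributed",
   "reasoning": "virtue", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugyz95QsCIC_6r9kV014AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "regulate", "emotion": "fear"}
]
"""

def lookup_coding(raw: str, comment_id: str):
    """Parse the model's JSON array and return the coding dict for one comment ID,
    or None if the ID is not present in the response."""
    for record in json.loads(raw):
        if record.get("id") == comment_id:
            return record
    return None

coding = lookup_coding(raw_response, "ytc_UgzgZ3Gppw9JAv85KQ54AaABAg")
print(coding["responsibility"])  # distributed
```

In practice the raw model output may contain surrounding text or malformed JSON, so a production version would want to validate the parse and the expected keys before indexing into the record.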