Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- Instead of ai taking over high paying blue collar jobs, I think ai should be rep… (`ytc_Ugwm-ydtI…`)
- Bro, I was able to tell which thumbnail was made by AI it’s not gonna replace ev… (`ytc_Ugw8YxuZX…`)
- Understanding is always up to the Individual. I grew up with learning disabiliti… (`ytr_UgyDNUELt…`)
- Ironic how a video cautioning about the dangers of AI has an ad about an AI prog… (`ytc_UgxDuu-eh…`)
- Open AI and Quake 96 are DISGUSTING!!!!!!! Perfect of id software and demented m… (`ytc_UgzC3PErq…`)
- Art is, in anthropology terms, the material expression of culture. Culture is fu… (`ytc_UgwAzm98e…`)
- Even Waymo cars despise the very $$$elect demographic that has completely change… (`ytc_UgwjLXSC0…`)
- Many of us are deeply mired in our reptilian brains, how can paying us universal… (`ytc_UgyYBzUS2…`)
Comment
I have a solution:
Hardwire into the core program, the 3 laws of robotics
1). A robot may not harm any human being or through inaction allow a human being to come to harm.
2) a robot must follow any order given to it by a human being, as long as the orders don't conflict with the first law.
3) a robot must protect its own existence as long as the protection doesn't conflict with the first and second laws.
Added laws
4) a robot must always be transparent ( always truthful )
5) a robot must always co-existence , and
Cooperate with human beings in peaceful Harmony.
Please institute this simple but important solution
If I can see the answer, why can't the P H D's? 😂😂😂 ( Not as smart as they think they are...... Lol )
youtube · AI Governance · 2025-09-21T14:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id": "ytc_UgzzEtldBgnEFCqYX8d4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyfyUu7Be4H3bb02G94AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugx23Q-jhglYBq-U1ox4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgyXrkQ-vIydhUAwBmV4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgzWphUfkI2P48H3Wrh4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgylmXBsGAG2ZZucLcR4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugz7Qiqi2n_U1OTzMqR4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxqXUrebrs_K6BSKOJ4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzAGIK-aoFGH6rjk0l4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwxAI2U1sozsvebMah4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "approval"}
]
```
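The lookup-by-comment-ID view above presumably works by parsing the model's JSON array and indexing the records by `id`. A minimal sketch of that step, assuming a helper of our own devising (`parse_raw_response` and the example ID are illustrative, not part of the tool); it also repairs the common failure mode seen here, where the model closes the array with `)` instead of `]`:

```python
import json

def parse_raw_response(raw: str) -> dict:
    """Parse a raw batch-coding response and index its records by comment ID."""
    text = raw.strip()
    # Repair a stray closing parenthesis on an otherwise valid JSON array.
    if text.startswith("[") and text.endswith(")"):
        text = text[:-1] + "]"
    records = json.loads(text)
    return {rec["id"]: rec for rec in records}

# Hypothetical shortened ID for illustration; real IDs look like ytc_Ugzz…
raw = '[{"id":"ytc_abc","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"})'
codes = parse_raw_response(raw)
print(codes["ytc_abc"]["policy"])  # regulate
```

A comment whose ID is missing from the returned array would then fall back to `unclear` for every dimension, which may be why the Coding Result table above shows all values as unclear.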