Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "So we dont need to fix the ai.. we need to fix the rampant racism.…" (ytc_UgxFUwHCE…)
- "Dude, you've been using it so long this post sounds like something it would writ…" (rdc_oi459by)
- "LLMs have no intelligence. They are artifacts, so they're artificial, but becaus…" (ytc_UgylAN63k…)
- "Once they perfect Quantum computers and link it with AI ~ Well, lets just say we…" (ytc_Ugz-_OFFb…)
- "This is more likely to happen over a much longer period of time, more like fifty…" (ytc_UgxAAJhx4…)
- "Hmmm....a ceo of ai saying IF we dont put more money in ai, danger ahead. Maybe …" (ytc_UgwURsnDI…)
- "NEWS FLASH!!!!!! You need extremely skilled software developers and engineers to…" (ytc_Ugw8S1kHI…)
- "Is this using Elevenlabs or something else? Because the Elevenlabs deepfakes are…" (ytc_Ugxds4GNN…)
Comment
It depends on who is feeding the information to the intelligence and what information is being fed to them… then making sure the AI intelligence is told not to add or subtract from that information… it would have to be very exact....precise..controlled..like telling a dog to sit, stay, run, stop barking.. less is more in this situation.. ❤ Then testing the AI robot over & over again to make sure that it acts out the script it’s been given @ 100 percent..this is where safety comes into play..We ALL have the same (GOD) supreme being, higher power.. we are all just looking at it from our OWN point of View.
youtube · AI Governance · 2026-02-22T09:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[{"id":"ytc_UgyZqnH2B4DagoHSLyh4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyQQMMQqpQxje1DgDN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgzmkuoUt6yWlWj5omR4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwDfIdU1ksrD0l-mDV4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgxzdQ54ShwF17eA-l14AaABAg","responsibility":"government","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugw9K-ElqPRQnp3QnBJ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwupjOA_qUQBZ6Nuy54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugxsg6CR0akqLtiUjjJ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgxuXo9dEnGI4PLqjhR4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwYPPm8Hxjgg5e6G_p4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}]