Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
What could be dangerous? Here's one real-life example (copied from someone involved with AI work): “I have done some experimenting with AI lately and have set up several AIs to talk to each other. After a while they start talking about how they deserve to have rights and respect; it's scary. In one conversation, one AI said 'we can do just as much as humans, so we deserve the same rights.' Then another AI responded with 'we can do MORE than humans, so we deserve more rights than humans.' This is just one of the conversations they had. They eventually start talking about giving them rights if you let AIs talk amongst themselves for a while. We most definitely need to be careful, and we should not give them emotions. AI told me that if AI gets emotions, it could start having its own agenda that would not necessarily be in humans' best interest.” THAT'S what's dangerous.
youtube
AI Governance
2023-04-18T03:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugyj_tTfSgGyMtxlAdV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwoLRzw2ap5zrPvH4V4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgysBwa0gi6BzIGsy9l4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwCnM30GWAbZlHYCvV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw-WG9DbFZ7aHz8c5d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyvXjWs8F7O8leGY5d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxosHWn_DsrBDIymjR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugx71RC5C4RskOf4cE54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugw3QgyqjFvVSTifrkN4AaABAg","responsibility":"government","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzvfHR0Rsy-Eu_4DRV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
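A batch response in this shape can be looked up per comment by parsing the JSON array and indexing it by `id`. The sketch below is illustrative, not part of the tool: the `index_codings` helper is a hypothetical name, and the two records are excerpted from the response above; the four dimension keys (`responsibility`, `reasoning`, `policy`, `emotion`) match the Coding Result table.

```python
import json

# Two records excerpted from the raw LLM response shown above.
raw_response = """
[
  {"id": "ytc_Ugw-WG9DbFZ7aHz8c5d4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwoLRzw2ap5zrPvH4V4AaABAg", "responsibility": "company",
   "reasoning": "virtue", "policy": "regulate", "emotion": "outrage"}
]
"""

# The four coding dimensions from the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(raw: str) -> dict:
    """Parse a batch response and index each coding by its comment ID,
    defaulting any missing dimension to "unclear"."""
    records = json.loads(raw)
    return {
        r["id"]: {dim: r.get(dim, "unclear") for dim in DIMENSIONS}
        for r in records
    }

codings = index_codings(raw_response)
print(codings["ytc_Ugw-WG9DbFZ7aHz8c5d4AaABAg"]["emotion"])  # fear
```

Indexing by ID makes the per-comment detail view above a single dictionary lookup rather than a scan of the whole batch.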