Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect:

- "The end has already arrived. So handing the robot the gun was the correct decis…" (ytc_UgxBo6o7S…)
- "AI is a pandemic ting. Primitive people countinously developing AI more samrt …" (ytc_Ugy_YCfcy…)
- "There is no way to stop it. But regulation can help protect artist. We have to l…" (ytc_Ugx3NQZVn…)
- "I have no intention or use to talk to a robot!!! I rebuke ai...GOD IS IN CONTRO…" (ytc_Ugy2NEv1t…)
- "As a pro humanity advocate, I believe this will have adverse ramifications in th…" (ytc_UgxRqDtt5…)
- "He is 100% wrong here. He is right for past advancements in technology, but not …" (ytc_UgxPYDvPW…)
- "It's not advanced enough for what people are thinking it can do and with how it'…" (ytr_UgwRD51mH…)
- "It's available in GPT-4 with web browsing, for models without web capabilities i…" (ytr_Ugyi_46PD…)
Comment

> Artificial intelligence is a reflection of humanities cruel nature to ourselves. 🤷🏿♂️ If any country amassed a superior AI, rival countries will try to develop more advanced AI just like we did with nuclear weapons, biological weapons and economic warfare. Society it self has already set us on the path to destruction. 😂😂😂

youtube · AI Governance · 2023-07-08T19:3… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgyAy3XGCJv98cwIcR14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzvFkk24Kd8ucyX4kN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwBFg1YW2OcWHoQCg54AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwYYdxVVmrxAJq0Rr54AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyB5LYxCJZ87dAFJVR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwRCTIXQQFGvI1-PQF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxZ7pJvprrdkB4F1Od4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugwzg9RR5D5BGWAaJmZ4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"liability","emotion":"approval"},
{"id":"ytc_UgyXNK7xuCCMQRUtcrl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyCY2Z3vClcOIbSkfF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
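The raw response above is a JSON array of per-comment code assignments, keyed by comment ID. A minimal sketch of how such a batch response could be parsed and validated, assuming the dimension values visible on this page are the full codebook (the real schema may allow more values):

```python
import json

# Allowed values per coding dimension, inferred from the table and raw
# responses shown above (hypothetical codebook; the actual one may differ).
SCHEMA = {
    "responsibility": {"developer", "user", "government", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "contractualist", "unclear"},
    "policy": {"regulate", "ban", "liability", "none"},
    "emotion": {"fear", "outrage", "approval", "indifference", "resignation"},
}

# One record from the raw LLM response above, used here as sample input.
raw_response = """[
  {"id":"ytc_UgwRCTIXQQFGvI1-PQF4AaABAg","responsibility":"distributed",
   "reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]"""


def parse_codes(raw: str) -> dict:
    """Parse a raw batch response into {comment_id: codes}, rejecting
    any value outside the known codebook."""
    coded = {}
    for rec in json.loads(raw):
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: invalid {dim} value {rec.get(dim)!r}")
        coded[rec["id"]] = {dim: rec[dim] for dim in SCHEMA}
    return coded


codes = parse_codes(raw_response)
print(codes["ytc_UgwRCTIXQQFGvI1-PQF4AaABAg"]["responsibility"])  # → distributed
```

Validating against a fixed codebook at parse time catches the most common failure mode of LLM coders: a plausible-looking label that is not actually in the scheme.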