Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:
- "You will know the moment when A.I becomes sentient, when it shows empathy for ot…" (ytc_UgxaiwR7b…)
- "14:07 I think this could be good if you apply the glaze and nightshade to the ar…" (ytc_Ugx1XJ7tb…)
- "Developing AI might be one of the greatest scientific discoveries yet. It might …" (ytr_UggSkZsWg…)
- "I'm just glad I grew up in a time before all this. There are people born now tha…" (ytc_Ugy4xoHtY…)
- "@bulbstark7231 Maybe not. However, we CAN make some deductions based on the nat…" (ytr_UgxNSShD2…)
- "Does it matter if it's Ai or not. I mean I don't think they mean it after priori…" (ytc_Ugx51z0Gl…)
- "Hate to be *that* guy but it actually takes a LONG time to get Stable Diffusion …" (ytc_UgyJU6_44…)
- "As long as humans are behind the coding and implementation of self driving syste…" (ytc_UgyBOj2eu…)
Comment
Any veteran System Shock player out there knows it's a pretty bad idea to have some S.H.O.D.A.N. running amok. Pull the damned plugs already!
AFAIK, at least DARPA has tested autonomous turrets...the things were firing all over the place, allegedly even killed one technician in the process - as in "on purpose" since he was a threat to its continuation...plug was eventually pulled but just scale that crap up...
Source: youtube · AI Governance · 2024-08-04T12:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugyskpvfexc6emo3rEx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzdwEKP59iAR3nLA394AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyzEXjbaJ5-XOiRA8d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxFicw_DtN5tTyvVd54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgwFA8cs4N4WKTlPNQt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugxx8AweY_rEkjONkRl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxOi6E4Xt_MBhG6aox4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxmFBg_ZrFZbejgFdB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyU6hQJ_vS9F3kKHzp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_UgyjE6XPjOheWnvnh2R4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
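The lookup above can be sketched in a few lines: the raw LLM response is a JSON array of per-comment codings, so indexing it by the `id` field recovers the coding table for any comment. A minimal sketch, assuming the response parses as the array shown (the two rows below are real entries from it; variable names are illustrative):

```python
import json

# Raw LLM response: a JSON array of per-comment codings,
# truncated here to two of the rows shown above.
raw_response = """[
  {"id": "ytc_Ugxx8AweY_rEkjONkRl4AaABAg",
   "responsibility": "developer", "reasoning": "deontological",
   "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgyjE6XPjOheWnvnh2R4AaABAg",
   "responsibility": "developer", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "fear"}
]"""

# Index the codings by comment ID for constant-time lookup.
codings = {row["id"]: row for row in json.loads(raw_response)}

# Look up the coding for the comment selected in the view above.
coding = codings["ytc_Ugxx8AweY_rEkjONkRl4AaABAg"]
print(coding["policy"])   # regulate
print(coding["emotion"])  # outrage
```

The same index works unchanged on the full response array; a missing or malformed `id` would surface as a `KeyError` at lookup time.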