Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "The scary thing is, the more robots and ai advance, the more plausible the theor…" — ytc_Ugzu5HlCn…
- "The point being that we're at max capacity for personal single occupancy vehicle…" — rdc_dmp31dk
- "The only way that "AI" can be nefarious is if it was programmed that way by a hu…" — ytc_UgyCrGlwD…
- "Here is a case for Class Action Lawsuit against AI Art written by openGPT "A c…" — ytc_Ugw53V8k-…
- "For one, they can charge the caller for wasting resources and using the emergenc…" — rdc_lvb8vbb
- "Most serious users have no problem in saying that they used AI. Like artists who…" — ytr_UgyQLScKK…
- "But if a person does not use their will, does that mean they are actively just c…" — rdc_df0noc6
- "That depends entirely on where you are located in Canada - maybe if you heat you…" — rdc_gtctioc
Comment

> I think Elon should be the head of the regulators. He has seen the possibility of A.I dangers for years. Who better to over see the future of it.

youtube · AI Governance · 2023-04-18T14:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugw7UHgWBr872LE7PYF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugy5cZpxWzQKZPo17cl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgxtSGacH95xCZhGlzh4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyNjmwGYQnTu6jg86p4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgyL2-ibBeu8QQrFVpN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgyhMYEDPdQv7gqg7ux4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugyt3tQnHkN_-V4q1CJ4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwBQxlMHcpxY3KMOs54AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgzCUtTA61F6Fujw05R4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgwvKbr7Q0rvY73z_8Z4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
```
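The raw response above is a JSON array of coding rows keyed by comment ID. A minimal sketch of how such a response could be parsed and indexed for the ID lookup this view provides — the variable names and the shortened two-row payload are illustrative assumptions, not the app's actual code:

```python
import json

# Hypothetical raw batch response, shortened to two rows for illustration.
raw_response = """
[
  {"id": "ytc_Ugw7UHgWBr872LE7PYF4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugy5cZpxWzQKZPo17cl4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "regulate", "emotion": "approval"}
]
"""

# Index every coding row by its comment ID for constant-time lookup.
codings = {row["id"]: row for row in json.loads(raw_response)}

# Look up the coding for a single comment, as the "Coding Result" table does.
coding = codings["ytc_Ugy5cZpxWzQKZPo17cl4AaABAg"]
print(coding["policy"])   # regulate
print(coding["emotion"])  # approval
```

A real pipeline would also want to handle malformed JSON and IDs missing from the batch; both are left out here to keep the sketch short.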