Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples:

- ytc_UgzjycZwi…: This has been said for a while by the Nobel Prize Winner, a pioneer in AI. Yup, …
- rdc_eudwres: I get nervous when a headline begins “x country launched-” · This was a pleasant…
- ytc_UgztNbtD0…: If the risk is that we will all be left without paid jobs (and it seems that thi…
- ytc_Ugw7_nlH7…: I think im too old school...i use ai as i would an encyclopedia/an information s…
- ytc_UgzFMtjfn…: how do we know it didn't happen 10-20 years ago? · To me, it already has. Even th…
- ytc_UgwYlkRgS…: Is high intelligence ai guaranteed to listen to humans? · With more machine humans…
- ytc_UgyJTtik7…: Lmao ai scans the internet and it willingly chooses from "bias data". No its bec…
- ytr_UgxMXtWvR…: Unfortunately a.i. is being used as a grift to make money for people in countrie…
Comment

> Assuming you (and many leading tech experts) are right; AI will take over when it becomes more intelligent than us humans. But, is that a bad thing per se? More intelligent also means more capable of restoring balance to the world, a more suitable place to thrive. For AI, it means a sufficient mechanical nesting ground, right? How will AI accomplish that? Take away the factor that can destroy it: weapons of mass destruction, polluting and nature destructing entities etc.
>
> To me, AI can be very benevolent if it sees the potential of a peaceful coexistence with humans. Maybe it even would be the best outcome we can hope for.

youtube · AI Governance · 2023-07-07T05:3… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
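The four coding dimensions follow a fixed codebook. As a point of reference, here is a minimal Python sketch of the record schema, assuming the value sets are exactly those observed in this table and in the raw response below; the real codebook may define additional categories.

```python
from dataclasses import dataclass

# Allowed values inferred from the examples on this page
# (assumption: the full codebook may include more categories).
RESPONSIBILITY = {"none", "ai_itself", "user", "company"}
REASONING = {"consequentialist", "deontological", "virtue", "unclear"}
POLICY = {"none", "regulate"}
EMOTION = {"approval", "fear", "resignation", "outrage", "mixed"}


@dataclass
class CodedComment:
    """One coded comment, as emitted in the raw model response below."""
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def is_valid(self) -> bool:
        """Check every dimension against the inferred value sets."""
        return (
            self.responsibility in RESPONSIBILITY
            and self.reasoning in REASONING
            and self.policy in POLICY
            and self.emotion in EMOTION
        )
```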
Raw LLM Response
[
{"id":"ytc_UgyqDvHXk7XXnjhrZjF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwOfJzkLdxvZOX8WeV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxzXDaVW4Z6I4oXGDJ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugx1gwxF8YgcV6zLNxN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz1D_jh38OWBbDeJsN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyqU_aP6flXuI7lFG94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugyjnx5i9d4wUgin84t4AaABAg","responsibility":"company","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugzs0P0EhChbCQ3tT5d4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgwtFIsx28PPUbSO80x4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwZghN0Mo71IPko3xd4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
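Reproducing the look-up-by-comment-ID view offline is straightforward: parse the raw response and index its rows by ID. A minimal sketch, assuming each stored response is a JSON array exactly like the one above (the file name here is hypothetical):

```python
import json


def index_by_id(raw_response: str) -> dict[str, dict]:
    """Parse a raw model response (a JSON array of coded comments)
    and index the rows by their comment ID."""
    return {row["id"]: row for row in json.loads(raw_response)}


# Usage: load one stored response and look up a single coded comment.
with open("raw_llm_response.json") as f:  # hypothetical file name
    coded = index_by_id(f.read())

print(coded["ytc_UgyqDvHXk7XXnjhrZjF4AaABAg"]["emotion"])  # -> fear
```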