Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "Hi, I think you probably know that your approach is based on a misunderstanding …" (ytc_UgwRUa0h6…)
- "For few years from now AI will bring some changes in the job market like the com…" (ytc_UgwTZX0at…)
- "How many people of 8 billion citizens of this planet will be able and capable to…" (ytc_Ugyr6akwC…)
- "My daughter is working for an "AI" transcription company. Her job is to listen t…" (ytc_UgzwfYxps…)
- "AI IS EVIL‼️😡 PRAY PRAY PRAY‼️ DEAR GOD PLEASE SAVE AMERICA FROM ARTIFICIAL INTE…" (ytc_UgzyJRFOr…)
- "We haven't needed AI for 1 million years...probably dont now. Continuing to bu…" (ytc_UgxkKsG5a…)
- "oh my god. im the no-lifer. but only ive been using character ai for nearly a ye…" (ytc_UgwVla9TB…)
- "AI is nothing but a overhyped automated search engine. I’m not impressed. These …" (ytc_UgxXEDmNO…)
Comment
> This sounds very scary, amd these are importan questions to ask that need answers now, but we are probably at least a century away from seeing the first superintelligent AI.
> In order to do that, it first must be human like in intelligemce. We dont even understand what consciousness really is, and it is unlikely we will accidentally recreate it.
> While AI now can have serious issues that need to be addressed, the existential threat is probably a long way off.
youtube · AI Governance · 2024-05-10T23:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugy80fWGPr_YdBga7_J4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxGAavnN2Yr_NiYmfF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxKESgYH5JdJUMCgot4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzT7xg2pg5agpEMFnF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyY4o79iVVDtU9-SMN4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzO0kUFr1rfbqlIdOZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugy1Y-kCPYvcOnbBv5d4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugzg5vu_x56puQTUwGN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxhFh9WBJbdC13v3xh4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgzqIdg0ajJIg8BQOhV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"mixed"}
]
```
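A response like the one above can be turned into the per-comment coding table by parsing the JSON array and indexing it by comment ID. The sketch below is a minimal, hypothetical version of that step: the `CODEBOOK` of allowed values is inferred from the codes visible in this dump (the real codebook may define more or different values), and entries with out-of-codebook values are dropped rather than displayed.

```python
import json

# Allowed values per coding dimension, inferred from this dump
# (hypothetical -- the actual codebook may differ).
CODEBOOK = {
    "responsibility": {"government", "developer", "company",
                       "ai_itself", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "liability", "ban", "none", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "approval",
                "resignation", "mixed", "unclear"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response into {comment_id: codes},
    skipping entries missing an ID or using values outside the codebook."""
    coded = {}
    for entry in json.loads(raw):
        cid = entry.get("id")
        if not cid:
            continue
        codes = {dim: entry.get(dim, "unclear") for dim in CODEBOOK}
        if all(codes[dim] in CODEBOOK[dim] for dim in CODEBOOK):
            coded[cid] = codes
    return coded

# Example with a hypothetical comment ID:
raw = ('[{"id":"ytc_abc","responsibility":"government",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
print(parse_coding_response(raw)["ytc_abc"]["policy"])  # regulate
```

Looking up a single comment (the "Look up by comment ID" feature above) is then a plain dictionary access on the parsed result.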