Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- `ytc_UgzzbfewF…` — "The problem is we use the term "AI" as this anthropomorphized catch-all for many…"
- `ytr_UgzTD1_rO…` — "I personally believe that AI/Robotics won't be able to replace us human's reason…"
- `ytc_UgwEtcZsO…` — "So the danger of AI is not the millions of jobs lost as large corporations roll …"
- `ytr_UgxbFIK7V…` — "@ to a person who doesn’t know the creator of the piece you have no way of diffe…"
- `ytc_UgzPyp0Xe…` — "Pretty sure open ai hired a sloppy hit man. The guy used a fire arm n it git mes…"
- `ytc_UgzYo_b3c…` — "Boeing whistleblower deletes himself just before his trial. Open AI whistleblow…"
- `ytc_UgzJpeB5f…` — "AI opens a can of poisonous worms. In some areas, it can be useful for the poten…"
- `ytr_UgyWw_t_u…` — "@B-lair there is an entire market of AI generated graphic novels, and alot of v…"
Comment
Yes, finding a balance between regulating AI to prevent potential harm while also allowing for innovation and progress is crucial. This can be achieved through collaboration between governments, AI developers, and other stakeholders to establish appropriate regulations and guidelines for the development and use of AI. It's important to ensure that AI is developed and used ethically and responsibly, while also enabling the benefits and advancements that AI can bring to society.
Source: youtube · Posted: 2023-04-10T08:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response

```json
[
{"id":"ytc_Ugxngsk4XS5IokdOwZp4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgyxVXWv0eR7lmu_OaN4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugz0Sb86csC2A7rb-IV4AaABAg","responsibility":"user","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzIl5EjhiiEjpDQHet4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxE9CNKfKGu4Mmguf14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"unclear"},
{"id":"ytc_UgymLIx7eVakjOabKkB4AaABAg","responsibility":"government","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyUhHgGRPJuD0dOFMF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgysX6Tl_iiMLchLPjZ4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgyJ_oN-L_Imo1Z2vkJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugy_yAmcX6VrwpYObr54AaABAg","responsibility":"distributed","reasoning":"unclear","policy":"regulate","emotion":"indifference"}
]
```
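A raw batch response like the one above can be parsed and indexed by comment ID for the lookup described at the top of this page. The sketch below is a minimal, hypothetical validator: the allowed value sets are inferred only from the values observed in this dump (the real coding schema may permit more categories), and `index_response` is an illustrative helper name, not part of any existing tool.

```python
import json

# Allowed value sets, inferred from the values observed in this dump.
# ASSUMPTION: the real coding schema may define additional categories.
ALLOWED = {
    "responsibility": {"government", "company", "user", "ai_itself", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"approval", "fear", "indifference", "resignation", "mixed", "unclear"},
}

def index_response(raw: str) -> dict:
    """Parse a raw LLM batch response and index records by comment ID.

    Raises ValueError if a record is missing a dimension or uses a
    value outside the (assumed) allowed sets.
    """
    by_id = {}
    for rec in json.loads(raw):
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim}={rec.get(dim)!r}")
        by_id[rec["id"]] = rec
    return by_id

# Usage: look up one coded comment by its ID (hypothetical example record).
raw = ('[{"id":"ytc_example","responsibility":"distributed",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"approval"}]')
coded = index_response(raw)
print(coded["ytc_example"]["policy"])  # regulate
```

Indexing by ID also makes it easy to join the raw model output back to the per-comment coding-result view shown above.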