Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
Honestly, the more stuff I watch like this, the more I realize how stupid prompt…
ytc_UgxilP8wV…
It is generally concerning they don’t realize their own logic leads to the concl…
ytr_UgzsQ4qlC…
That sounds ominous ai should be shut down a drine tried to kill its operator fo…
ytc_UgxC-N9vk…
I am constantly amazed by the number of people who did not see *Terminator*. The…
rdc_o87aywo
AI art is interesting, but AICarma gives me deeper insights into how people feel…
ytc_UgzDR7H0R…
As a supporter of AI and a data scientist, I feel weird for this wave of "AI-ent…
ytc_UgzVZveCw…
@deeplearningpartnership her irrelevant politics rant aside, the points she rais…
ytr_UgxzfxMwL…
If the brakes wasn't working then why was the car going faster,was the speed aut…
ytc_Ugyc4lqtR…
Comment
Technology advancement can be relished only when it is under human control. It should really be useful and worthy enough as for example , communication systems , which evolved From pigeons to mobiles.
But AI replacing human jobs , creating new jobs, sounds alarming.
I did expect this scenario of AI going uncontrollable.
I certainly would say , there is a limit and a threshold level to anything, and AI is one.
The world will definitely be better and peaceful without AI.
I only wish tech freaks and scientists to rethink before they invent and launch something.
Where are they heading to? Is it to completely eradicate peace?
Well, we are already amidst chaos, and why race for it more?
youtube
AI Governance
2025-06-02T11:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyR8q6gdcwkYR-LJpx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxlCkL4eEYbMSR8CUN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyHarEAEtPOnYgE8DV4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwKoejZwNGXfm6DzEN4AaABAg","responsibility":"elite","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugwu3Oa75ob8V4QxiuF4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"disapproval"},
  {"id":"ytc_UgwOc-pKkE4hc2Ms2n94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxIGBet6-uKZ3PTuJZ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
  {"id":"ytc_UgzS3T9A1flR-0wEiYZ4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzLZJk3PUP7IIf-9YF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugyefl02vXx9R70lIg94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
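A raw response like the one above can be checked before it is stored: parse the JSON, require an `id` on every row, and confirm each dimension carries a known label. The sketch below is a minimal illustration, assuming the label sets visible in this sample (`developer`, `ai_itself`, `consequentialist`, `regulate`, `fear`, etc.) are representative; the actual codebook may define additional values, and the function name is hypothetical.

```python
import json

# Labels observed in the sample response above; the real codebook
# (not shown here) may allow more values per dimension.
ALLOWED = {
    "responsibility": {"developer", "ai_itself", "government", "elite",
                       "user", "distributed"},
    "reasoning": {"consequentialist", "deontological", "virtue"},
    "policy": {"regulate", "liability", "ban", "unclear"},
    "emotion": {"fear", "mixed", "outrage", "disapproval", "indifference"},
}

def validate_coding(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject malformed or out-of-schema rows."""
    rows = json.loads(raw)
    for row in rows:
        if not row.get("id"):
            raise ValueError("row is missing a comment id")
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row['id']}: unexpected {dim}: {row.get(dim)!r}")
    return rows

raw = ('[{"id":"ytc_example","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
rows = validate_coding(raw)
```

Rejecting out-of-schema rows at ingest time keeps the per-comment lookup ("Look up by comment ID") from ever surfacing a partially coded record.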