Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "the american govt has access to the best ai's, highest tech, yet problems with ec…" (ytc_Ugw0iqwXH…)
- "@8MunchenBayern8 Yeah.. you're too old to understand. Those self driving cars t…" (ytr_Ugx85xXV9…)
- "I think the problem is that at some point we may be so dependent on AI we would…" (rdc_l5u8vbr)
- "Very true. I think humans are worse than AI in trusting. We lost the trust in ea…" (ytr_UgzeBzFAA…)
- "Glad I believe in Yeshua I have nothing to fear He is king of Kings and Lord of …" (ytc_UgyvbTq2U…)
- "It's funny how before ai art was a thing, when art had something wrong, it was j…" (ytc_UgxnbV7PV…)
- "Blaming ChatGPT for how people use it is like blaming a knife for a crime. A kni…" (ytc_Ugy3ovKJb…)
- "On the AITube channel, we explore various aspects of artificial intelligence, an…" (ytr_UgzMhqFrr…)
Comment
I did software QA for about 30 years and that was probably the worst life choice I ever made. The cost in human suffering alone is plenty for me to not only not grieve if AI wiped out QA as a hands-on activity, but also actually hope that it kills it altogether. QA is absolutely a nightmare we should _want_ to rescue human beings from. For actual SW development, AI is poised to trim some of the awfulness there too, but I think will open other areas that could be rewarding for humans too. Prompt engineering, for example, is already becoming a "thing" with ChatGPT, etc., and taking away the "grunt work" of writing regexes to parse email addresses and such frees development up for higher level design, etc.
So yeah, for QA - good riddance. Give it all to the machines and don't look back. But for development, I think there will be new opportunities and it has at least kinda sorta a future.....
youtube
AI Jobs
2024-01-19T10:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgwtTzN1o_l0QDOdt614AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzPSabSAR6D4JHXlPJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugx5_2gUpPUxSdJe8ql4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugxqr8j3yJV_V49eH6R4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzCJs11rLHL0_Toe7F4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyjYtgBKtyiWD9mOSx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzUBXQw0VOENf0FjeZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzL_A0Ayq2-oDwknOl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxuA-bzMTUdstDO7Ql4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx_JuECfyo2z_2cq7F4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
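The raw response above is a JSON array of coding records, one per comment, each carrying the four dimensions shown in the table. A minimal sketch of how such a response could be parsed and validated in Python (the allowed value sets below are assumptions inferred from the codes visible above, not a definitive schema, and the `ytc_x` ID is a hypothetical placeholder):

```python
import json

# Example value sets per dimension, taken from codes seen in the response
# above; the complete codebook is an assumption for illustration.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "company"},
    "reasoning": {"consequentialist", "deontological"},
    "policy": {"none", "liability"},
    "emotion": {"approval", "resignation", "indifference", "fear"},
}

def parse_coding_response(raw: str) -> list:
    """Parse a raw LLM coding response and check every record."""
    records = json.loads(raw)
    for rec in records:
        if "id" not in rec:
            raise ValueError(f"record missing 'id': {rec!r}")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: unexpected {dim}={rec.get(dim)!r}")
    return records

# Hypothetical single-record response for demonstration.
raw = ('[{"id":"ytc_x","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"none","emotion":"approval"}]')
records = parse_coding_response(raw)
print(records[0]["emotion"])  # approval
```

A check like this catches truncated or malformed model output before it reaches the coded-comment table, which is useful because LLMs occasionally drop a field or invent a value outside the codebook.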