Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
Absolutely agree with this video about the problems of AI. I am an architect an…
ytc_UgzrHy-ZE…
From your story i can tell that you are not aware of the right way of usage of A…
ytc_UgyJ2xl7h…
The old software adage with programing still applies to AI and AI prompts. Garba…
ytc_Ugx8GfvTB…
Sooner or later, a super intelligent AI will take control over the civilization.…
ytc_Ugy9xdoC6…
This isn't accurate. I work in the industry. Top AI talent already works for a t…
rdc_mz04hp0
I think AI is far from replacing human physicians because of the fact that medic…
ytc_UgwhOkBZQ…
@gabrielvitor-pj5hhArt by Da Vinci's definition is a way to understand both hu…
ytr_Ugw2uyKu8…
Since that teenager unalived himself, I've been testing ChatGPT to see how that…
rdc_njiqqym
Comment
I work in sales for big building projects, and trust me, building trust is essential. Yes, an AI could talk and write better responses to every question on the planet than I can in an audition, but it wouldn't be received the same way. Prospects and clients are humans; they need to trust a human before launching a big project. We also work for institutional clients and professionals mandated by people, which requires even more trust.
If every human were perfectly rational and judged offers based solely on measurable, factual points, then yes, sales could be totally replaced by AI and automation in general. And that is already largely the case with chatbots, email templates, etc. But AI is a tool, not a human. It can't build trust, and can even convey distrust. How would you and your project feel highly considered by a company if you were never offered the chance to talk to or hear from a human who explains the offer, answers your questions, etc.? I must say most decisions are not fully rational, and the trust built with the salesperson or future project manager is essential to the choice. (Sometimes sadly so, because the quality of the sales pitch isn't the quality of the product or service behind it. To me, it's up to salespeople to be ethical and work only for solutions that are worthy and good for the customer first, and more generally for society and the environment, but this is sadly rare.) Anyway, an AI and a salesperson can tell you the same thing, but the passion in the eyes, the vulnerability to say "I don't know, let me check and get back to you" and then actually doing it in time (keeping promises), showing real listening and care for your problem: only a human can provide that.
I guess the more time- and money-consuming the project or service is for the customer, the less sales can be replaced by AI. (I agree I don't need to feel particularly taken into account if I buy a pair of shoes or a can opener, but I would still want customer support if it doesn't fit, etc.) So, as a lot of comments have said for numerous sectors, AI can help improve, or sometimes deteriorate, the conditions of some jobs, but mostly I HOPE it will concentrate human activity on high-value tasks. By high-value I mean relations between humans and tasks involving skill, trust, creativity, etc., not repetitive actions. The question remains: with all the automation and the loss of "less valued" skilled jobs, how do we, as a society, redistribute employment so everyone can earn a living?
As with any tool that likely increases productivity, we have to ask: what societal benefit do we want to take from it? Today most companies only answer "more profit", and that is not an AI-related problem.
youtube
AI Jobs
2025-10-07T07:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[{"id":"ytc_Ugwk_xyof-7KmaWS0iJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwC9pWTHWe-dqnBYER4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxRloeiPv5Mu0HIgYV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugxcpy2l9R-YfcEWAoF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwyvZ3MPQWuFXJJryl4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyHv5KuP_aKRJ9ZuFp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxktY6TtCqWxx5Scv94AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwwQxbjGyjk_pWlwjx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwzbJsOAMKo9ue0w3Z4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyZGhdk4RKuC4-BV_B4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}]
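The raw response above is a JSON array of per-comment codes across four dimensions. A minimal sketch of how such a response could be parsed and validated, assuming the allowed values are exactly those seen in the table and response above (the real codebook may define more categories):

```python
import json

# Allowed values per coding dimension, inferred from the codes seen above.
# Assumption: the actual codebook may include additional categories.
SCHEMA = {
    "responsibility": {"none", "distributed", "ai_itself"},
    "reasoning": {"consequentialist", "deontological"},
    "policy": {"none", "regulate", "ban"},
    "emotion": {"indifference", "fear", "outrage", "approval"},
}

def parse_codes(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and check each record against SCHEMA."""
    records = json.loads(raw)
    for rec in records:
        if "id" not in rec:
            raise ValueError(f"record missing comment id: {rec}")
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: invalid {dim}={rec.get(dim)!r}")
    return records

# Hypothetical one-record response for illustration.
raw = ('[{"id":"ytc_X","responsibility":"none","reasoning":"consequentialist",'
       '"policy":"none","emotion":"indifference"}]')
codes = parse_codes(raw)
print(len(codes), codes[0]["emotion"])  # 1 indifference
```

A failed validation raises immediately with the offending comment ID, which makes it easy to route malformed LLM output back for re-coding rather than silently storing bad labels.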