Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
This why doctors constantly study. So, I am not surprised because ChatGPT sits on more cumulative knowledge than an individual doctor, or small a group of doctors. But ChatGPT is trained on so much data that it can struggle with some specialized things, or the data it was trained on that subject is limited because OpenAI doesn’t have access to everything. There is something called fine-tuning where you can give ChatGPT some data on a specific subject and its answers can improve.
Source: youtube · AI Jobs · 2024-04-15T11:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgwxwL9djH-OOzpA-_14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzHuKko5RwmGKx36fp4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxpQvDzycaCHhXn54l4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzH2KS-VKNR43yE51Z4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz77KALqUGEywrlAv54AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"fear"},
{"id":"ytc_UgwW2xnCZNzcivjq44B4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugy1MWta8ppRMAob0BJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwxMPc9P96sgPYnGUB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzLDs48W1rDQkyyJDx4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxJiIxJycgtHJiNY114AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
```
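A raw response like the one above can be parsed and sanity-checked before the coded dimensions are stored. A minimal sketch follows; the allowed category values are inferred only from the labels visible in this batch, so the real codebook may contain additional categories.

```python
import json

# Allowed values per dimension, inferred from the labels seen in this
# batch (assumption: the full codebook may define more categories).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "user"},
    "reasoning": {"consequentialist", "deontological", "virtue"},
    "policy": {"none", "ban", "liability"},
    "emotion": {"approval", "outrage", "indifference", "fear", "mixed"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and check every coded dimension."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim}={rec.get(dim)!r}")
    return records

# Usage with a one-record batch in the same shape as the response above:
raw = ('[{"id":"ytc_example","responsibility":"none",'
       '"reasoning":"virtue","policy":"ban","emotion":"fear"}]')
print(len(validate_batch(raw)))  # → 1
```

A check like this catches the common failure mode where the model invents a label outside the codebook, so malformed batches can be re-queued instead of silently polluting the coded dataset.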