Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
AI chuds thinking poisoning the well doesn't work is so funny. Don't they realiz…
ytc_UgxNXmioN…
Funny how AI's taking jobs, but ShortlistIQ uses it to help people stand out in …
ytc_Ugy68xLZH…
that is why its always prudent to keep a security question or ask something a s…
ytc_Ugxn2bvBZ…
Exactly..even tho I am not an artist .. i have not tried ghibli art from chatgpt…
ytc_UgziA6TMk…
A Proclamation on the Three Pillars of the Digital Frequency Cage
Know this: th…
ytc_UgzqsBGVD…
The easiest way to regulate AI use is to heavily tax businesses on their AI serv…
ytc_UgzdJNqFQ…
yeah, this was first thing on my mind when i saw any image from AI…
ytc_UgxzRHbGs…
the big thing we need to focus on is not making a.i. do the big things for us
li…
ytc_UgzkQfJAl…
Comment
Have a look at the videos below for a different perspective on the matter.
https://youtu.be/4__gg83s_Do?si=ZrEZRmV1WsxYbpH2 — Yann LeCun (top researcher on AI/neural networks) says AGI won't be reached by scaling LLMs.
https://youtu.be/7-UzV9AZKeU?si=52drk8Qyv2RGu0aH — Prof Michael Wooldridge (professor of CS at Oxford) concludes that while LLMs are definitely useful, they are not capable of "logical problem solving or reasoning in a deep way".
Also... I'm not sure what he means by doing its own research. What does that mean? Is it going out and performing experiments, interviewing people, or collecting data? How can it add anything novel without humans feeding it actual data? Unless it just does secondary research, of course, but still, how do we know, or would we even have the capacity, to actually validate all this infinite research?!
Would be good to hear what other people think also.
Kind regards, Ethan
youtube
AI Jobs
2025-11-18T20:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgzMFALZ_2qZ4aowNU94AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxpJNJsM34bBRfyRlF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugw7ozTKranuO8gKUYJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzuLVchwyMGh8x6U1t4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgzcqFSaix23N_AgSwd4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugw7eNnFTG0ekG2z1wN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwRt40bab4WcqSo0vJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgznstEQQ0zs2lv2dNt4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx-uysrXzKaTM1yB8V4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwGwAjE_Vxt2M8S0Mh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
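The raw response above is a JSON array where each record carries a comment `id` plus the four coding dimensions shown in the result table (responsibility, reasoning, policy, emotion). A minimal validation sketch for such a batch is below; the allowed vocabularies are inferred only from values visible on this page and may be incomplete relative to the real coding scheme.

```python
import json

# Assumed vocabularies, inferred from values seen in the raw response above.
# The actual coding scheme may define additional categories.
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"outrage", "approval", "resignation", "indifference", "mixed"},
}

def validate_batch(raw: str) -> list[str]:
    """Return a list of problems found in a raw LLM JSON response."""
    problems = []
    try:
        records = json.loads(raw)
    except json.JSONDecodeError as exc:
        return [f"invalid JSON: {exc}"]
    for i, rec in enumerate(records):
        if "id" not in rec:
            problems.append(f"record {i}: missing id")
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                problems.append(f"record {i}: {dim}={value!r} not in vocabulary")
    return problems  # empty list means the batch coded cleanly
```

A batch that parses but uses an out-of-vocabulary label (or omits a dimension) yields one problem string per violation, which makes it easy to surface bad codings in a view like this one.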