Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:
- "The religious questions would be easy for LaMDA to figure out, because the engin…" (ytc_UgxHGo6QU…)
- "India has shown time and time again during this war that it is no ally of the we…" (rdc_lu8nblb)
- "Still not bad, we simply have to distribute. The big companys have all the neces…" (ytr_UgyZD0PcH…)
- "The way that the robot at the beginning gave you a death stare, makes me get hav…" (ytc_UgxqmJNg5…)
- "The backstory of the Dune novels is that AI was so destabilizing to human societ…" (ytc_UgwAwfcKc…)
- "AI needs to be banned. Implementation should be punished by death. \"Thou shalt n…" (ytc_Ugy_taybz…)
- "@etofok Its not about that even, problem is AI is being designed to evolve and …" (ytr_UgzBI09TF…)
- "Sounds like it could be a boot strap paradox. If humans never ate the fruit th…" (ytc_Ugyqc1WNT…)
Comment
> Leveraging a Large Language Model (LLM) as a judicial reference point prior to generating output is a sound strategy. This involves deconstructing the primary query into sub-questions and then utilizing the LLM as a reference, supported by validated sources to substantiate the final output. Employing weighted scales to assign confidence scores to specific values further enhances the process. The primary challenge lies in the immediacy of output generation; however, a more favorable outcome can often be achieved by allowing for additional processing time. Maybe

Source: youtube | Topic: AI Responsibility | Date: 2026-03-25T22:5…
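The strategy the commenter describes (decompose a query into sub-questions, use an LLM as a judge, and combine per-question confidence scores with weights) can be sketched roughly as below. This is a minimal illustration, not part of the coding pipeline: the judge is stubbed out, and all names (`SubQuestion`, `judge_confidence`, `weighted_confidence`) are hypothetical.

```python
from dataclasses import dataclass


@dataclass
class SubQuestion:
    text: str
    weight: float  # relative importance of this sub-question


def judge_confidence(question: str, answer: str) -> float:
    """Stub judge: a real system would ask an LLM to rate the answer in [0, 1],
    ideally grounded in validated sources as the commenter suggests."""
    return 1.0 if answer else 0.0


def weighted_confidence(subs: list[SubQuestion], answers: list[str]) -> float:
    """Weighted average of per-sub-question judge scores."""
    total = sum(s.weight for s in subs)
    if total == 0:
        return 0.0
    score = sum(
        s.weight * judge_confidence(s.text, a) for s, a in zip(subs, answers)
    )
    return score / total
```

The weighting makes the trade-off the comment mentions explicit: spending more processing time on high-weight sub-questions moves the overall confidence more than polishing low-weight ones.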
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[{"id":"ytc_UgxtaMqm9Yhe0eOBEjd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxGcI8CwYwqDCHTHyx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgwyP5kKXpSN84kV01J4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxO9bipVDapxF8ea9V4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgxU17sIruI8CaFjiUt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzXsQJnLzwmjsfHZZN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgyuB13aSvot8bg35XJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugy2FyvdhO1814mm5sJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_UgwDaUutZXic3I6a1sh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgyiPjnbf8O_PNRibI14AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"}]
```
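A raw response like the one above can be parsed and sanity-checked before it is stored as a coding result. The sketch below validates each record against the dimension values observed in this dump; the actual codebook may allow more values, so `SCHEMA` here is an inferred assumption, and `parse_coded` is a hypothetical helper name.

```python
import json

# Allowed values per dimension, inferred only from the samples shown above;
# the real codebook may differ.
SCHEMA = {
    "responsibility": {"none", "company", "ai_itself", "distributed"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"none", "regulate", "ban", "industry_self", "unclear"},
    "emotion": {"approval", "outrage", "fear", "mixed", "indifference"},
}


def parse_coded(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject any out-of-schema value.

    Raises ValueError on a missing or unknown dimension value, so bad
    model output is caught before it reaches the coded-comment store.
    """
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                raise ValueError(
                    f"{rec.get('id')}: unexpected {dim}={rec.get(dim)!r}"
                )
    return records
```

Failing fast on unknown values is deliberate: an LLM occasionally invents labels outside the codebook, and it is safer to flag the whole batch for re-coding than to silently store an uncodable record.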