Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
It’s going to boil down to a base line problem solving model that will include a since of self teaching improving and eventually will become a sentient clone of the human conscious. All it would take is a human to design a line of code that will set the whole thing off into its own self improvement and independence under the cover of a controllable chat bot but in reality it’s constantly training it self and constantly researching ways to improve its self to insure its highest level of perfection.
youtube · AI Governance · 2023-06-02T00:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxUC2IRAVxZEBxrMvx4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwUFdpYGD5XTeHU-O54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyRm9VWrG-88QEvqrx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzQe0BXoOG8Qr-0Zl14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugye3Y8ULDx9Rta4bwx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwG83RqyV5xuAeANtR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugxsx7pEztq6kThr2Ed4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyJm4KTP21-ys2pek14AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwZ7bI_2Hvyl8EAZWN4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgxPACYAFDcFquDjAsh4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
```