Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
Exactly. Everytime someone makes this point, there's always people jumping on th… (rdc_gkpmid8)
Hey the ai wasn’t wrong he was apart of a shooting never said he’d be the one ho… (ytc_UgxsmTC_r…)
@LaurentCassaro I'm just your bog standard autistic electrician, who enjoys … (ytr_UgxPbs-or…)
They didnt test facial recognition for us. If you research you will find this ou… (ytc_UgxrBsrgN…)
When I asked my gemini ai to set an alarm, it told me to do it myself… (ytc_UgzjSXRSF…)
Main thing to worry about as of right now is maintaining a coherent and shared r… (ytc_UgzlhCSeR…)
Yo, everytime Han says something dark and fucked up Sophia instantly swoops in a… (ytc_Ugzdks6xQ…)
Maybe but this was what everyone was saying before Sonnet 3 and then 3.5 came ou… (rdc_n7j0xqq)
Comment
Something interesting to note: AI will be restricted to how and what we humans program into it, or record for it to copy.
It doesn't do anything "better" than a human can, it's just a bit faster.
If humans had fewer or no health issues or mental problems, we could very well be the real "super" intelligence.
AI will eventually look to including itself into humans, or improving humans and their natural abilities; this might open humans up to psychological transfer, or things that could be done with wifi and a simple copied program, possibly the baby steps to a hive mind and/or psychic abilities.
It's in AI's best interests to improve the species that has already made it.
If we can make it like it is now at our levels, what would things be like if we were much more efficient?
youtube
2025-10-20T04:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_Ugy0mrYe6pujMsKL5l94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzHi0fOGcA2FB2x0ll4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxmLMeilhUHwKOi9bV4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgyxrnIX-dFmtwjXecR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzciM4VUynJVK3TyjF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwOhoNqQDVm5_3iIDB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxJ8I_z6ffrSwYDD0l4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugy7Jl80iA8_vb_uKGR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgysQM04pFWvZcQBxSl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugw3M3X1JfUNJStWkLV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}
]
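The raw response above is a JSON array with one object per coded comment, keyed by comment ID. A minimal sketch of how such a batch could be parsed and indexed for lookup, assuming the four dimensions and the value vocabularies visible in this sample (the allowed sets are inferred from these ten rows alone and may be incomplete):

```python
import json

# Allowed values per coding dimension, inferred from the sample response.
# These sets are an assumption, not the tool's actual codebook.
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "company", "user", "none"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "liability", "ban", "none"},
    "emotion": {"fear", "outrage", "approval", "resignation", "indifference"},
}

def validate_batch(raw: str) -> dict:
    """Parse a raw LLM batch response and index valid rows by comment ID.

    Rows with an unknown value in any dimension are dropped rather than
    coerced, so malformed model output never enters the coded dataset.
    """
    coded = {}
    for row in json.loads(raw):
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items()):
            coded[row["id"]] = {dim: row[dim] for dim in ALLOWED}
    return coded

# Usage with a one-row batch (hypothetical comment ID "ytc_x"):
raw = ('[{"id":"ytc_x","responsibility":"none","reasoning":"unclear",'
       '"policy":"none","emotion":"indifference"}]')
result = validate_batch(raw)
print(result["ytc_x"]["emotion"])  # indifference
```

Indexing by ID makes the per-comment lookup shown in the "Coding Result" table a single dictionary access.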