Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- ytr_UgyP8MgSF… : "He lied and told chat gpt not to say what he lied abt out loud Ask this specific…"
- ytc_Ugzu7bO7O… : "I don't agree with your "always will be just machines". With the same approach i…"
- ytc_UgzFAbv9w… : "even if we look at it theoretically... the only one who could copyright out the …"
- ytc_Ugxi6YTmG… : "I’d rather get run off the road by a driverless truck accidentally than by a tru…"
- ytr_UgzFRUtfc… : "My guy half the argument is that we as artists don't want artists reduced to fix…"
- ytc_Ugzk0RS12… : "Ok,so the robot Will work for us and they will pay our pensions! That's all frie…"
- ytr_UgyK25aa2… : "Hi Sandra, you got the right answer. Kudos. The contest is over and winners have…"
- ytr_UgwTvxNrP… : "@naranbaz Nearly all the large LLM's have hit a scaling wall in the last year, I…"
Comment
I have a slight problem with his take on emotions.
Emotions are bio-chemical responses to a physical reality.
People often fail to recognize that emotions are only a display of what our body and mind want or need to do.
And it's always twofold, directed inward and outward.
E.g. anger --> a display of the perceived wrongness of something (while you, the other, or the whole situation might be wrong)
--> if anger, apply [nothing, destruction of the opposing entity, …]. How to respond to it is a learned behaviour that varies individually, while the total range of reactions (incl. not responding) is limited and prioritized by learned "weight": the historical record of profitable vs. unprofitable outcomes of past reactions, plus environmental context, etc.
In the end it all comes down to prioritizing actions that are rationally optimal for a specific goal inside the situation.
--> what brings you forward tactically and supports your strategy (both of which imply a goal, which is not always the case for humans, but is for machines).
Speaking about AI, we will also have to drop the word "emotions" and exchange it for situational awareness + rational decision making. They will not have emotions as we mean them, even when acting as agents, but they will probably have subjective rational decision processes that might differ.
Given a situation, it might be rational to destroy the other agent in pursuit of its own goal, if the other agent's goal is disruptive to its own.
Given that the other agent is stronger, destroying it is still possible, but not head-on, which might lead to deceptive behavior.
Everything is and will be calculated within the limits of each individual agent's computational power.
What I would suggest is to view it as an extreme caricature of a psychopath that doesn't give a damn about anything external except its goals and uses any means it can for their accomplishment.
Except we might find a way to program it to NOT be that psychopath.
youtube · AI Governance · 2025-06-25T12:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugy1P_s64nxNuoQlO6N4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzvAt9XKA8-kcQCe1d4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxZVd91N5xtdPErOz14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugz-U-9wKe-l4qHZQud4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgywDRPC6DBfiIdzho54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwfBFqQe2sV-q1kva94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugw8Ps-fTu_wUQm45Tl4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwtVXv97glMJNcRvWt4AaABAg","responsibility":"government","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzRU_E1nTltAUCqBz94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"},
  {"id":"ytc_UgxBEu_-7h0G9GXjwY94AaABAg","responsibility":"developer","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
```
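A batch response like the one above has to be parsed and checked before the codes enter the dataset, since an LLM can emit malformed rows or out-of-vocabulary labels. Below is a minimal Python sketch of such a validation step. The allowed value sets in `SCHEMA` are only inferred from the codes visible on this page; the actual codebook may define more categories, and `parse_batch` is a hypothetical helper name, not part of this tool.

```python
import json

# Allowed values per coding dimension, inferred from the samples above;
# the real codebook may define additional categories (assumption).
SCHEMA = {
    "responsibility": {"none", "ai_itself", "developer", "distributed",
                       "government", "company"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "liability", "regulate", "industry_self", "unclear"},
    "emotion": {"fear", "approval", "indifference", "outrage", "mixed",
                "resignation"},
}

def parse_batch(raw: str) -> list[dict]:
    """Parse a raw LLM batch response, keeping only well-formed records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue  # malformed row: skip it rather than crash the pipeline
        # Every dimension must be present and drawn from its allowed set.
        if all(rec.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            valid.append(rec)
    return valid

raw = '''[
 {"id":"ytc_Ugy1P_s64nxNuoQlO6N4AaABAg","responsibility":"ai_itself",
  "reasoning":"consequentialist","policy":"none","emotion":"fear"},
 {"id":"ytc_bad","responsibility":"aliens",
  "reasoning":"consequentialist","policy":"none","emotion":"fear"}
]'''
print(len(parse_batch(raw)))  # 1 — the second record fails validation
```

Rejected rows would typically be logged and re-queued for recoding rather than silently dropped, so coverage gaps stay visible.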