Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- AI in fiction: *Goes rogue and enslaves/kills humanity* / AI in reality: "Despite … (`ytc_Ugx7nRJlg…`)
- @eveefi So do computer when it arrived. You lost your job? My was amplified by L… (`ytr_Ugz77_oWl…`)
- Make dicision indipendently / Learn principal / Copy maching / Information truth / Socia… (`ytc_Ugy92Sel-…`)
- Developing your own style as an artist is just mixing other styles of other arti… (`ytc_Ugz52eXbX…`)
- Wasn't sure about AI in hiring, but after using ShortlistIQ, I see how it can gi… (`ytc_UgwWg2KBz…`)
- I mean if you call yourself an „Artist“ by demanding an image is like my client … (`ytc_Ugz86pp3-…`)
- IRL the problem is that we don't value people unless they're struggling for mone… (`ytc_UgxGqJiee…`)
- Hey I am a 16 year old my name is Sourav and I watch your video I have built my … (`ytc_UgyBFmxTs…`)
Comment
Talking about what we do not understand. I agree with Wolfram that we tend to anthropomorphise and in fairness we do our best to make computers appear to be like us right down to robot humanoids. It is difficult to look too far into the future but to my mind two serious problems are firstly that we will deskill ourselves so much that many people will become totally dependant on tech. The second issue has already happened with computerisation of the stock market. You automate something that is told to do a specific thing in a specific situation but you have not foreseen a positive feedback loop that will do what you do not want, devalue the market in seconds. In such a situation someone presses a kill switch but it might be more dangerous with say automated warfare. I suppose this is and example of Stephen Wolfram’s computational irreducibility - the inductive process that has to be run to find out where the glitch is. Previously say writing code for a nuclear reactor control, a very extensive testing of the programme would be carried out and of course already has this capability. I suppose what I fear, (anthropomorphising!), is an over confident Dunning Kruger effect on a super smart system that is not quite as smart as it needs to be.
youtube · AI Governance · 2024-12-09T23:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_Ugzyw7P6UIG7qr9orm94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugz5qfO2p5ouopqxF9J4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugw5jx3JN_iJjVdgF-V4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxgabcdIuRhNkDAGoZ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzK0cxdklJv4XjEKQV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugwk38JoiF5nupttEiV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"outrage"},
{"id":"ytc_UgxUpWrqOtfeJUqbHoB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugw9Yn37_qtH16HPxL54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzRiCvRXTjY9wSaOpB4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgxrxwC9GQeGPZSOxHV4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"mixed"}
]
```
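The "Look up by comment ID" feature amounts to parsing a raw batch response like the one above (a JSON array with one coding object per comment) and indexing it by `id`. A minimal Python sketch, assuming the raw response is valid JSON; `index_by_comment_id` is a hypothetical helper name, and the `raw_response` literal excerpts two rows from the array shown above:

```python
import json

# Excerpt of a raw LLM batch response: each element codes one comment
# on four dimensions (responsibility, reasoning, policy, emotion).
raw_response = """
[
 {"id": "ytc_Ugw9Yn37_qtH16HPxL54AaABAg", "responsibility": "user",
  "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
 {"id": "ytc_UgzRiCvRXTjY9wSaOpB4AaABAg", "responsibility": "none",
  "reasoning": "virtue", "policy": "none", "emotion": "approval"}
]
"""

def index_by_comment_id(raw: str) -> dict:
    """Parse a raw batch response and key each coding by its comment ID."""
    return {row["id"]: row for row in json.loads(raw)}

codes = index_by_comment_id(raw_response)
print(codes["ytc_Ugw9Yn37_qtH16HPxL54AaABAg"]["emotion"])  # mixed
```

With such an index, displaying the "Coding Result" table for an inspected comment is a single dictionary lookup rather than a scan of the raw text.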