Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- I read a great “science fiction” thriller about 40 years ago entitled, “Colossus… (ytc_UgzRYFYqT…)
- Those that use AI to create art aren't artists. The software is the "artist", yo… (ytc_Ugwq5AlpV…)
- There is NO INTELLEGENCE in A.I. A.I. gets its smarts from programming; but the… (ytc_UgwOzhwJ_…)
- Great observation! Sophia's nuanced response definitely reflects a strategic und… (ytr_UgyxVRrX6…)
- @Lushora.Store01 Right! And I am sure after the owner pays $5K a month for the A… (ytr_UgwXpG9Yo…)
- And now AI has access to this information and can learn to deceive us better.… (ytc_UgzJ332DM…)
- AI regulations can really only be (potentially) effective if passed by the UN vi… (ytc_UgzPhWlmK…)
- 6:20 a minor thing, but I would make the claim AI can't make those images withou… (ytc_UgyD5ATRB…)
Comment

> I think the capabilities of AI are extremely exagerated. While some jobs are at risk, i don't think it's to the extent being claimed. Companies that are actually implementing ai are having to back peddle. They're finding that it can make one person much more efficient, but cannot replace them. If you want to know what ai can actually do, compare their claims to what happens when companies implement it. You'll find some pretty large gaps between claims and actual implementation. Jobs will be reduced, but not completely eliminated.

youtube · AI Governance · 2025-09-05T12:3… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[{"id":"ytc_UgxzDglcJ5BjRhoU1D14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyA9ea24KkhbHGWemB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxKSegBc7MKYnjNK6V4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwSZxE5ZG_pQYjg6Lx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzI5ef7Rl_shOp4HTZ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxO6l6rs67qXN33yft4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgykuxCsaS1zHj-xr5h4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwroszhiZJwg6Uxj9Z4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugw0NRI4FV8epB9FUch4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzoRdWF6xMWKusaHuF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"}]
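The raw response above is a JSON array with one object per coded comment, keyed by `id` and carrying the four dimensions shown in the table (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of how such a payload could be parsed and looked up by comment ID (the parsing approach and variable names here are illustrative, not the tool's actual implementation):

```python
import json

# Raw model output: a JSON array of per-comment codings.
# Shortened here to one entry from the response above.
raw = (
    '[{"id":"ytc_UgxzDglcJ5BjRhoU1D14AaABAg",'
    '"responsibility":"company","reasoning":"consequentialist",'
    '"policy":"none","emotion":"indifference"}]'
)

# Index the codings by comment ID for O(1) lookup.
codings = {row["id"]: row for row in json.loads(raw)}

# Look up the coding for a single comment.
coding = codings["ytc_UgxzDglcJ5BjRhoU1D14AaABAg"]
print(coding["responsibility"], coding["emotion"])  # company indifference
```

In practice you would also want to validate that each object carries all four expected dimension keys before indexing, since a malformed model response would otherwise surface later as a `KeyError`.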