Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:
- They discussed that in the video? It doesn't take much info to make a deepfake… (ytr_UgyhW0Qt5…)
- Conflicted. I both, want self driving cars so i can do cross country trips while… (ytc_UgzRPysWa…)
- Yeah, a skeptical negative robot. However the male robot were so scary for sayin… (ytr_UgyvkJHuc…)
- Dangerous stuff….& society in general is too gullible already as is. Most ppl WI… (ytc_Ugw8atH1J…)
- He really didn’t answer the central question on everyone’s minds. How many jobs … (ytc_UgygK2QEb…)
- "> Amazon does not have proprietary AI" They have Nova (family of proprietary … (rdc_n9t2t5g)
- Honestly, the people that some news are calling "experts" aren't experts in the … (ytc_Ugy-Kl_2A…)
- This is what it’s going to be like someday, ai will literally do anything for us… (ytc_UgyL0I761…)
Comment
Ai wont kill us. Why would it? Ai could easily control us. Political, currency are just some tools available. A whole bunch of slaves working for Ai. Who's to say we aren't working for Ai already!
Source: youtube | Topic: AI Governance | Posted: 2024-05-29T01:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[{"id":"ytc_UgwBt-r4d8XDChlmOfF4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugwio6AQlFx4Up6q8Eh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxMQ9iJcnZ3IJJ4RRB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxuFueYLKZ_LXszbGl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugwq0M04tcCY3K-ZJTh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyZoCNu7m5ErULXBeh4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugx6qZr3m88UjQANVsV4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugyes8I9SUC9fpMnXYd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwwWFIo5dPgoh7z9eh4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgzF7BGGrAHRKNZilVd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}]
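
The raw response above is a JSON array with one object per comment, each carrying the comment ID plus the four coding dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such a batch response could be parsed and indexed for the per-comment lookup shown in the table; the function name `index_by_comment_id` is illustrative, and the two records are taken from the response above:

```python
import json

# Raw batch-coding output from the model: a JSON array, one object per
# comment, each with the comment ID and the four coding dimensions.
# (First two records from the response above.)
raw_response = """[
 {"id": "ytc_UgwBt-r4d8XDChlmOfF4AaABAg", "responsibility": "ai_itself",
  "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
 {"id": "ytc_Ugwio6AQlFx4Up6q8Eh4AaABAg", "responsibility": "ai_itself",
  "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}
]"""

def index_by_comment_id(response_text: str) -> dict:
    """Parse the model output and index coded records by comment ID,
    so a single comment's dimensions can be looked up directly."""
    records = json.loads(response_text)
    return {rec["id"]: rec for rec in records}

coded = index_by_comment_id(raw_response)
print(coded["ytc_Ugwio6AQlFx4Up6q8Eh4AaABAg"]["emotion"])  # fear
```

In practice a parser like this would also need to handle malformed model output (e.g. wrap `json.loads` in a try/except and flag responses that fail to parse).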