Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Kinda but not exactly. When you're thinking of 'remaking', you don't need to repeat all the engineering and software programming. AI models are trained on a well-defined set of data, and once trained, a model is static; some amount of re-training is possible, but it isn't typical for mass-market systems. However, once you have the base program, you can train it from scratch on different or additional data as many times as you want; you don't need to remake the entire thing. This is actually how a lot of AI advancements are made: a better dataset can do A LOT to improve the finished system without needing extensive software rewrites.

So while it is true that you cannot literally right-click-delete an item from a trained model, you can absolutely take the same model and retrain it on data that excludes whatever it is you want to exclude, without significant software engineering; the only cost is the cost of running the hardware. Training modern AI is expensive (especially GPTs), but this isn't an issue if you are even remotely responsible about your source dataset and take care to remove material from it when necessary. Datasets are actually the computationally easy part of AI, so removing your work from a training dataset would be practical if the providers were actually willing to collaborate.

As for models that already contain unauthorized data or outright illegal material (like many based on Stable Diffusion), whether they will be taken down depends on how the law is written and interpreted. For example, the EU will require a detailed summary of what a system was trained on, and models that do not respect this, or whose summaries indicate violations, will likely not be legal to distribute in the EU.
youtube AI Responsibility 2024-09-07T14:3… ♥ 5
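The commenter's point about retraining on a curated dataset boils down to a filtering step run before training. A minimal sketch, assuming hypothetical record fields (`source`, `text`) and a hypothetical opt-out list; real pipelines would filter at much larger scale, but the operation is the same:

```python
# Sketch of excluding opted-out material from a training dataset
# before retraining. All names and records here are hypothetical.
opt_out_sources = {"artist_a", "artist_b"}  # creators who withdrew consent

training_records = [
    {"source": "artist_a", "text": "work to be excluded"},
    {"source": "archive_x", "text": "licensed material"},
]

# Keep only records whose source has not opted out; the model is then
# retrained from scratch on `filtered` instead of the full dataset.
filtered = [r for r in training_records if r["source"] not in opt_out_sources]
```

As the comment notes, the expensive part is the subsequent retraining run, not this dataset curation step.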
Coding Result
Dimension      | Value
Responsibility | none
Reasoning      | unclear
Policy         | unclear
Emotion        | indifference
Coded at       | 2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytr_UgwE1scVjZgJ5TXDR154AaABAg.9yw8qbn4IOOA0r5Hr1AlZg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytr_UgwE1scVjZgJ5TXDR154AaABAg.9yw8qbn4IOOA0rk6oGiU_L","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytr_UgwE1scVjZgJ5TXDR154AaABAg.9yw8qbn4IOOARO0aiK-Cun","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytr_Ugzo7XFo2AsurKF_bFF4AaABAg.9yaKnios-w6A3WoRgWYf9u","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytr_Ugzo7XFo2AsurKF_bFF4AaABAg.9yaKnios-w6A3WokwFVnaa","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytr_UgxbYoUaQWDVM0InxMN4AaABAg.9yPGs_n0ryM9yV_hrfoX-O","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytr_UgxbYoUaQWDVM0InxMN4AaABAg.9yPGs_n0ryMA0pW09xO-Z6","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgxbYoUaQWDVM0InxMN4AaABAg.9yPGs_n0ryMA13skwGuQZ_","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytr_UgyNVccEFRafUNKo-Tl4AaABAg.9yPFuZ6b1EBA7yB1kpA4GI","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytr_UgyNVccEFRafUNKo-Tl4AaABAg.9yPFuZ6b1EBA858z53d0cI","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
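The per-comment coding result shown above can be recovered from this raw response by parsing the JSON array and indexing the records by `id`. A minimal sketch, using the first record from the response (the exact parsing code of the actual pipeline is not shown here and is assumed):

```python
import json

# First record of the raw LLM response, as it appears in the output above.
raw = '''[
  {"id": "ytr_UgwE1scVjZgJ5TXDR154AaABAg.9yw8qbn4IOOA0r5Hr1AlZg",
   "responsibility": "none", "reasoning": "unclear",
   "policy": "unclear", "emotion": "indifference"}
]'''

# Index records by comment id so each comment's coding can be looked up.
records = {row["id"]: row for row in json.loads(raw)}
coding = records["ytr_UgwE1scVjZgJ5TXDR154AaABAg.9yw8qbn4IOOA0r5Hr1AlZg"]
```

The looked-up `coding` dict carries the four coded dimensions that populate the Coding Result table (responsibility, reasoning, policy, emotion).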