James B Maxwell
2 min read · Nov 28, 2022


Interesting article. Yes, I think ML-based art is revealing how broken copyright really is. For this new domain we need to think beyond traditional notions of individual authorship and start thinking of art-making as a human endeavour with its own implicit value. These models make a compelling case that the value of human art-making can't reasonably be measured by the revenue generated by individual artists, for the simple reason that the models require data from artists of all types and levels in order to work their "magic". Arguably, "great" artists like Van Gogh have much less impact on ML art's capacities than "lesser" artists, simply because they're outliers and don't individually shift the overall learned distribution very much. So it's the human "creative class" as a whole that is responsible for the capabilities of ML-based generative art.

Any approach to attribution and/or remuneration that addresses that reality would bring us closer to a just system. Otherwise, human artists will continue to generate training content for ML*, and we'll have yet another case of tech riding on the backs of unpaid or underpaid human workers to generate huge amounts of cash for a tiny minority (and no, I'm not talking about the AI-based artists here, but rather the companies running the services that support the models; that's where the money gets made).

*Yes, I recognize that generative data augmentation will increasingly be used in training datasets. But human art will likely continue to drive ML's capacity for novelty for many years to come. Generative data augmentation is primarily an instance of "exploitation", not "exploration", so it will tend to favour "quality" over "novelty/innovation" in the creativity landscape, pushing model outputs in the same direction.
