James B Maxwell
2 min read · Aug 1, 2023


Multimodal models are certainly more likely to generate new knowledge than text-only LLMs, because they have the potential to reveal connections between representational systems that we might not otherwise identify. But they still depend on human knowledge as a starting point, so their abilities are in no way independent of human intelligence (to say nothing of the fact that humans created them in the first place).

This is why I think a lot of thought has to go into how best to attribute and distribute the value they generate. Far too little stock is put in the knowledge they're trained on, and far too much noise is made about the idea that they're somehow doing things "themselves". They're not. I think the best way to characterize these ML-based systems is to say that they are statistical aggregators of human knowledge, capable of operating on superpositions of human perspectives. That's their "superpower". The concern for me is that in essentially every case there's a mega-corporation holding the keys, which I'd say very much flies in the face of all the cuddly talk of "democratization". It could just as easily be seen as a case of Google selling humanity back to humanity. Of course, proponents will protest that "it's free", but we know by now that when tech is free, the user is the product.

To be fair, I am, broadly speaking, a "proponent" of AI myself. But I think it demands a profound rethinking of how we consider the connections between knowledge, effort, IP, patronage, value, wealth, and human life. Current thinking about economics, in particular, is in no way up to the task, imho.
