What's much more worrying is that the fraud was only detected because the Photoshop editing was poor. Generative media (think "deepfake") is already here, if mostly in AI research circles, so it won't be long before anybody can generate entirely convincing images of anybody doing anything. When that time comes, notions of evidence, proof, truth, justice, honesty, and so on will be turned completely on their heads: "seeing is believing" will no longer hold. And since human interaction and discourse via digital media is clearly in its infancy, with a huge portion of the global population routinely demonstrating how incapable they are of conducting themselves maturely and rationally online, I think we're in for some very rocky times. One possible positive outcome: social media platforms may come to be treated more like porn sites than valid sources of information or interpersonal communication.
Note that there is an active line of research into deepfake detection, along similar lines to fake-news detection, which could allow social media sites to flag and/or remove fake images and videos.