Originally Posted by
Joseph Kashi
AI has been evolving since the first rules-based "expert" systems of the late 1960s and early 1970s, which grew out of the original DARPA Project MAC and the work of Marvin Minsky and Seymour Papert at MIT when I was there. It's nothing new, just becoming increasingly visible and mainstream lately.
AI itself can be very useful in constrained situations where experts design and train systems for specific uses, such as medical diagnosis and reading medical imaging for subtle differentiations that initially escape a radiologist's review. These are very useful, but they are different in kind from the sort of free-ranging generative AI that's being so highly touted now.
I did a few experiments with supposedly AI-enhanced Google searches, and some of the results would make any mildly knowledgeable person laugh. For example, a search for the best normal lens for an 11x14 film negative came back with recommendations for various 50mm lenses. Everyone on this forum knows, or should know, that can't possibly be accurate.
Recently, a law firm was nationally embarrassed and fined by a federal court for submitting an AI-written brief without a lot of human review and care. Turns out that the AI could not find the "right" kind of precedential cases, so the generative AI just made up (i.e., "generated") a bunch of faux court decisions, with the names of real federal judges and all. The other side, not being idiots, checked that brief and quickly found that the citations to precedent were largely fictitious. The federal judge was "not amused" and, among other punishments, ordered the offenders to write letters of apology to every judge they cited. These guys are now famous, but as objects of ridicule and as examples used in legal ethics training nationwide. That's not a good way in which to become famous.
Now, this sort of error can mostly be caught and rectified with the right kind of programming and automatic "adversarial" systems before the generated product is displayed to the inquirer, but the fictitious-precedent brief illustrates that generative AI cannot substitute for real knowledge of a particular subject matter. At this point, at least, it's at best a useful adjunct rather than something that will burn down society overnight.
Certain kinds of professional photography will be essentially put out of business by this technology, but what's new about that? That's happened throughout the medium's history. Remember Kodak?
Programs have now been developed that let a content provider, such as a photographer, go through their online postings with newly developed "AI-poisoning" applications to render their content unusable or even destructive to generative AI. That removes the copyright-violation issue, but it may well be counterproductive in the long run, as it renders that person's or organization's content invisible to AI searches and hence invisible to users over the long haul. Do you want your Flickr images to be unsearchable?
Serious art photographers are probably less affected by AI than many others because AI isn't yet at the point of actual creativity. If your photos are truly your own artistic and emotional creation and not merely derivative of others (no more crowds in Iceland or Yosemite all photographing the same thing from the same vantage point at the same time, please!), then you'll likely remain differentiated from AI-generated images, which tend to just recursively reproduce and reinforce what's already online ad nauseam. AI is not at the point of generating something new, different, and emotionally resonant. Only you can do that.