Using tiny apertures produces images that, straight out of the camera, I find unusable for my purposes.
For example, below on the right are out-of-camera JPEGs of two tiny aperture captures. (These are taken from the JPEG preview images embedded in the raw files, resized to my normal 1300 pixel high display size. We are looking at the central portion of the 1300 pixel high images.) For my purposes and visual tastes the right-hand images are not usable.
On the left are the same images as they look after having some post processing. These do look usable to me, for my purposes of viewing on screen at 1300 pixels high.
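As an aside, the embedded previews mentioned above are ordinary JPEG streams sitting inside the raw file, so they can be pulled out without a raw converter. Here is a crude, format-agnostic sketch in Python; this is not part of my workflow, and it simply scans for JPEG start/end markers rather than reading the metadata offsets the way a proper tool does:

```python
def extract_largest_jpeg(raw_bytes: bytes) -> bytes:
    """Return the longest JPEG stream (SOI..EOI) found in raw_bytes, or b"".

    Crude: real raw formats record the preview's offset and length in
    metadata; this just scans for JPEG start/end markers, so it may pick
    up a thumbnail or a truncated stream.
    """
    best = b""
    start = 0
    while True:
        soi = raw_bytes.find(b"\xff\xd8\xff", start)  # JPEG SOI marker
        if soi == -1:
            break
        eoi = raw_bytes.find(b"\xff\xd9", soi + 3)  # JPEG EOI marker
        if eoi == -1:
            break
        candidate = raw_bytes[soi:eoi + 2]
        if len(candidate) > len(best):
            best = candidate
        start = soi + 3
    return best
```

In practice a tool like ExifTool (for example `exiftool -b -PreviewImage file.ARW`) extracts the preview properly via the metadata offsets; the scan above is only an illustration of what is sitting inside the file.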
Two questions come to mind:
- What post processing did I use?
- Are the results authentic, or are we looking at artificial intelligence/machine learning interpretations/inventions?
Post processing
The post processing products and workflow I use change from time to time. I'll describe here my current approach as of April 2021.
My aim is to produce JPEG images in the sRGB colour space for viewing as a whole image (i.e. no zooming in) on my PC or online at 1300 pixels high on a calibrated monitor in subdued lighting.
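That output target can be expressed programmatically. Here is a minimal sketch in Python using Pillow for anyone who wants to batch-resize to the same spec; the filenames are hypothetical, and note that it does no real colour management, so it assumes the source pixels are already effectively sRGB:

```python
from PIL import Image


def export_1300px_jpeg(src_path: str, dst_path: str, target_height: int = 1300) -> None:
    """Resize an image to a fixed display height and save as JPEG.

    Assumes the source is already effectively sRGB; JPEG output is 8-bit
    with no alpha, hence the convert("RGB").
    """
    with Image.open(src_path) as im:
        im = im.convert("RGB")
        scale = target_height / im.height
        new_size = (round(im.width * scale), target_height)
        im = im.resize(new_size, Image.LANCZOS)
        im.save(dst_path, "JPEG", quality=92)


# Hypothetical filenames:
# export_1300px_jpeg("capture.tif", "capture_1300.jpg")
```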
I use three editors in sequence: DxO PhotoLab, Adobe Lightroom and Topaz DeNoise AI. I sometimes also use Adobe Photoshop and very occasionally Topaz Sharpen AI.
I use two passes of the raw files from a session. The first pass is to get all the images from the session into a state where I can tell which of them might turn out ok after being processed. (They are in such bad condition before having any processing that I can't tell whether they would be useful or not.)
There are typically in the order of 400 to 600 images from a session.
For the A7ii double teleconverter setup the first pass involves:
- In PhotoLab, selecting all of the raw files from the session and applying a preset to them all. It is the same preset for all the images, irrespective of ISO etc. (I have one version of the preset for the A7ii, and another version for my other cameras.)
The preset sets the white balance to a fixed Temp and Tint, which I established using a test shot of the grey patch on a ColorChecker Passport with the KX800 flash and the diffusion setup I used in the session.
The preset sets a colour rendering profile based on the A7ii body (as known by PhotoLab).
The preset applies a mild amount of PhotoLab's ClearView Plus and Smart Lighting, pulls the highlights down somewhat and pulls the shadows up somewhat.
The preset applies PhotoLab's DeepPRIME noise reduction and chromatic aberration correction, both at their default levels.
The preset outputs full size uncompressed 16 bit TIFF files.
- Importing the TIFF files manually into Lightroom and applying Auto Tone to them all, along with a preset which adds a little Contrast, a fair amount of Texture, a little Clarity and even less Dehaze. I then export to 1300 pixel high uncompressed 16 bit TIFF files.
- Dragging the 1300 pixel high TIFF files into DeNoise AI and applying the AI Clear method with Auto settings, with export to JPEG.
The second pass involves:
- In Lightroom, a combination of adjustment and selection. Adjustments are mainly from the Basic panel (most often Exposure, Highlights, Shadows, Whites, Blacks), and sometimes Graduated Filters, Radial Filters and/or the Adjustment Brush. Also cropping. I occasionally do a round trip from Lightroom to Photoshop and back to do cloning that is too difficult or tedious to do in Lightroom.
- While working in Lightroom I mark images that I may want to use with a different colour label, ending up with a subset of the selected images that have now been processed individually.
- I clear out the contents of the folders containing the 1300 pixel high TIFFs and JPEGs, export the selected subset of images from Lightroom to 1300 pixel high TIFFs, and use DeNoise AI on them as in the first pass.

When I have got to the point of having a set of images that I am content to use, I go back to Lightroom and use the Spot Removal brush with Visualise Spots turned on to deal with dust spots. Having done that, I re-export to JPEG and reapply DeNoise AI. At this stage, or earlier, I may use settings different from the defaults in DeNoise AI, or the DeNoise method rather than the AI Clear method, or in extreme cases I may (very rarely) use Sharpen AI instead of or as well as DeNoise AI to deal with locally blurred or out-of-focus areas. With DeNoise AI or Sharpen AI I sometimes use their inbuilt mask function.
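The folder housekeeping between export rounds is the kind of step that is easy to script if you prefer. A minimal Python sketch; the folder names are hypothetical, not my actual layout:

```python
from pathlib import Path


def clear_folder(folder, patterns=("*.tif", "*.jpg")):
    """Delete files matching the given patterns from a folder.

    Returns how many files were removed; other files are left alone.
    """
    removed = 0
    for pattern in patterns:
        for f in Path(folder).glob(pattern):
            f.unlink()
            removed += 1
    return removed


# Hypothetical intermediate folders:
# for folder in ("tiff_1300", "jpeg_1300"):
#     clear_folder(folder)
```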
Image authenticity
I see two aspects of authenticity: whether these particular images, processed this way, look authentic; and how far photographic images of invertebrates in general are authentic.
As far as these particular images are concerned, when I compare an out of the camera image with a post processed image I think I can see where most or all of the "extra" detail has come from. (Actually it's not so much "detail" - that seems to me to be about small-scale structures towards the limit of what I can make out. It's more a case of areas of varying sizes that are more clearly distinguished from one another in the processed images, plus some things like hairs that look thinner with more clearly defined edges in the processed version.)
What I'm not noticing in the processed images is something I have seen with upsizing software, Gigapixel AI for example, where the upsized images have some "details" that are obviously artificial.
So from the point of view of "invented" detail, I'm fairly comfortable with the processed images. Some of the edges look too sharp, too well defined, for example some things in the background of the upper of the two examples at the start of this post. But that is more like ordinary over-sharpening and to do with visual preferences, and if I put more time and attention into the processing, for example using the masking in DeNoise AI more and more carefully, then I think that is in my own hands to deal with.
As to the general issue of photographic authenticity, particularly as it relates to small invertebrates, I think this gets tricky. The thing is, especially with smaller subjects, I have no way of knowing what they "really" look like because I can't make them out with my own eyes in anything like the detail I can see in photographic images, nor can I resolve the subtleties of colour and texture.
That said, my feeling is that my invertebrate images are not authentic. Nature, to the extent I can make it out, looks much softer in its edges and colours than my images of it. That is fine by me, because my aim with my photography is generally to make what I think of as "pretty pictures" not authentic records of reality. For those whose viewpoint has more to do with reality and authenticity, I can see that my images could be problematic, over the top, unappealing or even distasteful.
I see other forms of inauthenticity in some images of invertebrates. One has to do with focus stacked images, where depending on the scene there can be a sudden and, to my eye, very unnatural-looking and visually disturbing break between the in-focus and out-of-focus areas. This can happen too with other subjects, for example in my flower stacks, where I try hard to avoid it.
And then there are dead animals put on to a machine to be photographed, or animals that have been cooled in a refrigerator to stop them moving, or baited to take them away from their natural activity so they are conveniently placed to be photographed. How natural/authentic is any of that?
And then there is the wider issue of how authentic are photographs in any case? For example, I never see the world, or people, as a thin in-focus slice with an out of focus background like you get especially when using large apertures.
I can enjoy all sorts of images of animals, as long as I don't get the impression of some sort of cruelty being involved. As to arguments about authenticity, though, I wouldn't want to come down too hard on that one way or another; it is too slippery a concept for that. If I find an image pleasing to view, and/or informative, that is enough for me.