
My iPad workflow – some conclusions


Over the last couple of months I have been looking hard at the whole idea of an iPad and iPhone based workflow for the kind of photography that I do. I have tried to find a workflow that is repeatable and adaptable, one that could replace my tried and tested (and damned good) workflow on a laptop or desktop computer.

I’ve failed.

After trying different iPads and iPhones as well as dozens of apps, and an endless combination of those apps, I have come to the conclusion that there is no way that an iOS device can replace a computer for the vast bulk of my work. There are several reasons for this, but the main ones are that iOS was never designed for this kind of heavy lifting, that moving files around between apps is still pretty painful, and that it is even worse with RAW files. Don’t get me wrong: with a fully loaded, top-of-the-range iPad Pro, decent internet connectivity and a keyboard you get really close to a good workflow, but by then you have a device costing at least £1,000.00 (and a lot more if you go for the 12.9″) which weighs and costs almost as much as an Apple MacBook, without the access to rock-solid, made-for-the-job applications.

Now all of that doesn’t mean that there’s no place in my working life for an iPad workflow. If I’ve got to offload a few JPEG files quickly on the spot then the small, cheap and very lightweight iPad Mini with a few carefully selected apps can do a great job. My cameras are wifi enabled and the iPad fits into even my smallest Domke J3 camera bag, which means that half a dozen images can be sorted and uploaded/emailed pretty quickly. Anything much more than that and it pays to get my ageing MacBook Air out and use that.

So far I’ve been through four stages of my quest on this blog:

  • The introduction to my quest asked questions and promised an answer. Eventually.
  • Part Two of my iPad workflow was an investigation of the various ways to get images from the camera onto the tablet. By the end of it I was still unsure which method(s) provided the best results.
  • Part Three of the series included a video showing how I processed files on the iPad Mini. I still use that same workflow and so that video is still worth watching.
  • Part Four concentrated on distributing the captioned, toned and cropped files to the clients. This will always be changing because new clients have their own requirements and old ones seem to keep changing theirs too.

In this final (for now) section I’m going to quickly go through my preferred JPEG workflow stage by stage. I’m sure that this will only work for a few of you as it is, but I hope that it provides you with a few ideas to incorporate into your own workflow and/or some ideas to reject because they don’t work for you.

  1. Use the wireless functionality built into my cameras to send the files to the FSN Pro app on the iOS device using the FTP option.
  2. Select those files, add IPTC captions from templates (the main description is often pre-written in Apple Notes and copied and pasted).
  3. Export them to specific folders in Files.
  4. Use Adobe Lightroom CC to crop and tone the pictures.
  5. Share pictures to a different folder in Files.
  6. From Files I share them using the Transmit app for FTP, the Mail app for email, or the Photoshelter, Dropbox or WeTransfer apps for jobs where those are preferable.
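Stage 6 is the only part of this that translates directly to scripting on a desktop. For anyone curious, here is a rough Python sketch of that kind of FTP hand-off using only the standard library – the server name, login details and folder names are invented placeholders, not my real setup:

```python
from ftplib import FTP
from pathlib import Path

def select_jpegs(folder):
    """Return the JPEG files in a folder, sorted by name."""
    exts = {".jpg", ".jpeg"}
    return sorted(p for p in Path(folder).iterdir()
                  if p.suffix.lower() in exts)

def upload_jpegs(folder, host, user, password, remote_dir="incoming"):
    """Upload every JPEG in `folder` to an FTP server."""
    with FTP(host) as ftp:  # plain FTP; use ftplib.FTP_TLS for a secure server
        ftp.login(user, password)
        ftp.cwd(remote_dir)
        for jpeg in select_jpegs(folder):
            with open(jpeg, "rb") as fh:
                ftp.storbinary(f"STOR {jpeg.name}", fh)

if __name__ == "__main__":
    # Hypothetical values -- replace with your own server details.
    upload_jpegs("/path/to/edited", "ftp.example.com", "user", "secret")
```

Apps like Transmit are doing essentially this for you, with a friendlier interface.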

It’s a simple process with JPEGs but trying to do this with RAW files is a lot harder and has extra stages that make it unsuitable for what I need it to do. You may have noticed that I have avoided any mention of the Apple Photos app. That’s because it annoys the heck out of me. It keeps renaming files and tries to bring images into the Photos system at every stage. I just don’t want to send files to clients with filenames starting with “img”. I go to a lot of trouble to use custom filenames in my cameras and I want to be able to go back and find the matching RAW file without having to compare images. I may have shot a couple of thousand images on a job and only picked out six to send for urgent use (usually social media), and by avoiding the Photos app I save myself a lot of headaches later when I come to do a proper edit away from iOS.
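That filename discipline is the whole point: if the camera filename survives, matching a delivered JPEG back to its RAW file needs no comparing of images at all. A toy Python sketch – the extensions and folder layout here are just examples, not my actual setup:

```python
from pathlib import Path

# RAW extensions worth checking; extend the tuple for your own cameras.
RAW_EXTS = (".cr2", ".cr3", ".nef", ".dng")

def find_matching_raw(jpeg_path, raw_folder):
    """Return the RAW file whose stem matches the JPEG's stem, or None.

    This only works if the camera filename was never renamed -- which is
    exactly why avoiding Apple Photos' img_xxxx renaming matters.
    """
    stem = Path(jpeg_path).stem.lower()
    for candidate in Path(raw_folder).iterdir():
        if candidate.stem.lower() == stem and candidate.suffix.lower() in RAW_EXTS:
            return candidate
    return None
```

Once Photos has renamed a picture to img_1234, the stem no longer matches and this kind of lookup becomes impossible.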

It has been an interesting journey which has increased my understanding of the way that Apple’s iOS works, as well as giving me a useful, usable and adaptable workflow for when a few quick JPEG files are the client’s priority. I mentioned near the beginning of this post that I’d failed, and that’s true, but in failing to replace my desktop workflow I have added yet another string to my bow and that makes the time and money that I’ve invested in this project well spent.

There’s a good chance that I’ll revisit this if and when some better apps come onto the market or if Apple finally decide to stop their Photos app renaming every file. Until then, enjoy working out whether your own work would benefit from a tablet or phone workflow.

iPad workflow part three

Welcome to the third instalment of my investigation of the best iPad workflow for the kind of work that I do. At the end of part two I came to the conclusion that adding images wirelessly to the iPad (or an iPhone) was the best way to go for me and in the few days since I made that observation I have largely moved towards using FSN Pro to get the pictures to where I need them to be.

I mentioned several times in part two that I wanted, wherever possible, to avoid storing anything in the Apple Photos app without explaining why I am so keen to avoid it. The simple answer is that my normal workflow for several clients involves keeping the original camera filenames intact so that it is possible to follow up at a later date and find them again without having to spend any time looking. Why Apple are so keen to rename every file with the clumsy “img_1234” formula is beyond me. I guess that it must make what goes on inside iOS easier for Apple – if not for photographers. By avoiding the app it is entirely possible to retain the original filename from start to finish. Don’t get me wrong; if I was rushing and getting a couple of quick edits away to a client then I’d happily rename files and/or settle for the img_xxxx option but when there are five, six or more photographs going through then renaming becomes a pain.

With this in mind I have looked at lots of different apps for captioning and toning both RAW and JPEG images and it has become clear that there isn’t one clear “best option” for all variations on my workflow. As someone who uses Photo Mechanic and Adobe Camera RAW within Adobe Photoshop to handle my pictures I’d love to have iOS versions of both ready to use. Camera Bits say that they have no plans to develop an iOS version of Photo Mechanic and Adobe seem to be more than happy with Lightroom CC as an image editor and RAW converter. During this phase of my research I’ve looked at lots of photo apps:

  • Filterstorm Neue Pro or FSN Pro – a very capable IPTC and image editor for a JPEG workflow, but not for RAW files. It allows all sorts of options and lets you set up IPTC sets in advance, making it very easy to caption photos individually or in batches. FSN Pro is also great for importing photos and exporting them to other apps, directly to FTP servers or other cloud-based storage, as well as to the “Files” option on iOS 11 and later.
  • Lightroom CC – the nearest thing available to Adobe Camera RAW and therefore very familiar to me. It interacts with the iOS Files storage well too, and it is definitely the best option that I’ve tried for working with RAW files. The sync with the Adobe CC cloud is a mixed blessing and I am going to monitor how much mobile 4G data it eats when I’m using it on jobs. It has IPTC captioning built in, but it’s hard to imagine a clumsier implementation of what is such a vital function for me.
  • Affinity Photo – the Apple App Store photo app of the year 2017 promises so much and delivers very little for me. It requires a top-end iPad (ideally an iPad Pro, which is what I tried it on) and isn’t available on the iPhone at all. It edits photos really well, but the lack of availability on my iPad Mini 4 or my phone means that I’m not interested in it as things stand.
  • Picture Pro Lite – a really good app, but it appears no longer to be supported. Very good IPTC options and decent image editing options, but it has no interaction with iOS Files that I can see. Another app that promises loads but doesn’t quite do enough to be THE answer.
  • Shuttersnitch – great for importing images and it has some good automated features, but it doesn’t like RAW files and doesn’t play nicely with iOS Files either.
  • Marksta – an excellent watermarking and captioning app developed by an award-winning photographer.

I’ve looked at others, but I am trying to narrow things down here and so it has come down to choosing between a workflow for just JPEG files, where time and simplicity are everything, and a RAW workflow where I can get everything out of a RAW file that I could if I were working on one of my Macs. It’s entirely possible to have a single workflow for RAW and JPEG, and here’s what I’m using right now:

  1. Connect the camera to FSN Pro via the FTP import option. I have blogged about setting up an EOS 5D MkIV before, and the process using FSN Pro to receive the pictures is exactly the same.
  2. Select the images on the back of the camera and use the “Set” button to upload them to the iOS device.
  3. If you are working with RAW files, select the photos within FSN Pro and export them to a folder in Files, making sure that you check the “Files to Export > Original Image File” option.
  4. If you are working with JPEG files then add IPTC captions in FSN Pro before going to the “Files to Export > Selected Edit” option and exporting them to Files.
  5. Go to Lightroom CC, select the folder that you wish to import the files into, go to “Add Photos”, select them from Files and import them.
  6. For RAW files it is easiest to write the main caption in Apple Notes and copy and paste it from there into the IPTC fields inside Lightroom CC, as it cannot import the caption XMP file created by FSN Pro.
  7. For both file types you can now go through the photographs and adjust the colour, contrast, crops, sharpening etc. in Lightroom CC, making use of the copy and paste settings options as you go.
  8. Save the finished files either to another Lightroom CC album or folder or into a folder in the iOS Files app.
  9. Wait until part four to find out what happens next.
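Step 6 exists because Lightroom CC cannot read the caption sidecar that FSN Pro writes. For anyone finishing the job at a computer instead, pulling the description out of a Dublin Core style XMP sidecar takes only the Python standard library. This is a sketch against a typical XMP layout; FSN Pro’s actual sidecar may be structured differently:

```python
import xml.etree.ElementTree as ET

# Clark-notation prefixes for the two namespaces XMP captions normally use.
DC = "{http://purl.org/dc/elements/1.1/}"
RDF = "{http://www.w3.org/1999/02/22-rdf-syntax-ns#}"

def caption_from_xmp(xmp_text):
    """Extract the dc:description caption from an XMP sidecar string."""
    root = ET.fromstring(xmp_text)
    desc = root.find(f".//{DC}description")
    if desc is None:
        return None
    # The caption text normally sits in an rdf:Alt/rdf:li under dc:description.
    li = desc.find(f".//{RDF}li")
    return li.text if li is not None else None

# A minimal, hand-written example sidecar for illustration.
SAMPLE = """<x:xmpmeta xmlns:x="adobe:ns:meta/">
 <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
  <rdf:Description xmlns:dc="http://purl.org/dc/elements/1.1/">
   <dc:description>
    <rdf:Alt>
     <rdf:li xml:lang="x-default">Guards march back to barracks.</rdf:li>
    </rdf:Alt>
   </dc:description>
  </rdf:Description>
 </rdf:RDF>
</x:xmpmeta>"""
```

On iOS the Notes copy-and-paste remains the practical answer; this just shows the sidecar isn’t a black box.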

Here is a video that I made as a “walk through” for a basic and quick JPEG workflow. It is fine for RAW files too, but you would have to add the captions after converting the files rather than the more convenient way that they are added before toning in this film:

The great thing about having done all of the research and practice over the last few weeks is that I have a decent and repeatable workflow. The second best thing is that if I need to make a few changes then I understand what all of the other apps can do and I know how they work. This works but I’m not going to stop looking for improvements and changes. Yet!

The video on Vimeo:  https://vimeo.com/247334007

The video on YouTube:   https://youtu.be/rgc_cBjASVI

RGB and me

I get involved in a lot of discussions about the finer points of photography, both online and in person. One of the most common this year has been about choosing which RGB colour space we should all be working in. The truth is that there are a number of variables which, between them, should point you in one direction or another. There are plenty of RGB colour spaces but the main two are Adobe RGB and sRGB – mainly because these are the options you have when shooting with most DSLR cameras. There are a couple of others (ColorMatch and ProPhoto) that offer wider gamuts and some real technical advantages but, as I hope to explain, this isn’t necessarily helpful. More isn’t just a waste, it’s a potential problem.

Ideal Worlds

In an ideal world we would have cameras, viewing and reproduction systems that gave us every tiny subtle variation in colour that the human eye can see on a good day in great light. We don’t. Yet.

What we have to work with in the Spring of 2017 is a range of different types of screens, projections and printing systems, and those printing systems rely on an almost infinite variety of inks, pigments and papers. That’s before we even start to discuss all of the other materials onto which we can now print. So basically we, as photographers, have a series of moving targets to aim at and it’s almost always been the case in my career to date that I have no control over those targets. The same picture may be used for social media, newspapers, magazines and PowerPoint presentations and it is my job to supply those pictures in a format (or series of formats) that will enable the client to get consistently high quality results from them.

Bringing that back to the topic of RGB colour spaces means that I and my clients have choices to make. Those choices almost always involve compromise. Compromise almost always means that nothing is perfect for anyone or anything. In a version of the ideal world as it exists today I should be shooting, editing and supplying my pictures in the colour space providing the widest possible gamut of colours and tones providing the most vivid yet subtle renditions of the colours that match the brief and my vision for that brief. Maybe the client could pay for enough post-production time for me to provide two, three or more versions of each photograph suitable for each type of use. They don’t have the budgets in 999 cases out of 1,000. Maybe those pictures would then be taken by colour technicians and modified for each and every use and converted to the relevant colour space using the best equipment and software available. It doesn’t work that way very often.

What actually happens is that, after I have supplied them, the pictures are viewed and judged on a range of un-calibrated monitors in less than ideal viewing conditions before being sent to the web or to reproduction that doesn’t make any allowances for the kinds of screens, inks, pigments or papers that are going to affect how the pictures look. Because of this it has become sensible to make some compromises.

Adobe RGB is better, isn’t it?

Yes but no… There’s no denying that in every single way Adobe RGB is superior to sRGB. It has a wider gamut meaning that the differences between colours and tones can be more subtle. There’s a case that says if you want wide gamuts and subtle gradations why stop at Adobe RGB? Why not go the whole way and work in ProPhoto or Colormatch? Good questions and here’s where we get to my reasoning about why I don’t bother.

The first is that most of us rely on what we see on our screens to make decisions about colour and tone during post-production. Unless you work on the highest of high-end monitors, calibrated to the most exacting standards under ideal viewing conditions, you won’t be able to see the whole Adobe RGB gamut, let alone the ProPhoto or ColorMatch ones. Forget working on a laptop – unless you have your monitor and your colour management down to a fine art you will be using guesswork and approximations on your images. Worse still, very few browsers, applications and viewing systems are smart when it comes to colour management. You might, through a combination of skill, judgement and good luck, get your pictures to be as good as they possibly can be, only to experience the heartbreak of seeing those perfect pictures displayed on dumb systems, looking like the flattest and most inept renditions of your images and making you feel that you have not only wasted your time but may have done something wrong.

Sadly, all of those systems that make your images look awful will also make the JPEGs straight from your mobile phone look pretty good. Not to put too fine a point on it, the phones, tablets and screens that the vast majority of our images are now viewed on are not au fait with Adobe RGB, but they love sRGB. Most of the printing systems, and most of the automated systems for converting RGB to CMYK for printing, work just as well with sRGB as they do with Adobe RGB because almost every CMYK colour space has a narrower gamut than sRGB does – and that’s important.

What do you do with the spare reds?


Imagine a photograph of a red telephone box on a street in London with a red car next to it and two people walking past wearing their bright red Manchester United shirts, swigging from cans of regular Coca-Cola. Got the picture in your mind? How many variations of red are there in your picture? It’s a sunny day, so there are hundreds, and the differences are often very subtle. You’ve shot the picture in RAW (of course) and you are going back to your high-end workstation to process the pictures. You have a monitor capable of displaying the whole Adobe RGB gamut and you get to work. A short time later you have an edit of ten great pictures with all of those subtle reds looking as good as they possibly could and as good as you hoped they would. Save them as Adobe RGB JPEGs and whizz them off to the client. Two things can then happen:

  1. The client understands photography and has a completely colour managed work environment with decent screens and runs applications that can see Adobe RGB files properly.
  2. The client doesn’t work in a wholly colour managed environment and their monitors show your photographs as dull flat pictures that look worse than their own phone pictures.

If there’s any chance of getting option 2 instead of option 1, you have a problem, and you can do a few different things:

  1. Work in Adobe RGB, saving the photographs in that space but then doing a batch conversion to sRGB to supply to clients who you suspect cannot handle the wider gamuts.
  2. Ignore the issue and continue to supply in Adobe RGB and then complain when the work dries up or when the client comments on the flat files.
  3. Supply two sets of pictures: a “viewing” set of medium-sized sRGB files and a “printing” set in Adobe RGB.
  4. Move to an sRGB workflow and supply everything in sRGB.
  5. Become a campaigning photographer, strive for ultimate quality and educate every one of your clients encouraging them to invest in perfect workflows.

The same goes for every colour. Purples and magentas can easily get mashed up when reducing the colour gamut and greens are famous for moving to mush really quickly in many CMYK spaces.
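You can see the “spare reds” with nothing more than the published conversion matrices for the two spaces (both use the D65 white point). This little Python sketch converts a fully saturated Adobe RGB red into linear sRGB via XYZ and shows that it lands well outside the displayable 0–1 range:

```python
# Standard 3x3 matrices (D65 white point): linear RGB -> XYZ and XYZ -> linear sRGB.
ADOBE_TO_XYZ = [
    (0.5767, 0.1856, 0.1882),
    (0.2974, 0.6273, 0.0753),
    (0.0270, 0.0707, 0.9911),
]
XYZ_TO_SRGB = [
    (3.2406, -1.5372, -0.4986),
    (-0.9689, 1.8758, 0.0415),
    (0.0557, -0.2040, 1.0570),
]

def mat_mul(m, v):
    """Multiply a 3x3 matrix by a 3-vector."""
    return tuple(sum(row[i] * v[i] for i in range(3)) for row in m)

def adobe_linear_to_srgb_linear(rgb):
    """Convert linear Adobe RGB to linear sRGB via XYZ."""
    return mat_mul(XYZ_TO_SRGB, mat_mul(ADOBE_TO_XYZ, rgb))

def in_srgb_gamut(rgb, tol=1e-3):
    """True if every channel sits inside the displayable 0..1 range."""
    return all(-tol <= c <= 1 + tol for c in rgb)

# A fully saturated Adobe RGB red needs an sRGB red channel of roughly 1.4 --
# it simply cannot be displayed in sRGB and has to be clipped or compressed.
red = adobe_linear_to_srgb_linear((1.0, 0.0, 0.0))
```

Neutral tones convert cleanly; it is the saturated colours at the edges of the gamut that have nowhere to go.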

Why I’m a type 4 photographer

Several years ago now I realised that I wasn’t supplying any of my pictures to clients with workflows that could actively take advantage of Adobe RGB files and so I started to convert my carefully worked Adobe RGB pictures into sRGB before getting them to the clients. Most of the time I was working in decent conditions on my Eizo monitors and the rest of the time I was making educated guesses about how the pictures looked on my laptop.

It worked OK, but I started to wonder what happened to those colours that were inside the Adobe RGB gamut but outside the sRGB range. How did an automated batch conversion deal with those subtleties? Photoshop offers several options for rendering those out-of-gamut colours, ranging from shifting everything down the scale by the same amount to employing seriously sophisticated mathematics to translate the colours using what it calls “perceptual” intent, which keeps the balance of tones without damaging the safest colours in order to accommodate those either side of the line. I asked myself why I was doing this when I had a RAW file to go back to should I need a more nuanced version of an image. The clients wanted (even if they didn’t know it) and usually needed sRGB files, so why did I need Adobe RGB ones? Logic dictated that I try working the images within the sRGB gamut to start with. No more wondering which rendering option would do the best job (if I had a choice) and a lot less reliance on guesswork when I was editing on the laptop. Where, I asked myself, was the disadvantage to working solely in sRGB? I couldn’t find it then and I still can’t find it now.
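For anyone who finds those rendering options abstract, here is a deliberately over-simplified one-dimensional Python illustration – real rendering intents work on three channels with far more sophisticated maths – of the difference between clipping and a perceptual-style compression:

```python
def clip_render(values, limit=1.0):
    """Clipping: every out-of-gamut value hits the same wall."""
    return [min(v, limit) for v in values]

def perceptual_render(values, limit=1.0):
    """Perceptual-style compression: rescale everything so the peak fits."""
    peak = max(values)
    if peak <= limit:
        return list(values)
    return [v * limit / peak for v in values]

# Three subtly different out-of-gamut "reds".
reds = [1.10, 1.20, 1.30]
print(clip_render(reds))        # [1.0, 1.0, 1.0] -- the subtlety is gone
print(perceptual_render(reds))  # three distinct values -- the relationships survive
```

Clipping destroys the differences between those subtle reds; compression keeps them, at the cost of nudging in-gamut colours too – which is exactly the trade-off the intents manage.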

The clients are happy. I’m happy. Win/win.

Based on a pragmatic and professionally sound set of reasons, I now set my cameras, my computers and my whole workflow to sRGB. Having done that, I found two further advantages that I had never considered (but really appreciate).

Monitors

Here in the UK you would struggle to buy a decent monitor with a genuine 100% of the Adobe RGB gamut for under £1,000.00. You can buy a quality monitor that handles 100% (and more) of the sRGB gamut for under £600.00 and have considerably more choice. Money saved on buying kit is always something that you should consider when you do this for a living.

Filenames

There’s something else that has always bugged me. Canon and Nikon’s higher-end cameras change the leading character in the filename to an underscore when you are shooting in Adobe RGB. Clearly someone, somewhere thought that this was a very useful thing to do and both of the major manufacturers still adhere to it. It really annoys me – in an almost irrational way. Moving over to sRGB has cleared this daily annoyance from my life (unless I’m editing other photographers’ work), but I’d love to know why Canon and Nikon cannot make this a custom function in their cameras rather than imposing it on us whether it suits us or not. This isn’t a reason to switch to an sRGB workflow, but it is a side effect that I appreciate. Of course, by the time most of my pictures arrive with the clients the files have been renamed anyway, but one or two clients like to keep the original camera filenames too.

I shoot RAW anyway

All of this is a matter of opinion and logic for me, and I always have the RAW file to go back to should I need it to create Adobe RGB versions. In the two years or so since I went all the way to sRGB nobody has said “please supply us with Adobe RGB files” (all of my clients are polite and always say please, by the way). This is probably a case for my favourite piece of advice:

“if anyone ever tells you that there’s only one way to do something in photography, don’t listen to them, they’re a fool.”

I’m convinced that, for now, I have it right for me. Want to tell me how and why I’m wrong?

Post production is all about the details

Passenger on the top deck of a tourist bus passing through Waterloo. © Neil Turner

I’ve read a lot about the ‘instagramisation’ of photography. I think that means taking slightly dull images, applying filters and presets to them and presenting them as bits of creativity. At the right time and in the right place those kinds of pictures have value and can make significant additions to creative campaigns and can go a long way towards making some elements of social media and social marketing more visually interesting. I’m not talking about that here – this blog post is all about choosing between making decisions about individual pictures or letting technology take over and ‘improve’ your work for you.

If you are on Facebook or any other social media that has targeted advertising you will probably get as many ‘suggestions’ as I do for people selling magical presets or add-ons to make my pictures instantly better. That’s great – or at least it would be if I wanted all of my images to exhibit a sameness with each other and with those of so many others. Trying to reduce professional post-production down to a series of mouse-clicks using algorithms and actions developed for others isn’t, in my opinion, a very good idea. I don’t want bland, over-processed or unreal pictures and I certainly don’t want to supply them to my clients.

For those of us who remember machine printing from colour negatives, where some pretty smart state-of-the-art machinery replaced the judgements made by well-trained and experienced operators, this all looks pretty familiar. No matter how smart the technology gets, the highest quality and the best representations of our creative visions can only be realised when we pay attention to the details. All of them.

When I look at a single RAW image on my screen I make a lot of decisions pretty quickly, going through a sort of checklist of options to make that picture into the best thing that it can be. Arguably, that’s exactly what an automated system would do too. The difference comes when I start to think about context:

  • What is the purpose of the picture?
  • Who is the client and who are the audience?
  • Which elements are most important?

The list of possible questions is long and you are probably getting my point by now. The same photograph needs different treatment for different purposes and that means handling the detailed decisions in different ways. How could an automated system have any idea whether correcting barrel distortion in a lens is necessary or even desirable? How would any machine know whether blown highlights in unimportant areas of the frame need to be sorted or whether they add to the atmosphere? The same goes for blocked shadows, underexposed faces, oversaturated colours and another long list of potentially vital elements. Let’s not even start to think about cropping at this point.

All of this is deeply reminiscent of hand printing black and white photographs in the darkrooms of the earlier parts of my career. Getting the contrast right and doing some dodging and burning were things that made all the difference between an average picture and a thing of beauty but the degree to which you ‘worked’ your photographs was dictated by where they would end up. Letterpress newspapers required very different prints to gallery walls.

It’s not just about how you handle the options for grading and optimising your pictures as you go through the process either. Output options vary and choosing between sRGB, Adobe RGB and the half dozen other options that are less commonly asked for isn’t something that you’d want to hand over to a machine. Sharpening comes in so many different forms these days and then what should you do about file sizes?

I have calculated that I make between thirty and forty decisions for every picture and another four or five for every batch of pictures that go directly towards how my photographs look when they arrive on the client’s screen and almost every one of those decisions has an effect on so many of the others. This stuff ain’t easy and it certainly isn’t as easy as those advertisements that pop up in my social media would have you believe.

Testing a Think Tank laptop shade

© Neil Turner, August 2015. The Think Tank Pixel Sunscreen V2 folded up.

I seem to be spending more and more time editing photographs in strange places. Last weekend it was in a tent on The Mall – right by Buckingham Palace. The weather forecast predicted bright sunshine, so I decided that I needed to replace my very old plastic laptop sun shade with something a bit more ‘state-of-the-art’. Looking around, it quickly became clear that the Think Tank Pixel Sunscreen V2 was the most likely to fulfil my needs so I went down to Fixation to buy one. Before I parted with my money I made sure that I could fold the thing away, and the handy instructions printed on it made it very easy to do. Basically, if you can fold a Lastolite or a small tent, this is a doddle.

There followed two very long days editing with a team of great photographers covering the Prudential RideLondon events in a white tent which was, for the most part, in direct sunshine. My 13″ MacBook Pro disappeared into the sunscreen pretty early on in the mornings and didn’t come out again until well into the evenings. That made for a very full-on test and having laid my own money down the previous day I hoped that I’d made the right decision.

© Neil Turner, August 2015. The Think Tank Pixel Sunscreen V2 opened up.

When you see the unit opened up (and it opens very easily) it appears to be rather tall and the shape doesn’t look like any other sun shade that I have ever seen for a laptop. Sitting working with it for only a few minutes you realise that the height and depth of the hood makes sense and working with it was nowhere near as awkward as my previous hoods. No need to stoop or bend and my spine was in a lot less danger than it would otherwise have been. It isn’t as comfortable as using the laptop without the hood but it appeared very quickly to be a great compromise.

Typing captions and other text-based activities were fine. I had a very dark shirt on and so barely saw my own reflection in the screen. When it came to preparing images I was forced, most of the time, to use the lightweight black cape that comes with the Sunscreen and attaches with velcro tabs, and actually get inside. The three pictures below are an attempt to show what using the screen actually means. As long as there’s no direct sunshine hitting the laptop screen you can happily work away. The centre picture shows how bright the sun was when I took these pictures to illustrate how effective the Sunscreen is, and the third picture shows how clearly and easily you can see the screen when you are ‘inside’ the caped unit.

It can get a little stuffy and even a bit warm inside the screen with the black cape draped over you, but you really can see the screen properly – even if you are facing into the direct summer sun.

So far I’ve mentioned lots of good things about this Think Tank product and over the two days I was using it I didn’t find too many faults but, in the interests of the V3 being even better, I thought that I’d share two niggles and suggest a couple of small design changes.

The biggest flaw by far is the velcro hatch on the lower rear left-hand side (as you are looking into it). There’s no way that the Apple MagSafe 2 power supply will stay plugged into the laptop when it is fed through the slot unless you use something about 2–3cm thick to stand the laptop on inside the shade. Most of the time that you are editing outside I would say that battery power is fine, but for this event I had to plug in.

I found two strips of wood that my MacBook Pro sat happily on and all was fine. The way that the (very well made) seam of the shade is positioned in relation to the slot means that the slightest movement will disconnect the power supply, because the cable cannot sit straight unless you raise the laptop. Somehow they need to create a simpler, less heavily engineered slot that overcomes this small issue, because I don’t want to carry two strips of timber around with me. Whilst they are at it, it would make way more sense to have the velcro slot open from the bottom and not the top, so that less light (which tends to come from the top rather than the bottom) gets in – or even to go so far as to have a simple slit that you stick things through and that gently holds the cable, rather than this heavily constructed back door.

My other minor niggle is the size and placement of the branding. Almost a third of the right-hand side is taken up by a massive Think Tank logo. More than one other laptop user on our team suggested that, having paid for a product, they would prefer not to be advertising it in such an over-the-top way. I came to agree with them once a few people had suggested that I must be sponsored by Think Tank.

Back on the positive side, there are lots of small pockets inside the hood and I found these useful places to stick memory cards that needed to be given back to people, as well as to hold my Netgear Mifi unit, which was there as a back-up should the provided internet service have failed for any reason. The pockets don’t keep gear safe, but they do stop it from drifting out of sight when you are closeted away inside the editing bubble.

With this review in mind, I tried a colleague’s 15″ MacBook Pro inside and, whilst it was a very snug fit, it passed the test. The MagSafe issue was probably worse with the larger machine – something that I’d need longer to confirm.

After two days of very concentrated use I decided that this is a very good piece of kit. It is well worth the money and I predict that it will last for a few years too.

07 August 2015. Bournemouth, Dorset. The Think Tank Pixel Sunscreen V2. Hillcrest Road

Zooming with your…

©Neil Turner/Bupa 10,000. May 2015. A Police rider accompanies a detachment of Guards as they march back to their barracks.


I was on a job the other day, standing next to a very young photographer in a ‘press pen’. He glanced over at the gear I was using and mentioned how much he would love to own the 135mm f2L lens that I had on one of my cameras. He said that he had never really got the hang of “zooming with his feet” in the way that so many of the photographers he admired had advised. The same advice had been drummed into him by one of his tutors at college, and it had left him wondering whether he was doing something wrong.

Zooming with your feet is a great concept and it is one of the catchphrases in contemporary photography that appears to be beyond question. But is it? Is it actually as much a cliche as a universal truth?

There we were on a job where we couldn’t have zoomed with our feet even if either of us had the skills to do so. We couldn’t go forward – there was a metal barrier in the way. We couldn’t go backwards because there were other photographers and a couple of TV crews behind us – and behind them was another barrier. We had a tiny amount of sideways movement if we could change places with each other but, apart from that, we were in a very fixed position.

The event we were shooting was a fixed distance from us and so it was possible to get the right prime lens on the camera and then to shoot the job.

What my young photographer friend didn’t know was that I had my 70-200 lens in my bag but that I had some real concerns about its performance earlier in the day which is why I had grabbed the 135 and decided to use that.

As we had plenty of time to spare, I explained my choice of lens and pointed out that a lot of press work means that zooming with your feet is somewhere between difficult and impossible, and that to get the most from a fixed position a set of zoom lenses is actually the right choice. I went on to admit that I would be doing a fair bit of zooming on the job myself, except that it would happen in post-production – zooming with the crop tool is what I decided to call it.

And that’s what I did. The resolution of modern DSLRs is such that you can get a high quality Jpeg from 50% of the actual frame, and the quality of the best lenses easily allows you to do that and maybe more. Starting off with a lens wider than you probably need and then refining your crop in post-production was very common in the days of darkrooms and prints, but when we were shooting 35mm colour transparencies or with the early low-megapixel digitals it became important to get the crop right in-camera. We have come full circle and some judicious cropping makes sense once more.
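The arithmetic behind that claim is easy to sanity-check. A quick sketch (the 6000 × 4000 sensor here is an illustrative figure, not a reference to my own cameras):

```python
def cropped_megapixels(width_px, height_px, area_fraction):
    """Megapixels left after cropping to a fraction of the frame area."""
    return width_px * height_px * area_fraction / 1_000_000

# A 6000 x 4000 (24 MP) frame cropped to 50% of its area still
# leaves a 12 MP image - more pixels than many early DSLRs could
# manage from the whole frame.
print(cropped_megapixels(6000, 4000, 0.5))  # 12.0
```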

Shooting with prime lenses is something that I have discussed more than once before and it is something that I find myself doing more and more on jobs where I’m the only photographer or where I have enough freedom to go with the universal truth/cliche (delete where applicable) and actually zoom with my feet. The rest of the time it is zooms and now that I am using two distinct sets of lenses for different types of jobs I’ve decided to invest in some new gear – and I’ll be blogging about that very soon.

Taste in monochrome

Ever since I shot my first roll of black and white film back when I was a teenager I have been striving to master the art/science/alchemy of good monochrome. Many of my early photographic heroes were brilliant in black and white, and my own struggle to get close to being good at it is a subject that I have blogged about before. Over the last two years I have become much better at it, and I thought that I’d show a series of images here that demonstrate how I go from an original colour picture to a toned monochrome. I sometimes use Tonality for my conversions but this one was done in Photoshop CC.

Colour photo converted from a Fujifilm .raf file in Adobe Camera RAW


Pensioners window shopping in the British Heart Foundation furniture and electrical goods store in Winton.

Straight ‘desaturate’ from the colour photo using Photoshop’s Shift Cmd U on a Mac (shift Ctrl U on a PC)

Contrast added using levels in Photoshop.


New layer added and a tone applied across the image using the paint bucket tool at 12% before the levels were adjusted to re-introduce a black.


Once you get the hang of it, this is a simple process which could be automated for batches. I prefer to do it by eye because the re-introduction of the blacks after the tone was added is something that benefits from subtlety and which changes from frame to frame.
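To show what that automation might look like, here is a minimal Python sketch of the same recipe using the Pillow library. The warm tint value, and the use of `ImageOps.autocontrast` as a rough stand-in for Photoshop’s levels adjustments, are my assumptions rather than a faithful copy of the original moves:

```python
from PIL import Image, ImageOps

def tone_mono(img, tint=(222, 204, 178), opacity=0.12):
    """Desaturate, boost contrast, blend in a flat tone at low
    opacity, then stretch the levels to re-introduce a true black."""
    # straight desaturate plus a rough levels-style contrast boost
    mono = ImageOps.autocontrast(img.convert("L"), cutoff=1)
    rgb = mono.convert("RGB")
    # flat tone layer, like a paint-bucket fill blended at ~12%
    tone_layer = Image.new("RGB", rgb.size, tint)
    toned = Image.blend(rgb, tone_layer, opacity)
    # per-channel stretch brings the blacks back under the tint
    return ImageOps.autocontrast(toned)
```

Batch use would just loop this over a folder, e.g. `tone_mono(Image.open("frame.jpg")).save("frame_mono.jpg", quality=90)` – although, as above, setting the black point by eye frame by frame remains the better option.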

I’m 99.9% sure that there are ‘better’ ways to do this but it appeals to my taste in monochrome for the web. It chimes with my taste in printing papers back in the days when we hand printed our portfolios on specialist papers with their own signature tones. Mine was Agfa Record Rapid which, when developed in the requisite chemistry, had a very pleasing warm tone.

I’m getting close to having a style that I like for this kind of work – my personal work – and I am looking forward to putting a better edited body of work together using this style or at least a development on it. In the meantime, there’s a large collection of assorted personal work on my Pixelrights gallery.