Flextight Colour Neg Scanning



yelmarb
16-Jun-2018, 04:47
I've been using my Epson V750 for colour neg scanning and it's surprisingly good. The colour is 90% there straight out of the Epson software; however, the sharpness does seem to vary from neg to neg.

I'm considering buying a Flextight but I'm just wondering what the colour is like from the scanner's software? Is it difficult to get good colour balance?

Pere Casals
16-Jun-2018, 06:04
I've been using my Epson V750 for colour neg scanning and it's surprisingly good. The colour is 90% there straight out of the Epson software; however, the sharpness does seem to vary from neg to neg.

I'm considering buying a Flextight but I'm just wondering what the colour is like from the scanner's software? Is it difficult to get good colour balance?

Scanners are IT8-calibrated systems that tend to deliver the same colours. Even so, the color interpretation may differ because of the dyes filtering light at the pixels of the linear sensor and because of the particular illumination the scanner uses.

You can always make one device behave mostly like another by building a 3D LUT that converts from one to the other in PS; this can be done with 3D LUT Creator software, for example.

Beyond that, Flextight systems are top notch; I've never heard anybody complain about the color interpretation of those devices, and the scans I've seen from Flextights are simply amazing. I also agree that the V750 does a surprisingly good color job, which is extraordinary given the price.

If "sharpness does seem vary from neg to neg" this can be because curled negatives, you first can try to address that by trying wet mounting or by simply purchasing the V850 film holders that have an ANR glass to ensure film flatness and also to adjust film height.

If you have the problem with roll film you can also solve it with a dedicated roll film scanner, like the Plustek 8000 series, or the Plustek 120, which also handles MF.

A Flextight will shine when you have extreme densities you want to recover, but for 4x5 film you won't obtain a substantially sharper scan than with a V750, because the Flextight has an 8000-pixel sensor, so when scanning 4" wide it only takes 2000 samples per inch. Still, the Flextight is a pro machine and that 2000 dpi scan is good. To obtain the same with the V750 you have to scan at 3200 and then reduce the image size.

interneg
16-Jun-2018, 06:12
You can get very accurate colour from a flextight, and you will discover that the Epson's sense of 'colour' is rather poor by comparison. Some of the pre-set modes in Flexcolor are sort of OK, but for best results you should scan as a transparency with no clipping & some neg rebate, possibly using a curve to open up the areas of density that will become highlights. In Photoshop you want to use the ProPhoto RGB colour space: sample the base density, use that to fill a new layer set to divide, then flatten & invert. Using a curves layer & clipping warnings, clip the black and white points in each of R, G and B. Now you'll have something that is very close to an accurate reproduction of colour, needing only fine colour corrections (a curves layer set to 'colour' blending) and a curves layer set to 'luminosity' blend mode for adjusting tonalities. These techniques apply across most high end CCD & drum scanners. No, the Flextight is not a drum, but it is very, very much better than an Epson - just remember to set the unsharp mask to -120!
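
As a rough illustration of the arithmetic behind those steps (sample the base, divide, invert, then clip the endpoints per channel), here is a minimal sketch in C++, assuming a linear float RGB buffer; every name in it is made up and it is not Flexcolor's or Photoshop's implementation:

// A sketch only, assuming a linear float RGB buffer; every name below is
// made up and this is not Flexcolor's or Photoshop's implementation.
#include <algorithm>
#include <vector>

struct RGB { float r, g, b; };   // linear values in the 0..1 range

// base         = colour sampled from the unexposed film rebate
// black, white = per-channel clip points (the curves-layer step above)
void invertNegative(std::vector<RGB>& img, RGB base, RGB black, RGB white)
{
    auto norm = [](float v, float lo, float hi) {
        return std::clamp((v - lo) / (hi - lo), 0.0f, 1.0f);
    };
    for (RGB& p : img) {
        // divide by the rebate colour to cancel the orange mask
        p.r /= base.r;  p.g /= base.g;  p.b /= base.b;
        // flatten & invert
        p.r = 1.0f - p.r;  p.g = 1.0f - p.g;  p.b = 1.0f - p.b;
        // clip black and white points independently in each channel
        p.r = norm(p.r, black.r, white.r);
        p.g = norm(p.g, black.g, white.g);
        p.b = norm(p.b, black.b, white.b);
    }
}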

There will be those who try to deny that the Epson has very severe shortcomings & make claims for its resolution in comparison to the Flextight that don't hold up in the real world. A halated 2400 ppi scan loaded with aberrations fades against a crisp, unaberrated 2040 ppi one when scanning 4x5. A drum scanner or some high end CCD scanners do have benefits over the Flextight: fluid mounting, all the resolution of the scanner available over the entire bed/drum & (in the case of PMT drum scanners) potentially less noisy shadows & adjustable apertures to minimise dye cloud induced aliasing/noise. That said, the Flextight is in many ways the easiest of the higher end scanners to learn to operate, with excellent results possible after a few minutes of teaching.

Pere Casals
16-Jun-2018, 08:13
Epson's sense of 'colour' is rather poor by comparison.

The reality is that if you scan an IT8 target you get exactly the same scan from the X5 as from the Epson V750, and exactly the same is exactly the same. This is a fact that cannot be challenged.

If one device does not deliver the reference values in the target, it is because it needs calibration or because a color modification (auto image enhancement) procedure is damaging fidelity.

It is true that each film can get a slightly different color interpretation, depending on the particular film and on the scanner's illumination and on-sensor dyes, but it's really easy to make a 3D conversion LUT to obtain a perfect match.

bob carnie
16-Jun-2018, 08:34
I find that with colour negative film and the Flextight it's a bit of a challenge getting the right balance. As some have noted, turn off the sharpening; I also go through the presets available for this scanner for different colour negative films. But it must be noted that different labs worldwide have different levels of quality control in their process, so a Fuji NC processed in one place does not have the same profile as a Fuji NC processed elsewhere. With that said, the balance should be in somewhat the same ball park.
Once I have found the profile I want to use and the sharpening is set to off, I set 16-bit RGB and the correct film-format crop.
Then one can go into the R, G and B channels and bring the endpoints in to the information area of the histogram; this usually brings the colour into decent shape, and then using the middle point I make minor midtone corrections.
Also available is the curve to adjust density and contrast, but frankly by this point these are minor corrections.
Usually this works well; save the settings, and then you can follow up with the rest of the negatives of this ilk.

I like to keep all the extreme adjustments for PS; I am looking for a good initial colour balance, density and contrast, not the final look. Chasing the final look in the scanner could be a problem, as PS is much better than the Flextight software for the final look.

Pere Casals
16-Jun-2018, 09:06
But it must be noted that different labs worldwide have different levels of quality control in their process, so a Fuji NC processed in one place does not have the same profile as a Fuji NC processed elsewhere. With that said, the balance should be in somewhat the same ball park.

I agree, a very important thing is how the negative scan is processed. Beyond that, each lab has a favourite color profile for each film, and they may also have a profile for each Pro photographer; this is especially true in the case of stellar wedding photographers, and Fuji/Portra 160/400.

http://www.johnnypatience.com/jose-villa-colors/
http://www.rebeccalily.com/fuji-pastel-colors/

Ted Baker
16-Jun-2018, 15:53
The reality is that if you scan an IT8 target you get exactly the same scan from the X5 as from the Epson V750, and exactly the same is exactly the same. This is a fact that cannot be challenged.

It's true you could use an IT8 target AND a colorimeter to reproduce a negative. But you would just have another negative.

Reproducing the results you would get from an RA-4 print (or, for that matter, from software designed for that purpose) is a little more complex... An IT8 target can be helpful here, just as a Macbeth ColorChecker is, but IT8 targets are NOT designed for this purpose.

IMHO this isn't a simple topic, and much of the knowledge is proprietary.

interneg
16-Jun-2018, 17:14
I find that with colour negative film and the Flextight it's a bit of a challenge getting the right balance...

Main issue seems to be that Flexcolor tries to deal with the mask as if it's just a global colour correction rather than a sample & divide before inversion - and doing the latter to a positive scan of the neg makes it surprisingly easy to match enlarger RA4 prints made by some of the better known current printers in London, largely because it gets you to a point where the fine colour corrections are much more darkroom like. Just to annoy those blathering on elsewhere in this thread about IT8's etc, it's possible to match a colour checker (even from an Ektar neg - allowing for the saturation difference!) from a Flextight scan without too much trouble.

cdavis324
16-Jun-2018, 17:36
You can always scan as "raw" and use colorperfect to invert the negative. It provides the best results with the least amount of work with almost any scanner. The flextight software is powerful, but can be time consuming to get the right color... You really have to go in and set curves, endpoints and such for each image to get it right with flexcolor. Colorperfect alleviates a lot of that work!

interneg
16-Jun-2018, 17:59
Colorperfect alleviates a lot of that work!

It doesn't, not if you want to get everything that the film is capable of.

Ted Baker
16-Jun-2018, 20:08
Just to annoy those blathering on elsewhere in this thread about IT8's etc, it's possible to match a colour checker (even from an Ektar neg - allowing for the saturation difference!) from a Flextight scan without too much trouble.

Why would you want to annoy anyone posting here ;)

IT8.7/2 is for reflective materials, to calibrate for the dyes in the print, which are designed on purpose to work under a wide variety of illuminants in the first place, so it is perhaps good enough compared to a Macbeth chart, which is designed specifically for colorimetric purposes. IT8.7/1 is for transmission targets, where the illuminant is more critical and the dye sets are more likely to suffer from metameric failure when using the wrong illuminant; they trade this for greater saturation and gamut. Metameric failure is the main reason you need a target with the correct dye set.

No one is arguing that these targets are not useful, and it is possible to match something that already exists, like a negative, if you take colorimetric measurements of the negative; but it is not quite as easy to match something that does not yet exist, such as the resultant print. The calculations and methods being largely proprietary.

Pere Casals
17-Jun-2018, 02:46
The calculations and methods being largely proprietary.

I'd say that the proprietary side of the methods is mostly down to practical issues, rather than to the nature of the calibration.

An 8-bit-per-channel calibration would require a 3D LUT with 256 positions in each direction of the cube; that is 256^3, so some 16 million cells, each of 4 bytes (3 + 1 dummy byte for memory alignment), so 64 MB. This is not much for present computers.

A 16-bit-per-channel 3D calibration would require 65536^3 x 8 bytes, which is a crazy big number (about 2 petabytes).

In the 8-bit-per-channel calibration you fill the 3D matrix with data from the calibration: from the scanner reading of each color patch you get the x,y,z coordinates of a cell, and you fill that cell with the reference values from the target.

Then we have to fill the rest of the cells that were not set with calibration data, which is all of the matrix except the calibrated reference points. This is a common interpolation/extrapolation that takes data from surrounding cells that do have reference values; the interpolation can be anything from a distance-weighted average to splines.

In this way any RGB combination from the sensor has an output value in the calibration 3D LUT. So that part is straightforward.
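
As a sketch of that fill step (hypothetical code, not any vendor's): scattered calibration patches, each pairing a scanner reading with the target's reference value, are spread over the whole grid with a simple inverse-distance weighting. A 32x32x32 grid keeps the example small; the 256-cube case works the same way.

// Hypothetical sketch: scattered calibration patches -> full 3D LUT
// via inverse-distance weighting. 32x32x32 grid as an example.
#include <array>
#include <vector>

constexpr int N = 32;
using Color = std::array<float, 3>;        // values in 0..1

struct Patch { Color in; Color out; };     // scanner reading -> target reference

std::vector<Color> buildLut(const std::vector<Patch>& patches)
{
    std::vector<Color> lut(N * N * N);
    for (int r = 0; r < N; ++r)
      for (int g = 0; g < N; ++g)
        for (int b = 0; b < N; ++b) {
            Color cell = { r / float(N - 1), g / float(N - 1), b / float(N - 1) };
            Color acc  = { 0.0f, 0.0f, 0.0f };
            float wsum = 0.0f;
            for (const Patch& p : patches) {
                float d2 = 0.0f;                       // squared distance in RGB
                for (int c = 0; c < 3; ++c) {
                    float d = cell[c] - p.in[c];
                    d2 += d * d;
                }
                float w = 1.0f / (d2 + 1e-6f);         // closer patches weigh more
                for (int c = 0; c < 3; ++c) acc[c] += w * p.out[c];
                wsum += w;
            }
            for (int c = 0; c < 3; ++c) acc[c] /= wsum;
            lut[(r * N + g) * N + b] = acc;            // cell now holds reference output
        }
    return lut;
}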

The problem comes when we want a 16-bit-per-channel calibration: we cannot use such an immense LUT!

There are 2 ways I know to handle that. One is moving to a logarithmic scale, in a way that does not degrade the calculations for the low values.

Another one is having a "virtual" 16-bit LUT inside an 8-bit LUT: you move each calibrated point to a cell by rounding its 16-bit value to an 8-bit RGB input value, and then calculate by interpolation what the reference (16-bit-per-channel) output would be at the new position.

With that calibration LUT, for each 16-bit-per-channel RGB entry point we interpolate between 8-bit-per-channel entry points that hold 16-bit-per-channel output values; this works perfectly.

I've been writing that kind of code in C++: for 16-bit-per-channel data with 8-bit LUTs on PCs, and for 8-bit data with 5-bit-per-channel LUTs (32x32x32 levels) on embedded microcontrollers.
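
For the lookup side, a minimal sketch of that "small LUT, 16-bit data" idea: each 16-bit-per-channel input is interpolated trilinearly between the surrounding cells of the coarse grid. Grid size and types match the previous sketch; this is not the actual PC or microcontroller code mentioned above.

// Sketch of the lookup: a 16-bit-per-channel input interpolated trilinearly
// between the coarse LUT cells; grid size and types match the sketch above.
#include <algorithm>
#include <array>
#include <cstdint>
#include <vector>

constexpr int N = 32;
using Color = std::array<float, 3>;

Color applyLut(const std::vector<Color>& lut,
               std::uint16_t r, std::uint16_t g, std::uint16_t b)
{
    std::uint16_t in[3] = { r, g, b };
    int   idx[3];
    float frac[3];
    for (int c = 0; c < 3; ++c) {
        float pos = in[c] / 65535.0f * (N - 1);        // map 0..65535 onto the grid
        idx[c]  = std::min(int(pos), N - 2);
        frac[c] = pos - idx[c];
    }
    Color out = { 0.0f, 0.0f, 0.0f };
    for (int dr = 0; dr < 2; ++dr)
      for (int dg = 0; dg < 2; ++dg)
        for (int db = 0; db < 2; ++db) {
            float w = (dr ? frac[0] : 1 - frac[0])
                    * (dg ? frac[1] : 1 - frac[1])
                    * (db ? frac[2] : 1 - frac[2]);
            const Color& v = lut[((idx[0] + dr) * N + idx[1] + dg) * N + idx[2] + db];
            for (int c = 0; c < 3; ++c) out[c] += w * v[c];
        }
    return out;                                        // calibrated colour, float precision
}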

yelmarb
17-Jun-2018, 03:04
If "sharpness does seem vary from neg to neg" this can be because curled negatives, you first can try to address that by trying wet mounting or by simply purchasing the V850 film holders that have an ANR glass to ensure film flatness and also to adjust film height.


I was looking at the ANR glass holders but I can only assume that on a 4x5 neg, they will be magnets for dust and lint. Of course you could fluid mount them as well, but then you have issues with air bubbles and dust etc on the mylar film. Have you had any experience with them?


In Photoshop you want to use the ProPhoto RGB colour space: sample the base density, use that to fill a new layer set to divide, then flatten & invert. Using a curves layer & clipping warnings, clip the black and white points in each of R, G and B. Now you'll have something that is very close to an accurate reproduction of colour, needing only fine colour corrections (a curves layer set to 'colour' blending) and a curves layer set to 'luminosity' blend mode for adjusting tonalities.

That's a great workflow for processing colour neg scans, thank you very much for that advice! My only question about this is that if you're using Flexcolor to scan with, wouldn't it be better to do this on the 3F raw file instead?

Pere Casals
17-Jun-2018, 03:18
I was looking at the ANR glass holders but I can only assume that on a 4x5 neg, they will be magnets for dust and lint.


To scan (and for other tasks) I'd recommend having a really dust-free environment. I scan in a small room that only has the scanner and a laptop, and I clean the air with a HEPA air cleaner, a Honeywell 16200. I start it 10 minutes before and try not to generate dust from clothes while scanning. No dust.



Have you had any experience with them?


This is straightforward, but it is a matter of practice and of watching some tutorials:

https://www.youtube.com/results?search_query=Fluid+Mount++Tutorial

interneg
17-Jun-2018, 03:51
That's a great workflow for processing colour neg scans, thank you very much for that advice! My only question about this is that if you're using Flexcolor to scan with, wouldn't it be better to do this on the 3F raw file instead?

My own experience is that the 3F is just a straight uninverted unadjusted .tiff file with a proprietary file name - and that making an adjusted 16-bit .tiff in Flexcolor gives a far better inversion - possibly for reasons to do with the analogue to digital conversion or something like that. Essentially, what I am doing in Flexcolor is making the file that a 3F should be, not what it actually is. After all, how many drum scanners offer anything other than a .tiff or similar file? Can't think of any that use proprietary file formats.

Ted Baker
17-Jun-2018, 05:11
I'd say that the proprietary side of the methods is mostly down to practical issues, rather than to the nature of the calibration.


I think the proprietary side starts with the entire analog process from Kodak, Fuji and whoever else is left. Neither of them publish the details really needed, though the process is explained in a few books by Kodak employees on how the negative/positive system has been designed. The best by far is Digital Color Management: Encoding Solutions by Giorgianni and Madden; it is not cheap but worth it if you want to understand the process better. On the software side, most of the solutions aren't all that well published. BTW the process that interneg refers to is almost the same as the one used by ColorPerfect (who do publish their algorithm). IMHO it's a very sound principle but misses a few calibration steps.

Plus of course in the heyday of analog, high end scanning was primarily focused on chromes; photographers who shot negative stock tended to make prints...

But back to calibration: putting aside any computational difficulties, in the case of a negative/positive system, what would you actually calibrate?

Pere Casals
17-Jun-2018, 12:50
Neither of them publish the details really needed

Let me tell my view.

IMHO the critical process happens at exposure time. That is when the spectral information hitting the negative at each image point is converted from an SPD into 3 values: after the first development of the C-41 you get in the negative 3 values of silver density, one in each of the 3 color layers. This is where true metamerism takes place! In the same way, a digital sensor converts the spectrum into 3 RGB voltage levels in the Bayer pixels, and the human eye takes the spectrum hitting a point and performs phototransduction (https://en.wikipedia.org/wiki/Visual_phototransduction): the chromophores in the RGB cones filter the spectrum and that ends in "3 electrical signals".

Kodak and Fuji publish that critical data, Portra 160:

[attachment 179470: Portra 160 spectral data]

Then you do the color development stage, and the silver is bleached. Those coupler dyes only work like an ICC profile; no information is lost at this stage. The spectral information was lost (reduced to 3 exposure values) at exposure time; the dye coupling only cooks the metameric information into a nice result, which is also linked to the particular RA-4 paper interpretation.

What I am saying is that there is no universal calibration that would make Velvia 50 portraits look as if Portra 160 had been used, because the captured information has already reduced the spectrums to 3 color levels.

In the same way, once the negative is developed we are playing with the metameric conversion already done: the spectrums in the negative come from 3 overlapped curves that can be higher or lower but always keep a similar shape, depending on the coupler dyes the film uses, so in the end we have 3 virtual levels for each point.

We are no longer playing with the spectrums hitting every image point, but with reduced metameric information.
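
As a toy numeric illustration of that reduction (the sensitivity curves are inputs; nothing here is real film data), each layer's exposure is just the scene SPD integrated against that layer's spectral sensitivity, so the whole spectrum collapses to three numbers:

// Toy illustration: each layer integrates the scene SPD against its own
// spectral sensitivity, so the whole spectrum collapses to three numbers.
// The curves are inputs here; nothing below is actual Portra 160 data.
#include <array>
#include <cstddef>
#include <vector>

// E_layer = sum over lambda of SPD(lambda) * S_layer(lambda) * delta_lambda
float layerExposure(const std::vector<float>& spd,
                    const std::vector<float>& sensitivity,
                    float stepNm)
{
    float e = 0.0f;
    for (std::size_t i = 0; i < spd.size() && i < sensitivity.size(); ++i)
        e += spd[i] * sensitivity[i] * stepNm;
    return e;
}

// three layers -> three exposure values: all that survives of the spectrum
std::array<float, 3> exposeOnFilm(const std::vector<float>& spd,
                                  const std::array<std::vector<float>, 3>& layerSens,
                                  float stepNm)
{
    return { layerExposure(spd, layerSens[0], stepNm),
             layerExposure(spd, layerSens[1], stepNm),
             layerExposure(spd, layerSens[2], stepNm) };
}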




what would you actually calibrate?

IMHO we have 2 cases.

a) Slides are a reference medium. A calibrated hybrid process should deliver an sRGB file that, shown on a calibrated monitor, is as close as possible to the slide viewed directly. Here we have a problem because some Velvia tones cannot be shown on a monitor. Also, a Velvia slide can reach 3.4D while a monitor usually has 2.0D of static contrast (and 200000000000000:1 of dynamic contrast :) )

b) C-41 is an industrial process intended to end in a print, but we need a post-process to adjust the result, so IMHO for a scanned color negative we have no reference. IMHO the only choice we have is to select a nice reference for each film type. This would mean taking sound prints from common shots and making the calibration from the print, or from a good digital edit of the scan, perhaps adding a smart equalization.

A C-41 negative to positive conversion could have some options: Standard, Landscape, Wedding José Villa, Vivid, Soft...

Ted Baker
17-Jun-2018, 20:05
Let me tell my view.

IMHO the critical process happens at exposure time. That is when the spectral information hitting the negative at each image point is converted from an SPD into 3 values: after the first development of the C-41 you get in the negative 3 values of silver density, one in each of the 3 color layers. This is where true metamerism takes place! In the same way, a digital sensor converts the spectrum into 3 RGB voltage levels in the Bayer pixels, and the human eye takes the spectrum hitting a point and performs phototransduction (https://en.wikipedia.org/wiki/Visual_phototransduction): the chromophores in the RGB cones filter the spectrum and that ends in "3 electrical signals".

Kodak and Fuji publish that critical data, Portra 160:

[attachment 179470: Portra 160 spectral data]

Then you do the color development stage, and the silver is bleached. Those coupler dyes only work like an ICC profile; no information is lost at this stage. The spectral information was lost (reduced to 3 exposure values) at exposure time; the dye coupling only cooks the metameric information into a nice result, which is also linked to the particular RA-4 paper interpretation.

What I am saying is that there is no universal calibration that would make Velvia 50 portraits look as if Portra 160 had been used, because the captured information has already reduced the spectrums to 3 color levels.

In the same way, once the negative is developed we are playing with the metameric conversion already done: the spectrums in the negative come from 3 overlapped curves that can be higher or lower but always keep a similar shape, depending on the coupler dyes the film uses, so in the end we have 3 virtual levels for each point.

We are no longer playing with the spectrums hitting every image point, but with reduced metameric information.





IMHO we have 2 cases.

a) Slides are a reference medium. A calibrated hybrid process should deliver an sRGB file that, shown on a calibrated monitor, is as close as possible to the slide viewed directly. Here we have a problem because some Velvia tones cannot be shown on a monitor. Also, a Velvia slide can reach 3.4D while a monitor usually has 2.0D of static contrast (and 200000000000000:1 of dynamic contrast :) )

b) C-41 is an industrial process intended to end in a print, but we need a post-process to adjust the result, so IMHO for a scanned color negative we have no reference. IMHO the only choice we have is to select a nice reference for each film type. This would mean taking sound prints from common shots and making the calibration from the print, or from a good digital edit of the scan, perhaps adding a smart equalization.

A C-41 negative to positive conversion could have some options: Standard, Landscape, Wedding José Villa, Vivid, Soft...

Pere, I think you are wide of the mark on a few points when talking about a negative/positive system. Digital Color Management: Encoding Solutions by Giorgianni and Madden is really worth the effort if you want to understand the science of negative scanning properly (the 1st edition is cheap second hand). The Reproduction of Colour in Photography, Printing and Television by Hunt, while old and pre-dating scanning, still explains some of the important differences between a chrome and a negative.

For example, RA-4 paper has a different spectral sensitivity from our own eyes; its peak sensitivity for the cyan-forming layer is different. This can be seen in the characteristic curves, which are measured using Status M (an arbitrary standard a little closer to our own eyes and to a typical scanner...) and come out as non-parallel lines. When a negative is printed on RA-4, the resultant printing density gives parallel lines for all three layers (assuming the negative is of a neutral grey subject shot under the correct illuminant).

In fact the negative/positive system is also subject to its own metamerism, unique to the negative/paper combination, i.e. two colors on the negative might appear different to our own eyes but the same to the paper.

I don't for one moment think that Kodak/Fuji - who understand these relationships, have almost certainly done many print density calculations, and have developed systems such as PhotoCD, Cineon and the Fuji Frontier - could not produce targets and tools that would help other vendors, or even photographers; hence my comment about it being largely proprietary...

letchhausen
17-Jun-2018, 23:07
You can always scan as "raw" and use colorperfect to invert the negative. It provides the best results with the least amount of work with almost any scanner. The flextight software is powerful, but can be time consuming to get the right color... You really have to go in and set curves, endpoints and such for each image to get it right with flexcolor. Colorperfect alleviates a lot of that work!

I did a whole show (18 prints) doing pretty much that. After the first couple, I used some presets in the FlexColor software and that got the negs inverted and most of the way there before I pulled them into PS for final color correction. I then dusted and resized output for lightjet prints (30x40 and 40x50) and they looked phenomenal. I tried using the X5 to produce .tiffs and I was never able to get them to color correct properly and they didn't seem as sharp so I went back to 3F.

I think this might be along the lines of what Bob Carnie was talking about as well.

Pere Casals
18-Jun-2018, 01:20
Pere, I think you are wide of the mark on a few points when talking about a negative/positive system.

In fact the negative/positive system is also subject to its own metamerism, unique to the negative/paper combination, i.e. two colors on the negative might appear different to our own eyes but the same to the paper.



Ted, I think I've not expressed well what I wanted to say. Let me try again.

It is true that the developed negative has a spectral nature, and that each stage (negative, RA-4, display illumination) has its own metamerism... no doubt.

But let's speak about encoded color information, because this has an impact on how we can calibrate a system.

We have CMY layers in the developed negative; let's take one layer, for example the magenta layer of Portra 160.

Can a spot in the M layer have just any transmission spectrum? No! We only have a collection of possible spectrums.

For each silver density (exposure) in the M layer (after 1st developer) we will have a unique corresponding transmission spectrum in the M layer.

This is a bijective function (https://en.wikipedia.org/wiki/Bijection), one to one, from a linear space of densities (real numbers) to a space of functions (spectrums). The same happens when the negative is RA-4 printed.

If you apply advanced math to that situation, it can be shown that a 3D LUT is exactly what maps the changes in the metameric information in all the processes happening after exposure. This includes the color development result in the C-41, an RA-4 printing process, scanning and color profiles, palettes and creative 3D LUTs. (Of course this does not consider local editing, dodging, masks etc, only global transformations in the color spaces.)

The books you point to are really interesting, but they explain (amazingly) how color was managed by analog means.

In the hybrid process we have the benefit of digitization, which allows for easy abstraction of the information.

Let me point to a hybrid case: Star Wars 7 and 8 were shot on film, and the first thing they did was scan the film; then they did the digital processing, and finally they also printed to film for the IMAX release. All the digital processing was done in an RGB space, ending in a printed film with a spectral complexity like the negative had, but nothing was lost when the spectral information of the Vision3 negative was reduced to 3 numbers. Any lost information was lost at exposure time.

Ted Baker
18-Jun-2018, 02:26
The books you point to are really interesting, but they explain (amazingly) how color was managed by analog means.



Giorgianni was the designer of Kodak's PhotoCD, so no, it's not just about analogue. I think you're missing a few pieces of the puzzle; you would enjoy that book and it would assist you greatly.




Let me point to a case: Star Wars 7 and 8 were shot on film, and the first thing they did was scan the film; then they did the digital processing, and finally they also printed to film for the IMAX release. All the digital processing was done in an RGB space, ending in a printed film with a spectral complexity like the negative had, but nothing was lost when the spectral information of the Vision3 negative was reduced to 3 numbers. Any lost information was lost at exposure time. They used Vision3 because it captured the information they liked.


Star Wars, like most films shot on film in the last twenty years, was scanned using the Kodak Cineon or DPX system. This is something I understand well, as it is one of the few processes that is very well documented. A Cineon scanner records what is referred to as printing density; it uses 10 bits per channel to record the density, and the density measurements are calibrated as if the negative were printed on Vision 2 print stock (the stock that existed at the time), resulting in a neutral grey on the print stock which can be measured using Status A measurements. These density measurements are not the same as Status M or typical measurements of the negative, but rather what the resultant print density would be. It was designed originally as an analogue-to-analogue system with a digital intermediate.

It is from these 10-bit density measurements that a CLUT can be used to convert to digital projection, or digital effects can be mixed in and the whole thing output back to 10-bit density measurements and printed on print stock.
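
As a rough sketch of that density encoding, using the commonly quoted Cineon figures (10-bit codes spaced at 0.002 printing density per step, reference black near code 95); treat the constants as assumptions rather than the official spec:

// Rough sketch with the commonly quoted Cineon figures: 10-bit code values
// spaced at 0.002 printing density per step, reference black near code 95.
// Treat these constants as assumptions, not the official specification.
#include <cmath>

// printing density recorded for a code value, relative to reference black
double printingDensity(int code, int refBlack = 95)
{
    return (code - refBlack) * 0.002;
}

// relative transmittance corresponding to that density (T = 10^-D)
double relativeTransmittance(int code)
{
    return std::pow(10.0, -printingDensity(code));
}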

I understand the concept of mapping, but what I was trying to point out is that you need more than an IT8 target (which is not designed for this purpose) to do it properly.

My own solution is based on the Cineon documentation, and I am currently attempting to use a Macbeth colour checker photographed and printed on RA-4 paper and measured with a colour meter, as well as comparisons with Fuji Frontier and Noritsu scans, to calibrate my own output.

Pere Casals
18-Jun-2018, 03:44
Well, an IT8 target calibrates the scanner's job; what you do is clearly beyond that, I understand. You are replicating the analog color processing for the hybrid workflow, a challenging task.

What I'm pointing out is that this task should be possible with a 3D LUT for each film/paper combo.

Pere Casals
18-Jun-2018, 03:55
...so I'd use 3D LUT Creator to map negative colors (IT8 calibrated) directly to the corresponding print colors (also IT8). This should work, but a particular 3D LUT would be needed for each negative/paper combination...

The generated 3D LUT would contain what the IT8 calibration cannot do...

Ted Baker
18-Jun-2018, 05:11
...so I'd use 3D LUT Creator to map negative colors (IT8 calibrated) directly to the corresponding print colors (also IT8). This should work, but a particular 3D LUT would be needed for each negative/paper combination...

The generated 3D LUT would contain what the IT8 calibration cannot do...

Putting aside that IT8 targets made on negative stock don't exist (you could make your own with a colorimeter): how does your process account for the print exposure?

In reality it's a multistage process; what you're starting to describe is now more involved than what you first suggested ;-)

But you're on the right track for sure.

Pere Casals
18-Jun-2018, 15:52
Putting aside that IT8 targets made on negative stock don't exist.

IMHO this is irrelevant: common scanners work nearly the same, so we have a consistent starting point with the scanned negative image and a consistent end point with the scanned print.

The process can be calibrated from reference prints that come from a range of practical situations.

Of course we can also make reference negatives/prints by using color checkers, with different exposures.



How does your process account for the print exposure?


IMHO the conversion should be assisted, just as the analog printing process is. The calibration of the conversion can be done with a negative exposed in standard conditions and printed optimally, but everyday conversions can be assisted by the user adjusting exposure and CMY color balance, as is done in the darkroom. Those adjustments would modify the negative image before the 3D LUT is applied.





In reality it's a multistage process; what you're starting to describe is now more involved than what you first suggested ;-)


I reiterate what I said in my first suggestion: each stage can be mapped with a single 3D LUT, but a number of sequential 3D LUTs can be combined into a single 3D LUT, so my suggestion is to consider the full process as a black box. Then we only need to find a 3D LUT that works as the transfer function of that black box, i.e. generate the LUT that takes the transmission spectrum of the reference negative and delivers the scanned reference print.

This is straightforward; today we have tools that generate a 3D conversion LUT if we have the input image and the output result we want.
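
As a hypothetical sketch of that "combine sequential LUTs into one" idea: evaluate the downstream LUT at every grid node's output of the upstream LUT, so the whole chain bakes into a single grid (grid size and types as in the earlier sketches):

// Hypothetical sketch: bake second(first(x)) into one LUT by evaluating the
// downstream LUT at every grid node's output of the upstream LUT.
#include <algorithm>
#include <array>
#include <vector>

constexpr int N = 32;
using Color = std::array<float, 3>;   // values in 0..1
using Lut   = std::vector<Color>;     // N*N*N entries, index (r*N+g)*N+b

// trilinear sample of a LUT at an arbitrary 0..1 colour
static Color sample(const Lut& lut, const Color& c)
{
    int   idx[3];
    float f[3];
    for (int k = 0; k < 3; ++k) {
        float p = std::clamp(c[k], 0.0f, 1.0f) * (N - 1);
        idx[k] = std::min(int(p), N - 2);
        f[k]   = p - idx[k];
    }
    Color out = { 0.0f, 0.0f, 0.0f };
    for (int dr = 0; dr < 2; ++dr)
      for (int dg = 0; dg < 2; ++dg)
        for (int db = 0; db < 2; ++db) {
            float w = (dr ? f[0] : 1 - f[0]) * (dg ? f[1] : 1 - f[1]) * (db ? f[2] : 1 - f[2]);
            const Color& v = lut[((idx[0] + dr) * N + idx[1] + dg) * N + idx[2] + db];
            for (int k = 0; k < 3; ++k) out[k] += w * v[k];
        }
    return out;
}

// combined LUT behaving like "apply first, then second"
Lut composeLuts(const Lut& first, const Lut& second)
{
    Lut combined(N * N * N);
    for (int r = 0; r < N; ++r)
      for (int g = 0; g < N; ++g)
        for (int b = 0; b < N; ++b)
            combined[(r * N + g) * N + b] = sample(second, first[(r * N + g) * N + b]);
    return combined;
}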

interneg
18-Jun-2018, 17:19
My own solution is based on the Cineon documentation, and I am currently attempting to use a Macbeth colour checker photographed and printed on RA-4 paper and measured with a colour meter, as well as comparisons with Fuji Frontier and Noritsu scans, to calibrate my own output.

You've also got to remember that cinema print stock is colour timed for a xenon arc lamp, not daylight...

More importantly, are you making those RA4's optically or via scanning? Both of those minilab scanners (from my own experience matching Flextight scans to them and to optically printed darkroom RA4's) significantly cripple certain colours, especially at the saturated ends of the blue/yellow continuum ('b' in LAB) - probably for perfectly reasonable production-related reasons which may have more to do with sociological definitions of 'what a colour negative print should look like' and with minimising the chances of significant user error by a semi-skilled operator - after all, there is a big price difference between a master darkroom print & a print off a Frontier. That said, these machines can do a pretty excellent job of getting rid of the mask correctly, though the subsequent 'looks' they bake in may have more to do with what 'looks like it could be correct' than with what the film actually has to offer. The density & colour are then internally altered to minimise the chances of mismatch with the output media, no matter what the output media's possible gamut is - indeed, I recall that there's quite a lot of research going all the way back to the early days of mass market colour printing which effectively states that people will often accept quite a range of colour, as long as the density is correct.

I get the sense that you are making this exercise significantly more complex than it needs to be - possibly because you are struggling with compromised scans. Fundamentally, it's relatively simple (within the context, making the film & paper emulsions work correctly together was drastically more difficult - hence the need for the mask in the first place!): the software has to remove the mask in much the same way as chromogenic paper would 'see' the negative, and from then onwards it's up to someone skilled at colour correction (which is not a problem in cinema or higher end photography). And it's that problem of skill & the cost thereof that leads to tightly calibrated compromises in systems aimed at reducing the chances of gross operator error.

As to Pere's comments about the 'looks' from Noritsu/ Frontiers, even with a limited range of choices (say 5-7) for perhaps 5-7 controls, you very rapidly end up with a couple of thousand possible combinations, very few of which are desperately difficult to reproduce. Indeed, they often seem to have more in common with the tendency to colour grade cinema to intentional colour casts, such that when a 'straight' print/ inversion of the neg is made (respecting the colour balance of the film) it actually looks odd because we've been so conditioned to expect 'off' colours.

calebarchie
18-Jun-2018, 18:11
Interneg please, I am enjoying this pissing match in every scanning related thread ;)

Ted Baker
19-Jun-2018, 01:12
IMHO this is irrelevant: common scanners work nearly the same, so we have a consistent starting point with the scanned negative image and a consistent end point with the scanned print.


I am sure you can use them, just as you can use 400 ISO film for something other than "sports and action". Maybe an Ektachrome target has the same dyes as Kodak negative stock, or close enough. But perhaps others are interested in how they are supposed to work; I certainly am.



I reiterate what I said in my first suggestion: each stage can be mapped with a single 3D LUT, but a number of sequential 3D LUTs can be combined into a single 3D LUT, so my suggestion is to consider the full process as a black box. Then we only need to find a 3D LUT that works as the transfer function of that black box, i.e. generate the LUT that takes the transmission spectrum of the reference negative and delivers the scanned reference print.


I agree, and it is a very useful technique to compute a combined LUT for the multiple stages for each value in your input range, i.e. make 65536 calculations instead of 50 million for a 50-megapixel scan. If I misinterpreted your first post then sorry; it seemed a little simpler than the task at hand. It is also useful to bring the image into a linear colorspace if you intend to use a modern editor, even if the output is not expected to be linear.



This is straightforward; today we have tools that generate a 3D conversion LUT if we have the input image and the output result we want.


If you mean matching the specific output from a scanner/software combination when you have access to the same negative - i.e. someone gives you the finished file from an "acmetight" scanner processed with "acmeshop", and they also give you the negative - that is straightforward. But that is just fancy copying.


You've also got to remember that cinema print stock is colour timed for a xenon arc lamp, not daylight...


And they even turn the lights off when you view it... ;) Yes, the print stock (and camera stock) is definitely engineered differently to make the most of a different viewing environment, but a lot of the technology is the same; plus it is well documented and there is more than one vendor making the equipment.


More importantly, are you making those RA4's optically or via scanning?


No printed myself, borrowed a meter, will buy one later. With the Noritsu and Frontier scans you have everything in the file. At some point I would like to try the Flexcolor software to see how it works. I was disappointed to hear remarks in this thread that the Flexcolor software isn't all that special with regard to negative processing.


I get the sense that you are making this exercise significantly more complex than it needs to be - possibly because you are struggling with compromised scans.

Perhaps, but like the many carpenters, machinists etc. who build their own view cameras while there are perfectly serviceable cameras available, they have a reason for doing so. I am currently interested in image processing.

Pere Casals
19-Jun-2018, 01:39
You've also got to remember that cinema print stock is colour timed for a xenon arc lamp, not daylight...

Interneg, this is true, but that problem it's very easy to overcome. As the xenon arc lamps have a well continuous 6200K (around) color temperature spectrum... the problem can be addressed with a plain color temperature correction after the calibration stage that is not to degradate the information quality at all.