
View Full Version : Aztek Premier 16 bits, really?



onnect17
12-Feb-2014, 21:20
On many sites I see the claim that the Aztek Premier scanner is a "16 bit" scanner. All I see on the board is an AD9220AR, which is a 12-bit ADC, not a 16-bit one. Any comments?

ramon
13-Feb-2014, 01:43
On many sites I see the claim that the Aztek Premier scanner is a "16 bit" scanner. All I see on the board is an AD9220AR, which is a 12-bit ADC, not a 16-bit one. Any comments?

Hi onnect17,

Thank you for sharing with us the ADC IC component.

I think they say it's a "48 bit workflow" scanner and that the "output" file is 16-bit RGB.

http://www.aztek.com/premier.html
http://aztek.net/Mailers/Feb%2007/Premier.pdf
http://www.prepressexpress.com/pages/scanning/scanning/faq.html#Aztek_Premier_Spec

Just add 4 zeros to a 12-bit value, and you have a "16-bit separate RGB output". ;-)
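
In C, that trick would look something like this (a minimal sketch, obviously not Aztek's actual firmware):

#include <stdint.h>

/* Promote a 12-bit ADC code to a "16-bit" output word by appending
   four zero bits. The range grows to 0..65520, but no information
   is added. */
uint16_t pad_12_to_16(uint16_t adc12)   /* adc12 in 0..4095 */
{
    return (uint16_t)(adc12 << 4);
}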

Lenny Eiger
13-Feb-2014, 12:01
Just checked in with Aztek, spoke to Haddon. They say the 4500 and lower are in fact 12 (36) bit, some of the larger ones (7500?) are 42 bit, and the Premier is full 16 bit. As far as what chip is used, I have no idea.

I would call them and ask. They always pick up the phone.

Lenny

onnect17
13-Feb-2014, 16:30
A picture is worth 1000 words...

[attached photo: the Premier's main board]

ramon
13-Feb-2014, 21:20
A picture is worth 1000 words...


True. Be careful and turn off the flash when taking pics. Also be careful when handling the board; those ICs are ESD (electrostatic discharge) sensitive devices.

Thank you for sharing this. It's quite interesting to see how it works internally; there is no such information on the internet.

They use two logarithmic amplifiers (AD640). Both of them cost more than 8 times the ADC price. I cannot tell from the board whether they are cascaded for a wider dynamic range. If so, maybe today they could use a single AD8310 and save more than $90 USD.

onnect17
14-Feb-2014, 01:00
The Field Service Guide is not available, so I have to open it up to get familiar with it. I wish Howtek's way of holding the main board were less exposed to mechanical stress. I can see some users having trouble with temperature changes or simply with moving the scanner.

8x10 user
14-Feb-2014, 09:30
The manual (http://www.aztek.com/Howtek%20Pages/Guides/Scanners/8000_Sprint.pdf) for the HR8000 shows a 12 bit DAC.

onnect17
14-Feb-2014, 10:52
The manual (http://www.aztek.com/Howtek%20Pages/Guides/Scanners/8000_Sprint.pdf) for the HR8000 shows a 12 bit DAC.

I guess you are referring to the ADC, not the DAC. I started to read the user guide for the Premier and I had to put it away. Too many errors/inconsistencies. I checked the Aztek site and there was nothing there. I emailed Evan at Aztek thinking I was reading some beta version (revision B), and to my surprise his reply confirmed I was reading the latest version.

Then I downloaded the HR8000 user guide and realized almost all the errors are carried over from the HR8000. Perhaps my expectations were set too high after reading many of the posts from Phil at the Yahoo group.

Daniel Stone
14-Feb-2014, 12:55
Well, whatever it is (12-bit, 14-bit, or 16-bit), my DPL8000 (between the Premier and HR8000) puts out marvelous scans of pretty much everything I throw at it. No complaints whatsoever in terms of file quality, but I do wish the drums could be a bit larger so mounting 8x10 film would be easier.

8x10 user
14-Feb-2014, 14:03
I think there was supposed to be something special about the way DPL works, where it can bypass the ADC and go right to the log amps. Somehow information is uploaded into the scanner regarding each scan's parameters, so the scan is more optimized than a RAW scan approach. Supposedly this was part of the magic of Phil's innovations and perhaps one of the scanner's patents. It's been a while since I researched this subject.

I used a DPL8000 and it did nice scans. It was a bit hard to mount an 8x10 on the drum, and I ended up selling it on this forum many years ago. While it is a nice scanner, it was somewhat oversold in some regards. The 3 micron aperture, for example, is not useful for any color film types; Kodachrome works with the 6 micron aperture, but for most color films an aperture of 8-14 microns is ideal. When you go smaller than the grain clump size you start to get grain aliasing. Also, the D-max of all three lines is a strict 3.88, and some scanners are a bit better in the shadows of very dense film. So there might not be a real quality advantage between the Aztek and many other well-tuned high-end scanners for many large format scans. However, for very thin black and white negatives and the sharpest of 35mm and medium format images, the smaller aperture selection, optimized endpoints and curves would help bring out faint details.

I did some sample scans on an Aztek Plateau for Harvard Medical of the thinnest and sharpest negs that I have ever seen. I think these were images of a virus shot with a gamma or electron microscope film camera. The scans came out very good, and my contact was excited when he saw structures that he said were 1 atom wide. They purchased the scanner, although I recommended a 3 micron drum scanner for their specific application. The Plateau was faster and useful for them either way. The resolution is fixed on the Plateau, and you have to manually stitch the images for the highest resolution scans.

The Creo Supreme, the I, II & Select, and the IQ3 are scanners that are known to have a 16-bit ADC.

BetterSense
14-Feb-2014, 15:00
I know nothing about the scanners being discussed, but I will point out the problems with using the part number of the ADC chip to predict the resolution of the scanner. The size of the ADC doesn't necessarily mean the effective number of bits of the whole scanner is the same. It is possible to trade speed for resolution by oversampling; I routinely get a solid 12 bits out of 10-bit microcontroller ADCs, for example. And of course the opposite is possible as well.
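
To make that concrete, here is a standalone simulation sketch in C (my illustration; the signal level and noise model are invented, not from any scanner):

/* Quantize a DC level with an ideal 10-bit ADC in the presence of
   ~1 LSB of noise, sum 4^2 = 16 samples, and shift right by 2 to
   keep two extra bits (decimation). */
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>

#define LSB10 (1.0 / 1024.0)            /* 10-bit quantization step */

static uint16_t adc10(double v)         /* ideal 10-bit ADC, input 0..1 */
{
    int code = (int)(v * 1024.0);
    if (code < 0) code = 0;
    if (code > 1023) code = 1023;
    return (uint16_t)code;
}

int main(void)
{
    double level = 0.500400;            /* sits between two 10-bit codes */
    uint32_t sum = 0;
    for (int i = 0; i < 16; i++) {
        /* broadband noise of roughly +/- 1 LSB acts as dither */
        double n = ((double)rand() / RAND_MAX - 0.5) * 2.0 * LSB10;
        sum += adc10(level + n);
    }
    /* 16 ten-bit codes sum to 14 bits; >>2 leaves a 12-bit result */
    printf("12-bit result: %u, plain 10-bit: %u\n",
           (unsigned)(sum >> 2), (unsigned)adc10(level));
    return 0;
}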

Leigh
14-Feb-2014, 21:55
A picture is worth 1000 words...
It certainly is.

The AD9220AR is a 12-bit A-to-D converter.
[attached excerpt: AD9220 datasheet specifications]

The entire datasheet from Analog Devices is here: http://www.analog.com/static/imported-files/data_sheets/AD9221_9223_9220.pdf

Also, the Integral Nonlinearity error is 0.5 LSB, while the Differential Nonlinearity error is 0.3 LSB.
This means the entire Least Significant Bit is garbage, so the device is only good for 11 bits in a real circuit, not 16.

- Leigh

onnect17
15-Feb-2014, 03:07
I know nothing about the scanners being discussed, but I will point out the problems with using the part number of the ADC chip to predict the resolution of the scanner. The size of the ADC doesn't necessarily mean the effective number of bits of the whole scanner is the same. It is possible to trade speed for resolution by oversampling; I routinely get a solid 12 bits out of 10-bit microcontroller ADCs, for example. And of course the opposite is possible as well.

Oversampling, at least in my book, will never give you a real extra bit of precision. It is more like an accepted method of estimation. But that's a different subject.

Now, if my numbers are correct, a raster line in the Premier will demand around 2 Msamples/sec. There is only so much you can do with a 386SX CPU handling that data.

BetterSense
15-Feb-2014, 07:07
[attached slide: oversampled temperature data from a brewing controller]
Oversampling, at least in my book, will never give you a real extra bit of precision. It is more like an accepted method of estimation.

Your book may be out-of-date. I'm not sure what you are basing your criteria for bit "realness" on. Hopefully you don't ascribe extra levels of realness to bits described in datasheets. The realness of bits can only be determined by how well they describe reality. I assure you that oversampling techniques can return an effective number of "real" bits that is "really truly" in excess of those provided by the hardware. In fact, it's possible to get arbitrarily high precision out of a 1-bit ADC; not for free, but at the cost of bandwidth. And of course it's possible to get an arbitrarily low resolution out of any ADC.

The entire field of analog-to-digital conversion itself can be described as "accepted methods of estimation".

I know nothing about the Aztek scanners or how they are designed. I'm just pointing out the problems with looking at the ADC chip on the board of a scanner and using that by itself to make any inferences about the effective resolution of the scanner system, which could be lower than, equal to, or greater than the "datasheet resolution" of the ADC chip.

I attached a slide showing how I put oversampled bits to good use in brewing beer. Thanks to oversampling, you can see the cycloid-pattern signal caused by my circulation pump kicking on and off, all happening within the space of the ADC LSB. The extra bits are quite real.

Leigh
15-Feb-2014, 11:31
Oversampling, at least in my book, will never give you a real extra bit of precision.
That's correct, in any book.

Unfortunately, the digital world is full of techniques that claim to increase the level of accuracy.
It's all smoke and mirrors.

- Leigh

Leigh
15-Feb-2014, 11:34
I'm not sure what you are basing your criteria for bit "realness" on.
That's probably the most bizarre statement I've ever read.

- Leigh

BetterSense
15-Feb-2014, 12:16
That's probably the most bizarre statement I've ever read.

It's not bizarre. The person I quoted stated that bits of resolution acquired through oversampling are not "real bits". I believe I understand his meaning perfectly; he means the oversampled bits are not "real" in that they don't add real accuracy or don't describe reality well. But he is wrong to make a blanket statement like that, because extra bits acquired through oversampling techniques can be just as accurate ("real") as those acquired from a hardware ADC. In fact you can create an ADC of arbitrary precision (ignoring bandwidth considerations) with only one bit of hardware resolution. So either onnect17 is misinformed, or he has a different definition of "real" in mind when he says "oversampling cannot provide real extra bits".



That's correct, in any book.

Except any basic textbook on sampling theory or signals.

There is plenty of smoke and mirrors to go around in both analog and digital techniques, but oversampling techniques are hardly smoke and mirrors, just basic sampling and signal theory. Yes, you really can get resolution below the resolution of an ADC via oversampling. I do it all the time, and the data acquired thusly is very "real". Consumer audio devices long ago went to 1-bit DACs run at ~MHz rather than 16-bit DACs run at 22kHz. Delta-sigma modulation (as used by the prized Super Audio CD format) can be called a 1-bit technique as it stores signals in 1-bit pulse frequency modulation, encoding a practically arbitrary amount of bit depth with single-bit pulses. Is all the resolution of SACD "fake"? You can't say that oversampling is "smoke and mirrors" when the principles are in operation in millions or billions of electronic devices.
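
To see how little hardware that takes, here is a toy first-order delta-sigma modulator in C (a sketch of the principle only, not any product's actual design):

/* Encode a DC input in 0..1 as a 1-bit stream whose average equals
   the input; the decoder is just a counter (a lowpass filter). */
#include <stdio.h>

int main(void)
{
    double input = 0.3721;          /* arbitrary level to encode */
    double integ = 0.0;             /* integrator state */
    long   ones  = 0;
    const long N = 100000;          /* heavy oversampling */

    for (long i = 0; i < N; i++) {
        int bit = (integ >= 0.0);           /* 1-bit quantizer */
        integ += input - (bit ? 1.0 : 0.0); /* feedback loop */
        ones  += bit;
    }
    printf("decoded: %f\n", (double)ones / N);  /* ~0.3721 */
    return 0;
}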

Leigh
15-Feb-2014, 12:19
It's not bizarre. The person I quoted stated that bits of resolution acquired through oversampling are not "real bits". I believe I understand his meaning perfectly; he means the oversampled bits are not "real" in that they don't add real accuracy or don't describe reality well.
OK. I certainly agree with that statement.

Manufactured bits are not "real" bits, regardless of the method or technology used to create them.

I apparently misunderstood your previous post. My apologies.

- Leigh

BetterSense
15-Feb-2014, 12:52
What does it mean to "manufacture a bit"?

Leigh
15-Feb-2014, 15:48
It means to create one or more bits by a technique unrelated to the basic conversion technique.

For example, you can create a bit by taking the average of two consecutive LSBs, and claiming that you have more resolution.

That's total nonsense, since the LSB is not significant in the first place.

- Leigh

8x10 user
15-Feb-2014, 16:55
So I think DPL is supposed to upload custom curves / endpoints into the ADC. Is that possible?

Leigh
15-Feb-2014, 17:00
So I think DPL is supposed to upload custom curves / endpoints into the ADC. Is that possible?
That would apply to how the output of the ADC is used by the other circuitry or processor.
The ADC itself (the IC) is not programmable, but the support circuitry might be.

It would be analogous to changing the contrast curve in a digital printing or display program.

- Leigh

onnect17
15-Feb-2014, 18:10
I was trying to avoid driving the thread to this subject but here we go…

My book is very old, but it sure starts with the Nyquist–Shannon sampling theorem (1948).
Oversampling has many uses, and the most common is noise filtering (in our world, scanner software sometimes calls it multisampling). I'm in fact using it to filter the data from a couple of temp sensors in a CPE2 mod.
Another use is signal estimation. I say estimation because you are "predicting, guessing" the value of the signal under, AND ONLY UNDER, some conditions. Here are a couple of assumptions:

1. You are sampling the signal at least 4 times faster than the Nyquist requirement for each of what you call an extra bit. Actually, that alone introduces noise.

2. The signal cannot change during oversampling (not kidding!). The assumption of the probability of the LSB being 0 or 1 has to be the same. That’s the base of the “whole thing”.

Take a look at this rough hand drawing. It tries to show sampling (S) and oversampling (numbers) of the step function at 4x and 8x.

[attached hand drawing: sampling vs. 4x and 8x oversampling of a step function]

After processing the data with 4x oversampling you will get a ramp, not a step. And as you increase the sampling rate the response is even worse, a nicer ramp!

That’s why I do not use oversampling as a “bit creator”. From the standpoint of image processing you will be applying some “blurring” effect. Pretty much, filtering.

There are a thousand ways to increase the resolution via hardware using cheaper ADCs, a lot safer and cleaner than oversampling. Simple interpolation methods would also introduce a smaller error.
I sure would not use it as a tool to improve resolution, even as part of an Arduino project.

Again, that is just my opinion.

onnect17
15-Feb-2014, 19:08
It's not bizarre. The person I quoted stated that bits of resolution acquired through oversampling are not "real bits". I believe I understand his meaning perfectly; he means the oversampled bits are not "real" in that they don't add real accuracy or don't describe reality well. But he is wrong to make a blanket statement like that, because extra bits acquired through oversampling techniques can be just as accurate ("real") as those acquired from a hardware ADC. In fact you can create an ADC of arbitrary precision (ignoring bandwidth considerations) with only one bit of hardware resolution. So either onnect17 is misinformed, or he has a different definition of "real" in mind when he says "oversampling cannot provide real extra bits".




Except any basic textbook on sampling theory or signals.

There is plenty of smoke and mirrors to go around in both analog and digital techniques, but oversampling techniques are hardly smoke and mirrors, just basic sampling and signal theory. Yes, you really can get resolution below the resolution of an ADC via oversampling. I do it all the time, and the data acquired thusly is very "real". Consumer audio devices long ago went to 1-bit DACs run at ~MHz rather than 16-bit DACs run at 22kHz. Delta-sigma modulation (as used by the prized Super Audio CD format) can be called a 1-bit technique as it stores signals in 1-bit pulse frequency modulation, encoding a practically arbitrary amount of bit depth with single-bit pulses. Is all the resolution of SACD "fake"? You can't say that oversampling is "smoke and mirrors" when the principles are in operation in millions or billions of electronic devices.

You are mixing two things here. Acquisition is not output/reproduction.

Those 1-bit DACs have a hell of a filter before the final output. But that's what it is: output.
Acquisition is a different story. Sure, you could sample with a 1-bit ADC and the audio devices would be a lot cheaper, but in reality what those devices are doing is converting the signal to a PWM equivalent; it's not a 1-bit ADC.
CDs and DVDs use the same method to store info. Quality-wise, there's a lot of work to do. I would love to see those sound recording studios trade their 24-bit devices for 1-bit-ADC-based recording systems. Not for now.

BTW, I still prefer LPs. I think the sound is cleaner, especially in the high frequencies.

Of course, you could print with black ink only. Most newspapers do. But I sure prefer the look of the B&W out of the 3800. How about color? Are you saying that the printing systems are wasting their time adding extra color inks to the printers? After all, they could achieve exactly the same gamut just by dropping more dots of CMY.

sanking
15-Feb-2014, 19:54
"The assumption of the probability of the LSB being 0 or 1 has to be the same. That’s the base of the “whole thing”.

Have you considered "qubits" instead of regular bits? There the LSB could be 0 and 1 simultaneously, which could smooth out the ramp a lot!

Sandy

BetterSense
15-Feb-2014, 19:56
After processing the data with 4x oversampling you will get a ramp, not a step. And as you increase the sampling rate the response is even worse, a nicer ramp!

That’s why I do not use oversampling as a “bit creator”. From the standpoint of image processing you will be applying some “blurring” effect. Pretty much, filtering.

You are making a conceptual error here. Oversampling is not upsampling. According to your reasoning, oversampling a signal provides the same result as sampling normally and then "upsampling" the result. If you think that, then I see where you are coming from, but it's not accurate, because oversampling is totally different from upsampling or filtering.

Oversampling provides real information about the signal that could never be obtained with filtering. If you look at my temperature data this should be obvious. If you look at the raw 10-bit ADC data from this system with normal sampling, it is a flat line. There is no filtering that you could ever apply to that normally-sampled data that will conjure up any new information. Performing oversampling on the other hand allows 2 extra bits of real resolution, as you can plainly see from the data.

If you sample a linearly ramping signal, both oversampling and upsampling will give you the same result but that's purely incidental. You are totally correct about upsampling by the way.


2. The signal cannot change during oversampling (not kidding!). The assumption of the probability of the LSB being 0 or 1 has to be the same. That’s the base of the “whole thing”.

It is meaningless to talk about signals that don't change. What is true is that the sampling bandwidth must be higher than the signal bandwidth. Technically, according to Nyquist, f_sample must be greater than 4^N * f_Nyquist to gain N oversampled bits.

For oversampling to work you also need some noise with an amplitude of at least the LSB in the frequency range around f_sample/4^N. This is rarely a problem because systems have broadband noise, but you can always add some. As has been correctly pointed out, it is common for hardware designers to "over-spec" an ADC such that the lowest bit is already "in the noise" and meaningless (so much for hardware-derived bits being "more real"). This is a perfect situation for oversampling, which can extract that last bit of info and as many more bits as desired, given the practicalities.


Sure, you could sample with a 1-bit ADC and the audio devices would be a lot cheaper, but in reality what those devices are doing is converting the signal to a PWM equivalent; it's not a 1-bit ADC.

Moving the goalposts? Converting an analog voltage level to PWM is exactly a 1-bit ADC with oversampling. The classic way to generate the PWM equivalent of an analog waveform is with a comparator against a reference triangle wave. This is 1-bit analog-to-digital conversion; the injected "noise signal" in this scenario is the triangle wave, with amplitude equal to the LSB (the only bit!). If you have enough bandwidth, you can encode any signal that way to arbitrary precision. The only thing that changes if you use a multi-bit ADC vs a single-bit one is that you have to oversample relatively less with the multi-bit ADC!
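
A sketch of that comparator idea for a DC level (the numbers are invented for illustration):

/* One comparator against a rising reference ramp (half of a triangle
   wave): the fraction of '1' outputs over one period recovers the
   level with about log2(N) bits of resolution. */
#include <stdio.h>

int main(void)
{
    double level = 0.6180;            /* analog input, 0..1 */
    const int N = 4096;               /* comparator samples per period */
    int ones = 0;

    for (int t = 0; t < N; t++) {
        double ramp = (double)t / N;  /* reference ramp */
        ones += (level > ramp);       /* 1-bit comparator output */
    }
    printf("recovered: %f\n", (double)ones / N);  /* ~0.618, ~12 bits */
    return 0;
}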

onnect17
15-Feb-2014, 21:56
I'm sure you could ask 100 engineers and each would show you a different solution. Good for you if you want to oversample 200 times. I'm sure many others would implement the A/D conversion themselves using a D/A, and most likely faster. In any case, let's see the response to the step function with real numbers.
Suppose the input signal is
Suppose the input signal is

100, 100, 100, 100, 100, 100, 100, 100, 101, 101, 101, 101, 101, 101, 101, 101

With 1x sampling the output will be 100, 101.

Would you mind showing your calculations after 8x oversampling? Let's see what I'm missing.

onnect17
15-Feb-2014, 22:19
"The assumption of the probability of the LSB being 0 or 1 has to be the same. That’s the base of the “whole thing”.

Have you considered "qubits" instead of regular bits. There the LSB could be 0 or 1 simultaneously, which could smooth out the ramp a lot!

Sandy

Another way to look at it is as the "complex number" equivalent of the bit. ;-)

BetterSense
15-Feb-2014, 22:22
I don't understand the problem you are posing. What are the numbers supposed to represent?

onnect17
15-Feb-2014, 22:59
I don't understand the problem you are posing. What are the numbers supposed to represent?

OK, let's try more familiar numbers. You're sampling the temperature from a sensor. Here's a stream of 8x sampled data using your 10-bit ADC.

28.5, 28.5, 28.5, 28.5, 28.5, 28.5, 28.5, 28.5, 29.0, 29.0, 29.0, 29.0, 29.0, 29.0, 29.0, 29.0

Now you apply some calculations to obtain values equivalent to a 12-bit ADC. Would you mind sharing the math behind the calculations?

ramon
15-Feb-2014, 23:36
BetterSense,

The 12-bit AD9220 ADC (max 10 Msps), at normal drum speeds (around 900 rpm) and usual resolutions (2000 ~ 4000 dpi), cannot use oversampling to get 4 extra bits.

Do the maths. I have done it.

ramon
15-Feb-2014, 23:39
Drum speed: 900 rpm (15 rps)
Drum linear distance per revolution: 12" (304 mm)
DPI: 4000 dpi

15 rps * 12" * 4000 dpi = 720,000 samples per second

Oversampling needed to get 16 bits from a 12-bit ADC:

f_oversampling = 4^(4 bits) * f_sampling
f_oversampling = 256 * f_sampling
f_oversampling = 256 * 720,000 sps
f_oversampling = 184,320,000 sps (184 Msps)

So it needs 184 million samples per second. The maximum speed of the AD9220 is 10 Msps.

That is 18.4 times slower than the required oversampling rate.

End of oversampling discussion.

ramon
15-Feb-2014, 23:41
Well, whatever it is (12-bit, 14-bit, or 16-bit), my DPL8000 (between the Premier and HR8000) puts out marvelous scans of pretty much everything I throw at it. No complaints whatsoever in terms of file quality, but I do wish the drums could be a bit larger so mounting 8x10 film would be easier.

Yes. Forget about marketing numbers. In the end, this is what matters!

Leigh
16-Feb-2014, 00:08
After processing the data with 4x oversampling you will get a ramp, not a step. And as you increase the sampling rate the response is even worse, a nicer ramp!
So by oversampling, you've changed a step into a ramp.

OK, except...
The real data is a step, so by oversampling you've introduced an error that increases with sample rate.

Thus, by oversampling you've increased the error, not the accuracy.

Also, realize that oversampling only applies to designing an ADC in the first place.

If you're using a 12-bit ADC, there's nothing you can do to increase its ACCURACY, only its resolution.

The LSB of the 12-bit data word is rendered meaningless by the errors within the device itself.
Thus any bits below that are of no significance. This is all per the manufacturer's datasheet linked earlier.

Oversampling serves to increase the temporal accuracy of a transition, which may be important.
For example, if you take 1 sample per second, you can determine the transition time ±1 second.
If you take 1000 samples per second, you can determine that time within ±1 millisecond.

That timing resolution is important in some signal processing algorithms.
However, in this case the signal being processed is random. It's not a periodic waveform, so
processing algorithms based on periodicity (frequency) do not apply.

- Leigh

onnect17
16-Feb-2014, 00:35
So by oversampling, you've changed a step into a ramp.

OK, except...
The real data is a step, so by oversampling you've introduced an error that increases with sample rate.

Thus, by oversampling you've increased the error, not the accuracy.
...

- Leigh

That's exactly my point. The result should be another step, not a ramp.

onnect17
16-Feb-2014, 00:37
Drum speed: 900 rpm (15 rps)
Drum linear distance per revolution: 12" (304 mm)
DPI: 4000 dpi

15 rps * 12" * 4000 dpi = 720,000 samples per second

Oversampling needed to get 16 bits from a 12-bit ADC:

f_oversampling = 4^(4 bits) * f_sampling
f_oversampling = 256 * f_sampling
f_oversampling = 256 * 720,000 sps
f_oversampling = 184,320,000 sps (184 Msps)

So it needs 184 million samples per second. The maximum speed of the AD9220 is 10 Msps.

That is 18.4 times slower than the required oversampling rate.

End of oversampling discussion.

Ramon,
I do not think BetterSense was referring to the Premier in particular. I guess it was clear from the beginning that the SCSI interface would not be able to handle it.

He was making a general statement regarding techniques to increase sampling resolution.

If we set the scanner sampling at 8000 dpi over a distance of 12.56", that's 100,480 pixels. The drum should be rotating at that point at approx. 460 rpm; in other words, it will take 0.13 secs to raster a line, or it will raster 7.7 lines per sec. But that's one color, so we should multiply by 3. So,

100480 * 3 * 7.7 = 2.32 Msamples/sec

As an interesting note, the Howtek 4000 uses 3 ADCs, one for each color.

BetterSense
16-Feb-2014, 07:34
OK, let's try more familiar numbers. You're sampling the temperature from a sensor. Here's a stream of 8x sampled data using your 10-bit ADC.

28.5, 28.5, 28.5, 28.5, 28.5, 28.5, 28.5, 28.5, 29.0, 29.0, 29.0, 29.0, 29.0, 29.0, 29.0, 29.0

Now you apply some calculations to obtain values equivalent to a 12-bit ADC. Would you mind sharing the math behind the calculations?

First, I don't know what you mean by "8x sampled data", sorry. You have 16 numbers there, you talk about 8x sampling and about expanding a 10-bit value to 12 bits, and those are not consistent with each other. I don't know the sample frequency. I don't know the bandwidth of the signal. I don't know anything about the noise in the system. So I can't really "apply calculations"; all those things matter. As I have said before, you can always upsample data after the fact and pretend you have more information. I think you still think that's what oversampling is, which is why you are trying to get me to "apply calculations" to a string of numbers.

Here is my actual code which works for my system and my application.
I'm sorry the forum mangles the indenting


#include <avr/io.h>   // AVR register definitions (ADMUX, ADCSRA, ADCW)
#include <stdint.h>

//blocking
uint16_t adc_read(uint8_t me){ //expects register value (channel bits)
    uint16_t ad_bucket=0;
    ADMUX &= 0xF0;                  //clear the channel selection bits
    ADMUX |= me;                    //select the requested channel
    for (int i=0; i<16; i++){       //4^2 = 16 samples for 2 extra bits
        ADCSRA |= (1<<ADSC);        //start a conversion
        while(ADCSRA & (1<<ADSC)); //wait for it to finish
        ad_bucket += ADCW;          //accumulate the 10-bit result
    }
    return (ad_bucket>>2); //12 bits oversampled
}
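
In case it helps, a hypothetical caller might look like this (the channel number and the ADC init bits here are assumptions for illustration, not part of my actual setup):

#include <avr/io.h>
#include <stdint.h>

uint16_t adc_read(uint8_t me);  /* the routine above */

int main(void)
{
    ADMUX  = (1 << REFS0);                              /* AVcc reference */
    ADCSRA = (1 << ADEN) | (1 << ADPS2) | (1 << ADPS1); /* enable, /64 clock */
    uint16_t t = adc_read(3);     /* channel 3 -> 0..4092 on a 12-bit scale */
    (void)t;                      /* use the reading here */
    for (;;) ;                    /* embedded main loops forever */
}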

BetterSense
16-Feb-2014, 07:53
That's exactly my point. The result should be another step, not a ramp.

It was never stated that the underlying data in the hand drawing was a step. I assumed the underlying data was a linear ramp and that the drawing was representing reconstructed signals. If the underlying data is a step, then the drawing is just wrong; it does not represent what actually happens in oversampling at all. It just shows a total lack of understanding of what actually goes on during oversampling.

Oversampling, done properly, increases real resolution just like adding more bits to the ADC hardware. It does not "smooth out" or "blur" signals; that would be filtering, and would be equivalent to saying the sampling was not recovering the higher signal frequencies. This is why the oversampling frequency must be greater than 4^N * f_Nyquist: then this does not happen, and there is no step-function smoothing or blurring whatsoever.

onnect17
16-Feb-2014, 09:57
It was never stated that the underlying data in the hand drawing was a step. I assumed the underlying data was a linear ramp and that the drawing was representing reconstructed signals. If the underlying data is a step, then the drawing is just wrong; it does not represent what actually happens in oversampling at all. It just shows a total lack of understanding of what actually goes on during oversampling.

Oversampling, done properly, increases real resolution just like adding more bits to the ADC hardware. It does not "smooth out" or "blur" signals; that would be filtering, and would be equivalent to saying the sampling was not recovering the higher signal frequencies. This is why the oversampling frequency must be greater than 4^N * f_Nyquist: then this does not happen, and there is no step-function smoothing or blurring whatsoever.

Read the post again. It's very clear I said "step function". But even without that, you should be able to recognize a step function anywhere if you have any background in electronics.

onnect17
16-Feb-2014, 10:09
First, I don't know what you mean by "8x sampled data", sorry. You have 16 numbers there, you talk about 8x sampling and about expanding a 10-bit value to 12 bits, and those are not consistent with each other. I don't know the sample frequency. I don't know the bandwidth of the signal. I don't know anything about the noise in the system. So I can't really "apply calculations"; all those things matter. As I have said before, you can always upsample data after the fact and pretend you have more information. I think you still think that's what oversampling is, which is why you are trying to get me to "apply calculations" to a string of numbers.

Here is my actual code which works for my system and my application.
I'm sorry the forum mangles the indenting


#include <avr/io.h>   // AVR register definitions (ADMUX, ADCSRA, ADCW)
#include <stdint.h>

//blocking
uint16_t adc_read(uint8_t me){ //expects register value (channel bits)
    uint16_t ad_bucket=0;
    ADMUX &= 0xF0;                  //clear the channel selection bits
    ADMUX |= me;                    //select the requested channel
    for (int i=0; i<16; i++){       //4^2 = 16 samples for 2 extra bits
        ADCSRA |= (1<<ADSC);        //start a conversion
        while(ADCSRA & (1<<ADSC)); //wait for it to finish
        ad_bucket += ADCW;          //accumulate the 10-bit result
    }
    return (ad_bucket>>2); //12 bits oversampled
}


Well, well. This is what happens when people copy and paste code without analyzing it. The whole routine is doing an AVERAGE of the samples and is doing it wrong. The return at the end should divide by 16, not 4. So the last line of the code should read:

return (ad_bucket >> 4); // dividing by 4^2

So, after all, the whole new BIG BANG called oversampling is just a simple average. I'm speechless.

PS.
I hope the impact on the taste or alcohol level of the beer is not major. In any case, please do not name the label, if any.

BetterSense
16-Feb-2014, 12:22
The whole routine is doing an AVERAGE of the samples and is doing it wrong... return at the end should divide by 16, not 4

No, the routine, because it is operating under certain conditions on a certain system, is doing oversampling not averaging. How many times must I repeat that oversampling is not the same thing as averaging or filtering?

I'm just about done here because even when I post simple source code illustrating the technique, people insist it's wrong rather than try to understand it. I post data gathered using the source code showing 12-bit resolution from a 10-bit ADC and people ignore it. I point out that high resolutions in both ADC and DAC are commonly achieved using 1-bit hardware and oversampling techniques in commercial technology. I post the criteria under which oversampling provides real resolution, and people just insist that oversampling is a filtering technique and "smooths" high frequencies, because that flatters their intuition. Nyquist be damned...largeformatphotography.info has it all figured out!

Hint: the divide-by-4 is not an error; it is THE ENTIRE POINT. Oversampling is not simply averaging, no matter how much the groupthink wants it to be.

http://www.maximintegrated.com/app-notes/index.mvp/id/1870

www.atmel.com/Images/doc8003.pdf

Leigh
16-Feb-2014, 12:46
I post data gathered using the source code showing 12-bit resolution from a 10-bit ADC
You realize you're violating one of the basic laws of the universe.

If it was possible to get 12 or more ACCURATE bits of information from a 10-bit ADC...

Nobody would make ADCs with higher accuracy and higher cost than the 10-bit part.

Yet there are numerous such devices.

Those who make such claims believe their observations without having any calibration references.

Smoke is free, and mirrors last forever.

- Leigh

BetterSense
16-Feb-2014, 12:55
Lol!

During the time Marconi was trying to invent radio, there was a guy in the US who was prosecuted for fraud for getting people to invest in his attempts to transmit voice across the Atlantic, get this, without wires. Clearly such a thing is impossible, just like crazy theories about oversampling. This thread has now transitioned into entertainment.


If it was possible to get 12 (or 14 or 16) ACCURATE bits of information from a 10-bit ADC...
Nobody would make ADCs with higher resolution and higher cost than the 10-bit part.


You are more right than you know. Increasingly, for many applications, that has already happened. For example, these guys are selling as many bits as you will pay for, up to 24, measured internally with a one-bit modulator and those mathematical laws of the universe you appeal to. The difference is that you have no idea what you are talking about, and they paid attention in math class.

http://www.linear.com/products/no_latency_delta_sigma_adcs?gclid=CNaCvdi10bwCFY87OgodlxkAWA

Leigh
16-Feb-2014, 13:14
the difference is you have no idea what you are talking about, and they paid attention in math class.
http://www.linear.com/products/no_latency_delta_sigma_adcs?gclid=CNaCvdi10bwCFY87OgodlxkAWA
Wow. You're really on top of technology.

We were using Delta/Sigma ADCs in Motorola's Digital Voice products in 1978.

One-bit D/S encodes whether the current sample equals or differs from the previous sample.

It's not a measurement function in any sense of that description.

- Leigh

BetterSense
16-Feb-2014, 13:17
One-bit D/S encodes whether the current sample equals or differs from the previous sample.

It's not a measurement function in any sense of that description.


Delta-sigma ADCs are not measurement devices? They are not used to perform analog-to-digital conversion? OK, I will just note that and file it away.

I thought of an analogy.

Instead of using an array of photodiodes as an image sensor, you can create a digital image with only one photodiode and obtain arbitrary resolution: VGA, HD, 4K, whatever. All you have to do is move the photodiode around the image plane and take lots of samples, transforming a 1-pixel image into an arbitrary number of pixels. This requires you to measure much faster than the image moves to avoid motion blur (this is the sampling bandwidth theorem). But you don't have to use a single-pixel sensor; you can use an intermediate-resolution sensor and do the same thing, which is commonly done and called "stitching". In fact, no matter how high-res a sensor you have, you can always get more resolution by stitching (subject to the sampling bandwidth theorem). But by your own logic, stitching violates the laws of the universe, and if it were possible to get more resolution by stitching together (not averaging!) multiple lower-resolution samples, nobody would make MFDBs.

Substitute time domain for spatial domain and you have this thread.

Leigh
16-Feb-2014, 13:43
Delta-sigma ADCs are not measurement devices?
Nope. D/S devices are not measurement devices. They're comparators, nothing more.

A D/S device cannot be used to measure DC voltage. A full ADC can.

You can build an ADC based on D/S technology, but that's not what I was talking about.

- Leigh

8x10 user
16-Feb-2014, 13:59
I'm not sure about bits, but oversampling the resolution by using a larger aperture than the scanning resolution is one of the best things about a drum scanner. You don't want to go smaller than the grain clump size, but the location of the clumps is stochastic, so a higher scanning rate will make it possible to locate the clumps more accurately, giving a better final image.

In terms of real-world performance, I ran a DPL8000 head to head with an Eversmart Supreme. The test was with 35mm Kodachrome, and I found that with the Max DR option the Eversmart Supreme had a higher D-max and less noise in the shadows. I agree the last bit on the Aztek is pretty much useless (when you analyze the noise). In terms of sharpness, the Aztek was slightly better when comparing an 8000 PPI, 6 micron scan with the 5,600 PPI Creo scan. I later found out the cold mirror on the Supreme had huge scratches on it, but unfortunately I sold the DPL8000 before I could redo the test scans with a freshly serviced machine. I have seen some very poor examples from some used Creo scanners, and it seems there is just more that can go wrong with their more complicated optical pathway. This is why you sometimes have to be careful with used machines. I was very lucky the one I had did not have additional problems. That is why I always recommend refurbished scanners over random used scanners. They cost a little bit more, but you know all the parts are going to be tested first. Some parts can be very expensive. The good news is that if you take care of either of these scanners they will last a long time.

8x10 user
16-Feb-2014, 14:14
I pulled this from an old post that Phil made on the Aztek user group forum.

"TRIDENT only scans everything in Log mode. The HR8000 has a Photo
Multiplier low voltage video issue with noise. For instance if you look
very carefully at the HR8000 U.S. Air Force resolution target test at the
web site www.scannerforum.com you can see in the targets black background
lines or streaks in the scan. This is noise created by the HR8000 video
circuit and Trident.

My reasons to point out solutions to issues in your scanner within
Digital PhotoLab is not only, because I am interested in selling it to you.
Of course we would be happy to have join us and the AZTEK DPL users. Yet
the driving reason to mention it is we at AZTEK over now a couple of decades
have had a joint venture with Howtek to developed Digital PhotoLab and your
scanner together. Your scanner hardware has a number of features in it that
only Digital PhotoLab can take advantage of. DPL is the only software
capable of fully utilizing your HR8000 hardware! One of these features for
instance is density linear calibration scanning. Which means to you that
most often the log circuit is not used, but instead the linear circuit that
you have never experienced. Also the noise in the shadows of the log
circuit are not amplified in the HR8000 via DPL. Other important examples
of DPL features in the HR8000 include but are not limited to: Film Focus at
the speed of the final scan (not arbitrarily always 1000 RPM), Batch Crop
Reinitialization (avoiding Crop offsets in Batch Scanning), 16 bit RGB LUTS
and data paths to all captures, SCSI dynamic termination (higher data
reliability for big scans), 4 Giga Byte maximum data capture. There is
separate program for each supported DPL scanner that knows all of the unique
features and tailors operation for that equipment. I personally assisted in
the development of your scanner hardware with advanced features beyond the
previous Howtek scanners and optimized both that hardware design and the DPL
HR8000 design for each other. This was not done for Trident."

onnect17
16-Feb-2014, 14:33
No, the routine, because it is operating under certain conditions on a certain system, is doing oversampling not averaging. How many times must I repeat that oversampling is not the same thing as averaging or filtering?

I'm just about done here because even when I post simple source code illustrating the technique, people insist it's wrong rather than try to understand it. I post data gathered using the source code showing 12-bit resolution from a 10-bit ADC and people ignore it. I point out that high resolutions in both ADC and DAC are commonly achieved using 1-bit hardware and oversampling techniques in commercial technology. I post the criteria under which oversampling provides real resolution, and people just insist that oversampling is a filtering technique and "smooths" high frequencies, because that flatters their intuition. Nyquist be damned...largeformatphotography.info has it all figured out!

Hint: the divide-by-4 is not an error; it is THE ENTIRE POINT. Oversampling is not simply averaging, no matter how much the groupthink wants it to be.

http://www.maximintegrated.com/app-notes/index.mvp/id/1870

www.atmel.com/Images/doc8003.pdf


This is getting amusing. Here are some fragments from Atmel's document:

It is important to remember that normal averaging does not increase the resolution of the conversion.
I know that.


Decimation, or Interpolation, is the averaging method, which combined with oversampling, which increases the resolution.

I have never heard of such an absurd definition of interpolation, but at least they recognize they are averaging. And no, it will not increase the resolution, even if you may think so.


Digital signal processing that oversamples and lowpass-filters a signal is often referred to as interpolation.

Even worse. What an insult to DSP.


In this sense, interpolation is used to produce new samples as a result of ‘averaging’ a larger amount of samples. The higher the number of samples averaged is, the more selective the low-pass filter will be, and the better the interpolation.

You need a function to start talking about interpolation. There are no new samples produced.


The extra samples, m, achieved by oversampling the signal are added, just as in normal averaging, but the result are not divided by m as in normal averaging. Instead the result is right shifted by n, where n is the desired extra bit of resolution, to scale the answer correctly.

If you now have a 12-bit base from the previous 10 bits, it is like shifting 2 bits to the left; then, to average, you shift 4 to the right (or divide by m). That IS THE SAME THING AS shifting n (in this case 2) to the right.


Right shifting a binary number once is equal to dividing the binary number by a factor of 2.

And somebody rediscovered hot water.


As seen from Equation 3-1, increasing the resolution from 10-bits to 12-bits requires the summation of 16 10-bit values. A sum of 16 10-bit
values generates a 14-bit result where the last two bits are not expected to hold valuable information.

As good as the other 2 bits. It's just residue of the averaging operation.


To get ‘back’ to 12-bit it is necessary to scale the result. The scale factor, sf, given by Equation 3-2, is the factor, which the sum of 4n samples
should be divided by, to scale the result properly. n is the desired number of extra bit.

Same thing I was saying. See above.


This is by far the most complicated and artificial way of reselling the "averaging" calculation I have ever seen.
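
To put numbers on the shift argument, here is a quick sketch (the sample values are invented):

/* Sum 16 ten-bit samples straddling two adjacent codes, then scale
   both ways: >>4 averages back onto the 10-bit grid, while >>2 keeps
   the sub-LSB fraction on a 12-bit grid. */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint16_t s[16] = {512,512,512,513, 512,513,512,512,
                      513,512,512,513, 512,512,513,512};
    uint32_t sum = 0;
    for (int i = 0; i < 16; i++) sum += s[i];   /* sum = 8197 */

    printf("sum >> 4 = %u (10-bit scale)\n", (unsigned)(sum >> 4)); /* 512  */
    printf("sum >> 2 = %u (12-bit scale)\n", (unsigned)(sum >> 2)); /* 2049 */
    return 0;
}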

8x10 user
16-Feb-2014, 14:44
Another excerpt from one of Phil's forum posts. This one is from the Hi_End forum.


Trident does all scanning in "Log Mode". The software tells the
hardware to perform a generic log to the power amplification of all light
that it senses. This means that the hardware is blindly amplifying
everything it sees. Digital PhotoLab (DPL) instead scans by downloading
density custom curve definitions for the amplification of what it sees. In
this way we can simultaneously pump up (amplify) dark areas while
suppressing things that are two bright. This way also chromes, negatives,
reflective and the various flavors of each of those can be treated totally
uniquely as necessary. This is not the way other scanning software works.
Its patented by AZTEK and the capability to support it is trapped within all
Howtek and AZTEK scanners.

8x10 user
16-Feb-2014, 14:49
So it sounds like it is 12-bit, but the 12 bits are calibrated beforehand by adjusting the amplification settings prior to digitization, and all of the LUTs are done using 16-bit calculations.

onnect17
16-Feb-2014, 15:04
Most if not all of the Howteks have a DAC in front to set the limits for the ADC. The corrections via LUT are done post-acquisition.

onnect17
16-Feb-2014, 15:22
Another excerpt from one of Phil's forum posts. This one is from the Hi_End forum.

Actually, it is not such a bad thing. It's not much different from the saying "measure the shadows and develop for the highlights".
It's great to have the option to give a lift (increase detail) to the high-density, low-value areas. That itself increases the SNR, but at the cost of reducing detail in the highlights. It's a trade-off. Many systems, including audio, use a similar principle. Remember Dolby NR?

The problem is that with only 12 bits there is not much space.

8x10 user
16-Feb-2014, 15:39
12 bits are not too bad as long as they are the right 12 bits.

Let's say we take a 16-bit Creo scanner with a 16-bit ADC and an actively chilled CCD (Supreme) and multisample each scan ("MaxDR") for increased accuracy. The scanner uses SCOOM (scan once, output many times): a high-integrity 16-bit RAW scan workflow. With a high-density film, most of those bits are used and the image quality is excellent. Density is logarithmic, with an increase of 1.0 relating to a 10x increase in contrast ratio. So a D-max of 1.0 is a 1:10 ratio, 2.0 is 1:100, 3.0 is 1:1,000, and 4.0 is a 1:10,000 ratio. Log viewing matches our vision well and is a good way of thinking about and scanning slide film. It is not too bad to use a pure 16-bit, 4.0 D-max log scan to convert a negative with maybe a 2.0 contrast range, despite a density of 2.0 being 100x brighter than the endpoint of the scan. Instead of 99% of the data being lost to the 2.0-4.0 range, only 75% of it is lost, leaving 14 bits of good data.

Now, on the other hand, Aztek's approach is to modify the signal before it is converted digitally, so that fewer alterations need to be made to the 12-bit file, preventing the histogram from being combed by edits made to the data after it leaves the ADC. This is also a valid approach, but it requires each scan to be properly optimized during the scan phase to achieve the best results. I would think that a wide-gamut log scan would be better than a linear scan for chromes, but Phil seemed to recommend linear scans for chromes, most likely due to the unique issue with the low-signal noise on the 3 micron scanner and how the two individual circuits amplify the low signal. A log scan should have a curve that more closely matches the way we see slide film. However, negative film is a completely different story: it is not only "thinner", it is "flatter", because it records a scene linearly, and the negative is not normally converted to log until it is printed. So a 12-bit, 2.0 D-max calibrated linear scan would have similar integrity to a 16-bit RAW scan that is calibrated logarithmically at a 4.0 D-max but where only the less-than-2.0 portion of the data is used. Now, if a 12-bit raw scan were done with a 4.0 D-max and only the less-than-2.0 portion were used, "bit integrity" would be lower, especially if drastic corrections are needed to correct for the "flatness" (linear response) of the negative.

8x10 user
16-Feb-2014, 16:06
One other thing I will say is that I don't agree with the results from the "scannerforum" that Phil put together. Obviously some of the scanners he is showing examples from are not functioning properly. Also, it is not clear if the scans on the other machines were done in a way that provides optimal image integrity. The target itself is something that plays on the weaknesses of other scanners while avoiding the weaknesses of the Aztek.

The orthochromatic microfilm test target is something that would take great care to produce the best scan on a flatbed. The D-min and D-max are lower on that film than on most films, including black and white. This creates more issues for CCD scanners. The film should be fluid mounted to prevent focusing errors and flare, and it should also be masked. Many machines have bad defaults, including automatic defocusing for screening purposes or larger apertures. The Creo scan most likely was done at low resolution, with the lower quality settings, no masking, no fluid mounting, and there could have been a resulting focusing error. The Heidelberg scanners require you to manually lower the aperture setting, and most likely the smallest was not used; there is a higher quality setting on the Primescan and it may not have been used, and who knows if blur/sharpening was turned on or what the correction settings were.

Scanning this film with the Aztek would have been much easier, because its strong point is scanning thin negatives, and the settings could easily have had precorrections to help deliver a more contrasty, sharper-edged result. The Aztek has shadow noise reduction built into the software; with it on, the step wedge would have come out nice by averaging local pixels. This works well with big areas of even film like the wedge, but the noise reduction can interfere with fine structures that are hidden in the shadows. This is where the much larger amount of light being digitized by the CCD can give it an edge. At the end of the day, all of the top-tier scanners will outresolve your large format film and can produce extremely high quality images, as long as they are working properly and one knows how to use them right.

8x10 user
16-Feb-2014, 18:15
So the difference is that with DPL you can set the end points and then use either linear or logarithmic amplification.


Most if not all of the Howteks have a DAC in front to set the limits for the ADC. The corrections via LUT are done post-acquisition.

8x10 user
16-Feb-2014, 18:28
So there should be a level-setting DAC before the amps on that circuit board. Can one of you circuit-savvy Samaritans locate it and tell us how many bits are available in the preamp leveling options? I assume the amount of amplification is also variable, to control the other endpoint? How many amp options are there?

8x10 user
16-Feb-2014, 18:32
Phil certainly seemed to understand how to get the best he could out of the hardware that Howtek used.

onnect17
16-Feb-2014, 20:20
So there should be a level-setting DAC before the amps on that circuit board. Can one of you circuit-savvy Samaritans locate it and tell us how many bits are available in the preamp leveling options? I assume the amount of amplification is also variable, to control the other endpoint? How many amp options are there?

Another couple of shots. They show the first stage of the preamp and a DAC/MUX. Keep in mind that most of the components in the pic on the right are already visible in the pic on the left, except for the DAC/MUX and a few others.

[attached photos: the preamp stage and the DAC/MUX area of the board]

Some part numbers, in case it's hard to see them in the photos:

AD603 --> variable gain amp (3)
DG613 --> analog switch (3)
AD843 --> op amp (3)
AD7568 --> octal 12-bit DAC
AD817 --> op amp (2)
AD8174 --> mux

ah693973
17-Feb-2014, 09:39
Check out US Patent 5424537 for info on how Howtek sets their levels.

Andy

onnect17
17-Feb-2014, 10:36
Check out US Patent 5424537 for info on how Howtek sets their levels.

Andy

Thanks Andy. Great find!

Previously, I downloaded another patent document (5,515,182) which described many elements present in the D4000. Patent 5424537 answers many questions related to the acquisition stage.
The 5424537 answers many questions related to the acquisition stage.

Lenny Eiger
18-Feb-2014, 17:48
I'm sorry. I've thought about this for a few days. I had a long talk with Phil Lippincott about his machine. He was very proud of it. He did a lot of engineering for scanning companies and had some sort of role in the MacBeth color chart, among many other things. He was working with Howtek on the HR8000 and was very upset that they went with a brass main screw, likely because of cost issues. When Aztek finally acquired Howtek, the first thing they did was create the DPL8000, which changed just a few things and swapped the brass out for a stainless screw.

He described in detail how they then went through the whole system and rebuilt it from the ground up, to use his words, "sparing no expense" to make the absolute finest scanner they could make in that size. They even took all of the smaller boards and merged them into one larger main board. He specifically said that they went to full 16 bit.

The guy was the kind of genius that was often hard to talk to. One certainly had to do a lot of listening. He had a great mind and was a real mover in his industry. I am certain there is no one who knew his scanner as well as he did. I don't know exactly what he did or how he accomplished it. I am guessing that I might find a different chip if I opened up my scanner. Maybe you have an earlier board, from an HR8000, for example.

However, given the brilliance of this guy, and the kind of relationship we had, I don't see him lying to me. I don't want to cast aspersions at anyone. I just don't think you have it quite right; there's something that's being missed. It's too bad that Phil isn't here anymore to answer your concerns. I'm certain he could do so easily.

Lenny

Peter De Smidt
18-Feb-2014, 19:21
And in any case it's a terrific scanner.

onnect17
19-Feb-2014, 06:33
I agree 100% that Phil was very passionate about the scanner (just read some of the posts in the Yahoo group), and I have no doubt in my mind that when he mentioned a "16 bits path" he meant that the architecture of the electronic design relied on a 16-bit-wide bus, not the precision of the acquisition stage. Unfortunately, it is very easy for users to get confused and assume (and repeat) that the analog conversion is using 16 bits too.

I wish I could find an owner of an original HR8000 willing to open it up and compare all the differences with the Premier.

analoguey
27-Feb-2014, 08:00
:-)
Thought I'd left my computer engineering days way back, only to find such a discussion here.
Maths, C and bits. Nice, educational discussion.