What is happening is that when the image is resized, each output pixel is derived by considering several input pixels. The math typically produces output values with fractional components (e.g. (181 + 182) / 2 = 181.5). It's not a good idea to simply round up or down and discard the fractional part; that would introduce its own artifacts and could change the overall intensity/color. So in image processing, these round-off "errors", as they are called, are distributed to the neighboring output pixel positions as additional inputs. You could think of it as a micro-blurring process.
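A minimal sketch of the idea (this is an illustration, not any specific library's resampling algorithm): downscale a one-dimensional row of 8-bit pixels by 2x, carrying each round-off error forward to the next output pixel instead of discarding it.

```python
def downscale_with_error_diffusion(row):
    """Average each pair of pixels, diffusing the round-off error
    into the next output pixel so overall intensity is preserved."""
    out = []
    carry = 0.0  # accumulated fractional error not yet accounted for
    for i in range(0, len(row) - 1, 2):
        exact = (row[i] + row[i + 1]) / 2 + carry  # e.g. (181 + 182)/2 = 181.5
        rounded = round(exact)
        carry = exact - rounded            # push the fraction to the neighbor
        out.append(max(0, min(255, rounded)))  # clamp to 8-bit range
    return out

print(downscale_with_error_diffusion([181, 182, 181, 182]))
```

Note that the two output pixels come out as 182 and 181 rather than both rounding the same way: the half-pixel error from the first average is folded into the second, so the total intensity of the row is unchanged. This is also why the output can contain values that never appeared in the input.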

The end result is that you can get images that look the same, but have new pixel values that were not there to begin with.

Just about any operation that involves local pixel calculations, as opposed to strictly global ones (like brightness, contrast, etc.), will (and should) use such "error distribution" as part of the algorithm.