Where processed digital images are presented in court, two things have to be established: that the image processing algorithms applied are sound, and that the sequence of processing steps and parameter values used can be fully documented.
The area where I see most concern is where images are processed using proprietary software, for example Photoshop, where the implementation details of any image processing algorithm are inaccessible. Some algorithms are quite complex and their results may be sensitive to subtleties in the implementation. An image that has been extensively processed using proprietary software may well be challenged in court, and refuting that challenge will be difficult if one does not have access to the source code.
Another area of difficulty is recording all the steps used in the image enhancement process. Some applications, such as Photoshop, provide an extensive 'history' recording facility. However, this sort of facility is designed primarily to allow processes to be undone, or to revert to an earlier step in the sequence of processes. The history record may not give access to the particular parameter values used in any individual step.
It should be noted that the source code of some MATLAB functions cannot be viewed. These are generally lower-level functions that are computationally expensive and are therefore provided as 'built-in' functions running as native code. These functions are heavily used and tested and can be relied on with considerable confidence.
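One can check directly from the MATLAB command line whether a particular function is supplied as readable M-code or as a built-in (a minimal illustration; the exact messages vary between MATLAB versions):

    % Report where a function comes from; built-ins are flagged as such.
    which linspace   % prints the path to linspace.m
    which sum        % prints something like: built-in (.../@double/sum)

    % Display the source of an M-file function; for a built-in function
    % only a one-line message is shown instead of the code.
    type linspace    % lists the M-code of linspace
    type sum         % reports that sum is a built-in function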
In general, image files store data to 8-bit precision, corresponding to integer values in the range 0-255. A pixel in a colour image may be represented by three 8-bit numbers, giving the red, green and blue components as integers between 0 and 255. Typically this is ample precision for representing normal images.
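For example (using a hypothetical file name), an image read into MATLAB normally arrives as 8-bit unsigned integers:

    img = imread('evidence.png');   % hypothetical file name
    class(img)   % typically 'uint8', i.e. integers in the range 0-255
    size(img)    % rows x columns x 3 for an RGB colour image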
However, as soon as this image data is read into memory and processed, it is very easy to generate values that lie outside the range 0-255. For example, to double the contrast of an image one multiplies the intensity values by 2; an image value of 200 becomes 400 and numerical overflow results. How this is dealt with varies between image processing programs. Some may truncate the results to integers in the range 0-255, others may perform the mathematical operations in floating point arithmetic and then rescale the final results to integers in the range 0-255.
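The distinction can be seen directly in MATLAB: arithmetic on 8-bit integer data saturates at 255, whereas the same operation carried out in double precision retains the true value (a minimal sketch):

    a = uint8(200);
    2 * a           % ans = 255, the result is clipped to the uint8 range

    b = double(200);
    2 * b           % ans = 400, the true value is retained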
It is here that numerical precision, and hence image fidelity, may be lost. Some image processing algorithms produce pixel values with very large magnitudes (positive or negative). Typically these large values occur at points in the image where intensity discontinuities occur; the edges of the image are a common source of this problem. When an image with such widely varying values is rescaled to integers in the range 0-255, much of that range may be used just to represent the few pixels with large values. The bulk of the image data may then have to be represented within a small range of integer values, say 0-50. Clearly this represents a considerable loss of image information, and if another process is then applied to the image the problems accumulate. Trying to establish the extent of this problem, if any, is hard if one is using proprietary software.
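A small numerical sketch, using made-up values, shows how a few extreme pixels can dominate a linear rescale to the range 0-255:

    % Suppose the bulk of the pixel values lie between 10 and 50, but a
    % few pixels at an intensity discontinuity have values around 400.
    vals = [10 20 30 40 50 400];

    % Linear rescale of the full range onto integers 0-255.
    scaled = round(255 * (vals - min(vals)) / (max(vals) - min(vals)));
    % scaled = [0 7 13 20 26 255]
    % The bulk of the data has been squeezed into the range 0-26; most of
    % the available 0-255 range is spent on a single extreme pixel.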
Because MATLAB is a general programming language, one has complete control over the precision with which data is represented. An image can be read into memory and the data cast to double precision floating point values. All image processing steps can then be performed in double precision floating point arithmetic, and at no intermediate stage does one need to rescale the results to integers in the range 0-255. Only at the final point, when the image is to be displayed and/or written to file, does it need to be rescaled. Here one can use histogram truncation to eliminate extreme pixel values so that the bulk of the image data is properly represented.
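A sketch of such a workflow might look like the following; the file names, the enhancement step and the 1% truncation fraction are purely illustrative choices:

    % Read the image and cast to double precision immediately.
    img = double(imread('evidence.png'));    % hypothetical file name

    % Apply whatever enhancement steps are required, entirely in double
    % precision; here, an arbitrary contrast stretch about the mean.
    img = 2 * (img - mean(img(:)));

    % Only at the end: truncate the extreme tails of the histogram so that
    % a few outlying pixels do not dominate the final rescaling.
    v  = sort(img(:));
    lo = v(max(1, round(0.01 * numel(v))));   % approximate 1st percentile
    hi = v(round(0.99 * numel(v)));           % approximate 99th percentile
    img = min(max(img, lo), hi);

    % Rescale to integers in the range 0-255 and write to file.
    out = uint8(round(255 * (img - lo) / (hi - lo)));
    imwrite(out, 'enhanced.png');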