
Conversation

@sengebusch (Contributor)

I added two functions to lighttable:

1st:
You can now select two or more RAW images and create a mean image.
If the images show the same scene, this reduces read noise and shot noise.

This is mainly of interest for long exposure dark frames, which should be subtracted
from long exposure images. Using a mean image reduces the extra noise introduced by the dark frame.

2nd:
You can select two RAW images, including the DNG written by the first function.
The image with the lower mean value is identified as the dark frame and is
subtracted from the other image.
This reduces fixed pattern noise, e.g. dark current non-uniformity, especially in
long exposure shots or at high temperatures.

Best practice is:

a) take the shot first, then take about 16 dark images (i.e. with the lens cap on) with the same exposure time

b) create the mean image of the 16 dark images

c) subtract the mean dark image from the shot
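The steps above can be sketched as follows; this is a hypothetical NumPy illustration of the workflow, not darktable's actual implementation (the array shapes, noise levels and helper names are made up for the example):

```python
import numpy as np

def mean_frame(frames):
    """Step b: average a stack of same-sized raw dark frames."""
    return np.mean(np.stack(frames), axis=0)

def subtract_dark(shot, dark):
    """Step c: subtract the mean dark frame from the shot."""
    return shot - dark

rng = np.random.default_rng(0)
dark_offset = 12.0                                  # fixed-pattern dark level
darks = [dark_offset + rng.normal(0.0, 1.0, (4, 4)) for _ in range(16)]
shot = 100.0 + dark_offset + rng.normal(0.0, 1.0, (4, 4))

corrected = subtract_dark(shot, mean_frame(darks))
print(float(corrected.mean()))                      # close to the true signal of 100
```

Averaging the darks before subtracting is what keeps the extra noise small, as the percentages below illustrate.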

This increases the read noise in the final image by only about 10 %.
Using a single dark image instead of the mean image increases the read noise
in the final image by about 41 %, which makes profiled noise reduction less effective.
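As a back-of-envelope check, assuming uncorrelated Gaussian read noise of equal strength in every frame, subtracting an average of N dark frames multiplies the read noise by sqrt(1 + 1/N). The exact percentages quoted above depend on the camera and on which noise sources are counted; this simplified model gives the 41 % figure for a single dark frame:

```python
import math

def noise_factor(n_darks):
    # Standard error propagation: var(shot - mean(darks)) = var + var / N
    return math.sqrt(1.0 + 1.0 / n_darks)

print(f"{(noise_factor(1) - 1) * 100:.0f}%")    # single dark frame: 41%
print(f"{(noise_factor(16) - 1) * 100:.1f}%")   # mean of 16 dark frames
```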

I copied the code from create HDR as a template.
I kept the same structure and naming convention to keep the code readable.

Karsten

victoryforce and others added 30 commits May 27, 2025 11:52
@TurboGit (Member)

I would have done that in the darkroom, not in lighttable. The HDR code is in lighttable because you create a new image from multiple images. In the case here, we want to correct one image using a dark frame. So I would have created a new module, possibly named dark-frame, with a way to load a specific image created by the camera and use it to correct the main image.

You may have a look at the external raster mask or overlay module (using an image from the collection) for inspiration.

@jenshannoschwalm may have further hints, as he is our expert in this area.

@sengebusch (Contributor, Author)

@TurboGit

> I would have done that in the darkroom not in lighttable. The HDR code is in lighttable because from multiple images you create a new one.

That is exactly the same for the mean image and the dark frame subtraction.
This is why I placed it into lighttable.

@jenshannoschwalm (Collaborator)

You could also do it as suggested by @TurboGit, as in overlay; that would mean you'd have to implement "how to select multiple files".

@sengebusch (Contributor, Author)

@Hanno
As the work has to be done on RAW files without filters applied, just as for create HDR,
I still think that lighttable is the better place.

Also, the workflow is fine in lighttable:
since the real shot and the dark images are taken one right after the other,
all files can be selected with just two clicks in lighttable.

(screenshots: select_mean, wrote_mean, select_darkframe, wrote_final)

In my opinion this is much faster to handle than a solution in darkroom.

@sengebusch (Contributor, Author)

> The HDR code is in lighttable because from multiple images you create a new one.

For the mean image this is the same, i.e. we create a new image, so I would keep it in lighttable.

> In the case here, we want to correct one image using a dark-frame.

For the dark frame subtraction, a module in darkroom could be an alternative, but it would have to be the very first filter,
even before demosaicing, as we want to remove the RAW artefacts.
Also, this does not work sufficiently well on JPEG, etc.

Well, I will start to work with the implemented solution in real life and check how it performs.
The first tests were impressive.

But I will also take a look into the overlay module and see if it could make sense to implement
the dark frame subtraction there, as it is not necessary to create a new image for this.

@Donatzsky commented Jul 28, 2025

The dark frame subtraction belongs in the darkroom, it seems to me, since you're actually processing an image. Not sure about the mean image.

@sengebusch (Contributor, Author)

The mean image is a new one; like HDR, it carries new information.
The effective resolution can go beyond the integer limits of the raw data, i.e. read noise and shot noise
get reduced, in some cases below the LSB, not only for dark frames
but also for multiple shots of the same scene.

So I would keep the mean image creation in lighttable.
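The sub-LSB point can be demonstrated with a small, hypothetical NumPy experiment: integer-quantized frames whose noise dithers the signal average out to a fractional level that no single frame can represent (the signal level, noise strength and frame counts are invented for the illustration):

```python
import numpy as np

rng = np.random.default_rng(42)
true_level = 100.3                 # fractional signal, below one LSB of the ADC
n_frames, n_pixels = 64, 1000

# Each frame is rounded to integers, as an ADC would do.
frames = [np.round(true_level + rng.normal(0.0, 1.0, n_pixels))
          for _ in range(n_frames)]
mean_img = np.mean(np.stack(frames), axis=0)

print(round(float(mean_img.mean()), 2))   # close to 100.3, not 100.0
```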

For dark frame subtraction my questions are how I can make sure that:

  • it is done with RAW images only
  • it is the very first filter, before demosaicing, black/white point, etc.
  • the images are of the same size, orientation, etc.
  • a second RAW image can be read in the pixelpipe
  • the dark image taken right after the shot can be selected easily

@jenshannoschwalm
In #19110 you mentioned:

> There is one big problem ahead. The darktable pixelpipe does not support reading more than one raw file for an image to be processed.

and suggested "Create HDR" as a starting point.

At the moment both features work fine in lighttable, so they can be used and tested.

I will also take a look at the overlay code, but I will probably need some help with
the questions above.

Thank you very much for your support.

@ralfbrown (Collaborator)

I'll opine that mean image should be done in lighttable, while dark frame subtraction is more appropriate for darkroom - in addition to likely wanting to apply the same dark frame to multiple images, the dark frame itself is likely to be a mean image. So we create one new file for the mean image, but then avoid adding a new image for each image to which DFS is applied.

You might want to check whether the composite module can already be used for DFS - centered at 100% scale, 100% opacity and using a difference blend mode. I suspect that its current limitation of rendering the overlay at 8 bpc will cause problems, though.
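The 8 bpc concern can be made concrete with made-up numbers: quantizing a 14-bit dark frame to 8 bits per channel collapses nearby dark-current levels onto the same step, destroying exactly the fine structure DFS is meant to remove (the bit depths and pixel values below are assumptions for illustration only):

```python
import numpy as np

raw_max = 2 ** 14 - 1                           # assumed 14-bit raw white level
dark = np.array([512.0, 517.0, 523.0, 530.0])   # four distinct dark-current levels

# Round-trip the dark frame through an 8-bit-per-channel representation.
dark_8bpc = np.round(dark / raw_max * 255.0) / 255.0 * raw_max

print(np.unique(dark_8bpc).size)                # 1: all four levels collapse to one step
```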

@ralfbrown added the labels "feature: new" (new features to add) and "scope: image processing" (correcting pixels) on Jul 28, 2025
@sengebusch (Contributor, Author)

@ralfbrown
From a handling point of view, the composite module really is a good choice. Thank you for the suggestion.

From a processing point of view, we should keep in mind that the DFS will fail if any other filter was applied!
It has to be done on RAW data, even before demosaicing.

I do a lot of DFS for many different camera types in my daily scientific and industrial work, and I know very
well which problems you can run into if any filter was applied.
Even if darktable will not often be used for science, I would really like to try to write this processing in a correct way.

(At the moment, for my work, a correct result is more important to me than the question of whether lighttable or darkroom
is the better place. Currently it works fine.)

But let me try to understand the pixelpipe and where to place this new module.
Perhaps a combination of composite (for the image selection) and black/white point (for handling
RAW data early in the pipe) could do the job.

Thank you for this fresh and productive discussion.

@ralfbrown (Collaborator)

Right, I forgot that DFS needs to be done on the truly raw data. That means it should come between raw black/white point and (legacy) white balance in the pixelpipe.

@parafin (Member) commented Jul 29, 2025

Definitely not after rawprepare (raw black/white point). It has to be either before rawprepare (and add the expected black point value), or be integrated into rawprepare as an optional replacement for black point subtraction. Note that rawprepare already supports applying a gain map from DNG, which is flat-field correction and is closely related to dark-frame subtraction.

@sengebusch (Contributor, Author)

@parafin
rawprepare sounds like a good place.
If we subtract a dark frame, the black point is subtracted automatically, as both images should have the same one.

But we have to make sure not to clip at zero due to read noise, so we should add back some margin ...
something in the range of five times the read noise, i.e. five sigma, should be fine, but unfortunately
the read noise of the two images can differ because of the averaging, so it is not easy to determine.

Alternatively, we go through the final image and add just enough offset that no pixel is clipped at zero.
If there is no clipped pixel at all, we do nothing.
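That alternative can be sketched in a few lines; this is a hypothetical NumPy illustration of the idea (the function name and sample values are made up):

```python
import numpy as np

def anti_clip_offset(img):
    """Smallest offset that keeps every pixel at or above zero."""
    lowest = float(img.min())
    return -lowest if lowest < 0.0 else 0.0

# After dark-frame subtraction, read noise pushes some pixels negative.
subtracted = np.array([-3.2, 0.0, 14.5, 220.0])
safe = subtracted + anti_clip_offset(subtracted)

print(float(safe.min()))                        # 0.0: nothing clips
print(anti_clip_offset(np.array([1.0, 2.0])))   # 0.0: untouched if nothing would clip
```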

@sengebusch (Contributor, Author)

From last year's problems with rawprepare and the GFX100 I learned that

plugins/darkroom/rawprepare/allow_editing_crop=TRUE

gives the possibility to set values which are used by rawprepare.

I will take a look into the code; perhaps this could be a template for passing the 2nd image to rawprepare.

@parafin (Member) commented Jul 29, 2025

Clipping is a totally separate issue, which doesn't depend on whether you're using just the black level or a dark frame. If you shift the zero, the image is no longer linear, and all subsequent maths are invalid. As far as I understand, the image is already in float format at the rawprepare output, so it can hold negative values. Though I'm not sure which of the following iops can handle them and when the clipping will happen.
My point is that if you think there is an issue with zero clipping, you should raise a separate issue and solve it separately.

@sengebusch (Contributor, Author)

You are right: as rawprepare outputs float, clipping is not an issue.
It seems that the exposure module could be used to correct clipped images, if really necessary.
So let's start with DFS only and assume that DFS replaces the black point subtraction.
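The point that DFS replaces black point subtraction can be shown with a small, hypothetical NumPy sketch: an ideal dark frame already contains the black level plus the per-pixel fixed-pattern offsets, so subtracting it removes both at once (the black level, offsets and signal values are invented for the example):

```python
import numpy as np

black_level = 512.0
fixed_pattern = np.array([1.5, -0.8, 3.1, 0.2])   # per-pixel dark-current offsets
signal = np.array([100.0, 40.0, 250.0, 0.0])      # true scene signal

raw = signal + black_level + fixed_pattern        # what the sensor records
dark = black_level + fixed_pattern                # ideal (noise-free) mean dark frame

plain = raw - black_level                         # black point only: pattern remains
dfs = raw - dark                                  # dark frame: pattern removed too

print(np.allclose(dfs, signal))                   # True
print(np.allclose(plain, signal))                 # False
```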

@TurboGit TurboGit marked this pull request as draft July 30, 2025 19:10
@jenshannoschwalm jenshannoschwalm changed the base branch from master to align_image_stack August 4, 2025 15:14