Keywords: sensor, temporal

Summary

This demo describes how to enable temporal integration of pixels. This allows motion blur from scene objects and/or platform motion to be included in the output data product. This demo complements the TemporalIntegration1 demo. In that demo, the platform is fixed and the car in the scene is moving; in this demo, the platform is moving and the car in the scene is fixed. In both cases, the car appears blurred due to the relative motion of the car and the platform.

Details

At its core, the DIRSIG model is a ray tracer that samples the scene with rays that have zero area. In order to spatially integrate a pixel, a number of rays must be used to spatially sample the angular extent of the pixel. However, this spatial sampling performs intersections at a single instant in time. By default, the clock for each focal plane is essentially a read-out clock that reflects what the focal plane sees at that instant. In order to account for motion (temporal variation) of either the objects in the scene or the platform, the signal onto each pixel must be sampled temporally (just as each pixel must be sampled spatially to capture spatial variation within the pixel).

Note
Like spatial sampling, temporal sampling results in increased run time due to the additional calculations that must be performed.
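The spatio-temporal sampling described above can be sketched as a nested loop: average the radiance over several temporal samples, each of which reuses the spatial sampling scheme. This is an illustrative sketch, not DIRSIG internals; `trace_ray` is a hypothetical callback standing in for the ray tracer.

```python
import random

def render_pixel(trace_ray, t_start, t_int, n_temporal, n_spatial):
    """Average radiance over spatial and temporal samples for one pixel.

    trace_ray(u, v, t) is a hypothetical callback returning the radiance
    seen by a zero-area ray at sub-pixel offset (u, v) at time t.
    """
    total = 0.0
    for i in range(n_temporal):
        # Place each temporal sample at the midpoint of its sub-interval.
        t = t_start + (i + 0.5) * t_int / n_temporal
        for _ in range(n_spatial):
            # Jitter the ray within the pixel footprint (stratification omitted).
            u, v = random.random(), random.random()
            total += trace_ray(u, v, t)
    # Every spatial/temporal sample is weighted equally.
    return total / (n_temporal * n_spatial)
```

The run-time cost noted above falls out of the loop structure: the number of ray traces is the spatial sample count multiplied by the temporal sample count.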

Important Files

Platform Description

As of DIRSIG 4.7.3, the temporal integration parameters can be edited on the Clock tab of the Focal Plane editor. The excerpt below shows where these parameters appear in the .platform file.

            <capturemethod>
              ...
              <temporalintegration>
                <time>0.1</time>
                <samples>10</samples>
              </temporalintegration>
            </capturemethod>
            <detectorarray spatialunits="microns" >
              <clock type="independent" temporalunits="hertz" >
                <rate>10</rate>
              </clock>
              ...
            </detectorarray>

The <clock> for this focal plane has a 10 Hz rate, which means the array is read out every 0.1 seconds. Within the <temporalintegration> element, the <time> is the integration time, which can be controlled independently of the clock (read-out) rate. In this case, it is set to 0.1 seconds, which means the signal onto each pixel is integrated for the entire period between read outs.

Important
There is no logic in place at this time to check whether the integration time is less than the read-out period. As a result, it is possible to integrate for 0.2 seconds but read out that signal every 0.1 seconds.
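Since DIRSIG itself does not perform this check, a user could sanity-check their own .platform values with a few lines like the following (an illustrative helper, not part of DIRSIG):

```python
def check_timing(clock_rate_hz, integration_time_s):
    """Compare the integration time to the read-out period.

    The read-out period is the reciprocal of the clock rate; warn if the
    integration window is longer than the time between read outs.
    """
    readout_period = 1.0 / clock_rate_hz
    if integration_time_s > readout_period:
        print(f"warning: integrating {integration_time_s} s but reading "
              f"out every {readout_period} s")
    return readout_period
```

For the 10 Hz clock in the excerpt above, `check_timing(10, 0.1)` returns a 0.1 second read-out period that exactly matches the configured integration time.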

The <samples> variable controls how many times each pixel is temporally sampled during the integration time. In this example, the total integration time is 0.1 seconds and 10 samples are used within that time. For each temporal sample, the same spatial sampling scheme is employed, and each temporal sample is equally weighted in the final result. As the model traces each spatial/temporal sample ray, the platform and any moving scene geometry are appropriately positioned. Therefore, the positions of the platform and/or scene geometry will be different for each temporal sample.

Note
Increasing the number of temporal samples generally leads to a linear increase in run time. For example, 10 temporal samples will take twice as long as 5.

Platform Motion

The demo.ppd file describes the platform moving at about 55 MPH (about 25 m/s).
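A quick back-of-the-envelope calculation shows how much blur to expect from this motion (the speed and window below are taken from the demo configuration described in this document):

```python
# Rough size of the motion blur for this demo: at ~25 m/s, the platform
# moves 2.5 m relative to the scene during the 0.1 s integration window.
speed_m_s = 25.0      # ~55 MPH platform speed from demo.ppd
integration_s = 0.1   # <time> value from the .platform file
blur_m = speed_m_s * integration_s
print(blur_m)
```

Since 2.5 m is a substantial fraction of a car's length, the blur is clearly visible in the output imagery.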

Impact on Radiometric Units

Turning on temporal integration means that the radiance is integrated with respect to time. That means your Watts term will go from Watts to Watts * seconds, or Joules/second * seconds = Joules (energy). If you decrease the <time>, the energy (Joules) will decrease; integrate longer, and you get more energy (Joules). The output image files will go from being Watts/cm2/sr to Joules/cm2/sr.
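The unit change follows directly from the equal-weight temporal sampling: the mean of the radiance samples is multiplied by the integration window. A minimal sketch of that arithmetic (illustrative only, not DIRSIG internals):

```python
def integrate_radiance(samples_w_cm2_sr, integration_time_s):
    """Turn equally weighted radiance samples [W/cm^2/sr] into the
    time-integrated quantity [J/cm^2/sr]: the sample mean (a rate)
    multiplied by the integration window (seconds) yields energy.
    """
    mean = sum(samples_w_cm2_sr) / len(samples_w_cm2_sr)
    return mean * integration_time_s
```

For example, a constant 2.0 W/cm2/sr integrated over 0.1 seconds yields 0.2 J/cm2/sr, and halving <time> halves that result.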

Setup

The simulation is currently configured to read out a 320 x 240 2D array once (a single task with no duration). This produces a single output image frame.

  1. Run the file blur_off.sim

  2. Review the results in demo.img

  3. Run the file blur_on.sim

  4. Review the results in demo.img

  5. Manipulate the <samples> variable in blur_on.platform and rerun.

When temporal sampling and integration are enabled, the car will be blurred because it moves while the detector is integrating the signal.

Results

The two images below show the impact of modeling the temporal integration that occurs between array read-outs for the simple framing array camera modeled in this demo. The first image shows the result when the integration is not performed or when the number of samples is set to 1: the car driving across the scene is captured instantaneously.

blur_off.png
Figure 1. Output image without temporal integration between readouts (samples = 1).

The second image shows the result when temporal integration is enabled for a time window of 0.1 seconds (the full period between read outs) with 10 temporal samples. Because the car travels a significant distance during that integration time, the image of the car is blurred in the velocity direction.

blur_on.png
Figure 2. Output image with temporal integration between readouts (samples = 10).

Modeling Time-Delayed Integration (TDI)

Because each pixel is imaged multiple times and combined, this scheme is very similar to the Time-Delayed Integration (TDI) architectures used in many pushbroom focal planes. In a real TDI pushbroom array, each primary pixel has N additional pixels (usually referred to as "stages") behind it in the along-track direction. The first stage pixel is integrated and the resulting charge is moved into the second stage for the next integration cycle. In the meantime, the platform has advanced in the along-track direction so that the second stage pixel is now looking at the same location on the ground. This transfer and platform advance continue until the last stage, where the final accumulated charge is read out. As a result, a new line of data comes out for each read-out cycle, but it reflects the integration of charge across N integration times, which is much longer than a single read-out period. TDI is employed because the accumulation of charge effectively increases the overall integration (exposure) time, which enhances the signal-to-noise ratio.

Because the temporal integration mechanism in DIRSIG accumulates the signal across all the samples in the integration time, and because the platform and scene objects move between temporal samples, the output produced with this method is nearly the same as TDI.

Note
N temporal samples is a good approximation of N TDI stages.

The missing component in this approach is that DIRSIG does not model the noise contributed by each stage. Therefore, the user must apply the effective noise level for the combination of all TDI stages.
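The TDI accumulation described above can be sketched as a sliding-window sum over read-out cycles. This is a simplified illustration of the charge-transfer behavior, not a model of any particular sensor:

```python
def tdi_readout(per_cycle_signal, n_stages):
    """Approximate an N-stage TDI read-out as a sliding-window sum.

    per_cycle_signal[i] is the (hypothetical) signal integrated during
    read-out cycle i as the platform advances one line per cycle.  Each
    output line accumulates n_stages cycles of charge before the final
    stage is read out, so the effective exposure is n_stages times the
    single-cycle integration time.
    """
    return [sum(per_cycle_signal[i:i + n_stages])
            for i in range(len(per_cycle_signal) - n_stages + 1)]
```

With a constant per-cycle signal, each output line carries n_stages times the single-cycle signal, which is the exposure gain that motivates TDI in the first place.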