Click the left, right, up and down icons in the lower-right corner of any slide. Left and right traverse high-level topics; in-depth material for selected topics is available via the down arrow. Press the ESC key to see an overview and navigate directly to a topic.
The Digital Imaging and Remote Sensing Image Generation (DIRSIG) model is a physics-driven data generation package. It is used to simulate electro-optical and infrared (EO/IR) data (primarily images) collected by ground, airborne and space-based imaging systems.
The primary goal of the model is to support science and engineering trade studies for remote sensing systems.
DIRSIG models how user-defined virtual sensors collect data in a user-created 3D world. That world is defined by 3D geometry using common facet and volumetric representations. The materials applied to the scene geometry are spectral with coverage across the entire EO/IR spectrum.
The model uses an arbitrary-bounce, path-tracing numerical radiometry approach for light propagation.
Simulated Port of Tacoma Change Pair
Check out this Open Access (free) paper in IEEE JSTARS for a detailed description of the DIRSIG5 model.
A complex object represented as a mesh of facets (aka polygons)
A complex volume is represented by a grid of 3D "voxels".
Plots of bi-directional reflectance distribution functions (BRDFs) with varying levels of specularity.
Plots of the bi-directional reflectance distribution function (BRDF) for a material as a function of wavelength.
Path Tracing Concept
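To make the path-tracing concept concrete, here is a toy Monte Carlo radiance estimator in Python. It only illustrates the arbitrary-bounce idea and is not DIRSIG's implementation; the sky radiance, reflectance and escape probability are invented values.

```python
# Toy arbitrary-bounce path-tracing estimate of at-sensor radiance.
# Illustrative only; all values below are invented, not DIRSIG inputs.
import random

SKY_RADIANCE = 0.2   # assumed uniform sky radiance seen by escaping paths
REFLECTANCE  = 0.35  # assumed Lambertian reflectance at every bounce
P_ESCAPE     = 0.4   # assumed chance a bounced ray escapes to the sky

def trace_path(max_bounces=50):
    """Follow one path from the sensor and return its radiance contribution."""
    throughput = 1.0
    for _ in range(max_bounces):
        if random.random() < P_ESCAPE:
            return throughput * SKY_RADIANCE   # path reached the sky
        throughput *= REFLECTANCE              # surface bounce attenuates the path
        # Russian roulette: randomly terminate weak paths without biasing the mean.
        if throughput < 0.05:
            if random.random() < 0.5:
                return 0.0
            throughput /= 0.5
    return 0.0  # bounce limit reached

def estimate_radiance(n_paths=100_000):
    """Monte Carlo average over many paths (one pixel, one spectral sample)."""
    return sum(trace_path() for _ in range(n_paths)) / n_paths

print(f"estimated at-sensor radiance: {estimate_radiance():.4f}")
```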
The simulated collection systems can produce (but are not limited to) the following types of data products:
The Digital Imaging and Remote Sensing Image Generation (DIRSIG) model has been actively developed at the Digital Imaging and Remote Sensing (DIRS) Laboratory at Rochester Institute of Technology (RIT) for three decades.
The software has been freely available to the user community since 1999, provided the user has attended a DIRSIG Basic Training session.
A DIRSIG RGB simulation of a space-based sensor imaging southern California with cirrus clouds
When the DIRSIG model is provided carefully crafted scenes, the results are qualitatively and quantitatively impressive. These are images created by DIRSIG of the "Harvard Forest" scene developed at RIT.
Overhead simulation of the "Harvard Forest" scene
DIRSIG's built-in image viewer showing the output of a multi-spectral WorldView-3 (WV3) simulation, including the band names, centers and widths.
A conceptual representation of the optional raster truth data cube.
Viewed in DIRSIG's built-in image viewer, the raster truth data cube presents as a multi-band image whose band names indicate each truth feature.
Animation of an RGB simulation followed by a subset of raster truth products including the most abundant material, range, relative angle to the sensor, bounce count, sun fraction and sky fraction.
The RAMI phases have all focused on benchmarking "models designed to simulate the transfer of radiation at or near the Earth's terrestrial surface, i.e., in plant canopies and over soil surfaces."
RAMI simulations typically consist of radiance scans across portions of the hemisphere for "abstract" (e.g., statistically distributed) and "actual" vegetation canopies. All participating models use the same sets of inputs for each defined problem and then submit their results.
DIRSIG image simulations of some of the RAMI-V "actual" canopy scenarios.
Final results from RAMI-V have not been published yet, but we have compared DIRSIG to submitted results from the earlier RAMI-IV phase. In the cases shown here, DIRSIG agrees favorably with the average of the results from models that participated in that phase.
Scenes are generally created for specific projects and, hence, their extent and spatial resolution are driven by requirements for those projects.
LANDSAT-8 Simulation of Lake Tahoe (30 m GSD)
Handheld camera simulation of a vehicle under a camouflage net (0.3 m GSD)
Demonstration of wide-area scene approaches currently in development.
The "Alpine Scene" project is an internal project to explore and optimize methods for building large area scenes that are eventually 100s of km across. This prototype scene is not a real-world location, but is inspired by Mt Hood. The initial iteration of the scene is 40 km x 40 km and contains 10,000,000 conifer trees.
This is a DIRSIG full-motion video (FMV) simulation of the SkySat-1 satellite imaging Tacoma, WA. The perspective of the scene changes as the satellite moves in its orbit. The scene contains dynamic content including moving vehicles and people.
An airfield scene featuring different types of motion
Simulation of a high-altitude Unmanned Aerial System (UAS) over Irondequoit, NY with a traffic simulation that was generated by and imported from the Simulation of Urban MObility (SUMO) traffic model.
DIRSIG leverages the physics-driven MODTRAN™ model developed by Spectral Sciences, Inc. (SSI) for atmospheric radiative transfer (direct solar and diffuse sky illumination, path scattering, path emission, path transmission, etc.). DIRSIG pre-builds databases unique to each simulation that incorporate the geolocation, day of year, time of day and the MODTRAN description of the atmosphere (aerosols, visibility, etc.).
MODTRAN is not bundled with DIRSIG and must be provided by the user.
Dawn
Midday
Dusk
The diurnal (dawn to dusk) simulation above demonstrates the automated coupling with MODTRAN and DIRSIG's built-in solar and lunar ephemeris modules.
DIRSIG supports refraction along paths in the atmosphere and can be directly coupled to the temperature, pressure and water vapor profiles utilized in MODTRAN. Below are simulations of a very long (20 km) slant-path view of a 1 x 1 meter USAF bar target. The mean path refraction is a few degrees and the wavelength-dependent refraction (angular dispersion from the mean path) between the RGB channels is around 1 microradian.
Without wavelength-dependent refraction (all wavelengths refract based on the average wavelength)
With wavelength-dependent refraction (this simulation takes 3x longer to compute 3 wavelength-dependent solutions)
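A quick back-of-the-envelope check, using only the numbers quoted above, shows why that small dispersion is visible against a 1 meter target:

```python
# Back-of-the-envelope check of the chromatic dispersion described above.
slant_range_m  = 20_000   # 20 km slant path
dispersion_rad = 1e-6     # ~1 microradian spread between the RGB channels
target_size_m  = 1.0      # 1 x 1 m USAF bar target

# Small-angle approximation: lateral offset at the target = angle * range.
shift_m = dispersion_rad * slant_range_m
print(f"RGB channel shift at the target: {shift_m * 100:.0f} cm "
      f"({100 * shift_m / target_size_m:.0f}% of the bar target width)")
# -> about 2 cm, i.e. ~2% of the 1 m bars, enough to fringe the bar edges.
```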
DIRSIG has a pair of plugins that leverage the industry standard OpenVDB format for storing volumetric data such as clouds and plumes. The plugins support data-driven motion and temporal evolution of these volumes.
Volumetric optical properties support descriptions of the spectral extinction, absorption and/or scattering.
The same path-tracing radiometric solution used for traditional 3D scene geometry is also used for volumes. The paths through these volumes might involve tens of "bounces".
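As a toy illustration of why volume paths can accumulate so many interactions, the sketch below samples free-flight distances through a homogeneous, cloud-like slab. The extinction, single-scatter albedo and thickness are invented values, and real VDB volumes are of course spatially varying.

```python
# Toy free-flight sampling through a homogeneous participating medium.
# Illustrative only; invented values, not DIRSIG's volume machinery.
import math
import random

EXTINCTION = 0.8   # 1/m, assumed spectral extinction coefficient
ALBEDO     = 0.9   # scattering fraction per interaction (cloud-like)
THICKNESS  = 10.0  # m, depth of the slab along the ray

def scattering_events():
    """Count scattering 'bounces' for one ray marching through the slab."""
    depth, events = 0.0, 0
    while True:
        # Sample the distance to the next interaction (exponential free flight).
        depth += -math.log(1.0 - random.random()) / EXTINCTION
        if depth > THICKNESS:
            return events        # ray exits the volume
        if random.random() > ALBEDO:
            return events        # photon absorbed
        events += 1              # scattering event (direction changes ignored in this toy)

counts = [scattering_events() for _ in range(50_000)]
print(f"mean scattering events per path: {sum(counts) / len(counts):.1f}")
print(f"max  scattering events per path: {max(counts)}")
```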
A DIRSIG simulation of a "tornado" VDB sequence that has been assigned cloud optical properties.
A visible region simulation of a cloud over MegaScene1. This cloud VDB model was downloaded from a third-party source, but commercial tools to generate clouds are available. Clouds can be modeled using default spectral scattering and absorption optical properties or be overridden by user-supplied properties.
Visible region animation of a water vapor plume from a mechanical draft cooling tower (MDCT).
The simulated video above models a consumer 2D-array RGB sensor on a UAV. The simulation includes the jitter of the small vehicle platform, which results in the observed bouncing and blur.
Lens Distortion
CFA Patterns
The user can operate an array without temporal integration (output is instantaneous radiance) or with temporal integration using either a global or rolling shutter.
No Integration
Global Shutter
Integration time and readout clock
Rolling Shutter
Integration time, line delay time and readout clock
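A minimal sketch of the timing implied by the two shutter modes above; the parameter names and values are illustrative, not DIRSIG configuration keywords.

```python
# Hypothetical exposure-window timing for global vs. rolling shutter readout.
INTEGRATION_TIME = 5e-3    # s, per-pixel integration time (invented value)
LINE_DELAY       = 20e-6   # s, rolling-shutter delay between successive lines
N_LINES          = 4       # keep the printout short

def exposure_window(line, rolling):
    """Return the (start, end) integration times for one detector line."""
    start = line * LINE_DELAY if rolling else 0.0
    return start, start + INTEGRATION_TIME

for rolling in (False, True):
    print("rolling shutter:" if rolling else "global shutter:")
    for line in range(N_LINES):
        t0, t1 = exposure_window(line, rolling)
        print(f"  line {line}: integrate {t0 * 1e3:6.3f} ms -> {t1 * 1e3:6.3f} ms")
```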
The pan channel from a DIRSIG simulation of an airborne multi-spectral pushbroom system featuring detector variations and a dead pixel.
The red, green and blue channels from a DIRSIG simulation of an airborne multi-spectral pushbroom system showing the offsets in the raw data between focal planes.
DIRSIG simulations under different atmospheric conditions made with a generic pushbroom V/NIR/SWIR airborne hyper-spectral system.
Uniform Channels
Non-Uniform Channels
DIRSIG's per-pixel truth system can provide data to measure the performance of various HSI algorithms (e.g., sub-pixel target detection).
The model supports a variety of data- and model-driven temperature prediction solutions. The simulation above leverages the MuSES EO/IR Signature simulation software for the vehicle temperature signatures and one of DIRSIG's built-in models for the rest of the scene.
Simulation of a passive low-light system imaging a parking lot. The low-light radiometric product from DIRSIG was then processed by a user program to model a conventional micro-channel plate (MCP) image intensified CCD camera (ICCD).
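For context, the kind of user program described above can be quite small. This sketch converts a low-light photon count into intensified counts; the quantum efficiency, gain and dark-count numbers are invented purely for illustration and this is not the program used for the image above.

```python
# Hypothetical ICCD post-processing of a DIRSIG low-light product.
# All gain/noise values are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
QUANTUM_EFFICIENCY = 0.2       # photocathode QE (assumed)
MCP_GAIN           = 10_000.0  # mean micro-channel plate electron gain (assumed)
DARK_MEAN          = 2.0       # mean dark photoelectrons per pixel per frame
ADU_PER_ELECTRON   = 1e-3      # CCD conversion gain (assumed)

def iccd_counts(photons):
    """Photons at the cathode -> Poisson photoelectrons -> amplified counts."""
    photoelectrons = rng.poisson(photons * QUANTUM_EFFICIENCY + DARK_MEAN)
    return photoelectrons * MCP_GAIN * ADU_PER_ELECTRON

# A tiny stand-in for one frame of the low-light photon product.
photons = np.array([[0.0, 5.0], [50.0, 500.0]])
print(iccd_counts(photons))
```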
Output of the BundledObject2 demo included with DIRSIG, which shows how to add motion to sources and attach those sources to moving objects.
The model supports bi-directional propagation of the transmitter beam, time of flight tracking along all paths and user-defined receiver detection. The user can use the existing platform model to incorporate various scan patterns.
You can learn more about this modality in the LIDAR Modality Handbook.
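As a small illustration of the time-of-flight bookkeeping, the sketch below converts round-trip arrival times into ranges under a monostatic assumption; the arrival times themselves are invented.

```python
# Convert round-trip time of flight to range (monostatic LIDAR geometry).
# Illustrative only; the arrival times below are invented.
C = 299_792_458.0  # speed of light, m/s

def tof_to_range(tof_s):
    """Round-trip time of flight -> one-way range."""
    return 0.5 * C * tof_s

# Two returns from terrain near 3 km plus one dark-count 'noise' event.
arrival_times_s = [2.001e-5, 2.003e-5, 1.200e-5]
for t in arrival_times_s:
    print(f"TOF {t * 1e6:6.2f} us -> range {tof_to_range(t):7.1f} m")
```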
A simulation of an airborne GmAPD scanning LIDAR system. The dense Level-1 point cloud arises from the dark current noise inherent to the system being modeled.
The same Level-1 point cloud shown after being clipped in the data viewer to reveal the terrain and objects on the terrain in the scene.
A TEM22 beam example from the LidarUserBeam1 demo.
A mosaic of outputs from water-centric DIRSIG simulations.
The DIRSIG model supports modeling ground-to-space and space-to-space collection scenarios in support of Space Domain Awareness (SDA) or Space Situational Awareness (SSA) missions.
Real and DIRSIG simulated images of the Hubble Space Telescope (HST) as imaged from the shuttle during a servicing mission (simulation featured at the 2014 AMOS conference and produced at Lockheed-Martin by David Bennett, et al).
A mosaic of chips for a single Tu-16 ("Badger") bomber.
A collection of chips for multiple targets on a variety of grassy backgrounds.
ChipMaker was originally designed to support algorithm users rather than sensor engineers. Machine Learning (ML) algorithms have generally been trained with higher-level processed data (L2+) in which many sensor characteristics have been compensated for or corrected in some way. Furthermore, most algorithm users are not aware of engineering-level details of the sensor they are interested in. Hence, the sensor modeled in ChipMaker is simplified and configured with higher-level descriptors:
ChipMaker is an evolving capability and as ML usage and training changes, the tool and workflows will evolve as well.
The DIRSIG4 model supports modeling Synthetic Aperture Radar (SAR) systems and outputs the complex phase history, which must be externally processed (focused) into a traditional SAR image product.
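A toy example of the kind of external focusing step that is needed: backprojection of a synthetic single-point-target phase history. The geometry, frequency and grid below are invented for illustration; this is neither DIRSIG's phase history format nor a production SAR focuser.

```python
# Toy backprojection of a synthetic phase history from one point target.
# Illustrative only; invented geometry, not DIRSIG's output format.
import numpy as np

C, FC   = 3e8, 10e9                       # propagation speed, center frequency
WAVELEN = C / FC
platform_x = np.linspace(-200, 200, 201)  # m, along-track pulse positions
standoff_y = 5_000.0                      # m, broadside standoff distance
target_x   = 10.0                         # m, true along-track target position

# Phase history: one complex sample per pulse, phase set by the two-way range.
ranges = np.hypot(platform_x - target_x, standoff_y)
phase_history = np.exp(-1j * 4 * np.pi * ranges / WAVELEN)

# Backprojection: apply the conjugate phase for each candidate pixel and sum;
# only the true target position sums coherently.
xs = np.linspace(-20, 20, 81)
image = np.array([
    np.sum(phase_history * np.exp(1j * 4 * np.pi *
                                  np.hypot(platform_x - x, standoff_y) / WAVELEN))
    for x in xs
])
print(f"peak at x = {xs[np.abs(image).argmax()]:.1f} m (target at {target_x} m)")
```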