DIRSIG Overview

An overview of the Digital Imaging and Remote Sensing Image Generation (DIRSIG) model


Learn more at www.dirsig.org

Click the left, right, up, and down icons in the lower-right corner of any slide. Left and right traverse high-level topics; in-depth material for selected topics is available via the down arrow. Use the ESC key to see an overview and navigate directly to topics.

What is DIRSIG?

The Digital Imaging and Remote Sensing Image Generation (DIRSIG) model is a physics-driven data generation package. It is used to simulate electro-optical and infrared (EO/IR) data (primarily images) collected by ground, airborne and space-based imaging systems.

The primary goal of the model is to support science and engineering trade studies for remote sensing systems.

DIRSIG models how user-defined virtual sensors collect data in a user-created 3D world. That world is defined by 3D geometry using common facet and volumetric representations. The materials applied to the scene geometry are spectral with coverage across the entire EO/IR spectrum.

The model uses an arbitrary bounce, path tracing numerical radiometry approach for light propagation.

Simulated Port of Tacoma Change Pair

Check out this Open Access (free) paper in IEEE JSTARS for a detailed description of the DIRSIG5 model.

Scene Geometry

  • DIRSIG is fundamentally a ray tracer
    • Light paths are represented as rays; a ray is defined by an origin (3D point) and a direction (3D vector).
    • Scene geometry is represented by mathematical surfaces (e.g., planes, spheres, boxes, etc.).
    • If the ray and a surface intersect, they share a point: the point where the ray's line equation and the surface's equation are satisfied simultaneously (a minimal intersection sketch follows this list).
  • The most common surface is a "facet" (aka a polygon).
    • A facet is a mathematical plane that is constrained by a set of vertices – 3D points that are on the plane.
  • Complex objects are represented as a mesh, or a collection of facets.
    • Object meshes are generated externally in 3D content creation tools and imported into DIRSIG via common 3D formats.
  • Simple/specialized objects are supported natively
    • Curved surfaces (e.g., spheres) can be represented mathematically to avoid quantizing these surfaces with facets.
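As a concrete illustration of the ray-facet test described above, here is a minimal, self-contained Python sketch (not DIRSIG code) using the Möller-Trumbore algorithm for a triangular facet; the coordinates and the function name are purely illustrative.

```python
# Minimal sketch (not DIRSIG code): intersecting a ray with a triangular facet.
# The ray is an origin (3D point) plus a direction (3D vector).
import numpy as np

def intersect_ray_facet(origin, direction, v0, v1, v2, eps=1e-9):
    """Return the hit distance t along the ray, or None if the ray misses the facet."""
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = np.dot(e1, p)
    if abs(det) < eps:               # ray is parallel to the facet plane
        return None
    inv_det = 1.0 / det
    s = origin - v0
    u = np.dot(s, p) * inv_det
    if u < 0.0 or u > 1.0:           # hit point falls outside the facet
        return None
    q = np.cross(s, e1)
    v = np.dot(direction, q) * inv_det
    if v < 0.0 or u + v > 1.0:
        return None
    t = np.dot(e2, q) * inv_det      # distance along the ray to the hit point
    return t if t > eps else None

# Example: a ray fired straight down at a horizontal facet
t = intersect_ray_facet(np.array([0.25, 0.25, 10.0]), np.array([0.0, 0.0, -1.0]),
                        np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]),
                        np.array([0.0, 1.0, 0.0]))
print(t)  # 10.0
```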

A complex object represented as a mesh of facets (aka polygons)

A complex volume is represented by a grid of 3D "voxels".

Scene Materials

  • Surfaces and volumes are assigned materials
    • Materials include spectral optical properties (e.g., reflectances, emissivities, absorptions, extinctions, etc.).
    • Advanced properties describe a directional dependence in addition to the spectral dependence (see the sketch after this list).
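To make the spectral-plus-directional idea concrete, here is a hypothetical Python sketch (not DIRSIG's material format) of a reflectance that combines a wavelength-dependent diffuse albedo with a simple Phong-style specular lobe; all values are illustrative.

```python
# Hypothetical sketch: a material whose reflectance is both spectral (diffuse albedo
# per wavelength) and directional (a simple Phong-style specular lobe added to a
# Lambertian term). Values are illustrative, not a DIRSIG material description.
import numpy as np

wavelengths_um = np.array([0.45, 0.55, 0.65])    # blue, green, red samples
diffuse_albedo = np.array([0.05, 0.30, 0.10])    # illustrative spectral curve

def brdf(wi, wo, n, specular=0.04, shininess=50.0):
    """Directional + spectral reflectance [1/sr] for incident/exitant unit vectors."""
    lambertian = diffuse_albedo / np.pi                     # spectral, direction-independent
    r = 2.0 * np.dot(wi, n) * n - wi                        # mirror direction of wi about n
    lobe = specular * (shininess + 2) / (2 * np.pi) * max(np.dot(r, wo), 0.0) ** shininess
    return lambertian + lobe                                # one value per wavelength

n  = np.array([0.0, 0.0, 1.0])
wi = np.array([0.0, -np.sin(np.radians(30)), np.cos(np.radians(30))])
wo = np.array([0.0,  np.sin(np.radians(30)), np.cos(np.radians(30))])
print(brdf(wi, wo, n))   # largest in the specular direction, varies by wavelength
```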

Plots of bi-directional reflectance distribution functions (BRDFs) with varying levels of specularity.

Plots of the bi-directional reflectance distribution function (BRDF) for a material as a function of wavelength.

Numerical Radiometry Method

  • DIRSIG is a radiance model
    • It computes the rate of energy transfer – the flux per unit area, unit wavelength, and unit solid angle.
    • Radiance is an instantaneous measure of energy transfer (power).
  • DIRSIG uses a Monte Carlo integration method known as path tracing to estimate the global illumination (infinite bounce) flux arriving into a pixel.
    • It leverages reverse ray tracing to find multi-bounce paths that link sources of energy to a given pixel.
    • Rich surface and volume optical properties (e.g., BRDFs) drive the directions of paths through the scene.
    • High value sources (sun, moon, man-made, etc.) are explicitly handled to minimize sampling noise.
    • Many paths (tens to hundreds) are accumulated to estimate the radiance arriving at the pixel (a simplified estimator sketch follows this list).
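The sketch below illustrates the Monte Carlo idea in its simplest form: a single-bounce estimate of the radiance leaving a Lambertian surface under a uniform sky, where many sampled directions ("paths") are averaged and the noise drops as the sample count grows. It is a conceptual Python example, not DIRSIG's path tracer, and the albedo and sky radiance are made-up values.

```python
# Conceptual sketch of Monte Carlo path-sample averaging (single bounce, uniform sky).
import numpy as np

rng = np.random.default_rng(0)
albedo, sky_radiance, n_paths = 0.3, 100.0, 200   # illustrative values

def sample_uniform_hemisphere(rng):
    """Uniformly distributed direction on the hemisphere about +z (pdf = 1 / (2*pi))."""
    u1, u2 = rng.random(), rng.random()
    z = u1
    r, phi = np.sqrt(1.0 - z * z), 2.0 * np.pi * u2
    return np.array([r * np.cos(phi), r * np.sin(phi), z])

samples = []
for _ in range(n_paths):
    w = sample_uniform_hemisphere(rng)    # direction of the next path segment
    incident = sky_radiance               # a real tracer would trace w further into the scene
    cos_theta = w[2]
    # Monte Carlo estimator: (BRDF * incident radiance * cosine) / pdf
    samples.append((albedo / np.pi) * incident * cos_theta * 2.0 * np.pi)

print(np.mean(samples))   # approaches albedo * sky_radiance = 30 as n_paths grows
```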

Path Tracing Concept

General Capabilities

The simulated collection systems can produce (but are not limited to) the following types of data products:

  • Single/Broad Band (e.g., panchromatic)
  • Multi-Spectral Imaging (MSI)
  • Hyper-Spectral Imaging (HSI)
  • Low-light (amplified)
  • Mid-Wave Infrared (MWIR)
  • Long-Wave Infrared (LWIR)
  • Full-Motion Video (FMV)
  • Laser radar (Linear mode, Geiger mode, Waveform)


Primary Use Cases

  • Design Trade Studies
    • Construct and test new sensor designs in a virtual environment and evaluate design trade-offs.
    • Produce example data products for testing internal data processing pipelines and to share with stakeholders (customers, end users, etc.).
  • Phenomenology Studies
    • Explore how different real-world situations and processes can manifest themselves in different sensing modalities.
  • Algorithm Training
    • Generate data sets to train conventional data-driven and modern machine learning (ML) algorithms
    • Leverage access to image formation parameters to automatically generate metadata, labeling, etc.
  • Algorithm Testing
    • Generate test data to supplement expensive field collections, with control over image formation parameters that cannot be controlled in the real world (e.g., the weather).
    • Provide per-pixel truth for better evaluation of algorithm performance.

History

The Digital Imaging and Remote Sensing Image Generation (DIRSIG) model has been actively developed at the Digital Imaging and Remote Sensing (DIRS) Laboratory at Rochester Institute of Technology (RIT) for more than three decades.


The software has been freely available since 1999 to members of the user community who have attended a DIRSIG Basic Training session.

Detailed History

  • Started in the mid-1980s as a 3D long-wave infrared (LWIR) simulation tool for understanding the slant-view phenomenology of ships on the ocean.
  • DIRSIG2 was started in 1992
    • Around 8,000 lines of C and FORTRAN
    • Used internally for research at RIT
  • DIRSIG3 was started in 1998
    • Around 20,000 lines of C, C++ and FORTRAN
    • This version included a graphical user interface (GUI)
    • DIRSIG 3.2 was the first publicly released version (1999)
  • DIRSIG4 was started in 2004
    • Around 250,000 lines of C++
    • This version included LADAR/LIDAR and polarization
  • DIRSIG5 was started in 2018 (Current)
    • Around 150,000 lines of C++ (C++17), but uses the DIRSIG4-era GUI
    • This version is multi-threaded and supports Message Passing Interface (MPI)

DIRSIG vs. Traditional Renderers

  • Unlike most traditional RGB-only rendering packages, DIRSIG was designed from the ground up with a focus on simulating complex remote sensing systems.
    • All calculations are performed spectrally and it can output an arbitrary number of channels.
    • It employs absolute radiometry and physical units.
    • It leverages physics-based atmospherics using community accepted tools.
    • It models non-traditional “cameras” (not just 2D framing arrays) unique to this field.
    • It has support for thermal wavelengths (self emission).
    • It has support for active modalities (laser radar).
    • It provides per-pixel telemetry ("truth") that can be used to understand the data, evaluate algorithms, etc.

A DIRSIG RGB simulation of a space-based sensor imaging southern California with cirrus clouds

Model Strengths

  • Physics-driven, single-solution spectral radiometry
    • The same path tracing solution is utilized for all wavelength regions, surfaces and volumes, etc.
  • Atmospheric modeling
    • Leverages MODTRAN (not included) for atmospheric radiative transfer (direct and diffuse illumination, path scattering, path emission, path transmission, etc.).
    • Planned expansion to support the 6S atmospheric radiative transfer model.
  • Complex data collections
    • The primary sensor model supports a variety of staring and scanning cameras, multi-camera payloads, and multiple payloads within a single simulation.
    • Each sensor payload can be dynamically positioned (flight lines, orbits, etc.) during the collection.
  • Actively Developed for 35+ Years
    • The model is constantly improving thanks to strong relationships with its research sponsors and user community.

Model Weaknesses

  • Limited detection modeling (improving)
    • This is primarily due to historical usage, where most users wanted at-aperture radiance and would use their own in-house detection models.
    • Current versions of DIRSIG include a basic detection model that supports Shot (arrival) noise, dark current noise, read noise, quantization, etc.
  • Dependence on 3rd party atmospheric models
    • MODTRAN, 6S, etc. are powerful community accepted models, but they are 1D (they lack horizontal variation) and employ plane-parallel techniques.
    • Ideally the atmosphere would support horizontal variations and better solutions for large areas where earth curvature matters.
  • Limited built-in temperature prediction
    • The built-in THERM model is limited to environmentally loaded scenarios where lateral conduction doesn't matter.
    • The recently introduced MuSES plugin allows the user to leverage a high-fidelity, transient thermal solver.
  • Non-Commercial business model
    • The free access business model means development relies on competitive research grants and contracts rather than steady revenue from sales.
    • RIT has recently established a new Enterprise Center business model that allows the team to provide users with options for tech support contracts.

How do you learn to use it?

  • There is a multi-day training course that is required to gain access to the software.
    • The course is a balance of lecture and hands-on tutorials.
    • The step-by-step tutorials account for more than 100 pages (over 52,000 words) of material.
  • There is a vast amount of DIRSIG documentation available.
  • There are almost 130 individual demonstration simulations included with DIRSIG.
    • Each demo provides a working example of a specific feature.
    • Each demo includes a README that describes the key aspects that the demo is attempting to capture.

How realistic is it?

When the DIRSIG model is provided with carefully crafted scenes, the results are qualitatively and quantitatively impressive. These are images created by DIRSIG of the "Harvard Forest" scene developed at RIT.

Scene Construction Challenges

  • The biggest limitation in DIRSIG simulations is the quality of the input descriptions.
  • The commercial film and computer gaming industries have led the average person to believe that photorealism is easy and has been commoditized.
    • The reality is that creating photorealistic scene content is a special craft involving highly trained individuals.
    • A big-budget movie or game will employ thousands of 3D and 2D artists to create digital content.
  • Even with access to state-of-the-art real-time graphics platforms like Unreal Engine and Unity, the limiting factor is creating 3D content for those platforms.
    • Furthermore, these graphics platforms and the content creators that use them are usually focused on the visible wavelengths.
    • Modeling all the wavelengths of interest to the remote sensing community requires extension to these platforms and training for the content creators.
  • The remote sensing community typically does not employ 3D content creators for these applications.
    • However, when we do apply advanced tools and talents to the task (like we did in this "Harvard Forest" scene), the results are compelling.

Overhead simulation of the "Harvard Forest" scene

The "Harvard Forest" Scene

  • This virtual site model was developed to support a program investigating sub-canopy mapping using waveform LIDAR.

    • A 700 x 500 meter scene model of a managed site in central MA.
    • 750k individual vegetation objects (tree, bush, shrub, etc.) from 76 unique geometry models for 42 unique species (more than 12M facets).
    • 124 spectral vegetation curves (including procedural variations) representing ~30 unique species.
  • This site model reflects approximately one person-year of work, including on-site visits to document and characterize the area.
  • RIT is actively engaged in research to help automate the production of large-area site models.

What is the primary output?

  • DIRSIG's primary output uses the data+header file pair format commonly employed by the ENVI image exploitation software.
    • These files can be directly opened in most electronic light table (ELT) applications including ENVI, ERDAS, Opticks, etc.
    • At RIT, we frequently use the GDAL geospatial toolkit to perform common ground processing tasks, including geo/ortho projections, etc. directly on the DIRSIG output images.
    • DIRSIG comes with its own built-in image viewer that can be used for basic visualization tasks.
    • These files can be easily ingested into Python (e.g., via Spectral Python (SPy)), MATLAB, etc. using existing libraries and/or toolboxes (see the sketch after this list).
    • The format supports an arbitrary number of channels, all major integer and floating-point datatypes and flexible metadata descriptions.
  • The type of data in the primary output of a sensor depends on how the sensor has been configured.
    • A simple radiance simulation will output radiances as absolute floating-point values.
    • A more complex sensor with a detection model can output in digital counts (driven by the setup).
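As an example of the ease of ingestion mentioned above, the following Python sketch opens a DIRSIG ENVI output with Spectral Python (SPy); the file name "simulation.hdr" is a placeholder for whatever your sensor configuration writes.

```python
# Minimal sketch of ingesting a DIRSIG ENVI image in Python using Spectral Python (SPy).
import spectral
import numpy as np

img = spectral.open_image("simulation.hdr")      # ENVI header + data file pair
cube = np.asarray(img.load())                    # (lines, samples, bands) array
print(cube.shape, cube.dtype)
print(img.metadata.get("band names"))            # channel names written by DIRSIG
```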

DIRSIG's built-in image viewer showing the output of a multi-spectral WorldView-3 (WV3) simulation, including the band names, centers and widths.

What is truth output?

  • The model also supports a raster "truth" data cube that captures a feature vector for each pixel.
    • Users can request features including the most common material, average temperature, location, sun shadow fraction, sky fraction, etc.
  • This data can be used to gain insight into the simulation or as truth for evaluating algorithm performance (an example of extracting a truth band follows this list).
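Because the truth cube is also an ENVI data+header pair with named bands, it can be read the same way; the sketch below pulls out a single feature plane. The file name and the band name string are assumptions to adapt to your own simulation.

```python
# Hypothetical sketch: pulling one feature plane out of a DIRSIG raster truth cube,
# which is another ENVI image whose band names identify each truth feature.
import spectral
import numpy as np

truth = spectral.open_image("truth.hdr")           # placeholder file name
names = truth.metadata["band names"]
cube = np.asarray(truth.load())

band_index = names.index("Sun Shadow Fraction")    # assumed band name; check your header
shadow_fraction = cube[:, :, band_index]
print(shadow_fraction.min(), shadow_fraction.max())
```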

A conceptual representation of the optional raster truth data cube.

The raster truth data cube presents as a multi-band image with band names indicating the feature as viewed in DIRSIG's built-in image viewer.

Example Truth Images

Animation of an RGB simulation followed by a subset of raster truth products including the most abundant material, range, relative angle to the sensor, bounce count, sun fraction and sky fraction.

Has it been validated?

  • The DIRSIG model has undergone multiple verification and validation (V&V) studies over the years.
    • A summary of these activities through 2008 is available on the DIRSIG website. An effort to update this list with the activities from the past decade is underway.
    • Results from V&V activities are published in journal papers, conference papers and student theses and dissertations.
  • The GitLab revision control platform used for DIRSIG software development includes a continuous integration (CI) facility that runs hundreds of verification checks each day.
  • Since dedicated experiments to support V&V activities are expensive, most efforts are opportunistic and focus on available data or intercomparisons with other established models.

RAMI-V Participation

The RAMI phases have all focused on benchmarking "models designed to simulate the transfer of radiation at or near the Earth's terrestrial surface, i.e., in plant canopies and over soil surfaces."

RAMI simulations typically consist of radiance scans across portions of the hemisphere for "abstract" (e.g., statistically distributed) and "actual" vegetation canopies. All participating models use the same sets of inputs for each defined problem and then submit their results.

DIRSIG image simulations of some of the RAMI-V "actual" canopy scenarios.

Unofficial RAMI-IV Results

Final results from RAMI-V have not been published yet, but we have compared DIRSIG to submitted results from the earlier RAMI-IV phase. In the cases shown here, DIRSIG agrees favorably with the average of the results from models that participated in that phase.

How big are scenes?

Scenes are generally created for specific projects and, hence, their extent and spatial resolution are driven by requirements for those projects.

LANDSAT-8 Simulation of Lake Tahoe (30 m GSD)

Handheld camera simulation of a vehicle under a camouflage net (0.3 m GSD)

Wide-Area Scenes

  • A single DIRSIG scene is limited to 5 km across due to precision management challenges using a single-precision ray tracing engine.
    • However, multiple scenes can be tiled to assemble scene quilts that span large areas.
  • RIT is currently working on streamlined scene construction workflows to make it easier to assemble these wide-area scene quilts.

Demonstration of wide-area scene approaches currently in development.

Wide-Area Scene Prototyping

The "Alpine Scene" project is an internal effort to explore and optimize methods for building large-area scenes that will eventually be hundreds of kilometers across. This prototype scene is not a real-world location, but is inspired by Mt. Hood. The initial iteration of the scene is 40 km x 40 km and contains 10 million conifer trees.

Alpine Scene

What about Dynamic Simulations?

This is a DIRSIG full-motion video (FMV) simulation of the SkySat-1 satellite imaging Tacoma, WA. The perspective of the scene changes as the satellite moves in its orbit. The scene contains dynamic content including moving vehicles and people.

Motion Options

  • The DIRSIG model supports several motion representations:
    • Linear paths driven by direction and velocity.
    • Circular paths driven by a center point, radius and velocity.
    • Arbitrary paths driven by waypoints and times (a simple interpolation sketch follows this list).
    • Orbital motion driven by Two-Line Elements (TLEs) and a standard SGP4 propagator.
    • Orientation descriptions support auto-alignment with the velocity vector, stare points, data-driven inputs (quaternions, Euler angles, etc.), etc.
  • Motion can be assigned to scene objects
    • Vehicles driving, planes flying, etc.
  • Motion can be assigned to sensor vehicles
    • Vehicles carrying the sensors can follow programmed paths, orbits, etc.
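As a simple illustration of the waypoint-driven option noted above, the sketch below linearly interpolates a platform position between user-supplied waypoints and times; it is a conceptual Python example, not DIRSIG's motion plugin or input format.

```python
# Illustrative sketch: arbitrary path motion driven by waypoints and times, with the
# platform position linearly interpolated at any simulation time.
import numpy as np

times = np.array([0.0, 5.0, 10.0])                        # seconds
waypoints = np.array([[0.0,   0.0, 100.0],                # x, y, z in scene coordinates
                      [50.0,  0.0, 100.0],
                      [50.0, 50.0, 120.0]])

def position_at(t):
    """Piecewise-linear interpolation of the platform position at time t."""
    return np.array([np.interp(t, times, waypoints[:, k]) for k in range(3)])

print(position_at(2.5))   # [25.  0. 100.]
print(position_at(7.5))   # [50. 25. 110.]
```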

An airfield scene featuring different types of motion

Simulation of a high-altitude Unmanned Aerial System (UAS) over Irondequoit, NY with a traffic simulation that was generated by and imported from the Simulation of Urban MObility (SUMO) traffic model.

How does it handle the atmosphere?

DIRSIG leverages the physics-driven MODTRAN model developed by Spectral Sciences, Inc. (SSI) for atmospheric radiative transfer (direct solar and diffuse sky illumination, path scattering, path emission, path transmission, etc.). DIRSIG pre-builds databases unique to each simulation that incorporate the geolocation, day of year, time of day and the MODTRAN description of the atmosphere (aerosols, visibility, etc.).

MODTRAN is not bundled with DIRSIG and must be provided by the user.

Dawn

Midday

Dusk

Date and Time Aware

The diurnal (dawn to dusk) simulation above demonstrates the automated coupling with MODTRAN and DIRSIG's built-in solar and lunar ephemeris modules.

Atmospheric Refraction

DIRSIG supports refraction along paths in the atmosphere and can be directly coupled to the temperature, pressure and water vapor profiles utilized in MODTRAN. Below are simulations of a very long (20 km) slant path view of a 1 x 1 meter USAF bar target. The mean path refraction is a few degrees and the wavelength dependent refraction (angular dispersion from the mean path) between the RGB channels is around 1 microradian.

Without wavelength dependent refraction (all wavelengths refract based on the average wavelength)

With wavelength dependent refraction (this simulation takes 3x longer to compute 3 wavelength dependent solutions)

Clouds and Plumes

DIRSIG has a pair of plugins that leverage the industry standard OpenVDB format for storing volumetric data such as clouds and plumes. The plugins support data-driven motion and temporal evolution of these volumes.

Volumetric optical properties support descriptions of the spectral extinction, absorption and/or scattering.

The same path tracing radiometric solution used for traditional 3D scene geometry is also used for volumes. Paths through these volumes might involve tens of "bounces" (scattering events).

A DIRSIG simulation of a "tornado" VDB sequence that has been assigned cloud optical properties.

Clouds

A visible region simulation of a cloud over MegaScene1. This cloud VDB model was downloaded from a 3rd party source, but commercial tools to generate clouds are available. Clouds can be modeled using default spectral scattering and absorption optical properties or be overridden by user-supplied properties.

Plumes

  • The general OpenVDB support provides a mechanism for users to import plumes generated by a variety of models.
    • DIRSIG includes a puff-based, Lagrangian factory stack plume model derived from the work of Alfred K. Blackadar.
  • The video (right) is a ground camera simulation of water vapor (scattering and absorption) plumes in a bank of cooling towers.
    • This simulation is in the visible, but simulations of gas plumes (absorption and emission) can be performed in the visible as well as the NIR, SWIR, MWIR and LWIR regions.

Visible region animation of a water vapor plume from a mechanical draft cooling tower (MCDT).

Modeling 2D Framing Array Sensors

The simulated video above models a consumer 2D-array RGB sensor on a UAV. The simulation includes the jitter of the small vehicle platform, which results in the observed bouncing and blur.

Relevant Features

  • The DIRSIG model includes a sensor plugin suitable for most 2D array applications:
    • User-defined array sizes, pixel pitches and inter-pixel gaps.
    • User-defined spectral response functions, including support for Color Filter Array (CFA) setups (e.g., Bayer and ColorSense patterns), which require the user to demosaic the color data externally.
    • User-defined integration times, global and rolling shutters and user-defined read-out clocks.
    • Basic camera geometry using an effective focal length and optional 5-parameter lens distortion model.
    • Basic detection model that includes photon arrival (Shot) noise, quantum efficiency (QE), dark current noise, read noise and quantization via a user-defined A/D converter (a simplified noise-chain sketch follows this list).
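The sketch below walks through a detection chain of the kind listed above (quantum efficiency, shot noise, dark current, read noise and A/D quantization) for a small array of pixels. It is a hedged Python illustration; the parameter values are arbitrary and are not DIRSIG defaults.

```python
# Hedged sketch of a basic detection chain; parameter values are illustrative only.
import numpy as np

rng = np.random.default_rng(0)

def detect(photons, qe=0.6, dark_e_per_s=50.0, t_int=0.01,
           read_noise_e=5.0, full_well_e=30000.0, bits=12):
    """Convert a mean photon count per pixel into quantized digital counts."""
    signal_e = rng.poisson(photons * qe)                           # shot (arrival) noise + QE
    dark_e   = rng.poisson(dark_e_per_s * t_int, signal_e.shape)   # dark current noise
    read_e   = rng.normal(0.0, read_noise_e, signal_e.shape)       # read noise
    electrons = np.clip(signal_e + dark_e + read_e, 0, full_well_e)
    gain = full_well_e / (2 ** bits - 1)                           # electrons per count
    return np.round(electrons / gain).astype(np.uint16)            # A/D quantization

photons = np.full((4, 4), 10000.0)        # mean photons arriving at each pixel
print(detect(photons))
```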

Lens Distortion

CFA Patterns

Integration and Readout Options

The user can operate an array without temporal integration (output is instantaneous radiance) or with temporal integration using either a global or rolling shutter.

No Integration

Global Shutter

Integration time and readout clock

Rolling Shutter

Integration time, line delay time and readout clock

Multi-Spectral Imaging (MSI)

  • The user can configure an arbitrary number of independent focal plane arrays (FPAs).
    • Each FPA can be used to model a unique spectral channel in the system.
  • Each focal plane can be geometrically defined using either:
    • A parametric description (pixel sizes, pixel spacing, etc.), or
    • A data-driven approach, where the user describes the position and properties for each detector element in the array.
  • Multiple smaller focal planes can be grouped to model sensor chip assembly (SCA) style focal planes.
    • Relative and absolute offsets of each SCA can be captured in order to create low-level (e.g., Level 0) datasets to test ground processing chains.
  • Basic detection model that includes photon arrival (Shot) noise, quantum efficiency (QE), dark current noise, read noise and quantization via a user-defined A/D converter.
  • Focal planes can support time-delayed integration (TDI).

The pan channel from a DIRSIG simulation of an airborne multi-spectral pushbroom system featuring detector variations and a dead pixel.

The red, green and blue channels from a DIRSIG simulation of an airborne multi-spectral pushbroom system showing the offsets in the raw data between focal planes.

Hyper-Spectral Imaging (HSI)

DIRSIG simulations under different atmospheric conditions made with a generic pushbroom V/NIR/SWIR airborne hyper-spectral system.

Hyper-Spectral Imaging (HSI)

  • Under the hood, all calculations are performed spectrally
    • The user can provide an arbitrary list of channels, which are applied to the underlying spectral solution
    • Channels can be described parametrically (shape, center and width) or using data-driven approaches (see the band-integration sketch after this list)
  • The user can configure common HSI collection architectures:
    • Pushbroom: 1D across-track arrays scanned in the along-track direction via vehicle motion,
    • Whiskbroom: Limited detectors scanned in the across-track direction via scanner sub-systems and in the along-track direction via vehicle motion.
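The band-integration sketch below shows how an arbitrary list of channels can be applied to an underlying spectral solution: each channel's relative spectral response is integrated against the spectral radiance to produce a band-effective value. The spectrum and channel definitions are illustrative, not DIRSIG inputs.

```python
# Illustrative sketch: applying user-defined channels to an underlying spectral solution.
import numpy as np

wavelengths = np.linspace(0.4, 2.5, 211)                        # microns, uniform grid
radiance = 50.0 * np.exp(-((wavelengths - 0.55) / 0.6) ** 2)    # made-up spectral radiance

def gaussian_channel(center_um, fwhm_um):
    """Relative spectral response of one channel (unit-peak Gaussian)."""
    sigma = fwhm_um / 2.3548
    return np.exp(-0.5 * ((wavelengths - center_um) / sigma) ** 2)

channels = [gaussian_channel(c, 0.05) for c in (0.45, 0.55, 0.65, 0.86, 1.6, 2.2)]

# Band-effective radiance: response-weighted average of the spectral radiance.
# On a uniform wavelength grid the sample spacing cancels out of the ratio.
band_radiance = [np.sum(resp * radiance) / np.sum(resp) for resp in channels]
print(np.round(band_radiance, 2))
```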

Uniform Channels


Non-Uniform Channels

Hyper-Spectral Imaging (HSI)

DIRSIG's per-pixel truth system can provide data to measure the performance of various HSI algorithms (e.g., sub-pixel target detection).

Mid-wave and Long-wave Infrared

The model supports a variety of data- and model-driven temperature prediction solutions. The simulation above leverages the MuSES EO/IR Signature simulation software for the vehicle temperature signatures and one of DIRSIG's built-in models for the rest of the scene.

Passive Low-Light Sensing

  • DIRSIG scenes can include an arbitrary number of light sources.
    • Sources can be arbitrarily located and directed in the scene.
    • Sources are described by a spectral radiant intensity description.
    • Source angular patterns can be described using a cosine model or files adhering to the IES standard.
    • Sources can be temporally modulated by a user-supplied frequency description (a combined sketch follows this list).
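The following Python sketch combines the three source descriptors above (a spectral radiant intensity, a cosine-power angular pattern and a temporal modulation) into one illustrative function; the numbers and the sinusoidal modulation are assumptions, not DIRSIG's source format.

```python
# Hypothetical sketch of a light source: spectral radiant intensity with a cosine-power
# angular falloff and a simple temporal modulation. All values are illustrative.
import numpy as np

wavelengths = np.array([0.45, 0.55, 0.65])        # microns
intensity_0 = np.array([2.0, 5.0, 3.0])           # on-axis radiant intensity [W/sr/um]

def radiant_intensity(theta_rad, t, falloff_power=2.0, mod_freq_hz=120.0):
    """Spectral radiant intensity toward an angle theta off the source axis at time t."""
    angular = max(np.cos(theta_rad), 0.0) ** falloff_power          # cosine angular pattern
    temporal = 0.5 * (1.0 + np.sin(2 * np.pi * mod_freq_hz * t))    # flicker/modulation
    return intensity_0 * angular * temporal

print(radiant_intensity(np.radians(30), t=0.002))
```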

Simulation of a passive low-light system imaging a parking lot. The low-light radiometric product from DIRSIG was then processed by a user program to model a conventional micro-channel plate (MCP) image intensified CCD camera (ICCD).

Output of the BundledObject2 demo included with DIRSIG, showing how to add motion to sources and attach those sources to objects in motion.

Active Laser (LADAR/LIDAR)

The model supports bi-directional propagation of the transmitter beam, time of flight tracking along all paths and user-defined receiver detection. The user can use the existing platform model to incorporate various scan patterns.


You can learn more about this modality in the LIDAR Modality Handbook.

A simulation of an airborne GmAPD scanning LIDAR system. The dense Level-1 point cloud arises from the dark current noise inherent to the system being modeled.

The same Level-1 point cloud shown after being clipped in the data viewer to reveal the terrain and objects on the terrain in the scene.

Relevant Features

  • User-defined transmitter (laser source):
    • Same flexible clocking mechanisms as passive sensors.
    • Parametric or data-driven spatial (transverse) profile.
    • Parametric or data-driven spectral pulse (line) shape.
    • Parametric or data-driven temporal pulse shape.
  • User-defined receiver:
    • Same flexible geometric focal plane array (FPA) description as passive sensors.
    • Same flexible spectral response as passive sensors.
    • User-defined range gate (listening window).
  • Support for mono-static (co-boresighted Tx/Rx), bi-static (separate Tx/Rx) and multi-static (multiple Rx and/or Tx) configurations.

A TEM22 beam example from the LidarUserBeam1 demo.

LADAR/LIDAR Output and Detection

  • LADAR/LIDAR simulations output to a DIRSIG-specific waveform file containing the temporally digitized photon fluxes (for a user-defined range rate) for each pixel.
    • This file includes embedded platform ephemeris (location, orientation, etc.).
  • This radiometric data product can then be ingested by one of the supplied DIRSIG detection models to produce Level-0 time-of-flight (ToF) data for each element.
  • Time-of-flight (Level-0) triggers from the detector model can then be combined with the platform ephemeris data to produce raw (Level-1) point cloud data (see the sketch after this list).
    • The supplied tools can produce standard (ASCII/Text, LAS, BPF, etc.) 3D point clouds.
  • The user can also write their own detection and point generation tools.
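As a conceptual example of the point generation step described above, the sketch below converts a Level-0 time-of-flight trigger into a Level-1 point using the platform position and the pixel's line of sight; it is not one of the supplied DIRSIG tools.

```python
# Illustrative sketch: turning a time-of-flight trigger into a point by combining it
# with platform ephemeris (sensor position) and the pixel's line-of-sight direction.
import numpy as np

C = 299_792_458.0                                  # speed of light, m/s

def tof_to_point(sensor_position, pixel_direction, time_of_flight_s):
    """Range from round-trip time, then project along the pixel's line of sight."""
    rng_m = 0.5 * C * time_of_flight_s             # two-way travel time -> one-way range
    d = np.asarray(pixel_direction, dtype=float)
    d /= np.linalg.norm(d)                         # unit line-of-sight vector
    return np.asarray(sensor_position) + rng_m * d

# Example: sensor at 1 km altitude looking straight down, return after ~6.67 us
point = tof_to_point([0.0, 0.0, 1000.0], [0.0, 0.0, -1.0], 6.6713e-6)
print(point)    # roughly [0, 0, 0] -- a hit at the ground
```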

Surface and sub-surface water applications

A mosaic of outputs from water-centric DIRSIG simulations.

Usage for Machine Learning

  • DIRSIG also features a workflow (referred to as "ChipMaker") for generating image chips for training Machine Learning (ML) algorithms.
  • Rather than configuring and running thousands of individual simulations, the user defines a range for each dimension; these ranges are randomly sampled for a user-defined number of images generated during a single simulation (a conceptual sampling sketch follows this list):
    • Atmosphere type, aerosols and visibility (via DIRSIG's FourCurve Atmosphere plugin)
    • Solar illumination zenith and azimuth,
    • Sensor view zenith and azimuth, and
    • Sensor GSD
  • In addition to the primary imagery, the user can request:
    • To include one or more standard truth features (to make target masks, etc.),
    • Image pairs that feature the chip with and without the target, and
    • A JSON metadata report for each chip.
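The conceptual sketch below captures the sampling idea behind this workflow: define a range for each collection dimension and draw a random combination per chip. The parameter names and ranges are illustrative and do not reflect ChipMaker's actual configuration schema.

```python
# Conceptual sketch of per-chip random sampling over user-defined parameter ranges.
import numpy as np

rng = np.random.default_rng(42)
ranges = {
    "sun_zenith_deg":   (20.0, 60.0),
    "sun_azimuth_deg":  (0.0, 360.0),
    "view_zenith_deg":  (0.0, 30.0),
    "view_azimuth_deg": (0.0, 360.0),
    "gsd_m":            (0.25, 1.0),
}

def sample_chip_parameters(n_chips):
    """Draw one uniformly sampled parameter set per requested chip."""
    return [{name: float(rng.uniform(lo, hi)) for name, (lo, hi) in ranges.items()}
            for _ in range(n_chips)]

for params in sample_chip_parameters(3):
    print(params)
```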

A mosaic of chips for a single Tu-16 ("Badger") bomber.

ChipMaker Workflows

  • The original ChipMaker workflow involved deploying hundreds (or thousands) of target instances into a single, large-area scene, with each chip being a randomly selected instance.
    • Existing DIRSIG scene assembly tools can automate the deployment of target instances into scenes.
  • An alternative workflow was recently introduced where DIRSIG is provided a set of "target" scenes and "background" scenes, and the model randomly selects a target and background pairing for each chip.
    • The "background" scenes in this approach are referred to as "scene-lets". They are standard DIRSIG scenes, but they are much smaller (approximately 100 x 100 meters).
    • The idea is to make libraries of these scene-lets for different scenarios (e.g., "woodland", "desert", "urban", etc.) by leveraging procedural scene construction techniques.

A collection of chips for multiple targets on a collection of grassy backgrounds.

ChipMaker Assumptions

ChipMaker was originally designed to support algorithm users rather than sensor engineers. Machine Learning (ML) algorithms have generally been trained with higher-level processed data (L2+) where many sensor characteristics have been compensated for or corrected in some way. Furthermore, most algorithm users are not aware of engineering-level details of the sensor they are interested in. Hence, the sensor modeled in ChipMaker is simplified and configured with higher-level descriptors:


  • Most real data has been ortho-projected and the chip FOV spans a small area.
    • ChipMaker does not have a traditional geometric camera model (pixel sizes, focal length, etc.) because the target user generally doesn't know this information. Hence, the user specifies a GSD and the camera model is orthographic (parallel rays).
  • Most real data has had some form of calibration and noise reduction performed on it.
    • ChipMaker does not include traditional detector spectral responses, quantum efficiencies, noise metrics (Shot, dark current, read, etc.) because the target user generally doesn't know this information. Hence, the channel descriptions are (currently) simple passbands and noise is modeled as a lump sum additive contribution.

ChipMaker is an evolving capability and as ML usage and training changes, the tool and workflows will evolve as well.

Experimental Capabilities

  • Some experimental features and modalities that were supported in the previous generation (DIRSIG4) are not yet supported in the current generation (DIRSIG5), but are under active development.
    • Synthetic Aperture Radar (SAR)
    • Space Domain Awareness (SDA)

Synthetic Aperture Radar (SAR)

The DIRSIG4 model supports modeling Synthetic Aperture Radar (SAR) systems and outputs the complex phase history, which must be processed externally to focus it into a traditional SAR image product.

Space Domain Awareness (SDA)

The DIRSIG4 model supports modeling ground-to-space and space-to-space collection scenarios in support of Space Domain Awareness (SDA) or Space Situational Awareness (SSA) missions.

Real and DIRSIG4 simulated images of the Hubble Space Telescope (HST) as imaged from the shuttle during a servicing mission (simulation featured at the 2014 AMOS conference and produced at Lockheed-Martin by David Bennett, et al).

Is it Open Source?

  • DIRSIG has been a closed source model since its inception.
  • Why doesn't RIT adopt an Open Source model for DIRSIG?
    • Several US government organizations have voiced concerns that an open source code would quickly fracture into numerous custom variants that are never merged into a "gold standard" version.
      • Sadly, financially competitive organizations (e.g., commercial contractors) have resisted requests to merge improvements made to the DIRSIG source code under a handful of trial projects over the years.
    • Forked variants would undermine the government's trust in comparing results from competing parties (e.g., companies bidding on a payload or algorithm).
  • How does RIT support Open Science initiatives that strive to make publications, data and software easy to access with respect to DIRSIG?
    • Regarding "ease of access" ...
      • There are no license managers, dongles, etc. Trained users can install and use the software wherever they wish.
      • Despite not being commercial software and providing no direct financial support to the university, the team has delivered free updates to the user community for 25+ years.
    • Regarding "ease of integration" ...
      • All the inputs and outputs to the model are documented.
      • The documentation and training course introduce strategies for integrating the software into larger modeling workflows.
      • Each release introduces more ways for the end-user to add capabilities and features to the model.
    • Regarding "scientific transparency" ...
      • Our publications, documentation and training lectures detail the numerical methods used in the model.
      • RIT is committed to publishing papers that are open access (free) when possible (not an option with some conferences and journals).
      • When possible, RIT participates in community verification and validation (V&V) activities.

How do you get it?