What is DIRSIG?

The name DIRSIG is an acronym for "Digital Imaging and Remote Sensing Image Generation". The first part of the formal name comes from the Digital Imaging and Remote Sensing (DIRS) Laboratory at the Rochester Institute of Technology (RIT) where the model was created.

The DIRSIG model is a complex synthetic image generation application that produces simulated imagery in the visible through thermal infrared regions. The model is designed to produce broad-band, multi-spectral and hyper-spectral imagery through the integration of a suite of first-principles radiation propagation sub-models. These sub-models are responsible for tasks ranging from predicting the bi-directional reflectance distribution function (BRDF) of a surface to modeling the dynamic scanning geometry of a line-scanning imaging instrument. In addition to sub-models created specifically for DIRSIG, the model incorporates external components (e.g., MODTRAN and FASCODE) that are the modeling workhorses for the multi- and hyper-spectral community. All modeled components are combined using a spectral representation, and integrated radiance images can be simultaneously produced for an arbitrary number of user-defined bandpasses.
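As a sketch of what "integrated radiance for a user-defined bandpass" means numerically, a spectral radiance curve can be weighted by a relative spectral response and integrated over wavelength. The grid and response below are illustrative values, not DIRSIG inputs:

```python
import numpy as np

def band_integrate(wavelengths_um, spectral_radiance, response):
    """Integrate spectral radiance weighted by a relative spectral
    response (RSR) over wavelength using the trapezoid rule, yielding
    a single in-band radiance value for one bandpass."""
    w = np.asarray(wavelengths_um, dtype=float)
    y = np.asarray(spectral_radiance, dtype=float) * np.asarray(response, dtype=float)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(w)))
```

Because everything is carried spectrally, any number of bandpasses can be evaluated from the same spectral data by swapping in a different response curve.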

Why are there two versions of DIRSIG, and what are the differences?

The DIRSIG5 model is the version that is currently under active development. However, some features from the previous generation of the model (DIRSIG4) have not yet been (and in some cases will never be) made available in DIRSIG5. In addition, DIRSIG5 has many new features that were never part of DIRSIG4. The current DIRSIG releases include both the DIRSIG4 and DIRSIG5 executables. The table below summarizes the differences between the two models.

Table 1. DIRSIG4 vs. DIRSIG5 Features

| Feature | DIRSIG4 | DIRSIG5 |
| --- | --- | --- |
| Multi-threaded Execution | no | yes |
| Parallel Execution with MPI | no | yes (both OpenMPI and MPICH options are provided) |
| DIRSIG4 Scenes | yes | yes |
| DIRSIG4 Plumes | yes | yes (via the vdb_tool and PlumeVDB plugin) |
| DIRSIG4 Atmospheres (Simple, Uniform & Classic) | yes | yes (via the BasicAtmosphere plugin) |
| DIRSIG5 New Atmosphere | no | yes (via the NewAtmosphere plugin) |
| DIRSIG5 FourCurve Atmosphere | no | yes (via the FourCurveAtmosphere plugin) |
| Single-band, MSI and HSI sensors | yes | yes |
| Vis/NIR/SWIR sensors | yes | yes |
| MWIR/LWIR sensors | yes | yes |
| DIRSIG4 Platforms | yes | yes (via the BasicPlatform plugin) |
| Multiple Platforms | no | yes (multiple instances of the BasicPlatform plugin in the JSIM file) |
| Mono-static LIDAR | yes | yes (via the BasicPlatform plugin) |
| Bi-static LIDAR | yes | yes (via the BasicPlatform plugin) |
| Multi-static LIDAR | no | yes (multiple instances of the BasicPlatform plugin in the JSIM file) |
| Ground-to-Space and Space-to-Space Scenarios | | yes |
| RADAR | yes | no (but on the roadmap) |
| Polarization | yes | no (and not on the roadmap) |
| DE4xx Solar/Lunar Ephemeris | yes (DE405) | yes (DE420 via the SpiceEphemeris plugin) |
| SimpleSolar Ephemeris | no | yes (via the SimpleSolarEphemeris plugin) |
| Data-driven Ephemeris | no | yes (via the DataDrivenEphemeris plugin) |
| THERM Weather | yes | yes (via the ThermWeather plugin) |
| NSRDB Weather | no | yes (via the NsrdbWeather plugin) |
| DIRSIG4 Interactive Mode | yes | no (the roadmap includes new interfaces that would provide a similar capability) |
| Machine Learning Data Generation | no | yes (via the ChipMaker plugin) |
| Streamlined Spherical Data Collection | no | yes (via the SphericalCollector plugin) |

Another way to verify features is to browse the example demos and check the version compatibility badges for a given demo.

Who develops DIRSIG?

The DIRSIG software is developed at Rochester Institute of Technology (RIT) by the "Modeling and Simulation Group" within the Digital Imaging and Remote Sensing (DIRS) Laboratory.

Is the DIRSIG software open source? Is the source code available?

No. The software has been developed over the course of more than two decades under funding from a variety of commercial and government sponsors. Furthermore, RIT has invested a significant amount of its own resources into the development of the software. As a result, the source code is the property of RIT and the model is distributed in binary form only.

Since RIT holds the rights to the source code, we are commonly asked why we don’t make it available to users. Without any mechanism that forces users to propagate contributed changes back to RIT, the concern of both RIT and our primary government users is that the code would fracture into numerous customized versions. The government wants to know that if two users submit their results on a given task where DIRSIG was used, the same model was used. When multiple versions evolve, verification and validation must also be questioned. Therefore, RIT does not plan to make the source code generally available.

How does RIT support Open Science initiatives that strive to make publications, data and software easy to access with respect to DIRSIG?

  • Regarding "ease of access" …​

    • There are no license managers, dongles, etc. Trained users can install and use the software wherever they wish.

    • Despite not being commercial software and without any financial support from the university, the team has continuously delivered free updates to the user community for 25+ years.

  • Regarding "ease of integration" …​

    • All the inputs and outputs to the model are documented.

    • The documentation and training course introduce strategies for integrating the software into larger modeling workflows.

    • Each release introduces more ways for the end-user to add capabilities and features to the model.

  • Regarding "scientific transparency" …​

    • Our publications, documentation and training lectures detail the numerical methods used in the model.

    • RIT is committed to publishing papers that are open access (free) when possible (not an option with some conferences and journals).

    • When possible RIT participates in community validation and verification (V&V) activities.

Despite technically being "closed source" software, we publish in open literature (conference papers, journal papers, and student theses) exactly how various calculations are performed. In some cases, if a user really wants to see or understand a specific calculation, then we will reveal that portion of the code.

Is the DIRSIG software export controlled?

The DIRSIG software is currently designated as EAR-99. Hence, the software cannot be used in an embargoed or sanctioned country, used by a prohibited end-user, or used in a prohibited end-use.

Doesn’t the DIRSIG software fall under ITAR control?

RIT’s Office of Legal Affairs and Office of Compliance and Ethics, together with input from our U.S. Government research sponsors, made the determination that an EAR-99 designation was appropriate for the DIRSIG software. However, the DIRSIG model can be used to simulate systems that would fall under International Traffic in Arms Regulations (ITAR) control. End users of the DIRSIG software should not "launder" away ITAR controls by faithfully simulating data from a sensor that would be ITAR controlled in the real world.

RIT has also found that interpretation of the ITAR policies varies greatly from organization to organization. Therefore, the Software Agreement explicitly places the responsibility on the end user to interpret and resolve ITAR control issues.

Access and Requirements

How do I get the DIRSIG Software?

DIRSIG has been developed over the course of more than two decades through a combination of internal, commercial and government funding. In agreement with our past and current government partners, DIRSIG is distributed only to users that meet the following requirements:

  • The user must have attended a DIRSIG training class (a course fee is involved).

  • The software is designated as EAR-99 controlled, which means DIRSIG cannot be used in an embargoed or sanctioned country, by a prohibited end-user, or used in a prohibited end-use. Hence, individual users must be background checked by RIT’s Office of Legal Affairs.

  • The user and the user’s organization must agree to the DIRSIG End User License Agreement (EULA).

Why am I required to take training in order to have a copy of DIRSIG?

DIRSIG is a complex piece of software. The Basic DIRSIG Training class prepares users for the operation and uses of DIRSIG. Our experience is that most new users without training require some level of support, causing us to be inundated with questions that we are not equipped or funded to answer. This only leads to frustration for both parties.

Can I get an "evaluation" copy of DIRSIG?

The DIRSIG software is not controlled with license keys, dongles, etc. Hence, there is no such thing as a limited-time or evaluation copy of DIRSIG. If you would like to learn more about the software, please contact us and we will make every effort to inform you about the model, demonstrate it for you, etc.

Can I get "early access" to DIRSIG before training?

As many of the answers above explain, we strongly believe that training is required to correctly use this complex model. Therefore, we do not grant "early access" to the software until users have been to a training session.

Is the DIRSIG license transferable?

The DIRSIG End User License Agreement (EULA) grants access to trained users. The company or organization that pays for training is not buying a license for DIRSIG. They are paying for training and that training is associated with a given individual. The company or organization does not retain any access to DIRSIG when a trained employee leaves for another company or organization. Likewise, the trained user’s access is not revoked when they leave for another company or organization. As long as the individual adheres to the terms of the EULA, they are free to use the software at any employer or for personal use.

What is the size of the DIRSIG user base?

There are currently 950 registered users of the software (as of Dec 2023). The active user population is estimated at about one third of that number, based on software release downloads.

What are the costs associated with the DIRSIG software?

The software is currently free to qualified users. The only cost is attending the DIRSIG Basic Training Course, which is required for new users.

What computing platforms is the DIRSIG software available for?

The DIRSIG software is supported on Windows, Linux and macOS (a UNIX-based operating system). Below is the list of platforms for which release builds are created:

  • Microsoft Windows 8 and Windows 10 on Intel/AMD x86 processors (64-bit only).

  • Linux 2.6 kernel distributions on Intel/AMD x86 processors (64-bit only).

  • Apple MacOS 11 and later on Intel x86 and ARM64 processors (64-bit only).

The user experience on all platforms is the same. The DIRSIG user interface is written using the Qt Developer Framework which allows us to create robust user interfaces with native look and feel from the same code base.

Consult the Installation Guide for more detailed software requirements.

What are the hardware requirements to run DIRSIG?

Consult the Installation Guide for more detailed hardware requirements.

How often is the DIRSIG software updated?

Updates to the software are released 3-5 times per year, depending on development schedules and problems reported by the user base.

Is DIRSIG multi-threaded, parallelized, GPU-accelerated, etc.?

The DIRSIG4 code base was originally architected in 2001, before multi-core CPUs, general-purpose graphics processing units (GPUs), etc. were on the market. As a result, the DIRSIG4 code was not parallelized at the micro-scale (using multi-threading to take advantage of multiple cores on a single computer) or at the macro-scale (using the Message Passing Interface (MPI) to take advantage of multiple computers). At the training course, we discuss several strategies for breaking large simulations into a set of smaller ones that can be run separately (and in parallel) and then recombined. There are no plans to add parallelization or other accelerations to the DIRSIG4 code base.

The new DIRSIG5 code base (made available to users in 2019) is multi-threaded, and an MPI-enabled Linux build is also available. For more information on multi-threaded and multi-node execution, consult the DIRSIG5 Command-Line Guide and the DIRSIG5 MPI Manual.

A GPU-accelerated version has been under investigation for some time, but notable speedups over the CPU versions have not yet been realized. The GPU acceleration effort focuses on optimizing the ray-tracing component of the calculation, which accounts for about 60% of the run time. Due to the complexity and extent of the code, it is impractical to reimplement the entire model as a GPU code.

Does DIRSIG use a graphical or command-line interface?

Both. Most users are introduced to DIRSIG via the graphical user interface (GUI). Behind the scenes, the GUI simply creates and stores simulation description files (a variety of XML, JSON and ASCII/text files) and then spawns processes that execute the DIRSIG programs that read these files as inputs. The input file formats are documented to facilitate manual or programmatic creation and/or modification. Advanced users typically use the command-line tools directly or via scripts to automate the execution of the model.

Is there a programmatic interface to DIRSIG?

The overall DIRSIG modeling capability is manifested as a set of command-line tools and their respective input configuration files. At this time there are no specific packages or modules for Python, Matlab, etc. to generate these input configurations in memory and directly execute the model. However, the contents and formats of the input files (either XML, JSON or ASCII/text) are well documented and can be programmatically created and/or modified. Many users and user teams create tooling (programs and scripts) that generates inputs for DIRSIG and manages the execution of the DIRSIG model itself. For example, a user might create a shell script that iterates through a set of embedded loops over the various parameter spaces, constructs the appropriate inputs for the model and then runs the model (as an external process) on those inputs.
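A minimal sketch of such tooling in Python follows. The executable name (`dirsig5`), the `.jsim` extension, and the `platform`/`altitude_m` fields are hypothetical placeholders here, not the documented input schema:

```python
import json
import subprocess
from pathlib import Path

def write_sweep_inputs(template, out_dir, altitudes_m):
    """Write one modified copy of a JSON input template per parameter
    value and return the list of generated input files."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    paths = []
    for alt in altitudes_m:
        cfg = dict(template)
        cfg["platform"] = {"altitude_m": alt}  # hypothetical field name
        path = out / f"sim_alt_{alt}.jsim"
        path.write_text(json.dumps(cfg, indent=2))
        paths.append(path)
    return paths

def run_inputs(paths):
    """Run the model (as an external process) on each generated input."""
    for path in paths:
        subprocess.run(["dirsig5", str(path)], check=True)  # assumes dirsig5 is on PATH
```

Separating input generation from execution makes it easy to inspect the generated configurations, or to distribute the runs across machines, before launching anything.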

What kinds of formal development policies are employed?

The DIRSIG model is developed by 4-5 full-time staff members who have been involved with the project for nearly 40 years combined. Internally, DIRSIG is revision controlled with a local installation of GitLab using a "branch and merge to master" approach. Our GitLab configuration facilitates continuous integration of the code base. Software testing is accomplished via an extensive set of unit tests, integration tests and full simulation tests. Development branches get unit and integration testing for each code push. Merges to master are likewise unit and integration tested. The master branch is run against an extensive suite of full simulations every evening and as part of each release. All testing is performed on all three supported platforms. The MPI releases (OpenMPI and MPICH) are tested in their respective MPI environments.

Can I run DIRSIG on remote (e.g., cloud-based) computing resources?

The end-user license agreement (EULA) allows authorized users to run DIRSIG anywhere. The DIRSIG software does not utilize license keys, dongles, etc., which makes deployment easy. The primary requirement of the EULA is to ensure that only authorized users can access the software.

Are containerized versions of DIRSIG available?

RIT does not provide containerized versions of the DIRSIG software (e.g. for Docker, Kubernetes, etc.). However, many users generate their own container images with DIRSIG installed in them. The primary requirement of the EULA is to ensure that only authorized users can access the software.

Can I embed DIRSIG inside software we develop?

The end-user license agreement (EULA) specifically forbids redistribution of the DIRSIG software. Internal tools that run DIRSIG behind the scenes are allowed provided that other conditions of the EULA are met. Creating a software product that is accessed by external and/or untrained users via direct or indirect means (e.g., a cloud- or web-based application) is forbidden in the EULA without express, written consent from RIT.

How do users report problems?

RIT runs an instance of the Bugzilla bug tracking system. The bug tracking system for DIRSIG can be found at

What third party software is integrated with DIRSIG?

A variety of third party libraries and tools have been integrated into DIRSIG. These elements are incorporated and distributed in compliance with their individual license terms. A complete listing can be found by viewing the licenses via the DIRSIG software (for example, using the --licenses option in the command-line interface).

What other software is required to use DIRSIG?

DIRSIG leverages the MODTRAN atmospheric radiation code, which is developed by Spectral Sciences, Inc. Although there are ways to run the model using empirically based atmospheric contributions, this is strongly discouraged for rigorous studies. The MODTRAN software is currently available from the MODTRAN website.

What other software is convenient (but not required) to use DIRSIG?

The DIRSIG software comes with a basic image viewer, but the output image data created by the model can be directly read into the ENVI image processing and analysis package. Using ENVI is not a requirement, but does provide advanced image analysis options. The DIRSIG output data is a simple, double precision floating-point, band interleaved by pixel format that can be easily read into most data analysis and visualization packages (including Python and Matlab).
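For example, a cube written in this layout can be read with NumPy as shown below. The dimensions must be taken from the simulation setup or accompanying header; the filename and sizes here are made up:

```python
import numpy as np

def read_bip(path, lines, samples, bands):
    """Read a double-precision, band-interleaved-by-pixel (BIP) cube.
    In BIP order the band index varies fastest, so the flat file is
    simply lines x samples x bands float64 values."""
    data = np.fromfile(path, dtype=np.float64)
    if data.size != lines * samples * bands:
        raise ValueError("file size does not match the given dimensions")
    return data.reshape(lines, samples, bands)
```

After `cube = read_bip("output.img", lines, samples, bands)`, the slice `cube[:, :, 0]` is the first band as a 2D image.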

Business Model

How is DIRSIG development funded?

The DIRSIG model has been funded by a combination of government and commercial organizations during its lifetime. This includes a significant investment by RIT.

What long-term contracts support software releases, etc.?

None. RIT self-funds general development and software releases.

Can my organization fund DIRSIG development?

Yes. The DIRS laboratory is a research contract driven entity. Please contact us if you are interested in sponsoring research, especially research that can involve students.

How do new features get added to DIRSIG?

RIT is constantly taking feedback from the current and potential user base regarding needs. If RIT has the resources, then RIT will self-fund the development of a new feature. Alternately, RIT can establish contracts with users to fund development.

If I fund a new feature, does everyone get to use it?

Yes, there is only one version of DIRSIG. This is the funding model that has made DIRSIG what it is today. Every user is leveraging the investment of others. However, new features are not made public until they have reached a certain level of maturity (robustness, supporting documentation, etc.). Therefore during a development project, the funding organization has the advantage of shaping the implementation and gaining valuable insights and experience using that new feature well in advance of other users.

Can I buy a support contract?

Yes, and a support contract is strongly encouraged for relatively new users or for experienced users applying DIRSIG to a unique application. The support contract enables RIT to answer questions, evaluate modeled results, or make small modifications to the code. Visit the DIRSIG Services webpage for more information.

Modality Support

What wavelengths, modalities, etc. can be simulated with DIRSIG?

The DIRSIG model is primarily used to model systems that operate in the visible through the long-wave thermal infrared (LWIR), or 0.2 to 20 microns. Under the hood, the model operates on a spectral basis, which allows the user to simulate broad-band, multi-spectral and hyper-spectral sensors. The ultimate spectral resolution of the model is limited by the available resolution of the optical properties (reflectance, emissivity, absorption, etc.) and the atmospheric model (MODTRAN4 and older was limited to 1 wavenumber; MODTRAN5 and newer is limited to 0.1 wavenumber).
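To relate those wavenumber limits to wavelength resolution, note that for a small step Δν (in cm⁻¹) the wavelength step is approximately |Δλ| ≈ λ²Δν, with λ in cm. A quick check of the numbers:

```python
def wavenumber_res_to_wavelength_res(wavelength_um, dnu_cm1):
    """Equivalent wavelength resolution (nm) at a given wavelength for a
    spectral resolution given in wavenumbers (cm^-1), using the small-step
    approximation |dlambda| ~ lambda^2 * dnu with lambda in cm."""
    lam_cm = wavelength_um * 1.0e-4       # microns -> cm
    return lam_cm ** 2 * dnu_cm1 * 1.0e7  # cm -> nm

# 1 cm^-1 at 10 um is ~10 nm; 0.1 cm^-1 at 0.5 um is ~0.0025 nm
```

The same wavenumber resolution therefore corresponds to a much finer wavelength resolution in the visible than in the thermal infrared.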

The model could be used for UV simulations (down to 0.2 microns); however, optical characterizations of materials at these wavelengths are not common and the development team is not aware of this being attempted.

What general classes of sensors can be simulated with DIRSIG?

The model provides a flexible description of a virtual collection "platform". Attached to that platform at various user-defined locations and orientations are "instrument mounts" where instruments or sensors can be attached. Each mount can be either static or dynamically oriented relative to the platform as a function of time. Each instrument or sensor is attached to its corresponding mount using a user-defined location and orientation specification. Within each instrument, the user can create a set of virtual focal planes with various dimensions (number of pixels), pitches (individual pixel size), offsets (array-to-array offsets) and clockings. Each focal plane can have one of many available "capture method" models assigned to it. These capture models allow the user to model the spatial and spectral responses of the associated focal plane.
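The platform → mount → instrument → focal plane hierarchy described above can be pictured as nested data. The Python classes below are an illustrative sketch of that structure only; the class and field names are not DIRSIG's actual input schema:

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class FocalPlane:
    pixels: Tuple[int, int]                      # array dimensions
    pitch_um: float                              # individual pixel size
    offset_mm: Tuple[float, float] = (0.0, 0.0)  # array-to-array offset
    clocking_deg: float = 0.0                    # in-plane rotation

@dataclass
class Instrument:
    name: str
    focal_planes: List[FocalPlane] = field(default_factory=list)

@dataclass
class Mount:
    position_m: Tuple[float, float, float]       # location on the platform
    dynamic: bool = False                        # static vs. time-varying orientation
    instrument: Optional[Instrument] = None

@dataclass
class Platform:
    name: str
    mounts: List[Mount] = field(default_factory=list)

# A two-camera platform: one fixed context camera, one scanning instrument.
rig = Platform("demo", mounts=[
    Mount((0.0, 0.0, -0.2), instrument=Instrument("context_rgb",
          [FocalPlane((1920, 1080), 3.5)])),
    Mount((0.1, 0.0, -0.2), dynamic=True, instrument=Instrument("hsi",
          [FocalPlane((1, 640), 20.0)])),
])
```

Each level of the nesting corresponds to one of the user-defined layers in the description: the platform carries mounts, mounts carry instruments, and instruments carry focal planes.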

Using this flexible description, the user can simulate a wide variety of sensor types. This includes, but is not limited to, the following:

  • A fixed, broad band, 2D array camera (e.g. an LWIR camera).

  • A ground vehicle based 2D array video camera.

  • An airborne 2D Bayer pattern (structured filter array) camera on a UAV (e.g. a surveillance camera).

  • An airborne or space-based, multi-spectral "pushbroom" (1D array) instrument.

  • An airborne, whisk-broom scanning, hyper-spectral instrument (e.g. AVIRIS).

  • An airborne platform with a "cow’s udder" array of cameras, each pointing in a different direction relative to the platform.

  • An airborne hyper-spectral "pushbroom" instrument with an RGB video "context camera".

  • A space-based, multi-spectral "pushbroom" instrument with separate, modular RGB arrays and a separate, modular high-resolution pan array.

  • An airborne Geiger-mode APD LADAR/LIDAR system.

  • A vehicle mounted, side-looking linear-mode LADAR/LIDAR system.

  • A Michelson Fourier Transform Spectrometer (FTS) on a tripod.

This list is not intended to be complete but rather convey that the model is very flexible and can model a wide variety of systems.

What limitations are there on the direction a sensor can look?

None. Although the DIRSIG model is primarily used for "overhead" or "downlooking" geometries, the platform location and orientation model allows the user to point the simulated sensors in any direction (up, down, slant, etc.).

Can DIRSIG model a LIDAR/LADAR system?

Yes. Consult the LIDAR Modality Handbook for more information.

Can DIRSIG model a polarized system?

Yes, the DIRSIG4 model’s internal radiometry engine automatically adjusts to propagate polarized fluxes when the need arises. If the sources of illumination (Sun, Moon, sky, user-defined sources, etc.) are polarized, or if the surface or volume optical properties are polarized, then the model automatically shifts to a full Stokes vector and Mueller matrix based calculus.
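The Stokes/Mueller calculus mentioned above works by multiplying a 4-element Stokes vector [I, Q, U, V] by a 4x4 Mueller matrix at each interaction. A minimal, textbook illustration (an ideal horizontal linear polarizer, not one of DIRSIG's property models):

```python
import numpy as np

# Mueller matrix of an ideal horizontal linear polarizer (textbook form).
POLARIZER_H = 0.5 * np.array([
    [1.0, 1.0, 0.0, 0.0],
    [1.0, 1.0, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.0],
])

def propagate(stokes, mueller_matrices):
    """Propagate a Stokes vector [I, Q, U, V] through a sequence of
    Mueller matrices, applied in order of interaction."""
    s = np.asarray(stokes, dtype=float)
    for m in mueller_matrices:
        s = m @ s
    return s

# Unpolarized light through the polarizer: half the intensity survives
# and the result is fully Q-polarized.
out = propagate([1.0, 0.0, 0.0, 0.0], [POLARIZER_H])
```

Chaining matrices in this way is what allows polarized sources, surfaces and filters to be combined in a single radiometric calculation.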

The DIRSIG4 software can be used with the MODTRAN4-P atmospheric radiative transfer model and has several built-in polarized BRDF models.

Important MODTRAN4-P was an experimental, polarized version of MODTRAN4 developed back in 2002. It was only made available to a limited number of research collaborators, which included RIT at the time. The model was never made available to the general public and remains unavailable to this day.

The model can output fully spectral-polarimetric radiances for processing by external sensor models. Additionally, the model has support for configuring user-defined spectral-polarimetric filters on a per-channel (per-band) basis.

Important The current generation of DIRSIG (DIRSIG5) does not support polarization and there are no plans to add this feature. For more information about the differences between DIRSIG4 and DIRSIG5, please consult this table.

Can my sensor look up at an exo-atmospheric object?

RIT has been working on the "Space Situational Awareness" (SSA) or "Space Domain Awareness" (SDA) application area. Although this application area is not as mature as others, RIT is very interested in supporting this research. The SSA Handbook outlines the current capabilities of the model and some of the proposed work RIT is interested in pursuing.

Can DIRSIG model a real and/or synthetic aperture RADAR system?

Yes, but this capability is still in development. Consult the RADAR Modality Handbook for more information.

Important The current generation of DIRSIG (DIRSIG5) does not support these types of simulations (yet). For more information about the differences between DIRSIG4 and DIRSIG5, please consult this table.

Virtual Scene Content

What 3D geometry formats are supported?

DIRSIG5 reads the Alias OBJ and Autodesk FBX file formats directly, but it also supports a custom format that can be created via authoring tools supplied with the model. Other geometry formats can be readily exported to the Alias OBJ or Autodesk FBX format for import into a DIRSIG scene. Note that geometry used with the DIRSIG model must be associated with DIRSIG-specific spectral material descriptions that support the multi-modal nature of the model.

Does DIRSIG support volumetric objects?

DIRSIG5 supports voxelized objects (clouds, plumes, etc.) through OpenVDB, which is an open-source library for storing sparse 3D grids. These volumetric data grids are paired with appropriate spectral material descriptions (see below).

Is there interoperability with Unreal Engine, USD, 3D Tiles, etc.?

Because 3D content creation can be complex and labor intensive, RIT is very interested in interoperability with commonly used tools and workflows. The geolocated scene data model currently utilized by DIRSIG was internally developed before commonly used scene data models, including Unreal Engine (UE), Universal Scene Description (USD), 3D Tiles and others, existed. Even today, most of these scene data models are focused on RGB visualization. In contrast, the DIRSIG model is focused on full-spectrum simulation, and leveraging these popular scene models is difficult because they typically do not support spectral reflectances or thermodynamic properties. However, some of these scene data models (specifically, USD and 3D Tiles) support extensions that could be leveraged to store full-spectrum scene models. RIT is actively exploring interoperability with several of these scene data models.

What kinds of materials can be modeled?

The material description sub-model is very flexible. The topmost classification of materials is broken into "surface" and "volume" descriptions. The only difference between these two classes is the type of optical descriptions associated with them.

Most "hard target" or "background" materials fall into the surface materials category. A surface material can have a reflectance model, an emissivity model or both associated with it. Both the reflectance and emissivity models are responsible for providing spectral coefficients for supplied geometries. There are several reflectance models including directional hemispherical reflectance (DHR) and Bidirectional Reflectance Distribution Function (BRDF) models. There are also a handful of emissivity models. In the event that the reflectance is known but the emissivity is not, the latter will be computed from the former for the requested geometry. If the emissivity is known and the reflectance is not, the computed reflectance is assumed to be diffuse (Lambertian).

Materials like gas plumes, clouds, water, etc. fall into the volume materials category. These materials have absorption, scattering and/or extinction properties associated with them. As with surface materials, if two of the three properties are known, the third will be computed. For example, if the scattering and extinction are known, the absorption will be computed. If the scattering is the unknown quantity, then the computed scattering is assumed to be isotropic.
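The "two known, compute the third" rule for volume materials follows from the relation extinction = absorption + scattering (per-wavelength coefficients). A simple sketch of that bookkeeping:

```python
def complete_volume_properties(absorption=None, scattering=None, extinction=None):
    """Given any two of the three volume coefficients (absorption,
    scattering, extinction), compute the missing one using
    extinction = absorption + scattering."""
    if sum(v is not None for v in (absorption, scattering, extinction)) < 2:
        raise ValueError("need at least two of the three properties")
    if extinction is None:
        extinction = absorption + scattering
    elif absorption is None:
        absorption = extinction - scattering
    elif scattering is None:
        scattering = extinction - absorption
    return absorption, scattering, extinction
```

In a full spectral description the same completion would be applied independently at every wavelength sample.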

Can "dynamic" data be simulated with the DIRSIG model?

Yes. The DIRSIG model supports both dynamic scene content and dynamic platform positioning, platform orientation and platform relative pointing (e.g. scanning). The user can create dynamic scene content such as moving vehicles, spinning helicopter rotors, etc. through the scene motion mechanisms. The platform model is inherently dynamic and allows the user to supply platform location and orientation as a function of time. The sensors are attached to the platform via "mounts" of which several mount models are available including many that scan as a function of time (whisk scan, user-defined scan, etc.). The clocks for all the scene motion, platform motion, mount motion and focal planes can be synced to a central clock to make scripting of a complex collection easier.

The simulated data collection is controlled by the user specifying time windows during which the simulated system collects data. During a specified time window, the simulated focal planes collect sequences of images (based on the focal plane clock rate) which can be externally combined to make movies.
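The time-window bookkeeping above amounts to simple arithmetic: a window plus a focal plane clock rate yields a sequence of capture times. The sketch below is an idealized instantaneous-capture model that ignores integration time:

```python
import math

def frame_times(window_start_s, window_stop_s, clock_rate_hz):
    """Capture times for a focal plane running at clock_rate_hz during
    the window [start, stop], assuming a frame at the window start."""
    period = 1.0 / clock_rate_hz
    count = math.floor((window_stop_s - window_start_s) / period + 1e-9) + 1
    return [window_start_s + i * period for i in range(count)]
```

For example, a 1-second window at a 4 Hz clock yields five frames, at t = 0.0, 0.25, 0.5, 0.75 and 1.0 s.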