Abstract
Computer simulation can be a useful tool when designing robots expected to operate independently in unstructured environments. In this context, one needs to simulate the dynamics of the robot’s mechanical system, the environment in which the robot operates, and the sensors that facilitate the robot’s perception of the environment. Herein, we focus on the sensing simulation task by presenting a virtual sensing framework built alongside an open-source, multi-physics simulation platform called Chrono. This framework supports camera, lidar, GPS, and IMU simulation. We discuss their modeling as well as the noise and distortion implemented to increase the realism of the synthetic sensor data. We close with two examples that show the sensing simulation framework at work: one pertains to a reduced-scale autonomous vehicle; the second to a vehicle driven in a digital replica of a Madison, WI neighborhood.
1 Introduction
1.1 Motivation and Contribution.
In the fields of robotics and autonomous vehicles (AVs), simulation can play a key role in understanding how candidate designs respond when placed in a myriad of operational scenarios. The research described herein is motivated by the desire to improve the realism of the simulation, so that the behavior of the virtual robot in simulation is similar to that of the physical robot in the real world. In other words, we seek to reduce the so-called simulation-to-reality gap [1].
Herein, we outline a sensing simulation module that has been built into a multi-physics simulation framework called Chrono [2,3]. This new module, called Chrono::Sensor, generates synthetic data for camera, lidar, GPS, and IMU when any of these sensors is immersed in a virtual environment whose time evolution is simulated via Chrono. By combining dynamics simulation and sensing simulation, we seek to establish a comprehensive analysis platform for complex robotic systems operating in unstructured, complex, and dynamic environments.
The primary objective of Chrono::Sensor is to provide realistic synthetic sensor data for use in the investigation, development, testing, and optimization of robots through simulation. The interest is in generating synthetic data streams that reflect the distortions and noise levels encountered in actual sensor data. For instance, while photorealism can be an important aspect of a virtual environment, Chrono::Sensor camera simulation is concerned primarily with data realism, such that the perception and control algorithms, not humans, find the synthetic data indistinguishable from real data. It is important to note that photorealism and data realism are not mutually exclusive, but unlike video gaming, the emphasis here is on data realism. Moreover, we seek a solution that is versatile (supports multiple sensors); efficient (targets soft real time); and accurate, in order to provide a simulation-in-robotics experience that demonstrates effective sim-to-real transferability.
In this contribution we (i) introduce a sensing library built alongside an open-source, multi-physics simulation framework for the purpose of simulating complex robotic systems and AVs; (ii) describe an extensible and robust sensing framework for implementing sensor noise and distortion models to improve synthetic data realism; and (iii) outline the methods and models currently supported in Chrono::Sensor for generating synthetic sensor data from a dynamic virtual environment. To that end, the article is structured as follows. Section 1.2 overviews other simulation frameworks used for autonomous agent analysis. Our sensor simulation framework is described from a software implementation point of view in Sec. 2. The sensor models are outlined in Sec. 3. The document concludes with two demonstrations of Chrono::Sensor usage in Sec. 4. Specifically, we use a detailed vehicle model and equip it with multiple sensors to either evaluate a navigation algorithm or generate data for offline testing of autonomous systems. Ultimately, this contribution concerns sensor simulation models, and not the specific use of sensor simulation in particular applications, an important task that is nonetheless too broad to be addressed here. Section 5 presents conclusions and discusses future work.
1.2 Related Work.
One of the most widely used robotics simulation platforms is Gazebo [4–6], owing to the fact that it is free and open source, closely integrated with the Robot Operating System (ROS), offers a breadth of sensors, and interfaces to several dynamics engines (Bullet [7], ODE [8], DART [9], and Simbody [10]). On the downside, it lacks depth in relation to physics modeling (i.e., flexible bodies, fluid–solid interaction, deformable terrains, etc.) and sensor sophistication. CoppeliaSim (previously known as V-REP) [11] follows a similar paradigm to Gazebo, with which it shares a similar set of strengths and weaknesses. The major additional weakness of CoppeliaSim is that it is not provided free and open source. Recently, NVIDIA released a robot simulation platform called ISAAC Sim [12], which leverages the PhysX dynamics engine [13] to support robot simulation. While ISAAC Sim is perhaps the most modern simulation software for robotics, its closed-source nature introduces a barrier to adoption and extension. Additionally, since little is published on its sensor models and dynamics validation, it is difficult to judge its ability to simulate complex scenarios. Additional frameworks exist for robot simulation, including Webots [14] and USARSim [15], but they largely lack the feature set and/or fidelity of the aforementioned frameworks, particularly so for unstructured, off-road conditions. Beyond this list, many frameworks focus on dynamics and have sought to add sensing capabilities to broaden the scope of their simulation capabilities. These include frameworks such as PyBullet [16] and MuJoCo [17].
For on-road AV navigation simulation, the most broadly utilized platforms are Carla [18] and AirSim [19]. Both are developed on top of Unreal Engine [20], with AirSim also supporting use of Unity three-dimensional (3D) [21]. Unity 3D and Unreal Engine are game development platforms that have seen broad use in AV and robot simulation. In video gaming, vehicle dynamics model fidelity is of less concern than simulation speed. Since they build on gaming engines, Carla and AirSim can provide photo-realistic images and support complex and artistically generated virtual environments. In terms of sensing, photorealism is important but not necessarily sufficient for camera simulation. Moreover, the ray-casting implementations in Carla and AirSim used for lidar simulation are low-fidelity methods that limit the realism of the synthetic point clouds. Beyond visual sensors, AirSim does provide a detailed approach for generating realistic noise and distortion for dynamics-based sensors including GPS, IMU, barometer, and magnetometer. While these simulators represent some of the most capable on-road frameworks, they are unable to support off-road environments with more complex dynamics, e.g., deformable terrain, which is important in capturing the vehicle–environment interplay for terrains with mud, sand, snow, etc. Two simulators for off-road scenarios are VANE [22] and MAVS [23], with the latter leveraging Chrono for vehicle dynamics support. VANE is closed source, but published literature [24–27] illustrates the capabilities of the framework for military applications. MAVS provides high-fidelity lidar simulation [28,29] but it is not freely available as open source. A real-time desktop framework for robotic simulation, called ANVEL [30], is also designed for military applications and is tightly developed alongside VANE. Finally, Vortex [31] offers a real-time simulation solution with support for AV simulation.
Other frameworks for AV simulation include AutonoVi [32], PreScan [33], SynCity [34], rfPro [35], VIRES [36], dSPACE [37], and NVIDIA DRIVE Sim [38]. Most of these frameworks are commercial and closed-source, with limited breadth of published use to show simulation results. Relatively little information is available for the closed-source platforms to determine the level of sophistication and realism of the synthetic sensor data and dynamics support.
2 Simulation Framework
Chrono::Sensor extends the Chrono dynamics engine to provide a sensing simulation solution for robotic systems operating in complex and unstructured environments. Chrono::Sensor generates synthetic data based on the choice of number and type of sensors (RGB camera, lidar, GPS, and IMU) and passes these synthetic data to the robot control stack to determine the next set of robot commands. Based on this set of commands, the dynamics engine determines the evolution of the robot and its interaction with the environment. This sensing → command → dynamics sequence loops for the duration of the simulation, with the control stack, sensing, and dynamics maintained at specific frequencies dictated by the real-world hardware (sensing and control) and by physical requirements (dynamics). An example is shown in Fig. 1, where an instrumented all-terrain vehicle interacts with the environment by deforming the terrain on which it operates. The scene is shown from the perspective of a third-person Chrono::Sensor camera.
2.1 Simulation of Physics with Chrono.
Herein, the discussion of Chrono is limited to covering the components relevant to the sensor simulation. An overview of Chrono can be found in Ref. [2]. The Chrono dynamics engine supports rigid and flexible body dynamics with additional simulation modules for granular mechanics and fluid–solid interaction. Through the Chrono::Vehicle module [39], Chrono provides template-based support for specifying the topology of a vehicle from subsystems such as suspension, tires, steering mechanism, powertrain, and track subsystem. This allows a user to build a model from designed or measurable parameters rather than requiring construction of each individual body and kinematic constraint. Figure 2 illustrates the components of a Chrono::Vehicle suspension model. All of these components, including other moving obstacles, robot joints, vehicles, or agents, are visible to the sensors in Chrono.
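As an illustration of the template-based workflow, the following minimal PyChrono sketch instantiates a full HMMWV vehicle model from a Chrono::Vehicle template rather than assembling individual bodies and joints; the class and method names approximate those in the PyChrono vehicle demos and may differ between Chrono releases.

import pychrono as chrono
import pychrono.vehicle as veh

# Template-based vehicle creation (names approximate the PyChrono HMMWV demos;
# consult the Chrono::Vehicle documentation for the exact API of your release).
veh.SetDataPath(chrono.GetChronoDataPath() + "vehicle/")
vehicle = veh.HMMWV_Full()
vehicle.SetContactMethod(chrono.ChContactMethod_NSC)
vehicle.SetChassisFixed(False)
vehicle.SetInitPosition(chrono.ChCoordsysD(chrono.ChVectorD(0, 0, 1), chrono.QUNIT))
vehicle.SetTireType(veh.TireModelType_TMEASY)
vehicle.Initialize()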
Chrono can simulate vehicle–terrain interaction in complex off-road environments with deformable terrains [40]. While Chrono’s primary objective is high-fidelity modeling, which might come at a high computational cost, simulations with trivial-to-compute dynamics, e.g., a vehicle performing an on-road maneuver on rigid terrain, can be performed faster than real time. This paradigm of “as fast as possible” simulation carries over to Chrono::Sensor, where there is no hard constraint on the simulation to be real time; instead, it is expected to run as fast as possible, with real time being an arbitrary line that comes into play for hardware/human-in-the-loop scenarios. In fact, for many applications in machine learning there is significant benefit to faster-than-real-time simulation. An example of reinforcement learning for off-road navigation is discussed in Ref. [41]. Therein, off-road navigation with a military convoy was learned in Chrono using reinforcement learning and then demonstrated in simulation on deformable soil in off-road environments with silt-like and snow-like terrains.
2.2 Simulation of Sensing with Chrono::Sensor.
Chrono::Sensor communicates with the Chrono dynamics engine to collect information about how the robot changes the environment and the environment changes the robot. Then, based on the set of sensors with which the user has endowed the robot, Chrono::Sensor generates synthetic data from the virtual world. In the case of the camera and lidar sensors, the synthetic data are generated via GPU-based ray tracing, leveraging hardware-accelerated support [42] and the headless rendering capabilities provided by OptiX [43]. Ray tracing allows the sensor models to closely mimic the data acquisition of light-detecting sensors (i.e., lidar and camera). While our solution relies on OptiX as a ray-tracing library, OptiX itself does not simulate cameras or lidars; rather, it allows this simulation framework to model physics-based data acquisition using ray tracing. For IMU and GPS data, the Chrono system is queried and the ground-truth simulated data are augmented to form the distorted and noisy data that would be generated by a real sensor. The sensor framework is closely coupled with Chrono, but is extensible through a filter graph that is applied to each sensor in a post-processing step, represented in Fig. 3. The figure shows the general implementation applied to each sensor; the sensors follow similar flows to generate data, augment the data, and then return the data to the user. Each step can include sensor-specific models, with an interface for implementing and adding custom filters to the framework. This filter graph is used for implementing the specific models discussed in Sec. 3.
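To make the filter-graph idea concrete, the following minimal PyChrono sketch attaches a camera to a body, sets its lag and collection window, and pushes an access filter onto its filter graph. The names follow the PyChrono sensor demos at the time of writing; exact signatures may vary across Chrono versions.

import pychrono as chrono
import pychrono.sensor as sens

# A Chrono system with one body to carry the sensor (stand-in for a full model).
system = chrono.ChSystemNSC()
body = chrono.ChBodyEasyBox(1, 1, 1, 1000)
system.Add(body)

# The sensor manager owns the render threads and the scene used for ray tracing.
manager = sens.ChSensorManager(system)
offset = chrono.ChFrameD(chrono.ChVectorD(-5, 0, 2), chrono.QUNIT)

cam = sens.ChCameraSensor(body, 30, offset, 1280, 720, 1.408)  # rate [Hz], pose, width, height, hFOV [rad]
cam.SetLag(0.05)                      # data delivered 50 ms after the end of exposure
cam.SetCollectionWindow(1.0 / 60.0)   # exposure time [s]
cam.PushFilter(sens.ChFilterRGBA8Access())  # filter graph: expose the RGBA buffer to user code
manager.AddSensor(cam)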
Because the dynamics simulation requires a much higher update frequency than the sensor simulation (dynamics: order 1 kHz; sensors: order 10–100 Hz), the sensor framework uses a separate thread to manage the rendering and synthetic data curation. This thread manages a process in which multiple dynamics engine simulation steps complete during a single render step, yet time coherency is maintained in that neither the dynamics nor the sensing drifts apart in time. An illustration of this is shown in Fig. 4. In this paradigm, a defined frequency of a sensor, e.g., 50 Hz, is guaranteed to be maintained at 50 Hz in simulation time regardless of whether the computational bottleneck is sensing or dynamics related. When a system includes many sensors, and when more graphics hardware is available, Chrono::Sensor can leverage multiple render threads, each managing a separate GPU for simulating a subset of the sensors present in the virtual test. This is particularly useful for scenarios with many agents carrying numerous sensors that operate at various update frequencies in a highly complex virtual environment. Results that characterize the performance and scaling of Chrono::Sensor can be found in Ref. [44].
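Continuing the sketch above, the co-simulation loop below shows how several dynamics steps elapse between sensor updates while the sensor manager keeps both in step with simulation time; the structure mirrors the Chrono sensor demos, with the names assumed as in the previous sketch.

# Co-simulation loop: dynamics at ~1 kHz, sensors triggered only when their
# update period has elapsed. manager.Update() is inexpensive when no sensor is
# due; rendering runs on the sensor thread so dynamics and sensing remain
# coherent in simulation time.
step = 1e-3     # dynamics step [s]
t_end = 20.0
while system.GetChTime() < t_end:
    manager.Update()                 # trigger any sensor whose update time has arrived
    system.DoStepDynamics(step)      # advance the multibody dynamics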
For camera and lidar simulation, Chrono::Sensor does not need to push an image to the display when simulating robots or AVs. Since no image needs to be rendered on a screen, the screen-based graphics contexts are bypassed altogether, resulting in “headless rendering.” Due to the headless nature of Chrono::Sensor, many parallel simulations can be launched in unison on remote compute hardware. In addition, the framework is developed to be middleware, allowing the user to link their C++ application directly to Chrono or make direct calls to Chrono through the PyChrono module rather than set up a client-server model as would be needed for game engines like Unity and Unreal. These features are particularly relevant for reinforcement learning problems, when the computation associated with the simulation component is intensive and one would like to launch tens of simulations at the same time, using a multi-core chip or multiple nodes in a cluster.
3 Sensor Models Implemented in Chrono::Sensor
3.1 The Lidar Model.
The lidar sensor available in Chrono::Sensor is based on scanning time-of-flight sensors frequently used in automotive and robotic applications. The model implementation leverages the GPU OptiX library since lidar is fundamentally a ray-tracing measurement. Each ray in the lidar is launched in parallel on the GPU and is traced with its own origin acting as its sole light source. A diffuse reflectance model based purely on geometry is used to calculate the relative intensity of the return. The implementation is flexible enough to allow custom materials that encode lidar reflectance on a per-object or per-triangle basis, analogous to any camera-based graphics model. For this to be useful, the reflectance of the object must be measured at the lidar-specific wavelength and attached to the Chrono body.
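For illustration, a purely geometric diffuse return can be approximated with a Lambertian-style cosine falloff, as sketched below; this is a simplified stand-in for the implemented reflectance model, with the reflectance value treated as an assumption.

import numpy as np

def diffuse_return_intensity(ray_dir, surface_normal, reflectance=1.0):
    """Geometric (Lambertian-style) relative return intensity for a lidar ray.

    Illustrative only: intensity falls off with the cosine of the angle between
    the beam and the surface normal; the ray origin is the sole light source,
    so no external illumination enters the calculation."""
    cos_incidence = max(0.0, float(np.dot(-np.asarray(ray_dir), np.asarray(surface_normal))))
    return reflectance * cos_incidence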
To accurately characterize the range and intensity of the lidar returns, and to generate physically realistic artifacts in the measurements, the user provides a set of modeling and configuration quantities. Foremost are the basic parameters that define the quantity and update frequency of the lidar data: horizontal field of view, vertical field of view, scanning frequency, channels, angular resolution, and maximum distance. For modeling temporal artifacts, lag and collection time parameters are included. Lag defines the delay between when the data are measured and the time at which the user receives these data. While lag is often overlooked by sensor simulation environments, it can play a critical role in understanding the time-evolutionary behavior of a control stack. The lidar collection time is the time window over which the data are collected. For a 360 deg scanning lidar, this is typically equal to the update period, i.e., 1/UpdateFrequency.
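A hedged PyChrono sketch of a lidar configured from such datasheet-style quantities, reusing the manager, body, and offset pose from the sketch in Sec. 2.2; the constructor argument order follows the PyChrono lidar demos and may differ across versions.

# Lidar configured from datasheet-style quantities (names/signatures assumed).
lidar = sens.ChLidarSensor(body,
                           10,                  # scanning frequency [Hz]
                           offset,              # offset pose on the vehicle
                           900,                 # horizontal samples per scan
                           16,                  # vertical channels
                           2 * chrono.CH_C_PI,  # horizontal field of view [rad]
                           chrono.CH_C_PI / 12, # max vertical angle [rad]
                           -chrono.CH_C_PI / 6, # min vertical angle [rad]
                           100.0)               # maximum distance [m]
lidar.SetLag(0.01)                              # processing delay before data are returned
lidar.SetCollectionWindow(1.0 / 10)             # full-scan collection time [s]
lidar.PushFilter(sens.ChFilterPCfromDepth())    # depth/intensity -> point cloud
lidar.PushFilter(sens.ChFilterXYZIAccess())     # expose the point cloud buffer
manager.AddSensor(lidar)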
When the collection window is greater than zero, the movements of the sensor and of the scene relative to the sensor are taken into account using interpolation between keyframes. Building the scene from keyframes and interpolating the sensor position allow each lidar beam to be traced at the correct instant, reproducing fine-grained artifacts such as the apparent lengthening and shortening of quickly moving objects such as vehicles. This interpolation generates a continuous scan, avoiding non-physical artifacts that are often introduced when using ray casting from within game engines. A comparison of the lidar data from an instantaneous model and from a continuous model is shown in Fig. 5, with data generated using Chrono::Sensor. The data correspond to a simulated experiment where a lidar is moved at 20 m/s along a path lined with cylindrical obstacles of 1 m diameter at 2 m intervals. Since the lidar is scanning in a counter-clockwise direction, the cylinders on the right appear shorter than the cylinders on the left.
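The sketch below illustrates the idea in simplified form: each beam is assigned its own firing instant within the collection window, and the sensor pose is interpolated between keyframes at that instant before the ray is traced (positions only; orientation interpolation, e.g., slerp, is omitted for brevity).

import numpy as np

def beam_fire_times(scan_start, collection_time, n_beams):
    """Each beam in a continuous scan is traced at its own firing instant."""
    return scan_start + collection_time * np.arange(n_beams) / n_beams

def interpolate_position(t, keyframes):
    """Linearly interpolate the sensor position between (time, position) keyframes."""
    times = np.array([k[0] for k in keyframes])
    positions = np.array([k[1] for k in keyframes])
    return np.array([np.interp(t, times, positions[:, i]) for i in range(3)])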
In addition to temporal distortion modeling, the lidar implementation allows for beam discretization when the divergence angle of the lidar rays plays a significant role in the acquired data. This is the case in many automotive applications where medium- to long-range lidars are common. Furthermore, manufacturers can leverage beam divergence to detect multiple returns, effectively increasing the number of points in the resulting point cloud. An illustration of this process is shown in Figs. 6 and 7, as modified from Ref. [45]. This is an effect that cannot be reproduced effectively with single-ray casting. The sampling model is illustrated in Fig. 8, where the radius (R) of the samples is user-defined. Since Chrono::Sensor uses GPU-accelerated ray tracing, the discrete samples for each beam are all launched in parallel and reduced after launch, resulting in sub-linear scaling of simulation time with respect to the number of discrete samples.
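A simplified sketch of the beam-discretization idea: each diverging beam is replaced by a ring pattern of sample rays of user-defined radius, and the per-sample hits are reduced to a return after launch. The sample layout and reduction rules here are illustrative; the actual scheme in Chrono::Sensor may differ.

import numpy as np

def sample_beam_offsets(radius, n_rings=2, samples_per_ring=6):
    """Angular offsets that discretize one diverging beam into several rays
    (a ring pattern of user-defined radius, in the spirit of Fig. 8)."""
    offsets = [(0.0, 0.0)]                       # central ray
    for r in range(1, n_rings + 1):
        rho = radius * r / n_rings
        for k in range(samples_per_ring):
            phi = 2 * np.pi * k / samples_per_ring
            offsets.append((rho * np.cos(phi), rho * np.sin(phi)))
    return np.array(offsets)

def reduce_beam(ranges, intensities):
    """Collapse the per-sample hits of one beam into a single strongest return
    (real devices may also report first/last/dual returns)."""
    idx = int(np.argmax(intensities))
    return ranges[idx], intensities[idx]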
With the raw data generated, noise is modeled based on the measurement uncertainty. Rather than applying Gaussian noise independently to the x, y, z, and intensity values of the point cloud, the noise is applied to the range, angular, and intensity measurements of the lidar and then mapped into noise on the point cloud. The noise parameters can be estimated from the accuracy figures presented on most lidar datasheets or through calibration data collected from a known object.
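A minimal sketch of this noise model, with placeholder standard deviations: the noise is drawn in the lidar’s native range/angle/intensity space and only then mapped to Cartesian points.

import numpy as np

def add_lidar_noise(rng, r, azimuth, elevation, intensity,
                    sigma_r=0.02, sigma_ang=0.001, sigma_i=0.01):
    """Apply Gaussian noise in the lidar's native measurement space
    (range, angles, intensity) and map the result back to x, y, z.
    The sigmas are placeholders; in practice they come from the datasheet
    accuracy or from calibration against a known target."""
    r_n = r + rng.normal(0.0, sigma_r, r.shape)
    az_n = azimuth + rng.normal(0.0, sigma_ang, r.shape)
    el_n = elevation + rng.normal(0.0, sigma_ang, r.shape)
    i_n = np.clip(intensity + rng.normal(0.0, sigma_i, r.shape), 0.0, 1.0)
    x = r_n * np.cos(el_n) * np.cos(az_n)
    y = r_n * np.cos(el_n) * np.sin(az_n)
    z = r_n * np.sin(el_n)
    return np.stack([x, y, z], axis=-1), i_n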
As a final step, the data are provided to the user based on a parameterized lag. This models the processing interval of the lidar sensor such that the user will obtain data at an appropriately delayed time. This is a physical artifact with which perception and navigation algorithms must contend, so to accurately train or evaluate a control stack, the simulation should account for the time delay of received data.
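Conceptually, the lag can be thought of as a time-stamped queue that releases each completed scan only after the configured delay, as in the illustrative sketch below.

from collections import deque

class LaggedBuffer:
    """Hold each finished scan until 'lag' seconds of simulation time have
    elapsed, mimicking the sensor's processing delay (illustrative only)."""
    def __init__(self, lag):
        self.lag = lag
        self.queue = deque()              # (ready_time, data) pairs

    def push(self, t_measured, data):
        self.queue.append((t_measured + self.lag, data))

    def pop_ready(self, t_now):
        out = []
        while self.queue and self.queue[0][0] <= t_now:
            out.append(self.queue.popleft()[1])
        return out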
The lidar model is set up so that its physics-based characteristics can be specified from quantities found in a datasheet. These can include frequency, range, channels, angular resolution, and beam divergence. To further improve the model, the noise on the distance and angular measurements must be estimated from calibration data. The accuracy of these models can significantly change the generated point clouds. Accurately capturing noise can improve the realism of the sensor, but quantification of that improvement remains an open research question.
Since the virtual environment plays an important role in the simulation of lidar data, the resolution of the 3D environment should be fine enough to allow geometric characteristics down to the scale of the beam divergence. Additionally, material properties, which are the subject of ongoing development, will be introduced to allow the user to specify reflectance in the lidar wavelength.
3.2 Camera Model.
For simulating a typical RGB camera, Chrono::Sensor takes in several user-defined parameters: image size (height, width), frequency, horizontal field of view (FOV), exposure time, lag, and lens model type. These parameters are chosen for simple lens models as they are typically specified in camera data sheets, allowing the model to be physics-based rather than empirically based on calibration data. Additionally, the user can define computer graphics settings such as super-sampling factor and ray-tracing depth to trade off rendering fidelity and render time. Just like the lidar simulator, the camera simulation method uses the OptiX library, leveraging real-time ray-tracing techniques. The rendering accounts for the material and lighting of the environment. The simulation pipeline is built to mirror the image acquisition process [46], see Fig. 9. Specifically, in terms of computer graphics models, Chrono::Sensor implements physics-based rendering which includes reflection, refraction, and Cook-Torrance [47] lighting models.
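For reference, the horizontal FOV together with the image width fixes the focal length of the simple pinhole lens model through the standard relation sketched below; the width and FOV values are placeholders.

import numpy as np

# Standard pinhole relation between horizontal FOV and focal length in pixels;
# applies to the simple pinhole lens model (wide-angle/fisheye models differ).
width_px, hfov_rad = 1280, 1.408
focal_px = (width_px / 2.0) / np.tan(hfov_rad / 2.0)   # about 754 px for these values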
To account for another important distortion, motion blur is factored in at the ray-tracing level. To mimic the cause of the distortion, the interval over which data are collected is determined by the user-defined exposure time. Using keyframes for the scene and camera position, each pixel is super-sampled with random launch times within the exposure window. Based on each ray’s random launch time, the scene can be queried at any instant within the exposure window, interpolating object and camera positions from the keyframes. This method introduces blur that captures the effects of a moving camera, as well as changes or movements in the environment.
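A simplified sketch of this temporal super-sampling: each pixel’s sample rays receive random launch times within the exposure window and the resulting radiance samples are averaged. The actual sampling and reconstruction in Chrono::Sensor may differ in detail.

import numpy as np

def pixel_sample_times(rng, t_frame_start, exposure_time, samples_per_pixel):
    """Random launch times within the exposure window; each sample ray queries
    the scene at its own time via keyframe interpolation."""
    return t_frame_start + exposure_time * rng.random(samples_per_pixel)

def blurred_pixel(samples):
    """Average the per-sample radiance values into the final pixel color,
    producing motion blur from camera and object motion."""
    return np.mean(np.asarray(samples), axis=0)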
While the camera’s sensor-level noise models account for the measurement error introduced at the image sensor, they do not factor in the effects of the image signal processor (ISP), which often correlates the noise spatially, temporally, and chromatically [52–54]. Figure 11 illustrates why the EMVA noise model, while accurate at the image sensor level, does not fully capture the final noise on a processed image such as an RGB JPEG image. Further work will focus on implementing components of the ISP to correctly augment the standard noise, and on validating the final model.
As a final step in the camera simulation process, the data are made available to the user only after the lag time has elapsed. Similar to a lidar, the camera processes the measured data and provides sensed information to the user after a finite amount of time. This is modeled in Chrono::Sensor to account for inevitable temporal error that may factor into perception algorithm choices and pose estimation.
Many of the camera model parameters can be found in sensor data sheets, including field of view, frequency, resolution, and exposure time. For other parameters, such as the noise levels for the implemented models, calibration experiments often need to be performed. A common noise calibration approach is the mean-image method, which estimates a noise distribution that can then be fit with a Gaussian or intensity-dependent model. An example noise estimation was performed in a related work [44].
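A minimal sketch of the mean-image idea, assuming a stack of frames of a static, uniformly illuminated scene: the per-pixel temporal mean approximates the noise-free image, and the per-pixel standard deviation characterizes the noise that can then be fit with a Gaussian or intensity-dependent model.

import numpy as np

def estimate_camera_noise(frames):
    """Mean-image noise estimation from a stack of frames of a static scene
    (sketch; real calibration controls exposure and illumination)."""
    stack = np.asarray(frames, dtype=np.float64)     # shape: (N, H, W)
    mean_img = stack.mean(axis=0)                    # noise-free estimate
    noise_std = stack.std(axis=0)                    # per-pixel noise level
    return mean_img, noise_std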
The camera model relies on having a sufficiently realistic virtual environment, with 3D geometry of a resolution and fidelity similar to that found in the gaming community. For both lidar and camera, the visual properties must be specified to the same level as in a video game, including colors, reflectance, refraction, texture and normal maps, and other lighting characteristics. The precise level of sensor realism is a function of both the virtual environment and the sensor model. Additionally, the intended use case (i.e., perception training, development testing, algorithm certification, etc.) will drive the level of realism required.
3.3 GPS and IMU.
The GPS and IMU models are parameterized by a set of intrinsic values including update frequency, collection time, and lag. These sensors can be attached to any object within the Chrono simulation with an arbitrary attachment position and orientation. Therefore, these sensors can be placed on or in any location of a vehicle or robot, including wheels, robotic arms, etc. These sensors make use of the dynamic quantities computed for the body by the dynamics engine to determine the proprioceptive data of interest. Even though GPS and IMU do not have a direct equivalent of the exposure time of a camera, a collection window is still used as a parameter for the IMU and GPS sensor models. This helps in two respects. First, when the sensor collects its signal over a finite amount of time, as a GPS does, the simulation can capture this trait. Second, because the simulation steps through time using numerical integration, the acceleration values can be much noisier than in reality; the collection time then provides an opportunity to smooth the ground-truth data used in the sensor model.
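The collection-window smoothing can be sketched as a simple average of the ground-truth accelerations gathered during the window, as below; this is illustrative only, and the actual weighting used in Chrono::Sensor may differ.

import numpy as np

def windowed_accel(sample_times, accel_samples, t_update, collection_time):
    """Average the ground-truth accelerations gathered during the collection
    window ending at t_update, smoothing integration noise from the dynamics
    solver before the sensor noise model is applied."""
    sample_times = np.asarray(sample_times)
    accel_samples = np.asarray(accel_samples)
    mask = (sample_times > t_update - collection_time) & (sample_times <= t_update)
    if not mask.any():
        return accel_samples[-1]          # no sample in the window: fall back to the latest value
    return accel_samples[mask].mean(axis=0)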
Sensor model parameters such as the update frequency can be found in the sensor datasheet. The GPS and IMU noise models, in contrast, are inherently empirical and require calibration or noise estimation. Steady-state noise can be calibrated from recordings of stationary sensors, using statistical methods and curve fitting to approximate noise components such as standard deviations and time constants.
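As an illustration, the sketch below estimates a bias, a standard deviation, and a first-order (Gauss-Markov-style) correlation time constant from a stationary recording; Allan-variance analysis is the more rigorous alternative, and the exact procedure used to obtain Chrono::Sensor parameters may differ.

import numpy as np

def stationary_noise_stats(signal, dt):
    """Estimate steady-state noise parameters from a stationary recording:
    bias, standard deviation, and the lag at which the autocorrelation drops
    to 1/e as a rough first-order time constant (sketch only)."""
    x = np.asarray(signal, dtype=np.float64)
    bias, sigma = x.mean(), x.std()
    xc = x - bias
    acorr = np.correlate(xc, xc, mode="full")[len(xc) - 1:]
    acorr = acorr / acorr[0]
    below = np.nonzero(acorr < 1.0 / np.e)[0]
    tau = below[0] * dt if below.size else np.inf
    return bias, sigma, tau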
4 Demonstration of Chrono::Sensor
Two examples are provided herein to show Chrono::Sensor at work. The first is a scaled version of an autonomous vehicle that navigates a closed course using only a lidar mounted on the top of the vehicle. The car has no prior knowledge of the course and can navigate safely between the barriers. A third-person view of the car and course can be seen in Fig. 12. A front-facing gray-scale camera is mounted on the front of the car (Fig. 13), yet in this setup its output is not used by the control algorithm (the control policy relies on the lidar information only). The implemented control algorithm parses the point cloud into clusters representing the right and left barriers, finds a center point as a target location, and uses a PID steering controller to navigate to the target. This is not intended to demonstrate a state-of-the-art controller, but rather to exercise the sensor simulation framework for demonstration purposes; a schematic sketch of the control idea is given below. The point cloud generated by the lidar is shown in Fig. 14 and matches the frame shown by both cameras. Both the cameras and the lidar are simulated in Chrono::Sensor. The vehicle is modeled using Chrono::Vehicle with parameters based on a remote control car that has been modified to drive autonomously.
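The sketch below is a schematic reconstruction of that control idea, not the authors’ exact code: nearby lidar points are split into left and right barrier clusters by the sign of their lateral coordinate, the midpoint of the two clusters is taken as the target, and the heading error drives a PID steering command.

import numpy as np

def steering_from_pointcloud(points, kp=1.0, ki=0.0, kd=0.0, state=None):
    """Schematic lidar-only steering policy (illustrative reconstruction).
    'points' is an (N, 3) array in the vehicle frame: x forward, y left."""
    xy = points[:, :2]
    ahead = xy[(xy[:, 0] > 0.5) & (xy[:, 0] < 10.0)]       # keep points in front of the car
    left, right = ahead[ahead[:, 1] > 0], ahead[ahead[:, 1] < 0]
    if len(left) == 0 or len(right) == 0:
        return 0.0, state                                   # no barriers seen: hold course
    target = 0.5 * (left.mean(axis=0) + right.mean(axis=0)) # midpoint between barrier clusters
    error = np.arctan2(target[1], target[0])                # heading error to the target
    integ, prev = state if state else (0.0, error)
    integ += error
    steer = kp * error + ki * integ + kd * (error - prev)
    return float(np.clip(steer, -1.0, 1.0)), (integ, error)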
The second demonstration pertains to a sedan in a replica environment. Specifically, through collaboration with Continental Mapping [57], aerial data collected over Madison, WI, were used to generate a digital replica of a neighborhood in which the virtual vehicle was driven (this is not an AV; details below). The human-driven vehicle is equipped with a simulated camera and lidar for generating and saving data that can be used offline for evaluating perception algorithms. The collected data are shown in Figs. 16–18.
Figure 15 provides a third-person perspective of the ego vehicle. The vehicle was driven by a student via a Logitech console using the data stream from this very camera for human-in-the-loop control. The data shown in Fig. 16 represent data that could be used for software-in-the-loop control of the vehicle, or for offline training or evaluation of perception algorithms. This camera is placed in the simulation and attached to the hood of the ego vehicle.
Data from a lidar attached to the roof of the virtual vehicle are shown in Figs. 17 and 18. The point cloud is colored by vertical position of each point (height of the point) for readability. The data are aligned with the vehicle coordinate system with X-forward, Z-up, and Y-left. The full videos for both demonstrations can be viewed at [58] under the “Autonomous Vehicles” heading.
5 Conclusion and Future Work
This contribution outlines the Chrono::Sensor framework for software/hardware/human-in-the-loop simulation of robots and AVs. The sensor models represent a foundational component of the Chrono simulation platform, endowing it with an expandable sensing solution that complements its multi-physics simulation engine. By leveraging ray-tracing techniques for light-based sensing, high-fidelity models and methods are implemented that mimic the light-acquisition process of real sensors. The IMU and GPS models augment ground-truth data coming from the Chrono dynamics engine with realistic noise and lag. The lidar and camera models discussed herein include noise, distortion, and lag. Through Chrono, Chrono::Sensor is available in a public repository as open source under a BSD-3 license for unfettered use/modification/redistribution. The included demonstrations show example data generation for autonomous vehicles simulated in Chrono with Chrono::Vehicle and Chrono::Sensor. Chrono::Sensor has also been used in an off-road AV navigation scenario in Ref. [41]. The source code is freely available [3,59] along with documentation [60] and tutorial/demo examples [61]. The implementation and models herein can be modified and used in external software, but are designed primarily to be a simulation component in Chrono. Chrono as a whole, including Chrono::Sensor, is designed as middleware and can be used or called from external frameworks.
The ongoing research associated with this framework seeks to tackle both the breadth and depth of sensing in order to facilitate the simulation of robotic systems and on/off-road autonomous vehicles. To support a broader set of robotic systems, future work will seek to expand the collection of available sensors, to include stereo and omnidirectional cameras (commonly used in robotic systems), encoders, odometers, and magnetometers. For on/off-road AVs, we plan to add exteroceptive sensors such as radar and infrared.
Future work also includes improving the realism of the sensors discussed herein, an effort motivated by the desire to reduce the simulation-to-reality gap. Characterizing and closing the sim-to-real transferability gap will continue to be a paramount challenge to ours and to any other simulation-in-robotics effort. Important effects that could significantly impact sensing realism, particularly for off-road scenarios, are tied to environmental attributes, e.g., weather conditions, dust, smoke, foliage, and haze. These are beyond the scope of this paper and regarded as directions of future research.
While the underlying ground-truth measurements of the sensor models have been verified within the simulation framework [44], an important path of future work pertains to validating the sensor models against real-world data, and comparison with other sensor simulation frameworks. Presently, this calls for more research to understand and develop methods to meaningfully compare sensor data in a manner that accounts for the intended use of that data (sometimes sloppy sensing can be good enough). In addition to basic modeling research, ongoing work aims at improving the performance and scalability of the simulation framework to provide “as-fast-as-possible” simulation; and increasing the support for deformable objects (the geometries that need to be rendered change in time) to allow for robotic and AV simulation that includes, for instance, soft-robots or deformable terrains.
Acknowledgment
This work was partially supported by National Science Foundation grant CPS–1739869. Additional support came from the Safety Research using Simulation (SAFER-SIM) program. SAFER-SIM is funded by a grant from the U.S. Department of Transportation’s University Transportation Centers Program (69A3551747131). The U.S. Government assumes no liability for the ideas expressed in this document or the use thereof. The authors would like to thank Continental Mapping of Sun Prairie, WI, for providing the virtual world replica of Park Street in Madison, WI used herein.
Conflict of Interest
There are no conflicts of interest.
Data Availability Statement
Data provided by a third party listed in Acknowledgment.