Oct 28, 2021

From Optical Design to Mechanical Packaging – Using Zemax OpticStudio and OpticsBuilder to Develop a Flash Lidar System

Category: Product News
In the consumer electronics space, engineers leverage lidar for several functions, such as facial recognition. In this blog, we explore using OpticStudio to evaluate sequential models that comprise a flash lidar optical system.

In the consumer electronics space, engineers leverage lidar for several functions, such as facial recognition and 3D mapping. While vastly different embodiments of lidar systems exist, a “flash lidar” solution generates an array of detectable points across a target scene using solid-state optical elements. The benefit of obtaining three-dimensional spatial data from a small-form package has made this solid-state lidar system increasingly commonplace in consumer electronics products such as smartphones and tablets.

In this article, we will explore using OpticStudio to evaluate sequential models that comprise a flash lidar optical system. Conversion to Non-Sequential Mode is demonstrated and used to insert additional details, such as real-world source properties and scattering geometries. Custom analyses can be created and are used here to obtain depth information of the observed scene. Finally, OpticsBuilder is leveraged to provide housing for the full flash lidar system using native OpticStudio geometry, enabling quicker iteration between the optical and opto-mechanical engineer for the packaging of the module.

Sequential Analysis of the Flash Lidar System

The overall composition of a flash lidar system involves two modules – a transmitting module to generate the detectable points that impinge upon the scene, and a receiving imaging module to capture those points. The transmitting module usually consists of collimating optics that project the source light into the far field, together with a diffractive optical element that generates many orders of this projection in two dimensions.


The receive module subsequently obtains an image of the projected array. Typically, post-processing compares the time the return signal is received against the time it was generated by the source to calculate time-of-flight data, which in turn yields depth information about the scene.
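The time-of-flight relationship described above can be sketched with a short calculation. The timing value below is illustrative and not taken from the article's model:

```python
# Time-of-flight depth estimation: a minimal sketch.
C = 299_792_458.0  # speed of light in vacuum, m/s

def depth_from_tof(round_trip_time_s: float) -> float:
    """Scene depth from the round-trip travel time of a detected dot."""
    # Light covers the source-to-scene distance twice (out and back),
    # so the one-way depth is half of the total distance travelled.
    return C * round_trip_time_s / 2.0

# A dot returning after ~6.67 ns corresponds to a scene ~1 m away.
print(f"{depth_from_tof(6.671e-9):.3f} m")
```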

Using OpticStudio, optical engineers can design the projection and imaging optics that comprise the flash lidar system. For this model, we designed a 10mm focal length system to collimate the output of an LED array with an active area of 1.6mm by 1.6mm. For the diffractive element that generates many orders of the projected source, we use a pair of Diffraction Grating surfaces oriented orthogonally to each other to obtain X- and Y-axis orders. The Diffraction Gratings have a Lines Per Micron parameter value of 0.2, which yields a 19.39° diagonal half field of view of the scene when we consider the first and central orders of the Diffraction Grating pair in use with the collimating lens.
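The deflection produced by each grating follows the standard grating equation. A minimal sketch, assuming a 940 nm source wavelength (a typical near-infrared lidar wavelength; the article states only the grating frequency of 0.2 lines per micron):

```python
import math

# First-order deflection from one grating of the pair, via the grating
# equation sin(theta) = m * wavelength * T, with T in lines per micron.
# The 940 nm wavelength is an assumption for illustration only.
wavelength_um = 0.940
lines_per_um = 0.2
order = 1

theta_deg = math.degrees(math.asin(order * wavelength_um * lines_per_um))
print(f"first-order deflection: {theta_deg:.2f} deg")
```

The diagonal field of view reported in the article additionally folds in the second (orthogonal) grating and the field angles from the collimating lens, so it is larger than this single-axis angle.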

To ensure that the full projection is imaged onto the receive sensor, the imaging optics were designed with a half field of view of 20°. Various optimization targets were leveraged, including those ensuring the aspheric lenses in this module have adequate thickness across the entirety of each part (for example, sufficiently large edge thicknesses for mounting requirements). The small-form imaging system includes a final cover window, as is typical for these systems. Since both the projection and imaging modules are meant to be small and mass-produced, elements are defined with plastic materials compatible with injection molding manufacturing processes.

One check at this stage is to evaluate the imaging performance of the receive lens against the size of the spots it needs to detect. The RMS spot size of the central field point at 1 meter from the transmit system, taken from the sequential model, is 2.089 mm. With our imaging system, this spot is imaged to a size of 6.9703e-3 mm on the focal plane. Taking this spot as the theoretically smallest size possible yields the highest spatial frequency requirement: sufficient contrast at roughly 72 lp/mm. Using the FFT MTF analysis, the imaging lens has a contrast of 72.2% at this frequency, which we take to be sufficient to observe the spot.
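The spatial frequency requirement above follows directly from the imaged spot size, using the article's reported values:

```python
# Relating the imaged spot size to the highest spatial frequency the
# receive lens must resolve. Numbers are taken from the article's model.
spot_on_scene_mm = 2.089        # RMS spot size at 1 m from the transmitter
spot_on_sensor_mm = 6.9703e-3   # the same spot imaged onto the focal plane

# One line pair spans two resolvable features, so the cutoff frequency
# in lp/mm is the reciprocal of twice the smallest resolvable spot.
freq_lp_per_mm = 1.0 / (2.0 * spot_on_sensor_mm)
magnification = spot_on_sensor_mm / spot_on_scene_mm
print(f"{freq_lp_per_mm:.1f} lp/mm, magnification {magnification:.2e}")
```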

Using Non-Sequential Mode for End-to-End Lidar Modeling

With the sequential designs performing at a satisfactory level, we turn to a full system perspective in OpticStudio by converting the designs to Non-Sequential Mode. This enables us to perform a non-sequential ray trace analysis. The “Convert to NSC Mode” tool enables an automatic transition to non-sequential counterparts, allowing us to quickly combine and refine the model.

In the non-sequential model, after we combine both modules into a single file, the source properties for the projection optics, the sensor dimensions and resolution of the imaging system, and arbitrary scene geometry can be added for real-world analysis. We assume the source with the 1.6mm x 1.6mm active area comprises a 5x5 array of individual diodes, each with an X/Y divergence angle of 11.5°.
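The emitter layout can be sketched as a simple grid. This assumes the diode centers are spread evenly across the full 1.6 mm x 1.6 mm active area; the exact layout is not specified in the article:

```python
# Positions of a hypothetical 5x5 emitter array spanning the active area.
active_mm = 1.6
n = 5
pitch = active_mm / (n - 1)                      # 0.4 mm center-to-center
coords = [-active_mm / 2 + i * pitch for i in range(n)]

# (x, y) center of each individual diode, in mm.
diodes = [(x, y) for y in coords for x in coords]
print(len(diodes), "diodes, pitch =", pitch, "mm")
```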

For the sake of demonstration, the diffraction orders of the projection module are assumed to have ideal transmittance into the +/- 1 and central orders for each axis. The optical elements in both modules are also assumed to have ideal transmittance.

To start with some simple geometry, we define a reflective, Lambertian-scattering wall one meter away to interact with the light from the projection module. The resulting scatter from this object emits into a hemisphere, so by default the signal on the detector plane of the imaging module is severely undersampled. We can alleviate this low-signal issue with OpticStudio’s Importance Sampling feature, which selectively generates scattered rays that emit in the direction of a specified target sphere centered on any object defined in the non-sequential model. The energy carried by these scattered rays is modified based on the scatter profile in use so that they still exhibit real-world performance.
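A quick estimate shows why the undersampling is so severe without Importance Sampling. For a Lambertian scatterer viewed on-axis, the fraction of scattered power captured by a small aperture of area A at distance d is A / (pi * d^2). The 2 mm entrance pupil diameter below is an assumption for illustration; the article does not state the pupil size:

```python
import math

# Fraction of Lambertian-scattered power reaching a small on-axis pupil.
pupil_diameter_m = 2e-3   # assumed entrance pupil of the imaging module
distance_m = 1.0          # scattering wall is 1 m away

pupil_area = math.pi * (pupil_diameter_m / 2) ** 2
captured_fraction = pupil_area / (math.pi * distance_m ** 2)
print(f"~1 scattered ray in {1 / captured_fraction:,.0f} reaches the pupil")
```

With roughly one ray in a million reaching the pupil, an unguided trace would need enormous ray counts to build up any image, which is exactly the problem Importance Sampling addresses.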

A consequence of this energy attenuation is that the relevant non-sequential settings in OpticStudio must be defined appropriately to obtain signal on the imaging module. In this case, the Minimum Relative Ray Intensity setting determines which rays continue to be traced: it sets the lowest allowable energy a ray may carry after any interface, relative to the starting ray’s energy.

The energy attenuation in Importance Sampling can cause child scattered rays to fall below this threshold. However, we can manually lower this value and thereby detect our projected dot pattern on the imaging module.

A key analysis for the flash lidar module is the ability to retrieve the timed response of each observable dot as sensed by the imaging optics. While there is no native analysis feature that computes this value, the ZOS-API serves as a way to extract, post-process, and present data that OpticStudio generates. A User Analysis was compiled to open a saved ray database (.ZRD) file and extract the path lengths of the rays that land on the imaging detector. A scene mimicking a desk or tabletop setup with some relevant geometry was used to demonstrate the User Analysis. After a non-sequential ray trace runs, the User Analysis is executed, outputting the distance that each projected dot has travelled.


From the depth map output, we can verify positional information of each object in the scene. The floating sphere reports a shorter travel distance (~0.5 meters) as compared to objects like the cup upon the table (~0.9 meters) and the reflecting wall (~1 meter).
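The post-processing step of such a User Analysis can be sketched as follows. The path lengths below are illustrative stand-ins for values read from a .ZRD file, chosen to mirror the scene distances reported above:

```python
# Convert the total source-to-detector path length of each detected dot
# into a one-way scene distance (hypothetical example values, in meters).
detected = {
    "floating sphere": 1.0,
    "cup on table": 1.8,
    "back wall": 2.0,
}

# Each ray travels out to the scene and back through the receive optics,
# so the one-way distance is approximately half of the total path.
depths = {name: path / 2.0 for name, path in detected.items()}
for name, d in depths.items():
    print(f"{name}: ~{d:.1f} m")
```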

Finalizing Flash Lidar Packaging with OpticsBuilder

With the optical design in hand, the next stage in developing the flash lidar system is generating mechanical housing to hold the optics within each module, in addition to overall housing for the full lidar module. This requires accurate conversion from OpticStudio to the CAD software that the opto-mechanical engineer will use. OpticsBuilder enables a seamless transition between optical and opto-mechanical engineers by reconstructing the native OpticStudio geometry in the compatible CAD software of choice.

In OpticStudio, users can generate a file for direct import into OpticsBuilder by using the Prepare for OpticsBuilder tool. Once loaded into OpticsBuilder, the same ray tracing engine is used to simulate optical performance.


When the engineer constructs the mechanical housing, they will also be able to define optical properties such as coatings and scattering profiles to interact with the rays, providing quick feedback on how the new components affect the overall optical performance. Furthermore, they can extend geometry on the optical components to allow for added mounting material without affecting the design of the optical elements themselves.
Once ready for performance validation, the engineer can simulate a new trace and compare various metrics before and after the addition of the housing. Visualization of performance issues, such as beam clipping, can be done within OpticsBuilder by viewing specific raysets.

Finally, should there be a need to send this design iteration to the optical engineer, OpticsBuilder allows for file export which can be natively read by OpticStudio. This retains the geometry and optical properties as defined in OpticsBuilder for further assessment, feedback, and re-design between software packages.

Conclusion

In this article, we have explored the use of Zemax OpticStudio and OpticsBuilder to characterize a flash lidar module and generate nominal housing for the optical components. Both sequential and non-sequential ray tracing modes were leveraged to assess performance metrics such as the image quality of the receive module and the end-to-end performance of the system when considering reflective, scattering geometry in an observed scene. The ZOS-API provided a means to create a custom analysis that generates depth information from the non-sequential ray trace data within OpticStudio. Finally, OpticsBuilder was used to build mechanical housing for the system, enabling fluid transfer of files between OpticStudio and the CAD software used to create the housing.

Read the full Knowledgebase article here.

Get started with Zemax optical design software – try it for free!

Author:

Angel Morales
Optical Engineer
Zemax, An Ansys Company

Related Articles:

How to create a Time-Of-Flight User Analysis using ZOS-API