A flexible “unsupervised” deep learning framework for 3D X-ray imaging to help understand metals better
Metals are used everywhere in our daily lives: from the knife in the kitchen and the engine of a car to the steel ropes of the Great Belt Bridge, spanning from the tiniest components to the largest constructions on land or at sea, see Figure 1.
Metals are composed of small crystals, and the properties of a metal depend on this crystal microstructure, which is three-dimensional. Traditionally, microscopes have been used to study crystal microstructures, but such investigations are limited to 2D, i.e. giving only 2D pictures of the 3D ‘world’, and serious misinterpretations have been made as a result. New possibilities for non-destructive 3D imaging are currently being developed. Most of these new 3D imaging methods rely on X-rays from large international synchrotron sources, but one method is available that is based on diffraction of X-rays from a laboratory source. With it we now have daily access to ‘view’ 3D metal microstructures, which will revolutionize materials science and development. The basic configuration of this X-ray imaging technique, named laboratory-based diffraction contrast tomography (LabDCT) [1], is to place a 2D detector behind the sample to record spots diffracted from each individual crystal, i.e. signals arising from constructive interference of the incident beam with atoms located on specific lattice planes (see Figure 2).
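The diffraction condition underlying those spots is Bragg's law, λ = 2d·sin(θ). As a concrete illustration (using textbook values for aluminium and a Cu Kα wavelength purely for the sake of a worked number; LabDCT itself uses a polychromatic laboratory source):

```python
import math

# Bragg's law illustration: lambda = 2 * d * sin(theta).
# Numbers below are standard textbook values, chosen only to make
# the example concrete; they are not specific to the LabDCT setup.
wavelength = 1.5406          # Cu K-alpha wavelength in angstrom
a = 4.0495                   # aluminium lattice parameter in angstrom
d_111 = a / math.sqrt(3)     # spacing of the (111) lattice planes

theta = math.asin(wavelength / (2 * d_111))   # Bragg angle
two_theta = math.degrees(2 * theta)           # scattering angle 2-theta
print(round(two_theta, 1))                    # prints 38.5
```

A crystal only satisfies this condition for particular orientations, which is why each grain produces its own distinct set of spots on the detector.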
However, the diffraction spots often suffer from undesired background noise, which can make precise spot identification difficult and may therefore introduce errors when reconstructing the microstructure. Conventionally, a rolling median through a stack of images is used to remove noise, followed by filtering, e.g. with a Laplacian of Gaussian filter, to enhance the spot contrast. This approach works well for spots with good signal-to-noise ratios, but it can lead to either over- or under-segmentation for spots with low signal-to-noise ratios. Furthermore, it requires a well-trained expert to tune the processing parameters.
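That conventional pipeline can be sketched as follows. This is an illustrative reconstruction, not the authors' actual code, and the window size and sigma are hypothetical values of exactly the kind an expert would have to tune:

```python
import numpy as np
from scipy.ndimage import gaussian_laplace, median_filter

def classical_enhance(stack, window=5, sigma=2.0):
    """Rolling-median background removal through the projection stack,
    followed by a Laplacian-of-Gaussian filter to enhance spots.
    `window` and `sigma` are illustrative, hand-tuned parameters."""
    # Median along the rotation axis estimates the slowly varying
    # background at each pixel; a spot present in only a few frames
    # barely affects the median and is preserved after subtraction.
    background = median_filter(stack, size=(window, 1, 1))
    cleaned = stack - background
    # The negated LoG responds positively to bright blob-like spots.
    return np.stack([-gaussian_laplace(img, sigma=sigma) for img in cleaned])

# Tiny synthetic stack: flat background plus one bright "spot" frame
stack = np.full((7, 64, 64), 10.0)
stack[3, 30:34, 30:34] += 50.0
enhanced = classical_enhance(stack)
print(enhanced[3].max() > enhanced[0].max())  # prints True
```

With good signal-to-noise ratios this works, but when the spot intensity approaches the background level, the fixed `window` and `sigma` start splitting spots apart or merging them, which is the over- and under-segmentation problem described above.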
To overcome these limitations, scientists at the Technical University of Denmark have set up a flexible deep learning framework to remove the noise in the diffraction images [2], and this can be done without tiresome manual labeling. Thanks to a forward simulation model [3], images containing the diffraction spots but no noise were generated and regarded as content images. New images were created by adding very simple artificial noise to the content images and used to train a deep learning network to generate output images. A loss network takes the content image and the output image as inputs, and minimizing its loss value yields the optimal deep learning model (see Figure 3).
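The training loop can be sketched in PyTorch as below. This is a minimal, hypothetical stand-in for the framework in [2]: the denoiser is a toy CNN, and a frozen, randomly initialized convolution plays the role of the loss network so the sketch runs without downloading pretrained weights (the actual work uses a proper pretrained loss network):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy denoising network (stand-in for the real architecture in [2]).
denoiser = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)

# Frozen "loss network": its feature maps define the loss. Here it is
# a random conv layer purely for illustration; in practice a
# pretrained network is used.
loss_net = nn.Conv2d(1, 8, 3, padding=1)
for p in loss_net.parameters():
    p.requires_grad_(False)

opt = torch.optim.Adam(denoiser.parameters(), lr=1e-3)

content = torch.rand(4, 1, 32, 32)                  # simulated noise-free spots
noisy = content + 0.1 * torch.randn_like(content)   # simple artificial noise

losses = []
for step in range(100):
    opt.zero_grad()
    output = denoiser(noisy)
    # Feature loss: compare loss-network activations of the denoised
    # output against those of the clean content image.
    loss = nn.functional.mse_loss(loss_net(output), loss_net(content))
    loss.backward()
    opt.step()
    losses.append(loss.item())
```

Because both the clean content image and its noisy counterpart come from the forward simulation, no human ever has to label a single diffraction spot, which is what makes the approach "unsupervised" in the practical sense.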
By implementing this within the fast.ai library [4], it took about 5–6 hours to train a good model, and it now takes only a few seconds to remove the noise in real experimental data (one example is shown in Figure 4), using a moderately powerful desktop PC with an NVIDIA GeForce RTX 2080 with 8 GB of memory.
This is a remarkable result and shows great potential for removing noise in diffraction images with deep learning methods. As a next step we will investigate whether deep learning can also be used to improve the spatial resolution of the microstructural mapping, i.e. whether it can help us see finer details in the pictures, somewhat similar to how self-driving car algorithms are trained on simulated data. In our case we are “lucky” that the environment is far more controlled than for a self-driving car: it contains less variance, so the simulated data is much more similar to the real data than a computer game is to a car driving in the real world. That is what makes this approach so interesting. Similar approaches can probably be used in many other engineering tasks, e.g. in fluid mechanics, finding the viscosity by comparing simulations to a video of the flow tracked by a deep learning network.
References:
[1] C. Holzner, L. Lavery, H. Bale, A. Merkle, S. McDonald, P. Withers, Y. Zhang, D. Juul Jensen, M. Kimura, A. Lyckegaard, P. Reischig, Diffraction contrast tomography in the laboratory–applications and future directions, Microscopy Today, 2016, 24, pp. 34–43.
[2] E. Hovad, H. Fang, Y. Zhang, L.K.H. Clemmensen, B.K. Ersbøll, D. Juul Jensen, Unsupervised deep learning for laboratory-based diffraction contrast tomography, Integrating Materials and Manufacturing Innovation, 2020, https://doi.org/10.1007/s40192-020-00189-x.
[3] H. Fang, D. Juul Jensen, Y. Zhang, A flexible and standalone forward simulation model for laboratory X-ray diffraction contrast tomography, Acta Crystallographica A, 2020, 76, pp. 652–663.
[4] J. Howard, S. Gugger, fastai: A layered API for deep learning, Information, 2020, 11, 108.