EUVlitho includes the following two program sets:
- Electromagnetic simulator for EUV lithography based on the 3D waveguide model
- CNN reproducing the results of the electromagnetic simulations for fast EUV lithography simulation
The following paper explains the details of the programs.
- H. Tanabe, M. Shimode and A. Takahashi, "Rigorous electromagnetic simulator for extreme ultraviolet lithography and convolutional neural network reproducing electromagnetic simulations," to be published.
The EUV lithography simulator depends on oneAPI (2023.2.0), Eigen (3.4.0), CUDA (11.8), and MAGMA (2.6.2).
The CNN depends on PyTorch Lightning (2.1.0) and CUDA (12.1).
The main program is emint/intensity.cpp. Please modify the makefile (makeint). The input is the mask pattern (mask.csv) and the output is the image intensity (emint.csv). The optical and absorber settings are written in the main program. The ML setting is written in the subroutine ampS in include/header.h and follows the paper by N. Davydova et al. (SPIE 88860A); usually you do not need to modify this subroutine. The mask pattern can be generated by mask/mask.cpp. Please generate your own mask pattern.
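As a minimal illustration, the sketch below writes a simple line-and-space pattern to mask.csv. The CSV layout, grid size, and the 0/1 value convention for absorber and clear areas are assumptions made here for illustration; mask/mask.cpp is the authoritative generator.

```python
# Sketch: write a binary line-and-space mask pattern to mask.csv.
# Assumptions (not taken from the repository): the mask is a 2D grid of
# 0/1 values stored as plain CSV, with 1 = clear (no absorber), 0 = absorber.
import numpy as np

nx, ny = 128, 128       # assumed grid size in mask pixels
pitch, line = 32, 16    # 16-pixel lines on a 32-pixel pitch

mask = np.zeros((ny, nx), dtype=int)
for x0 in range(0, nx, pitch):
    mask[:, x0:x0 + line] = 1   # clear columns

np.savetxt("mask.csv", mask, fmt="%d", delimiter=",")
```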
Below is an example of the input mask pattern and the output image intensity (EM). We use Graph-R to plot the image intensity, but you can use any plotting software.
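For example, matplotlib can be used instead of Graph-R. The short sketch below assumes that emint.csv is a plain comma-separated 2D grid of intensity values.

```python
# Sketch: plot the image intensity in emint.csv with matplotlib.
# Assumption: emint.csv is a comma-separated 2D grid of intensity values.
import numpy as np
import matplotlib.pyplot as plt

intensity = np.loadtxt("emint.csv", delimiter=",")

plt.imshow(intensity, origin="lower", cmap="viridis")
plt.colorbar(label="image intensity")
plt.title("EM image intensity")
plt.savefig("emint.png", dpi=200)
```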
The first step is data preparation for training and validation. Two programs, m3.cpp and compress.py, are in the cnn/data directory. The first program, m3.cpp, generates mask patterns (mask.csv), M3D parameters (inputxx.csv), and the list of diffraction orders (inputlm.csv). The diffraction orders contributing to the image intensity depend on the mask size and NA. If the third column of the list is 0, the diffraction order is close to the boundary of the overlapping area of the pupil and the source (sigma=1); in this case only a0 has a value, and ax and ay do not (a0, ax, and ay are M3D parameters). The total numbers of a0, ax, and ay can be calculated from this list. The second program, compress.py, transforms mask.csv, inputxx.csv, and inputlm.csv into maskdata/mask.npy, ampdata/*.npy, and inputlm/*.npy, which are used as the inputs for the following CNN training.
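Conceptually, the conversion performed by compress.py is a CSV-to-NumPy repacking. The minimal sketch below follows the directory names mentioned above, but the per-file splitting and the file names input00.csv and input01.csv are hypothetical.

```python
# Sketch: repack the CSV outputs of m3.cpp into .npy files for CNN training.
# The directory names (maskdata, ampdata, inputlm) follow the text above;
# the internal CSV layout and the example file names are assumptions.
import os
import numpy as np

for d in ("maskdata", "ampdata", "inputlm"):
    os.makedirs(d, exist_ok=True)

# Mask patterns.
mask = np.loadtxt("mask.csv", delimiter=",")
np.save("maskdata/mask.npy", mask)

# M3D parameters: one .npy per CSV file (hypothetical file names).
for name in ("input00", "input01"):
    amp = np.loadtxt(f"{name}.csv", delimiter=",")
    np.save(f"ampdata/{name}.npy", amp)

# List of diffraction orders.
lm = np.loadtxt("inputlm.csv", delimiter=",")
np.save("inputlm/inputlm.npy", lm)
```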
The training models for a0, ax, and ay are in the cnn/model directory. For example, the training model for real(a0) is cnn/model/re0/re0.py. Please prepare the maskdata and ampdata for training and validation. The program also requires inputlm. The data augmentation technique (pattern shift) is used in the program to enlarge the original data set (ndata=20000) to the size of the training data set (ntrain=1000000). The M3D parameter a0 is normalized for each diffraction order, and the list of normalization factors is written in factor0.csv. The trained model (model.ckpt) and the normalization factors (factor0.csv) are used in the following CNN prediction.
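To illustrate the pattern-shift idea: a cyclic shift of the mask corresponds, by the Fourier shift theorem, to multiplying each diffraction amplitude by a linear phase factor, so one mask/amplitude pair can generate many training samples. The sketch below is only a schematic of this geometric step; the array shapes, the phase sign convention, and how the actual training code in re0.py applies the shift are assumptions.

```python
# Sketch: pattern-shift data augmentation.
# A cyclic shift of the mask by (dx, dy) pixels multiplies the diffraction
# amplitude of order (l, m) by a linear phase factor (Fourier shift theorem).
# Array shapes and the phase sign convention are assumptions.
import numpy as np

def shift_sample(mask, a0, lm, dx, dy, nx, ny):
    """Return a shifted copy of one training sample.

    mask : (ny, nx) mask pattern
    a0   : (norders,) complex M3D parameters, one per diffraction order
    lm   : (norders, 2) integer diffraction orders (l, m)
    """
    shifted_mask = np.roll(mask, shift=(dy, dx), axis=(0, 1))
    phase = np.exp(-2j * np.pi * (lm[:, 0] * dx / nx + lm[:, 1] * dy / ny))
    return shifted_mask, a0 * phase

# Example: enlarge one sample into several shifted samples.
rng = np.random.default_rng(0)
nx = ny = 128
mask = rng.integers(0, 2, size=(ny, nx))
lm = np.array([[0, 0], [1, 0], [0, 1]])
a0 = rng.standard_normal(3) + 1j * rng.standard_normal(3)

augmented = [shift_sample(mask, a0, lm, dx, dy, nx, ny)
             for dx, dy in [(0, 0), (8, 0), (0, 8), (8, 8)]]
```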
The program for the CNN prediction is cnnpredict/predict/fullmodel.py. The input mask pattern to this program (maskinput.npy) is generated by the program compmask.py. After the CNN prediction, the output M3D parameters need to be divided by the normalization factors; this is done by the program cnnpredict/nnpredict.cpp. The output is nnpredict.csv, which is used in the following image intensity calculation.
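The post-processing done by cnnpredict/nnpredict.cpp is a per-order rescaling. A minimal Python sketch of the same operation is shown below; the file name prediction_raw.npy and the layouts of the prediction array and factor0.csv are assumptions.

```python
# Sketch: undo the per-diffraction-order normalization after CNN prediction.
# Assumptions: the raw prediction is a (nsamples, norders) array stored in the
# hypothetical file prediction_raw.npy, and factor0.csv holds one normalization
# factor per diffraction order.
import numpy as np

pred = np.load("prediction_raw.npy")
factor0 = np.loadtxt("factor0.csv", delimiter=",")

a0 = pred / factor0                             # divide by the normalization factors

np.savetxt("nnpredict.csv", a0, delimiter=",")  # input to the image intensity calculation
```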
There are two programs for the image intensity calculation: cnnabbe/intensity.cpp uses Abbe's method, and cnnsocs/intsocs.cpp uses our STCC method. The input files to both programs are mask.csv and nnpredict.csv. The outputs of cnnabbe/intensity.cpp are ftint.csv and nnabbe.csv. ftint.csv is calculated by the thin mask model, which uses the Fourier transform (FT) of the mask pattern. nnabbe.csv is calculated by Abbe's method using the M3D parameters. The output of cnnsocs/intsocs.cpp is nnsocs.csv. The difference between nnabbe.csv and nnsocs.csv is very small.
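As context for ftint.csv, the thin mask image is obtained by Fourier transforming the mask, filtering the spectrum with the pupil, and transforming back. The sketch below shows this for coherent illumination only, with arbitrary illustration values for NA, wavelength, and pixel size; it is not the code in cnnabbe/intensity.cpp, which additionally sums over source points (Abbe's method).

```python
# Sketch: thin mask aerial image (the idea behind ftint.csv), coherent case.
# The mask spectrum is filtered by a circular pupil of radius NA/lambda and
# transformed back; the intensity is the squared modulus of the result.
# NA, wavelength, and pixel size below are arbitrary illustration values.
import numpy as np

wavelength = 13.5      # nm
NA = 0.33 / 4          # mask-side NA (wafer NA / 4x demagnification, assumed)
pixel = 1.0            # nm per mask pixel (assumed)

mask = np.loadtxt("mask.csv", delimiter=",")
ny, nx = mask.shape

# Spatial frequencies of the mask spectrum.
fx = np.fft.fftfreq(nx, d=pixel)
fy = np.fft.fftfreq(ny, d=pixel)
FX, FY = np.meshgrid(fx, fy)

pupil = (np.sqrt(FX**2 + FY**2) <= NA / wavelength).astype(float)

spectrum = np.fft.fft2(mask)
field = np.fft.ifft2(spectrum * pupil)
intensity = np.abs(field)**2

np.savetxt("ftint_sketch.csv", intensity, delimiter=",")
```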
The figures below show the difference between emint.csv and ftint.csv (EM-FT) and the difference between emint.csv and nnabbe.csv (EM-CNN). The image intensity calculated using the CNN model is close to the result of the electromagnetic simulation.
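A quick way to reproduce such a comparison numerically, assuming both CSV files are 2D grids on the same sampling, is:

```python
# Sketch: compare the electromagnetic and CNN image intensities.
# Assumption: emint.csv and nnabbe.csv are 2D grids on the same sampling.
import numpy as np

em = np.loadtxt("emint.csv", delimiter=",")
cnn = np.loadtxt("nnabbe.csv", delimiter=",")

diff = em - cnn
print("max |EM-CNN| =", np.abs(diff).max())
print("rms  EM-CNN  =", np.sqrt(np.mean(diff**2)))
```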
- H. Tanabe, A. Jinguji and A. Takahashi, "Weakly guiding approximation of a three-dimensional waveguide model for extreme ultraviolet lithography simulation," JOSA A 41(2024)1491. https://doi.org/10.1364/JOSAA.516610
- H. Tanabe, S. Sato, and A. Takahashi, “Fast EUV lithography simulation using convolutional neural network,” JM3 20(2021)041202. https://doi.org/10.1117/1.JMM.20.4.041202
- H. Tanabe and A. Takahashi, “Data augmentation in extreme ultraviolet lithography simulation using convolutional neural network,” JM3 21(2022)041602. https://doi.org/10.1117/1.JMM.21.4.041602
- H. Tanabe, A. Jinguji, and A. Takahashi, “Evaluation of convolutional neural network for fast extreme ultraviolet lithography simulation using 3nm node mask patterns,” JM3 22(2023)024201. https://doi.org/10.1117/1.JMM.22.2.024201
- H. Tanabe, A. Jinguji and A. Takahashi, “Accelerating extreme ultraviolet lithography simulation with weakly guiding approximation and source position dependent transmission cross coefficient formula,” JM3 23(2024)014201. https://doi.org/10.1117/1.JMM.23.1.014201