There is an inherent conflict between the manufacturing engineer and the simulation engineer, and it comes down to what each discipline prioritizes: precision versus accuracy. You may assume the manufacturing engineer has gone to great effort to ensure precision, and far less effort to ensure accuracy. For example, it is important that a 1000C setting gives the same physical result from lot to lot; it is less important that the temperature is exactly 1000C. Highly precise processes with accuracy errors exceeding 50% have been seen in high-yielding processes (it wasn't temperature in that case). Precision and accuracy are used here in the normal manner: precision is the repeatability of the result, accuracy is its closeness to the true or intended value.
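The distinction can be made concrete with a toy calculation. The numbers below are invented for illustration: the tool delivers nearly the same temperature every lot (high precision) while sitting well off the setpoint (poor accuracy).

```python
import statistics

# Hypothetical lot-to-lot delivered temperatures for a furnace set to 1000 C.
# The delivered temperature is systematically offset (an accuracy error),
# but it is offset by nearly the same amount every lot (high precision).
setpoint_c = 1000.0
delivered_c = [967.2, 967.4, 967.1, 967.3, 967.2, 967.4]

accuracy_error = statistics.mean(delivered_c) - setpoint_c  # systematic bias
precision = statistics.stdev(delivered_c)                   # lot-to-lot spread

print(f"accuracy error: {accuracy_error:+.1f} C")   # large, but stable
print(f"precision (1-sigma): {precision:.2f} C")    # small: highly repeatable
```

For the manufacturing engineer, the second number is what matters lot to lot; for correlation with simulation, the first number matters just as much.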
Fundamentally, the manufacturing process does not require high accuracy in every respect; accuracy is helpful but not a must. Precision, however, has a direct impact on yield and performance. In manufacturing, precision is a major component of repeatability (other components include defects, etc.). For the manufacturing engineer, precision comes first.
For the simulation engineer, precision is a given: run-to-run simulations give identical results unless stochastic models are explicitly turned on. For simulations, accuracy can be divided into two parts, input and output. Input accuracy is expected, and often assumed, to be 100%: ask for a specific dose and you get exactly that dose; ask for a specific temperature and that is what you get; and if you don't, it's a software bug. Output accuracy of the simulator, unfortunately, is not 100%.
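That run-to-run determinism can be sketched with a toy model (the function and its coefficients are invented for illustration, not any real TCAD API):

```python
import random

def simulate_junction_depth(dose, temp_c, stochastic=False, seed=None):
    """Toy process model: deterministic by default, noisy only on request."""
    base = 1e-7 * (dose / 1e13) ** 0.5 * (temp_c / 1000.0)  # made-up deterministic model
    if not stochastic:
        return base
    # Optional stochastic component, e.g. Monte Carlo implant noise.
    rng = random.Random(seed)
    return base * (1.0 + rng.gauss(0.0, 0.02))

# Run to run, the deterministic path is bit-for-bit identical:
a = simulate_junction_depth(1e13, 1000.0)
b = simulate_junction_depth(1e13, 1000.0)
print(a == b)  # True

# With the stochastic model on, repeated runs (different seeds) differ:
c = simulate_junction_depth(1e13, 1000.0, stochastic=True, seed=1)
d = simulate_junction_depth(1e13, 1000.0, stochastic=True, seed=2)
print(c == d)  # False
```

The physical tool behaves like the stochastic branch whether you ask for it or not; the simulator, by default, behaves like the deterministic one.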
This means the simulation engineer cannot concern themselves only with calibration of the software; they must also look into the accuracy, or calibration, of both the process inputs (recipes) and the measurements. While you are at it, you will need to worry about the precision of both the process inputs and the measurements too. A process input might be an implant dose of 1e13, as listed in the spec, but is 1e13 really what is delivered to the device? For a measurement, do you know the tools were calibrated? Was the test setup the same as in your simulation? Was the device actually the same?
There is a tendency to blame the software for mismatches between measurement and simulation. The software certainly isn't perfect and may be very wrong, but there is a good chance that something else in the chain is off too; perhaps many things are. To do the correlation, all aspects of that chain need to be understood. Failing to understand them ultimately limits the ability to make predictions with the software, and with it the software's usefulness.
The simulation engineer has varying amounts of control over the many factors that influence the accuracy of a simulation. If you are using commercial software, improving the software itself is difficult: you can file an enhancement request, but unless you are a huge customer, you will wait. You may not be able to get the manufacturing group to calibrate implant doses, temperatures, and so on, but if you can identify the delta, you can easily correct for it. Similarly, if you have correlation problems and, digging in, discover that the measured device and the simulated device differ substantially, e.g. they measured a long-channel device and you simulated a short-channel one, you can change your device or get short-channel data.
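Correcting for an identified delta can be as simple as folding a measured scale factor into the simulation inputs. A minimal sketch, assuming a hypothetical 8% dose shortfall (the number and the helper name are illustrative, not data from the text):

```python
# Hypothetical calibration result: the implanter is believed to deliver
# ~8% less dose than the recipe requests (e.g. inferred from SIMS data).
# Rather than wait for the fab to recalibrate, fold the measured delta
# into the simulation input deck.
measured_delta = 0.92  # delivered / requested, an assumed value

def recipe_to_simulated_dose(recipe_dose: float) -> float:
    """Map the dose written in the process spec to the dose the simulator should use."""
    return recipe_dose * measured_delta

spec_dose = 1e13  # dose as listed in the spec
sim_dose = recipe_to_simulated_dose(spec_dose)
print(f"simulate with dose = {sim_dose:.2e}")
```

The spec keeps saying 1e13, the tool keeps delivering what it delivers, and the simulation quietly uses the number that actually reaches the device.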
The simulation engineer must understand all of these factors, not just chalk mismatches up to the simulator being wrong. Actual measurement may be king, but you need full front-to-back transparency into how the data was obtained.