0. Parameter selection tips for speeding up calculations
Q: What can I do about convergence that is too slow? I have been running this for several weeks; the job keeps exceeding the wall-time limit and getting killed by the system, and I just keep copying CONTCAR into POSCAR and restarting. I have done this several times already, and time is getting tight now. What should I do? Thanks!
Q1: Reduce the size of the system or the calculation precision. If it does not converge within about a week, test your ENCUT and k-point settings to see whether they are appropriate.
Q2: You can set IALGO=48 (i.e. ALGO = VeryFast, the RMM-DIIS algorithm), which speeds up the electronic minimization.
Q3: First make sure the k-point mesh and the energy cutoff (ENCUT) are reasonable.
Then relax in several passes: start with loose convergence criteria and tighten them step by step.
Q4: It is probably the ionic relaxation that is not converging. You can loosen EDIFFG (i.e. make its absolute value larger), for example as shown below.
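A minimal illustration of loosening the ionic criterion (the values are placeholders, not from the original post; pick them to suit your system):
EDIFF  = 1E-5     ! electronic convergence criterion in eV
EDIFFG = -0.05    ! negative value: stop the relaxation once all forces are below 0.05 eV/Angstrom (looser than e.g. -0.02)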
Q5: I recently ran a 192-atom system; the structure relaxation had been going for 20 days with no result. After changing a few settings it finished in about 5 days:
Reduce ENCUT
Loosen the EDIFF and EDIFFG criteria
IALGO=48
Run in parallel if you can. I use a cluster with the following settings:
LPLANE=.TRUE.
NPAR=number of nodes used in parallel
LSCALU=.FALSE.
NSIM=4
It works quite well; at least it is much faster than what I had before.
Q5:
1. Give as accurate and reasonable an initial structure as possible.
2. Start with a coarse k-point mesh; once the structure is relaxed, increase the k-points to the precision you need and continue the relaxation, reading the WAVECAR from the previous run (see the sketch below).
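A minimal sketch of this two-stage approach (the mesh sizes and tags are illustrative assumptions, not settings from the post). Stage 1, a coarse KPOINTS file for the pre-relaxation:
coarse pre-relaxation mesh
0
Gamma
 2 2 2
 0. 0. 0.
Stage 2, copy CONTCAR to POSCAR, switch to the denser mesh you actually need, and restart from the previous run:
final relaxation mesh
0
Gamma
 4 4 4
 0. 0. 0.
with, in the INCAR,
ISTART = 1    ! read orbitals from the existing WAVECAR
ICHARG = 1    ! read the charge density from CHGCAR if present
Note that when the k-point set changes VASP may only be able to reuse the WAVECAR partially, but the pre-relaxed geometry and charge density still give a large head start.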
Q6:
I suggest opening the VASP guide; its table of contents has a section specifically on how to speed up calculations.
Q7: 1) It should stop as soon as either convergence condition is satisfied. As a rule of thumb, set EDIFFG to one tenth of EDIFF.
2) You can also try changing IBRION or POTIM, since the choice of algorithm has a large effect on convergence. It is also possible that your structure is too far from equilibrium; see the example below.
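For example (the tag values here are typical, illustrative choices, not from the thread):
IBRION = 2     ! conjugate-gradient ionic relaxation; IBRION = 1 (quasi-Newton) often works better once the structure is close to the minimum
POTIM  = 0.2   ! scaling factor for the ionic steps; reduce it from the default 0.5 if the structure is far from equilibrium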
Q7: Notes from the VASP manual
ALGO-tag
ALGO = Normal | VeryFast | Fast | All | Damped
Default: ALGO = Normal
Only the first letter of the flag decides which algorithm is used.
The ALGO tag is a convenient way to specify the electronic minimisation algorithm in VASP.4.5 and later versions.
ALGO = Normal will select, IALGO=38 (blocked Davidson block iteration scheme), whereas
ALGO = Very Fast will select IALGO=48 (RMM-DIIS).
A fairly robust mixture of both algorithms is selected for ALGO = Fast. In this case IALGO=38 is used for the initial phase, and then VASP switches to IALGO=48.
For each ionic step, one IALGO=38 sweep is performed.
The all band simultaneous update of wavefunctions can be selected using ALGO = All (IALGO=58). A damped velocity friction algorithm is selected with ALGO = Damped (IALGO=53). See next sections for details.
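In the INCAR this choice is a single tag; for example (use exactly one of these lines per run):
ALGO = Normal     ! blocked Davidson (IALGO=38), the most robust choice
ALGO = Fast       ! Davidson for the initial steps, then RMM-DIIS
ALGO = VeryFast   ! RMM-DIIS only (IALGO=48), fastest but least robust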
IALGO, and LDIAG-tag
IALGO = integer selecting algorithm
LDIAG = .TRUE. or .FALSE. (perform sub space rotation)
Default
IALGO = 8 or 38 for VASP.4.5
LDIAG = .TRUE.
0-1 Explanations from the manual
ENCUT = [real]
Default: ENCUT = largest ENMAX from the POTCAR file
Cut-off energy for the plane-wave basis set in eV. All plane waves with a kinetic energy smaller than $E_{\rm cut}$ are included in the basis set, i.e. $\frac{\hbar^2}{2m}|\mathbf{G}+\mathbf{k}|^2 < E_{\rm cut}$.
The number of plane waves differs for each k-point, leading to a superior behaviour for e.g. energy-volume calculations. If the volume is increased, the total number of plane waves changes fairly smoothly. The criterion $|\mathbf{G}| < G_{\rm cut}$ (i.e. the same basis set for each k-point) would lead to a very rough energy-volume curve and, generally, slower energy convergence.
Starting from version VASP 3.2 the POTCAR files contain a default ENMAX (and ENMIN) line, therefore it is in principle not necessary to specify ENCUT in the INCAR file. For calculations with more than one species, the maximum cutoff (ENMAX or ENMIN) value is used for the calculation (see below, Sec. 6.11). For consistency reasons we still recommend to specify the cutoff manually in the INCAR file and keep it constant throughout a set of calculations.
In general, the energy-cut-off must be chosen according to the pseudopotential. All POTCAR files contain a default energy cutoff. Use this energy cut-off - but please also perform some bulk calculations with different energy cut-off to find out whether the recommended setting is correct. The cut-off which is specified in the POTCAR file will usually result in an error in the cohesive energy which is less than 10 meV.
You should be aware of the difference between absolute and relative convergence. The absolute convergence with respect to the energy cut-off ENCUT is the convergence speed of the total energy, whereas relative convergence is the convergence speed of energy differences between different phases (e.g. energy of fcc minus energy of bcc structure). Energy differences converge much faster than the total energy.
This is especially true if both situations are rather similar (e.g. hcp -- fcc). In this case the error due to the finite cut-off is 'transferable' from one situation to the other situation.
If two configurations differ strongly from each other (different distribution of s p and d electrons, different hybridization) absolute convergence gets more and more critical.
There are some rules of thumb, which you should check whenever making a calculation: For bulk materials the number of plane waves per atom should be between 50-100. A smaller basis set might result in serious errors. A larger basis set is rarely necessary, and is a hint for a badly optimized pseudopotential. If a large vacuum is included the number of plane waves will be larger (e.g. if half of your supercell is vacuum, the number of plane waves increases by a factor of 2).
More problematic than ENCUT is the choice of the FFT-mesh, because this error is not easily transferable from one situation to the next.
For an exact calculation the FFT-mesh must contain all wave vectors up to $2G_{\rm cut}$, where $E_{\rm cut}=\frac{\hbar^2}{2m}G_{\rm cut}^2$ is the used energy cut-off. Increasing the FFT-mesh beyond this value does not change the results, except for a possibly very small change due to the changed exchange-correlation potential. The reasons for this behaviour are explained in section 7.2.
Nevertheless it is not always possible and necessary to use such a large FFT-mesh. In general only 'high quality' calculations (as defined in the previous section) require a mesh which avoids all wrap-around errors. For most calculations -- and in particular for the supplied pseudopotentials with the default cutoff -- it is sufficient to set NGX, NGY and NGZ to 3/4 of the required values (set PREC=Medium or PREC=Low in the INCAR file before running the makeparam utility or VASP.4.X). The values which strictly avoid any wrap-around errors are also written to the OUTCAR file:
WARNING: wrap around error must be expected set NGX to 22
WARNING: wrap around error must be expected set NGY to 22
WARNING: wrap around error must be expected set NGZ to 22
Just search for the string 'wrap'. As a rule of thumb, the 3/4 setting will result in FFT-meshes which contain approximately 8x8x8=512 FFT-points per atom (assuming that there is no vacuum).
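As a hedged illustration (the explicit grid values below are placeholders; take the strict values from the 'wrap around' warnings in your own OUTCAR):
PREC = Medium    ! FFT mesh at roughly 3/4 of the strict values; PREC = High (Accurate in later versions) avoids wrap-around errors
! or set the mesh explicitly to the values suggested in OUTCAR:
NGX = 22
NGY = 22
NGZ = 22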
One hint that the FFT mesh is sufficient is given by the lines
soft charge-density along one line
        0       1       2       3       4       5       6       7       8
x  32.0000  -.7711  1.9743   .0141   .3397  -.0569  -.0162  -.0006   .0000
y  32.0000  6.7863   .0205   .2353   .1237  -.1729  -.0269  -.0006   .0000
z  32.0000  -.7057  -.7680  -.0557   .1610  -.2262  -.0042  -.0069   .0000
also written to the file OUTCAR (search for the string 'along'). These lines contain the charge density in reciprocal space at the grid points indexed 0 to 8 along each reciprocal lattice direction.
The last number will always be 0 (it is set explicitly by VASP), but as a rule of thumb the previous value divided by the total number of electrons should be very small. To be more precise: because of the wrap-around errors, certain parts of the charge density are wrapped to the other side of the grid, and the size of the ``wrapped'' charge density divided by the number of electrons should likewise be very small.
Another important hint that the wrap-around errors are too large is given by the forces. If there is a considerable drift in the forces, increase the FFT-mesh. Search for the string 'total drift' in the OUTCAR file; it is located beneath the line TOTAL-FORCE:
total drift:   -.00273  -.01048   .03856
The drift should definitely not exceed the magnitude of the forces; in general it should be smaller than the size of the forces you are interested in (usually 0.1 eV/Å).
For the representation of the augmentation charges a second, more accurate FFT-mesh is used. Generally the time spent for the calculation on this mesh is relatively small, therefore there is no need to worry too much about the size of the mesh, and relying on the defaults of the makeparam utility is in most cases safe. In some rare cases like Cu or Fe_pv with extremely 'hard' augmentation charges, it might be necessary to increase NGXF in comparison to the default setting. This can be done either by hand (setting NGXF in the param.inc file) or by giving a value for ENAUG in the INCAR file (see section 6.10).
As for the soft part of the charge density the total charge density (which is the sum of augmentation charges and soft part) is also written to the file OUTCAR:
total charge-density along one line
        0       1       2       3       4       5       6       7       8
x  32.0000  -.7711  1.9743   .0141   .3397  -.0569  -.0162  -.0006   .0000
y  32.0000  6.7863   .0205   .2353   .1237  -.1729  -.0269  -.0006   .0000
z  32.0000  -.7057  -.7680  -.0557   .1610  -.2262  -.0042  -.0069   .0000
The same criterion which holds for the soft part should hold for the total charge density. If the second mesh is too small, the forces might also be wrong (leading to a 'total drift' in the forces).
Mind: The second mesh is only used in conjunction with US-pseudopotentials. For normconserving pseudopotentials neither the charge density nor the local potentials are set on the fine mesh. In this case set NG(X,Y,Z)F to NGX,Y,Z or simply to 1. Both settings result in the same storage allocation.
Mind: If very hard non-linear/partial core corrections are included the convergence of the exchange-correlation potential with respect to the FFT grid might cause problems. All supplied pseudopotentials have been tested in this respect and are safe.
When to set ENCUT (and ENAUG) by hand
In most cases one can safely use the default values for ENCUT and ENAUG, which are read from the POTCAR file. But there are some cases where this can result in small, easily avoidable inaccuracies.
For instance, if you are interested in the energy difference between bulk phases with different compositions (i.e. Co - CoSi - Si). In this case the default ENCUT will be different for the calculations of pure Co and pure Si, but it is preferable to use the same cutoff for all calculations. In this case determine the maximal ENCUT and ENAUG from the POTCAR files and use this value for all calculations.
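Illustrative INCAR lines for such a set of calculations (the numbers are placeholders; take the largest ENMAX and EAUG actually found in the POTCAR files involved):
ENCUT = 400    ! eV; largest ENMAX among the species, kept identical for the Co, CoSi and Si runs
ENAUG = 600    ! eV; largest EAUG, likewise kept identical (controls the fine augmentation grid)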
Another example is the calculation of adsorption energies of molecules on surfaces. To minimize (for instance) non-transferable wrap-around errors one should calculate the energy of the isolated molecule, of the surface only, and of the adsorbate/surface complex in the same supercell, using the same cutoff. This usually requires fixing ENCUT and ENAUG by hand in the INCAR file. If one also wants to use real-space optimization (LREAL=On), it is recommended to use LREAL=On for all three calculations as well (the ROPT flag should also be similar for all calculations, see section 6.39).
Read and understand section 7.4 before reading this section.
The number of k-points necessary for a calculation depends critically on the required precision and on whether the system is metallic.
Metallic systems require an order of magnitude more k-points than semiconducting and insulating systems. The number of k-points also depends on the smearing method in use; not all methods converge with similar speed. In addition, the error is not transferable at all, i.e. the same k-point mesh leads to a completely different error for fcc, bcc and sc structures. Therefore absolute convergence with respect to the number of k-points is necessary.
The only exception is commensurable supercells. If it is possible to use the same supercell for two calculations, it is definitely a good idea to use the same k-point set for both calculations.
k-point mesh and smearing are closely connected. We repeat here the guidelines for ISMEAR already given in section 6.38:
For semiconductors or insulators always use the tetrahedron method (ISMEAR=-5); if the cell is too large for the tetrahedron method, use ISMEAR=0.
For relaxations in metals always use ISMEAR=1 and an appropriate SIGMA value (such that the entropy term is less than 1 meV per atom). Mind: Avoid using ISMEAR>0 for semiconductors and insulators, as it might result in problems.
For the DOS and very accurate total energy calculations (no relaxation in metals) use the tetrahedron method (ISMEAR=-5).
Once again, if possible we recommend the tetrahedron method with Blöchl corrections (ISMEAR=-5); this method is foolproof and does not require any empirical parameters like the other methods. Especially for bulk materials we were able to get highly accurate results using this method.
Even with this scheme the number of k-points remains relatively large. For insulators, 100 k-points per atom in the full Brillouin zone are generally sufficient to reduce the energy error to less than 10 meV.
Metals require approximately 1000 k-points per atom for the same accuracy. For problematic cases (transition metals with a steep DOS at the Fermi level) it might be necessary to increase the number of k-points up to 5000 per atom, which usually reduces the error to less than 1 meV per atom.
Mind: The number of k-points in the irreducible part of the Brillouin zone (IRBZ) might be much smaller. For fcc, bcc and sc an 11x11x11 mesh containing 1331 k-points is reduced to 56 k-points in the IRBZ. This is a relatively modest value compared with the values used in conjunction with LMTO packages using the linear tetrahedron method.
It is not possible to use the tetrahedron method in all cases, for instance if the number of k-points is less than 3, or if accurate forces are required. In this case use the method of Methfessel-Paxton, with N=1 for metals and N=0 for semiconductors. SIGMA should be as large as possible, but the difference between the free energy and the total energy (i.e. the term
'entropy T*S' in the OUTCAR file) must be small (i.e. 1-2 meV per atom). In this case the free energy and the energy one is really interested in are almost the same, and the forces are also consistent with this energy. Mind: A good check whether the entropy term causes any problems is to compare the entropy term for different situations. The entropy must be the same for all situations; one has a problem if, for example, the entropy per atom at the surface differs by several meV from that in the bulk.
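Expressed as INCAR tags, the smearing guidelines above amount to something like the following (the SIGMA value is only an illustrative assumption; check the entropy T*S term in your own OUTCAR):
! relaxing a metal:
ISMEAR = 1     ! Methfessel-Paxton smearing of order 1
SIGMA  = 0.2   ! smearing width in eV; reduce it until the entropy term is below about 1 meV per atom
! accurate static total energy or DOS (no relaxation), and insulators in general:
ISMEAR = -5    ! tetrahedron method with Bloechl corrections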
Comparing different k-point meshes:
It is necessary to be careful when comparing different k-point meshes. The number of k-points in the IRBZ does not always increase continuously with the mesh size. This is for instance the case for fcc, where even Monkhorst-Pack grids not centered at the Γ point can result in a larger number of k-points in the IRBZ than odd divisions. In fact the difference can be traced back to whether or not the Γ point is included in the resulting k-point mesh. Meshes centered at Γ (option 'G' in the KPOINTS file, or odd divisions, see Sec. 5.5.3) behave differently from meshes without Γ (option 'M' in the KPOINTS file, and even divisions). The precision of the mesh is usually directly proportional to the number of k-points in the IRBZ, but not to the number of divisions. Some ambiguities can be avoided if even meshes (not centered at Γ) are not compared with odd meshes (centered at Γ).
Some other considerations:
It is recommended to use even meshes for smaller numbers of divisions; beyond a certain grid density, odd meshes become more efficient. However we have already stressed that the number of divisions is often totally unrelated to the total number of k-points and to the precision of the grid. Therefore an even mesh might be more accurate than an odd grid with more divisions; for fcc, for instance, an even grid can be approximately as precise as the odd mesh with one more division per direction. Finally, for hexagonal cells the mesh should be shifted so that the Γ point is always included, i.e. a KPOINTS file
automatic mesh
0
Gamma
 8 8 6
 0. 0. 0.
is much more efficient than a KPOINTS file with ``Gamma'' replaced by ``Monkhorst'' (see also Sec. 5.5.3).