Compilation and Invocation of ChronusQ
This wiki page contains information pertinent to the compilation and invocation of ChronusQ.
The following dependencies must be installed prior to compiling ChronusQ. All dependencies must be compiled with a C++11-enabled compiler.
- C++11 compiler (see notes below on tested compilers)
- C compiler (for LibXC)
- Fortran compiler (for LibXC)
- CMake build system (version 3.11+)
We refer the user to the linked documentation for the installation details of the packages.
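As a quick sanity check before configuring, the toolchain versions can be verified from the shell (the compiler names here assume the GNU toolchain; substitute your own):

gcc --version && g++ --version && gfortran --version && cmake --version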
The following compilers are regularly tested in the compilation of ChronusQ.
- GCC 6.2+
- Intel 17.0+
Compilation with Clang and PGI is known to cause problems and is not currently supported. If support is desired, open an issue on the GitHub page.
For best performance on Intel CPUs, we highly recommend compiling ChronusQ with the Intel compilers and linking to MKL for threaded linear algebra. Note that if the Intel compilers are specified, MKL will automatically be added to the compiler invocation.
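For example, a minimal Intel-toolchain configure might look like the following sketch (it assumes the classic icc/icpc/ifort compiler drivers are on your PATH, with MKL picked up automatically as noted above):

mkdir build && cd build
cmake -DCMAKE_CXX_COMPILER=icpc -DCMAKE_C_COMPILER=icc -DCMAKE_Fortran_COMPILER=ifort ..
make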
This wiki provides basic instructions for the compilation of ChronusQ. The actual build instructions must be tailored to your individual system needs. Please open a GitHub issue if irreconcilable issues with the compilation are encountered.
As a part of the ChronusQ build system, a number of open source dependencies are compiled during the ChronusQ compilation itself. This is done mainly for two reasons:
- To ease the dependency burden of the user.
- To ensure proper compilation and installation of non-trivial dependencies as they are critical to the performance and usage of ChronusQ.
The following dependencies will be compiled upon compilation of ChronusQ:
- Libint2: GTO integral library
- LibXC: exchange-correlation kernel evaluation
- GTest: unit testing framework
- (Optionally, with GCC) OpenBLAS for threaded linear algebra (only compiled if MKL is not found)
- (Optionally, if MPI is enabled)
If compiling ChronusQ on a system with many cores, it is possible to compile Libint2 outside of the ChronusQ compilation in parallel (highly recommended). To do so, run the Libint2 build from the top directory with N set to the number of cores to use. The Libint2 compilation is quite time consuming, so parallel compilation saves considerable time.
In the case where the Libint2 version has been updated between versions of ChronusQ, we provide a utility script, run from the top directory, that cleans the Libint2 installation.
- Edward F. Valeev, Libint: machine-generated library for efficient evaluation of molecular integrals over Gaussians, version 2.1.0, https://github.com/evaleev/libint/, 2015.
- Miguel A. L. Marques, Micael J. T. Oliveira, and Tobias Burnus, Libxc: a library of exchange and correlation functionals for density functional theory, Comput. Phys. Commun. 183, 2272 (2012), arXiv:1203.1739.
As of ChronusQ v0.2.0, MPI is supported throughout the code. See the CMake build section for information on how to enable these bindings. As of v0.2.0, the following MPI libraries have been tested:
- MPICH 3.2.1
- CRAY-MPICH 7.7.0
ChronusQ does not support OpenMPI.
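To enable the bindings at configure time, pass the MPI variables documented in the CMake variable table below; for example (the mpicxx/mpicc/mpifort wrapper names are the usual MPICH defaults and are an assumption for your installation):

cmake -DCQ_ENABLE_MPI=ON -DMPI_CXX_COMPILER=mpicxx -DMPI_C_COMPILER=mpicc -DMPI_Fortran_COMPILER=mpifort ..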
The compilation and Makefile generation of ChronusQ is handled through the CMake build system (version 3.11+). It is recommended that ChronusQ be compiled out-of-source, i.e. in a separate build directory. On most systems with properly installed system dependencies (see above), a simple CMake invocation will suffice:
mkdir build && cd build && cmake .. && make
If compilation is successful, a link to the ChronusQ executable, chronusq, will be placed in the build directory. System installation of ChronusQ is not yet available; if this is a feature that you would like, please open a GitHub issue.
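As a minimal sketch, the same build can be driven in parallel (assuming GNU make; replace N with the number of cores to devote to compilation):

mkdir build && cd build && cmake .. && make -j N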
Although a simple invocation of CMake will often suffice, there are a number of influential CMake variables that impact the dependencies and performance of ChronusQ. A non-exhaustive list of these variables follows:
| Variable | Description | Example |
|----------|-------------|---------|
| CMAKE_CXX_COMPILER | Specification of the C++ compiler | -DCMAKE_CXX_COMPILER=icpc |
| CMAKE_C_COMPILER | Specification of the C compiler | -DCMAKE_C_COMPILER=icc |
| CMAKE_Fortran_COMPILER | Specification of the Fortran compiler | -DCMAKE_Fortran_COMPILER=gfortran |
| CQ_ENABLE_MPI | Enable MPI bindings | -DCQ_ENABLE_MPI=ON |
| MPI_CXX_COMPILER | Specification of the MPI C++ compiler | -DMPI_CXX_COMPILER=mpicxx |
| MPI_C_COMPILER | Specification of the MPI C compiler | -DMPI_C_COMPILER=mpicc |
| MPI_Fortran_COMPILER | Specification of the MPI Fortran compiler | -DMPI_Fortran_COMPILER=mpifort |
| CMAKE_CXX_FLAGS | Compiler flags (optimization, etc.) for the specified C++ compiler | -DCMAKE_CXX_FLAGS='-O3 -xHost' (Intel compilers) |
| EIGEN3_ROOT | Location of a non-standard Eigen3 installation | -DEIGEN3_ROOT=$HOME/local/include/eigen3 |
| HDF5_ROOT | Location of a non-standard HDF5 installation root (both include and lib) | -DHDF5_ROOT=/path/to/hdf5 |
| OPENBLAS_TARGET | Explicit specification of the CPU target for OpenBLAS | -DOPENBLAS_TARGET=HASWELL (Haswell / AVX2 + FMA enabled CPUs) |
It is highly recommended that ChronusQ be compiled with optimization flags. ChronusQ is regularly tested with both -O2 and -O3 on both the GNU and Intel compilers.
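Combining several of the variables above, a hypothetical optimized configure with the GNU toolchain on a Haswell CPU might read:

cmake -DCMAKE_CXX_COMPILER=g++ -DCMAKE_CXX_FLAGS='-O3' -DOPENBLAS_TARGET=HASWELL ..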
OpenBLAS fails to detect CPU target
- See #9 for details (partially addressed in a6a99971274).
- Solution: Explicitly set OPENBLAS_TARGET in the CMake invocation. To get things working, it will often suffice to set OPENBLAS_TARGET=GENERIC, but this will be suboptimal on many (if not all) modern CPUs. See the OpenBLAS documentation for details on how to properly set this variable for your microarchitecture; an example is sketched below.
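As a fallback sketch that should configure anywhere, at the cost of unoptimized BLAS kernels:

cmake -DOPENBLAS_TARGET=GENERIC ..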
chronusq link fails on HDF5 for __cxx11::basic_string
- See #10 for details.
- Solution: There are several solutions to this problem.
If it is possible to recompile HDF5 locally with -std=c++11 in the compiler flags, this will solve the issue. The problem stems from the fact that a standard compilation of HDF5 with GCC < 6.1 will not enable C++11 by default (including package manager binaries). When compiling from scratch, the following configure line will enable the necessary features of HDF5 to be compatible with ChronusQ (HDF_INSTALL_PREFIX is to be replaced with a user-specified installation prefix):
CXXFLAGS='-std=c++11' ./configure --prefix=$HDF_INSTALL_PREFIX --enable-fortran --enable-cxx --enable-hl --with-default-plugindir=$HDF_INSTALL_PREFIX/lib/plugin
Once installed, ChronusQ should be configured with HDF5_ROOT pointing at the new installation, e.g. by adding -DHDF5_ROOT=$HDF_INSTALL_PREFIX to the CMake invocation.
If reinstalling HDF5 is not practical (e.g. on a managed cluster), you may recompile all of ChronusQ with _GLIBCXX_USE_CXX11_ABI=0. This can be accomplished by adding -DCMAKE_CXX_FLAGS='-D_GLIBCXX_USE_CXX11_ABI=0' to your CMake invocation.
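In full, such a configure would look something like the following sketch (combine with any other CMake variables you need):

cmake -DCMAKE_CXX_FLAGS='-D_GLIBCXX_USE_CXX11_ABI=0' ..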
Compiling on Mac
You will need a version of libomp in addition to the Clang compiler on Mac. Depending on the method you used to install these, you may need to add the library directories to LIBRARY_PATH for successful linking (e.g., if you get an error such as ld: library not found for -lomp, you need to add the path to libomp.a to your LIBRARY_PATH, as sketched below).
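A minimal sketch, assuming libomp was installed via MacPorts and its static library lives under /opt/local/lib/libomp (this path is an assumption; point it at wherever libomp.a actually resides):

export LIBRARY_PATH=/opt/local/lib/libomp:$LIBRARY_PATH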
These instructions have been validated using HDF5 1.10.6, GCC 9.3.0 (for gfortran; the MacPorts package is called gcc9), Eigen3 3.3.7, and libomp 9.0.1, all installed from the MacPorts package management system.
Mac CMake commands
The following provides an example of the CMake commands necessary to build on Mac using the packages noted above:
cmake \
  -DCMAKE_C_COMPILER=clang \
  -DCMAKE_CXX_COMPILER=clang++ \
  -DCMAKE_Fortran_COMPILER=gfortran-mp-9 \
  -DEIGEN3_INCLUDE_DIR=/opt/local/include/eigen3 \
  -DCMAKE_BUILD_TYPE=Release \
  -DCMAKE_CXX_FLAGS="-O3" \
  -DOPENBLAS_TARGET=SANDYBRIDGE \
  -DCQ_EXTERNAL_OPENMP=On \
  -DOpenMP_CXX_FLAGS="-I/opt/local/include/libomp -Xpreprocessor -fopenmp" \
  ..
Note that OPENBLAS_TARGET must be SANDYBRIDGE even if your processor supports a Haswell or greater instruction set; we do not currently know why. The /opt/local paths are the include directories for the MacPorts-installed software.
The invocation of ChronusQ is simple once compilation is completed:

/path/to/build/dir/chronusq x.inp

where x.inp is the ChronusQ input file. This will generate an x.out file and an x.bin file, which contain the log and binary data for the ChronusQ calculation, respectively. The .inp extension is a requirement. Please see the documentation on the ChronusQ input file structure for further details.
If MPI is enabled, you must invoke ChronusQ with the appropriate MPI launcher. For MPICH, the following will run x.inp on 2 MPI ranks:
mpirun -n 2 /path/to/build/dir/chronusq x.inp