Benchmarks outdated!

In the early phases of development we put a lot of emphasis on comparing QuantumOptics.jl to other existing frameworks, demonstrating its advantages in terms of speed and resource usage. This extensive list of benchmarks stems from that time in 2018.

We are currently in the process of updating our benchmarks and comparisons and you will find more recent data here soon.

A cavity mode, modeled as a Fock space with a certain cutoff, is driven on resonance with a classical laser. Since photon loss is included, the system is an open system and evolves according to a master equation $$\dot{\rho} = i \big[ \rho, H \big] + \kappa\big(a\rho a^\dagger - \frac{1}{2} a^\dagger a \rho - \frac{1}{2} \rho a^\dagger a \big)$$ with the Hamiltonian $$H = \Delta_c a^\dagger a + \eta (a + a^\dagger).$$

This benchmark measures the time it takes to create all necessary operators and states, perform a time evolution according to the master equation, and calculate the expectation value of the number operator at certain times. The Schrödinger time evolution necessarily neglects the photon loss.
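For orientation, a minimal sketch of how this system can be set up with QuantumOptics.jl is shown below. The cutoff, parameter values, time grid, and initial state are illustrative assumptions, not the values used in the benchmark.

```julia
# Driven, lossy cavity mode: sketch with assumed parameters (not the benchmark values).
using QuantumOptics

N_cutoff = 10                 # Fock-space cutoff (assumption)
Δc, η, κ = 0.0, 1.0, 1.0      # detuning, drive amplitude, decay rate (assumptions)

b = FockBasis(N_cutoff)
a = destroy(b)
at = create(b)
n = number(b)

H = Δc*at*a + η*(a + at)      # Hamiltonian from above
J = [sqrt(κ)*a]               # photon-loss jump operator

ψ0 = fockstate(b, 0)          # start in the vacuum (assumption)
tspan = range(0, 10; length=101)

tout, ρt = timeevolution.master(tspan, ψ0, H, J)
exp_n = [real(expect(n, ρ)) for ρ in ρt]
```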

An atom, modeled as a dipole, is resonantly coupled to a cavity mode, modeled as a Fock space with a certain cutoff. Photon loss and incoherent driving by a thermal bath are included and taken into account by the master equation $$\dot{\rho} = i \big[ \rho, H \big] + \kappa \left( 1 + \bar n \right) \, \mathcal{D} \left[ a \right] \rho + \kappa \bar n \, \mathcal{D} \left[ a^\dagger \right] \rho + \gamma \, \mathcal{D} \left[ \sigma^- \right] \rho$$ with the Hamiltonian $$H = \omega_c \, a^\dagger a + \frac{\omega_a}{2} \, \sigma^z + g \big( a^\dagger \sigma^- + a \sigma^+ \big).$$

Again, we create all necessary objects, perform a time evolution, and calculate the expectation value of the number operator of the cavity mode at particular times. For the Schrödinger equation, the incoherent dynamics are neglected.
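A possible setup in QuantumOptics.jl could look like the following sketch; all parameter values, the Fock cutoff, the time grid, and the initial state are assumptions for illustration only.

```julia
# Atom-cavity system with photon loss, thermal driving and atomic decay;
# parameters are illustrative assumptions, not the benchmark values.
using QuantumOptics

N_cutoff = 10
ωc, ωa, g = 1.0, 1.0, 0.5
κ, γ, nbar = 0.1, 0.05, 0.2

b_fock = FockBasis(N_cutoff)
b_spin = SpinBasis(1//2)

a  = destroy(b_fock) ⊗ identityoperator(b_spin)
at = dagger(a)
σm = identityoperator(b_fock) ⊗ sigmam(b_spin)
σp = dagger(σm)
σz = identityoperator(b_fock) ⊗ sigmaz(b_spin)

H = ωc*at*a + ωa/2*σz + g*(at*σm + a*σp)
J = [sqrt(κ*(1 + nbar))*a, sqrt(κ*nbar)*at, sqrt(γ)*σm]   # Lindblad jump operators

ψ0 = fockstate(b_fock, 0) ⊗ spindown(b_spin)              # vacuum, atom in ground state
tspan = range(0, 10; length=101)

tout, ρt = timeevolution.master(tspan, ψ0, H, J)
exp_n = [real(expect(at*a, ρ)) for ρ in ρt]
```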

A Gaussian wave packet propagates in a harmonic potential while simultaneously being subject to friction, which is accounted for by the master equation $$\dot{\rho} = i \big[ \rho, H \big] + \nu \mathcal{D} \left[ \sqrt{\frac{\omega m}{2}} \, x + \frac{i}{\sqrt{2 \omega m}} \, p \right] \rho$$ and the Hamiltonian $$H = \frac{p^2}{2m} + \frac{m \omega^2}{2} \, x^2.$$

We measure the time it takes to create all necessary objects, perform a time evolution and calculate the expectation value of the position operator at certain times.
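A sketch of this system in QuantumOptics.jl is shown below; the position grid, the physical parameters, and the initial wave packet are placeholder assumptions.

```julia
# Gaussian wave packet in a harmonic potential with friction;
# grid size and parameters are illustrative assumptions.
using QuantumOptics

m, ω, ν = 1.0, 1.0, 0.1                 # mass, trap frequency, friction rate
xmin, xmax, Npoints = -10.0, 10.0, 128  # position grid (assumption)

b = PositionBasis(xmin, xmax, Npoints)
x = position(b)
p = momentum(b)

H = p^2/(2m) + m*ω^2/2*x^2
J = [sqrt(ν)*(sqrt(ω*m/2)*x + im/sqrt(2*ω*m)*p)]   # friction jump operator

x0, p0, σ0 = 2.0, 0.0, 1.0              # initial displacement, momentum, width (assumptions)
ψ0 = gaussianstate(b, x0, p0, σ0)
tspan = range(0, 10; length=101)

tout, ρt = timeevolution.master(tspan, ψ0, H, J)
exp_x = [real(expect(x, ρ)) for ρ in ρt]
```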

Benchmarking is a tricky thing to do right. Many different variables can significantly influence the results, and the choice of examples as well as the presentation of the results may be biased towards one framework or another. We tried our best to be as fair as possible. To give everybody the chance to reproduce our results, the entire benchmarking code is open source and can be found in the QuantumOpticsFrameworks-Benchmarks repository on GitHub. If you find any mistakes or obtain different results, we would be grateful if you could file an issue there.

A few remarks on the benchmarking process:

  • All benchmarks are performed on a single dedicated CPU core. Both the single-process and multi-process cases are of interest, with slightly different implications. When working interactively on a single example, the complete available processing power should of course be used to obtain the answer as quickly as possible. However, for embarrassingly parallel problems, such as performing the same time evolution for many different parameters, it is favorable to avoid any unnecessary overhead stemming from premature parallelization.
  • Startup time is neglected. Julia's just-in-time compilation, which is the key to generating extremely performant code, comes at a price: the first time a function is called with a certain set of argument types, it has to be compiled, which adds a constant offset to every single benchmark. If a function is called often enough and/or runs long enough, this overhead does not matter. If it is called only once, however, the overhead can be considerable, which is why using Julia often feels less snappy than other languages. A minimal illustration of this effect is shown below.
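The compilation overhead is easy to see with Julia's built-in @time macro; the function used here is just a placeholder.

```julia
# First call includes JIT compilation; subsequent calls measure pure runtime.
f(x) = sum(sin, x)

data = rand(1000)
@time f(data)   # includes compilation of f for Vector{Float64}
@time f(data)   # compiled code is reused; much faster
```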