mqed.Dyadic_GF.main

Sommerfeld Green’s function simulation — multi-frequency, multi-Rx grid.

Execution Model

The frequency axis is embarrassingly parallel: each energy point gets an independent Greens_function_analytical instance (with its own ε(ω)) and its own loop over all Rx positions. Three execution backends are supported, controlled by parallel.backend in the Hydra config:

  • sequential — plain for-loop (default, zero dependencies).

  • joblib — multiprocessing via joblib (shared-memory, good for laptops/workstations).

  • mpi — distributed via mpi4py (good for HPC clusters).

For joblib, the worker function _compute_one_energy() is called via Parallel(delayed(...)). DataProvider is re-created inside each worker to avoid pickling issues with interpolation objects.
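A minimal sketch of that dispatch pattern, not the actual implementation (the real _run_joblib takes a sim_params bundle; the flat argument list here is an assumption made to match the _compute_one_energy signature documented below):

```python
from joblib import Parallel, delayed

def _run_joblib_sketch(energy_eV_array, target_lambdas_m, rx_values_m,
                       zD, zA, material_cfg, integ_cfg, n_jobs):
    # One independent task per energy point; each returns (idx, total, vacuum).
    return Parallel(n_jobs=n_jobs)(
        delayed(_compute_one_energy)(idx, E, lam, rx_values_m, zD, zA,
                                     material_cfg, integ_cfg)
        for idx, (E, lam) in enumerate(zip(energy_eV_array, target_lambdas_m))
    )
```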

For mpi, energies are scattered round-robin across ranks; each rank writes its chunk and rank 0 gathers and saves.
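A hedged sketch of the round-robin split (argument list and reassembly details are assumptions; the real _run_mpi rebuilds full arrays on rank 0 and returns (None, None) elsewhere):

```python
from mpi4py import MPI

def _run_mpi_sketch(energy_eV_array, target_lambdas_m, rx_values_m,
                    zD, zA, material_cfg, integ_cfg):
    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    # Round-robin: rank r handles energy indices r, r + size, r + 2*size, ...
    local = [
        _compute_one_energy(i, energy_eV_array[i], target_lambdas_m[i],
                            rx_values_m, zD, zA, material_cfg, integ_cfg)
        for i in range(rank, len(energy_eV_array), size)
    ]

    # Rank 0 collects every rank's chunk; other ranks hold nothing afterwards.
    gathered = comm.gather(local, root=0)
    if rank != 0:
        return None
    return [item for chunk in gathered for item in chunk]
```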

mqed.Dyadic_GF.main._build_linspace_segment(segment) → numpy.ndarray

Build one uniformly spaced segment from a dict-like config.

mqed.Dyadic_GF.main._build_piecewise_grid(segments) → numpy.ndarray

Build a piecewise grid while avoiding duplicate boundary points.

This lets Hydra configs use multiple spectral windows with different resolutions, for example a dense window around a plasmon resonance and sparse windows elsewhere.
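The intent can be illustrated with a short sketch (the min/max/points keys follow the formats listed under build_grid below; the real implementation may differ in detail):

```python
import numpy as np

def _build_piecewise_grid_sketch(segments):
    # Concatenate one linspace per segment, then drop repeated boundary points.
    grid = np.concatenate(
        [np.linspace(seg["min"], seg["max"], int(seg["points"])) for seg in segments]
    )
    keep = np.concatenate(([True], np.diff(grid) > 0))
    return grid[keep]

# Dense window around a plasmon resonance, sparse windows elsewhere:
energies_eV = _build_piecewise_grid_sketch([
    {"min": 1.0, "max": 2.0, "points": 11},
    {"min": 2.0, "max": 2.4, "points": 41},   # dense resonance window
    {"min": 2.4, "max": 4.0, "points": 17},
])
```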

mqed.Dyadic_GF.main._compute_one_energy(idx: int, energy_eV: float, lambda_m: float, rx_values_m: numpy.ndarray, zD: float, zA: float, material_cfg, integ_cfg) → tuple

Compute Green’s function for all Rx at a single energy.

This is the atomic unit of work for parallelization. Each call is completely independent — no shared state with other workers.

DataProvider is re-instantiated inside the worker because scipy’s interp1d objects are not always pickle-safe across processes.

Parameters:
  • idx – Energy index (used to place results back in the output array).

  • energy_eV – Energy value in eV (for logging only).

  • lambda_m – Corresponding wavelength in meters.

  • rx_values_m – 1-D array of Rx positions in meters.

  • zD – Source z-position (donor height) in meters.

  • zA – Observer z-position (acceptor height) in meters.

  • material_cfg – OmegaConf subtree for material (re-creates DataProvider inside worker).

  • integ_cfg – OmegaConf subtree for integration parameters.

Returns:

(idx, total, vacuum) – idx is the energy index; total and vacuum are (nR, 3, 3) complex arrays.

Return type:

tuple
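A sketch of how those tuples are placed back into full-size output arrays; the helper name and the (nE, nR, 3, 3) array names are hypothetical:

```python
import numpy as np

def _assemble_results_sketch(results, n_energies, n_rx):
    # results: iterable of (idx, total, vacuum) tuples from _compute_one_energy.
    G_total = np.empty((n_energies, n_rx, 3, 3), dtype=complex)
    G_vacuum = np.empty((n_energies, n_rx, 3, 3), dtype=complex)
    for idx, total, vacuum in results:
        G_total[idx] = total
        G_vacuum[idx] = vacuum
    return G_total, G_vacuum
```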

mqed.Dyadic_GF.main._deduplicate_sorted_grid(values: numpy.ndarray, *companions: numpy.ndarray) → tuple[numpy.ndarray, ...]

Remove repeated neighboring points from a sorted grid and companion arrays.
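A minimal sketch of the idea, assuming a sorted input grid (the real implementation may use a tolerance rather than exact equality):

```python
import numpy as np

def _deduplicate_sorted_grid_sketch(values, *companions):
    # Keep the first of each run of equal neighbours and apply the same
    # mask to every companion array so they stay aligned with the grid.
    keep = np.concatenate(([True], np.diff(values) > 0))
    return (values[keep],) + tuple(c[keep] for c in companions)

energies = np.array([1.0, 2.0, 2.0, 3.0])
lambdas = np.array([1240e-9, 620e-9, 620e-9, 413e-9])
energies, lambdas = _deduplicate_sorted_grid_sketch(energies, lambdas)
# energies -> [1.0, 2.0, 3.0]; lambdas keeps the matching entries
```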

mqed.Dyadic_GF.main._maybe_auto_launch_mpi(parallel_cfg)

Re-launch this script under mpiexec if not already running in MPI.

Same pattern as mqed.disorder.run_disorder._maybe_auto_launch_mpi. Returns True if we re-launched (caller should sys.exit).
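A heavily hedged sketch of that pattern; the environment-variable names, the n_ranks config key, and the subprocess re-launch are assumptions, not the documented implementation:

```python
import os
import subprocess
import sys

def _maybe_auto_launch_mpi_sketch(parallel_cfg):
    # Heuristic: common MPI launchers export one of these variables.
    in_mpi = any(k in os.environ for k in ("OMPI_COMM_WORLD_SIZE", "PMI_SIZE"))
    if parallel_cfg.get("backend") != "mpi" or in_mpi:
        return False
    n_ranks = int(parallel_cfg.get("n_ranks", 1))
    subprocess.run(["mpiexec", "-n", str(n_ranks), sys.executable, *sys.argv],
                   check=True)
    return True   # caller should sys.exit() once the child run finishes
```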

mqed.Dyadic_GF.main._run_joblib(energy_eV_array, target_lambdas_m, rx_values_m, sim_params, material_cfg, n_jobs)

Joblib backend — parallelize over energy axis.

Each energy is dispatched as an independent task to a pool of n_jobs processes. Communication is via return values (no shared memory needed).

mqed.Dyadic_GF.main._run_mpi(energy_eV_array, target_lambdas_m, rx_values_m, sim_params, material_cfg, parallel_cfg)

MPI backend — scatter energies round-robin across ranks.

Rank 0 gathers all partial results and returns the full arrays. Non-root ranks return (None, None).

mqed.Dyadic_GF.main._run_sequential(energy_eV_array, target_lambdas_m, rx_values_m, sim_params, material_cfg)

Sequential execution — simple for-loop, no parallelism.

mqed.Dyadic_GF.main.build_grid(config)

Build a 1-D numpy array from flexible Hydra config input. A usage sketch follows the list of accepted formats below.

Accepted formats:
  • Single value: 2.0 → [2.0]

  • List: [1.0, 2.0, 3.0] → as-is

  • Dict/linspace: {min: 1.0, max: 3.0, points: 5}

  • Piecewise: [{min: 0.0, max: 3.0, points: 21}, ...]
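A hedged usage sketch; the expected outputs assume build_grid mirrors the formats above and that plain Python values are accepted in place of OmegaConf nodes:

```python
from mqed.Dyadic_GF.main import build_grid

build_grid(2.0)                                      # -> array([2.])
build_grid([1.0, 2.0, 3.0])                          # -> array([1., 2., 3.])
build_grid({"min": 1.0, "max": 3.0, "points": 5})    # -> array([1. , 1.5, 2. , 2.5, 3. ])
build_grid([{"min": 0.0, "max": 3.0, "points": 21},  # piecewise: the shared
            {"min": 3.0, "max": 5.0, "points": 5}])  # boundary 3.0 appears once
```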