Eigen-solver backends
=====================

Modal analysis (:doc:`modal`) and cyclic-symmetry modal analysis (:doc:`cyclic`) both dispatch the eigensolution through the **eigen-solver registry** in :mod:`femorph_solver.solvers.eigen`. Three backends are exposed; the auto-chain picks one based on problem size and which optional dependencies are installed.

Auto-chain dispatch
-------------------

* **ARPACK** (default) — shift-invert Lanczos via :func:`scipy.sparse.linalg.eigsh`. Universally available (ships with SciPy). Optimal for the lowest few modes of a large sparse SPD problem.
* **PRIMME** — block Davidson with adaptive restart (Stathopoulos & McCombs 2010). Selected automatically for large problems (rough heuristic: :math:`N > 10^{6}` DOFs) or when the caller requests many modes (:math:`n_\mathrm{modes} > 100`). Requires the optional ``primme`` extra (``pip install "femorph-solver[primme]"``).
* **LOBPCG** — locally optimal block preconditioned conjugate gradient (Knyazev 2001). Factor-free; selected via ``eigen_solver="lobpcg"``. Useful in memory-constrained regimes where the :math:`(\mathbf{K} - \sigma \mathbf{M})` factor would not fit; supports a built-in preconditioner (``"factor"`` / ``"jacobi"`` / ``"none"``) tuned for plate/shell stiffness conditioning.

When ``eigen_solver="auto"`` the registry picks a backend using the heuristics above; pass an explicit identifier to override.

Backend-by-backend
------------------

ARPACK
~~~~~~

The shift-invert path is what femorph-solver runs by default:

.. math::

   \mathbf{S}_{\sigma} = (\mathbf{K} - \sigma\, \mathbf{M})^{-1}\, \mathbf{M},

with :math:`\sigma = 0` by default. See :doc:`../theory/eigenproblem` for the Lanczos derivation and why shift-invert beats power iteration.

* **Strengths.** Universal availability; optimal for the lowest few modes of a sparse SPD GEVP; the reverse-communication API lets the caller swap in any linear backend for the inner factor (Pardiso / CHOLMOD / MUMPS / SuperLU).
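The shift-invert operator above maps directly onto :func:`scipy.sparse.linalg.eigsh`. A minimal, self-contained sketch on a toy SPD pencil — the matrices here are illustrative stand-ins, not femorph-solver objects:

.. code-block:: python

   import numpy as np
   import scipy.sparse as sp
   from scipy.sparse.linalg import eigsh

   # Toy pencil standing in for (K, M): a 1-D Laplacian "stiffness"
   # and a lumped diagonal "mass".
   n = 200
   K = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
   M = sp.diags(np.full(n, 1.0 / n), format="csc")

   # Shift-invert Lanczos about sigma = 0: eigsh factors (K - sigma*M)
   # internally and iterates with its inverse applied to M, returning
   # the k eigenvalues nearest the shift.
   vals, vecs = eigsh(K, k=6, M=M, sigma=0.0, which="LM")

   # ARPACK's generalized mode returns mass-orthonormal Ritz vectors.
   gram = vecs.T @ (M @ vecs)
   print(np.allclose(gram, np.eye(6), atol=1e-8))  # → True

Swapping the inner factor (e.g. for Pardiso or CHOLMOD) is a matter of passing a custom ``OPinv`` operator to ``eigsh`` instead of letting SciPy build its own SuperLU factorization.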
* **Weaknesses.** Many-mode runs (>100) inflate the Krylov subspace, and the work-space memory grows quadratically with subspace size.

PRIMME
~~~~~~

Block Davidson with built-in preconditioner support and restart-aware convergence. The preferred choice when the problem is large enough that ARPACK's Krylov subspace dominates memory, or when many modes are requested.

* **Strengths.** Block iteration parallelises matrix-vector products; preconditioner support out of the box; faster convergence on tightly clustered eigenvalues.
* **Weaknesses.** Optional dependency (the ``primme`` Python package); prebuilt wheels are not available for every platform.

LOBPCG
~~~~~~

Factor-free iteration on the original GEVP — no shift-invert solve at every step, just a preconditioned matrix-vector product.

* **Strengths.** Memory constant in :math:`n_\mathrm{modes}`; needs no sparse-direct factor at all. Useful when the factor would not fit (a very large 3-D mesh on a memory-constrained host).
* **Weaknesses.** Convergence is preconditioner-sensitive; the SciPy-shipped LOBPCG (used here) is slower than ARPACK on the typical 10–50 mode runs that dominate verification workloads.

Mass-orthonormalisation
-----------------------

All three backends return mass-orthonormalised eigenvectors — :math:`\boldsymbol{\phi}_{i}^{\!\top}\, \mathbf{M}\, \boldsymbol{\phi}_{j} = \delta_{ij}` — to machine precision on converged modes. The :class:`ModalResult` post-processor preserves that property regardless of which backend ran the solve.

Inspecting the registry
-----------------------

.. code-block:: python

   from femorph_solver.solvers.eigen import list_eigen_solvers

   print(list_eigen_solvers())
   # → {"arpack": True, "primme": False, "lobpcg": True}

Implementation: :mod:`femorph_solver.solvers.eigen` (registry + auto-chain) plus the per-backend wrappers :mod:`femorph_solver.solvers.eigen._arpack`, :mod:`femorph_solver.solvers.eigen._primme`, and :mod:`femorph_solver.solvers.eigen._lobpcg`.

References
----------

* Lehoucq, R. B., Sorensen, D. C.,
  and Yang, C. (1998) *ARPACK Users' Guide*, SIAM (Software, Environments, and Tools 6).
* Stathopoulos, A. and McCombs, J. R. (2010) "PRIMME: PReconditioned Iterative MultiMethod Eigensolver — Methods and Software Description," *ACM TOMS* 37 (2), 1–30.
* Knyazev, A. V. (2001) "Toward the optimal preconditioned eigensolver: locally optimal block preconditioned conjugate gradient method," *SIAM J. Sci. Comput.* 23 (2), 517–541.
* Parlett, B. N. (1998) *The Symmetric Eigenvalue Problem*, SIAM (foundational treatment relevant to all three backends).
* Saad, Y. (2011) *Numerical Methods for Large Eigenvalue Problems*, 2nd ed., SIAM (Lanczos / Davidson comparison).