Verification#

femorph-solver’s correctness story rests on three independent pillars — each benchmark page in this section sits on at least one of them:

  1. Analytical reference — a closed-form solution derived from classical mechanics (Timoshenko, Cook, Hughes, Bathe, …). The unambiguous ground truth where one exists.

  2. Vendor-neutral benchmark corpus — published for the express purpose of cross-solver comparison:

    • NAFEMS Benchmark Tests for Linear Elastic Analysis (NAFEMS BENCHMARK-LA series) — the industry standard for linear-structural verification. Vendor-neutral, attribution-free reuse.

    • NAFEMS Background to Benchmarks — methodology and derivation companion.

    • ASME boiler-code benchmark suite — free with attribution.

  3. Cross-reference citations to proprietary verification manuals — short, factual citations only (one line: one problem ID plus one numeric comparison value per source). This follows the long-standing academic-publishing convention used in FEA validation papers:

    • Abaqus Verification Manual (Simulia) — referenced by problem ID (e.g., AVM 1.4.3). We never vendor AVM decks, text, or expected-result tables.

    • Abaqus Benchmarks Manual — same posture.

    • MAPDL Verification Manual (Ansys) — referenced by problem ID (e.g., VM-1). Same posture.

    • NX Nastran Verification Manual (Siemens) — same.
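To make pillar 1 concrete, here is a hedged sketch of a closed-form reference in code: the classic tip deflection of an end-loaded cantilever, δ = PL³/(3EI), from Timoshenko. All numeric inputs below are illustrative and are not taken from any benchmark page.

```python
# Illustrative closed-form reference (pillar 1): tip deflection of an
# end-loaded cantilever beam, delta = P * L**3 / (3 * E * I).
# The inputs below are made up for demonstration only.

def cantilever_tip_deflection(P: float, L: float, E: float, I: float) -> float:
    """Closed-form tip deflection of an end-loaded cantilever [m]."""
    return P * L**3 / (3.0 * E * I)

# Hypothetical inputs: a steel beam under a 1 kN tip load.
P = 1_000.0   # tip load [N]
L = 1.0       # beam length [m]
E = 200e9     # Young's modulus [Pa]
I = 8.333e-7  # second moment of area [m^4]

delta = cantilever_tip_deflection(P, L, E, I)
print(f"analytical tip deflection: {delta:.6e} m")
```

A value computed this way is the unambiguous ground truth that the solver result on a benchmark page is compared against.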

Each benchmark page follows the same four-section template:

  • Problem — our own prose, with the physics derived from a cited textbook.

  • Analytical reference (where applicable) — closed-form + cited textbook derivation.

  • femorph-solver result — computed value, committed test.

  • Cross-references — one-line citations to NAFEMS / AVM / MAPDL-VM / etc. for readers who want to consult multiple independent verification sources.
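The "femorph-solver result — computed value, committed test" item can be sketched as a small pytest-style check. Note that `solve_benchmark_le1`, the analytical value, and the 2 % tolerance are all hypothetical stand-ins here, not the actual femorph-solver API or acceptance criteria.

```python
# Hedged sketch of a committed benchmark test following the four-section
# template. `solve_benchmark_le1` is a HYPOTHETICAL placeholder for the
# real femorph-solver entry point; the numbers are illustrative.
import math

ANALYTICAL = 1.000e-3  # closed-form reference displacement [m] (illustrative)
RTOL = 0.02            # assumed 2 % acceptance band

def solve_benchmark_le1() -> float:
    # Placeholder: a real benchmark page would run femorph-solver on the
    # committed mesh and probe the displacement at the reference point.
    return 1.012e-3

def test_benchmark_le1_matches_analytical():
    computed = solve_benchmark_le1()
    assert math.isclose(computed, ANALYTICAL, rel_tol=RTOL)
```

Committing the test alongside the page keeps the reported value and the regression check from drifting apart.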

Commercial-ship posture#

Everything under doc/source/verification/ is original content or legally reusable benchmark spec. We never vendor proprietary verification manuals’ decks or their tables of expected results. See the fair-use citation pattern below.

Fair-use citation pattern#

A typical cross-reference row in a benchmark page reads:

| Source         | Reference value | Problem ID           |
|----------------|-----------------|----------------------|
| Closed form    | 0.001 000 m     | Timoshenko §5.4      |
| NAFEMS LE1     | 0.000 998 m     | NAFEMS BENCHMARK-LE1 |
| femorph-solver | 0.001 012 m     | this test            |
| Abaqus VM      | 0.000 999 m     | AVM 1.4.3            |
| MAPDL VM       | 0.000 998 m     | VM-1                 |

Short, factual, purposeful — a single number for comparison plus the problem ID. This is the same table you’ll find in published FEA-vendor cross-comparison papers going back decades.
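To make the comparison in such a row explicit, a minimal sketch computing each source's relative deviation from the closed-form value, using the numbers from the table above:

```python
# Relative deviation of each cross-referenced value from the closed-form
# reference. Values are taken from the example table above.
CLOSED_FORM = 1.000e-3  # m (Timoshenko §5.4)

references = {
    "NAFEMS LE1": 0.998e-3,
    "femorph-solver": 1.012e-3,
    "Abaqus VM (AVM 1.4.3)": 0.999e-3,
    "MAPDL VM (VM-1)": 0.998e-3,
}

for source, value in references.items():
    rel = (value - CLOSED_FORM) / CLOSED_FORM
    print(f"{source:24s} {value:.6e} m  ({rel:+.2%})")
```

Deviations of a fraction of a percent across independent sources are exactly what these tables are meant to make visible at a glance.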