While current benchmarks assess language understanding, factual recall, mathematics, or code generation, none capture the integrative reasoning central to engineering, where scientific principles, quantitative modeling, and practical constraints must converge. To address this gap, we introduce EngChain, a benchmark for verifiable multi-step engineering problem-solving. EngChain contains 90 problems spanning three engineering branches, organized into 9 domains and 20 distinct areas. Problems are generated from symbolic templates with a high degree of randomization to ensure diversity and eliminate the risk of data contamination. The benchmark moves beyond final-answer accuracy with a two-stage evaluation: it first quantitatively verifies the numerical and semantic validity of each reasoning step, and then applies an LLM-as-Judge, an automated system that qualitatively categorizes any identified reasoning errors.
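To make the two-stage evaluation concrete, the sketch below shows one plausible way such a pipeline could be wired up. It is a minimal illustration, not EngChain's implementation: the function names (`verify_step`, `judge_error`, `evaluate_chain`), the tolerance `REL_TOL`, and the error categories are assumptions introduced here for clarity.

```python
import math

REL_TOL = 1e-3  # assumed relative tolerance for the numerical check of each step


def verify_step(expected: float, predicted: float) -> bool:
    """Stage 1: quantitative check that a step's numeric result matches the reference."""
    return math.isclose(expected, predicted, rel_tol=REL_TOL)


def judge_error(step_text: str) -> str:
    """Stage 2: placeholder for an LLM-as-Judge call that labels a failed step with an
    error category (e.g., wrong formula, unit mistake, arithmetic slip)."""
    # In practice this would prompt an LLM with the step and a category rubric.
    return "uncategorized"


def evaluate_chain(chain):
    """chain: list of (step_text, expected_value, predicted_value) tuples.

    Returns one (step_text, verdict, error_category) record per reasoning step."""
    report = []
    for step_text, expected, predicted in chain:
        if verify_step(expected, predicted):
            report.append((step_text, "correct", None))
        else:
            report.append((step_text, "incorrect", judge_error(step_text)))
    return report
```

The point of the separation is that step-level correctness is decided purely numerically and semantically (stage 1), while the LLM-as-Judge is consulted only to explain failures (stage 2), so judge noise cannot affect the accuracy scores themselves.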