Hardware Accelerator
Specialized chips (e.g., GPUs, TPUs) designed to speed up AI computations, with implications for energy use and supply chain risk.
Definition
Purpose-built silicon (GPUs, TPUs, FPGAs, neuromorphic chips) optimized for matrix math and highly parallel workloads. Accelerators dramatically cut training and inference times, but they concentrate procurement risk (single-vendor lock-in) and raise energy-consumption and e-waste concerns. Governance of accelerators covers vendor diversification, sustainability KPIs (such as performance-per-watt), end-of-life recycling programs, and secure firmware updates to guard against hardware-level attacks.
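A performance-per-watt KPI like the one mentioned above can be computed directly from measured throughput and board power. The sketch below is a minimal illustration; the device names and figures are hypothetical, not vendor benchmarks.

```python
# Minimal sketch of a performance-per-watt KPI for accelerator fleets.
# All throughput and power figures are illustrative placeholders.
from dataclasses import dataclass


@dataclass
class Accelerator:
    name: str
    throughput_tflops: float  # sustained training throughput
    power_watts: float        # measured board power under load

    @property
    def perf_per_watt(self) -> float:
        # Higher is better: useful work delivered per watt consumed.
        return self.throughput_tflops / self.power_watts


fleet = [
    Accelerator("vendor-a-gpu", throughput_tflops=120.0, power_watts=400.0),
    Accelerator("vendor-b-gpu", throughput_tflops=95.0, power_watts=300.0),
]

# Rank devices by efficiency to inform procurement and scheduling choices.
ranked = sorted(fleet, key=lambda a: a.perf_per_watt, reverse=True)
for a in ranked:
    print(f"{a.name}: {a.perf_per_watt:.3f} TFLOPS/W")
```

In this toy data, the lower-power card wins on efficiency even though it loses on raw throughput, which is exactly the trade-off a sustainability KPI is meant to surface.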
Real-World Example
A cloud provider benchmarks NVIDIA GPUs against AMD Instinct accelerators for its AI-training clusters. It adopts a hybrid procurement strategy, mixing both vendors to avoid single-source risk; deploys dynamic workload schedulers that favor the more energy-efficient devices during peak power-cost hours; and partners with an e-waste recycler to responsibly retire outdated cards.
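The scheduling policy described above can be sketched as a simple placement rule: during an assumed peak-tariff window, jobs are steered to the most energy-efficient device pool, while off-peak placement falls back to raw throughput. Pool names, figures, and the tariff window are all hypothetical.

```python
# Hypothetical energy-aware placement policy. During peak power-cost hours
# the scheduler prefers the most efficient pool (TFLOPS per watt); off-peak
# it prefers raw throughput. All names and numbers are illustrative.

PEAK_HOURS = range(16, 21)  # assumed peak-tariff window, 4pm-9pm

POOLS = {
    "pool-a": {"tflops": 120.0, "watts": 400.0},  # faster, less efficient
    "pool-b": {"tflops": 95.0, "watts": 300.0},   # slower, more efficient
}


def pick_pool(hour: int) -> str:
    """Return the pool name a new job should be placed in at the given hour."""
    if hour in PEAK_HOURS:
        # Electricity is expensive: maximize work per watt.
        key = lambda p: POOLS[p]["tflops"] / POOLS[p]["watts"]
    else:
        # Off-peak: maximize raw throughput.
        key = lambda p: POOLS[p]["tflops"]
    return max(POOLS, key=key)


print(pick_pool(18))  # peak hour -> efficiency wins
print(pick_pool(3))   # off-peak  -> throughput wins
```

A production scheduler would of course weigh queue depth, job priority, and interconnect locality as well; this sketch isolates only the power-cost dimension.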