Original link: Faulty Nvidia H100 GPUs and HBM3 memory caused half of failures during Llama 3 training — one failure every three hours for Meta's 16,384 GPU training cluster / Tom's Hardware.
Those GPUs (really vector math processors at this point) are being developed very quickly to keep up with demand.
Sounds like quality is suffering.
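Worth a back-of-envelope check, though, assuming failures are roughly independent and evenly spread: one failure every 3 hours across 16,384 GPUs works out to a per-GPU mean time between failures of about 16,384 × 3 ≈ 49,000 hours, or roughly 5.6 years. So each individual card is still fairly reliable; it's the sheer scale of the cluster that makes interruptions near-constant.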