Meta has initiated a small-scale rollout of its proprietary AI accelerator chip, positioned as a power-efficient alternative to off-the-shelf NVIDIA GPUs. This move follows the company’s completion of its first “tape-out,” a crucial stage in semiconductor development in which the finalized design is sent to a fabrication plant for an initial manufacturing run.
The chip belongs to the Meta Training and Inference Accelerator (MTIA) series, an in-house silicon lineup designed to enhance generative AI capabilities, recommendation algorithms, and cutting-edge AI research.
Last year, Meta introduced an MTIA inference chip, which was integrated into the recommendation systems of Facebook and Instagram. This chip supports the predictive processing required for AI-driven content suggestions. Now, according to Reuters, Meta is planning to incorporate its training-focused silicon into these systems as well. The long-term vision for both chips extends beyond recommendations, aiming to power generative AI applications such as the Meta AI chatbot.
Despite its investments in custom silicon, Meta remains one of NVIDIA’s most significant clients, having placed multi-billion-dollar GPU orders in 2022. Those orders followed the company’s decision to scrap an earlier in-house inference chip that failed to meet expectations in a similar small-scale deployment. The current trial phase for its AI training chip will determine whether Meta’s renewed push into custom silicon can compete with industry-leading solutions.