Aria: An Open Multimodal Native Mixture-of-Experts Model
Aria is a multimodal-native mixture-of-experts (MoE) model. It features:

1. State-of-the-art performance across a wide range of multimodal and language tasks, with particular strength in video and document understanding.
2. A long multimodal context window of 64K tokens.
3. 3.9B activated parameters per token, enabling fast inference and low fine-tuning cost.
Oct 5, 2024
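To illustrate why an MoE model activates only a fraction of its total parameters per token, here is a minimal sketch of top-k expert routing in plain NumPy. The sizes and the routing scheme are illustrative assumptions for demonstration, not Aria's actual configuration.

```python
import numpy as np

def moe_forward(x, gate_w, experts, k=2):
    """Route each token to its top-k experts.

    Only the k selected experts' weights are used per token, so the
    activated parameter count is a small fraction of the total.
    """
    logits = x @ gate_w                          # (tokens, n_experts) router scores
    topk = np.argsort(logits, axis=-1)[:, -k:]   # top-k expert indices per token
    sel = np.take_along_axis(logits, topk, axis=-1)
    # softmax over just the selected experts' scores
    weights = np.exp(sel - sel.max(-1, keepdims=True))
    weights /= weights.sum(-1, keepdims=True)
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        for j in range(k):
            e = topk[t, j]
            out[t] += weights[t, j] * (x[t] @ experts[e])  # only k experts run
    return out

rng = np.random.default_rng(0)
d, n_experts, tokens = 8, 6, 4           # toy dimensions, not Aria's
x = rng.standard_normal((tokens, d))
gate_w = rng.standard_normal((d, n_experts))
experts = rng.standard_normal((n_experts, d, d))
y = moe_forward(x, gate_w, experts, k=2)
print(y.shape)  # (4, 8)
```

With k=2 of 6 experts active, each token touches only the router plus two expert matrices, which is the mechanism behind a low activated-parameter count like Aria's 3.9B per token.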