Apple Unveils MM1: A New Family of Multimodal AI Models
Apple researchers have introduced a groundbreaking family of multimodal AI models known as MM1. These models, with up to 30 billion parameters, are engineered to comprehend both text and image inputs. The MM1 models are currently in the pre-training phase, with further research underway to unlock their full potential.
Advancing Multimodal Technology with MM1 AI Models
As per a pre-print paper shared by Apple researchers, the MM1 AI models are poised to push the boundaries of multimodal technology by integrating text, images, and code. These models hold the promise of achieving exceptional visual performance through specialized training on image data.
Apple has not disclosed specific details regarding the applications or use cases of the MM1 AI models. Nevertheless, the development of these models underscores Apple's dedication to propelling artificial intelligence forward and delving into the possibilities of multimodal AI.