Mamba (deep learning architecture)
Mamba[a] is a deep learning architecture focused on sequence modeling. It was developed by researchers from Carnegie Mellon University and Princeton University to address some limitations of transformer models, especially in processing long sequences. It is based on the structured state space sequence (S4) model.[2][3][4]
Architecture
To enable handling long data sequences, Mamba incorporates S4.[2] S4 can effectively and efficiently model long-range dependencies by combining continuous-time, recurrent, and convolutional models, which lets it handle irregularly sampled data and unbounded context while remaining computationally efficient during both training and inference.[5]
Mamba introduces significant enhancements to S4, particularly in its treatment of time-variant operations. It adopts a unique selection mechanism that adapts structured state space model (SSM) parameters based on the input.[6][2] This enables Mamba to selectively focus on relevant information within sequences, effectively filtering out less pertinent data. The model transitions from a time-invariant to a time-varying framework, which impacts both computation and efficiency.[2][7]
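A minimal sketch of the selection idea, written as a plain PyTorch loop under simplifying assumptions (the weight names, shapes, and the Euler-style discretization of B are illustrative, and the real implementation replaces the Python loop with a fused parallel scan):

```python
import torch

def selective_ssm(x, A, W_B, W_C, W_delta):
    """x: (batch, length, d_model); A: (d_model, d_state) with negative real entries."""
    batch, length, d_model = x.shape
    d_state = A.shape[1]
    h = torch.zeros(batch, d_model, d_state)                 # recurrent hidden state
    outputs = []
    for t in range(length):
        xt = x[:, t]                                          # (batch, d_model)
        delta = torch.nn.functional.softplus(xt @ W_delta)    # input-dependent step size
        B = xt @ W_B                                          # input-dependent input matrix
        C = xt @ W_C                                          # input-dependent output matrix
        A_bar = torch.exp(delta.unsqueeze(-1) * A)            # discretized state transition
        B_bar = delta.unsqueeze(-1) * B.unsqueeze(1)          # simplified discretization of B
        h = A_bar * h + B_bar * xt.unsqueeze(-1)              # time-varying ("selective") recurrence
        outputs.append((h * C.unsqueeze(1)).sum(-1))          # y_t = C_t h_t for each channel
    return torch.stack(outputs, dim=1)                        # (batch, length, d_model)

# Toy usage with random weights (shapes only, no training):
d_model, d_state = 8, 4
x = torch.randn(2, 16, d_model)
A = -torch.rand(d_model, d_state)
y = selective_ssm(x, A, torch.randn(d_model, d_state), torch.randn(d_model, d_state),
                  torch.randn(d_model, d_model))
print(y.shape)   # torch.Size([2, 16, 8])
```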
Mamba employs a hardware-aware algorithm that exploits GPUs through kernel fusion, a parallel scan, and recomputation.[2] The implementation avoids materializing expanded states in memory-intensive layers, thereby improving performance and memory usage. The resulting model is significantly more efficient at processing long sequences than transformers.[2][7]
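The parallel scan relies on the fact that the SSM recurrence is a linear recurrence, whose per-step updates combine associatively. The toy example below illustrates only that algebraic idea and is not the GPU kernel itself, which additionally fuses the scan with surrounding operations and recomputes intermediates in the backward pass rather than storing them:

```python
# The linear recurrence h_t = a_t * h_{t-1} + b_t can be computed with a prefix
# scan because pairs (a, b) combine associatively:
# (a1, b1) o (a2, b2) = (a1 * a2, a2 * b1 + b2).
def combine(left, right):
    a1, b1 = left
    a2, b2 = right
    return (a1 * a2, a2 * b1 + b2)

def scan(pairs):
    outputs, acc = [], (1.0, 0.0)      # identity element of the operator
    for pair in pairs:                 # sequential here; parallelizable in log depth on GPU
        acc = combine(acc, pair)
        outputs.append(acc[1])         # h_t
    return outputs

# Example: h_t = 0.5 * h_{t-1} + x_t over x = [1, 2, 3]
print(scan([(0.5, 1.0), (0.5, 2.0), (0.5, 3.0)]))   # [1.0, 2.5, 4.25]
```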
Additionally, Mamba simplifies its architecture by merging the SSM design and the MLP block into a single, homogeneous block. The resulting streamlined structure furthers the model's capability for general sequence modeling across data types including language, audio, and genomics, while maintaining efficiency in both training and inference.[2]
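A rough sketch of how such a homogeneous block could be stacked is shown below. All module names and sizes are assumptions, and the depthwise causal convolution merely stands in for the selective SSM path of the real block; the gated, residual structure repeated at every layer is the point being illustrated:

```python
import torch
import torch.nn as nn

class MambaLikeBlock(nn.Module):
    """One homogeneous block: norm -> expand -> sequence mixing -> gate -> project -> residual."""
    def __init__(self, d_model, d_inner):
        super().__init__()
        self.norm = nn.LayerNorm(d_model)
        self.in_proj = nn.Linear(d_model, 2 * d_inner)    # splits into mixing path and gate
        # Depthwise causal convolution as a stand-in for the selective SSM path.
        self.mix = nn.Conv1d(d_inner, d_inner, kernel_size=3, padding=2, groups=d_inner)
        self.out_proj = nn.Linear(d_inner, d_model)

    def forward(self, x):                                  # x: (batch, length, d_model)
        residual = x
        u, gate = self.in_proj(self.norm(x)).chunk(2, dim=-1)
        u = self.mix(u.transpose(1, 2))[..., : x.shape[1]].transpose(1, 2)  # causal mixing
        y = self.out_proj(torch.nn.functional.silu(gate) * u)               # gated output
        return residual + y

# The same block is simply repeated, with no separate attention or MLP layers:
model = nn.Sequential(*[MambaLikeBlock(64, 128) for _ in range(4)])
print(model(torch.randn(2, 10, 64)).shape)   # torch.Size([2, 10, 64])
```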
Key components
- Selective state spaces (SSM): The core of Mamba, SSMs are recurrent models that selectively process information based on the current input. This allows them to focus on relevant information and discard irrelevant data.[2]
- Simplified architecture: Mamba replaces the complex attention and MLP blocks of transformers with a single, unified SSM block. This aims to reduce computational complexity and improve inference speed.[2]
- Hardware-aware parallelism: Mamba uses a recurrent mode with a parallel algorithm specifically designed for hardware efficiency, potentially further enhancing its performance.[2]
| Feature | Transformer | Mamba |
|---|---|---|
| Architecture | Attention-based | SSM-based |
| Complexity | High | Lower |
| Inference speed | O(n) per generated token | O(1) per generated token |
| Training speed | O(n²) in sequence length | O(n) in sequence length |
Variants
Mamba-2
Mamba-2 is the successor to Mamba and introduces a new theoretical and computational framework called Structured State Space Duality (SSD). SSD acts as a mathematical bridge between SSMs and Transformers, specifically by connecting SSMs to the attention mechanism. It was developed in response to concerns that SSMs are harder to train efficiently than Transformers, and it allows Mamba-2 to inherit many system-level optimizations built for Transformers while retaining linear-time scaling in sequence length.[8]
Through the mathematical properties established by SSD, Mamba-2 can reuse many of the system and algorithmic optimizations that have been developed for Transformers. Additionally, Mamba-2 introduces a parallel block structure that further connects the architecture to the attention mechanism and improves scalability.
- SSD Layer: The main realization of structured state space duality in Mamba-2 is the SSD layer. In Mamba-1, the state-transition matrix A is restricted to a diagonal matrix to make computation efficient; SSD further restricts A to a scalar times the identity. This scalar restriction is what allows the dual linear and quadratic forms to arise.
- SSD Framework: The researchers further propose the SSD framework as a way to reason about the model. First, SSD can be viewed through a structured matrix-transformation framework: many sequence models, including state space models such as Mamba, can be written as a structured matrix acting on the input sequence, which gives a linear formulation of their output as sequential matrix multiplications. Adding SSD to Mamba allows this formulation to also be written as a quadratic form directly related to masked attention; this is the "duality" in the matrix-transformation view. Second, SSD can be viewed through a structured attention framework, which uses causal linear attention to show that structured masked attention likewise has dual linear and quadratic modes (linear attention computes the attention mechanism in linear time and has a recurrent form similar to SSMs). Mamba-2 bridges the two views by showing that structured masked attention is equivalent to the scalar-identity SSM formulation.
- SSD Algorithm: The SSD algorithm computes the SSD model in a more hardware-efficient way by rewriting the computation in terms of matrix multiplications, since matrix-multiplication FLOPs are far cheaper than other FLOPs on tensor cores. It does so by expressing the matrix transformations of SSMs as semiseparable matrices, as described in the SSD framework; the two dual forms are illustrated in the sketch after this list.[8][9]
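The following small numerical sketch (illustrative only, not the paper's hardware kernel) checks the duality for a scalar-identity SSM on a single input channel: the linear recurrent form and the quadratic, masked-attention-like form produce the same outputs.

```python
import numpy as np

rng = np.random.default_rng(0)
L, N = 5, 4                              # sequence length, state size
a = rng.uniform(0.5, 1.0, size=L)        # per-step scalar decay (A_t = a_t * identity)
B = rng.normal(size=(L, N))              # input projections B_t
C = rng.normal(size=(L, N))              # output projections C_t
x = rng.normal(size=L)                   # a single scalar input channel

# Linear (recurrent) form: h_t = a_t * h_{t-1} + B_t * x_t,  y_t = C_t . h_t
h = np.zeros(N)
y_linear = []
for t in range(L):
    h = a[t] * h + B[t] * x[t]
    y_linear.append(C[t] @ h)

# Quadratic (masked-attention-like) form: y = M x, where M is a lower-triangular
# semiseparable matrix with M[t, s] = (a_{s+1} * ... * a_t) * (C_t . B_s) for s <= t.
M = np.zeros((L, L))
for t in range(L):
    for s in range(t + 1):
        M[t, s] = np.prod(a[s + 1 : t + 1]) * (C[t] @ B[s])
y_quadratic = M @ x

print(np.allclose(y_linear, y_quadratic))   # True
```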
Token-free language models: MambaByte
Operating on byte-sized tokens, transformers scale poorly, as every token must "attend" to every other token, leading to O(n²) scaling in sequence length. As a result, transformers usually rely on subword tokenization to reduce the number of tokens in a text; however, this leads to very large vocabulary tables and word embeddings.
MambaByte departs from these standard token-based methods. Unlike traditional models that rely on breaking text into discrete units, MambaByte directly processes raw byte sequences. This eliminates the need for tokenization, potentially offering several advantages:[10]
- Language independence: Tokenization often relies on language-specific rules and vocabulary, limiting applicability across diverse languages. MambaByte's byte-level representation allows it to handle different languages without language-specific adaptations.
- Removes the bias of subword tokenization: in subword schemes, common subwords are overrepresented while rare or new words are underrepresented or split into less meaningful units. This can affect the model's understanding and generation capabilities, particularly for languages with rich morphology or for tokens not well represented in the training data.
- Simplicity in preprocessing: It simplifies the preprocessing pipeline by eliminating the need for complex tokenization and vocabulary management, reducing the preprocessing steps and potential errors.
Subword tokenization also introduces a number of quirks in LLMs, such as failure modes in which LLMs cannot spell words, reverse certain words, or handle rare tokens; these problems do not arise with byte-level representation.[11]
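The difference in representation can be seen in a few lines of Python; the "subword" split below is a made-up stand-in for a learned tokenizer, not an actual vocabulary:

```python
text = "tokenization"

# Byte-level representation: raw UTF-8 bytes, a fixed vocabulary of 256 values,
# no merge rules or vocabulary files needed.
byte_ids = list(text.encode("utf-8"))
print(byte_ids)            # [116, 111, 107, 101, 110, 105, 122, 97, 116, 105, 111, 110]

# Hypothetical subword split (a made-up stand-in for a learned tokenizer):
toy_subwords = ["token", "ization"]
print(toy_subwords)        # shorter sequence, but requires a large learned vocabulary
```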
Mamba mixture of experts (MoE)
MoE Mamba integrates the mixture of experts (MoE) technique with the Mamba architecture, enhancing the efficiency and scalability of SSMs in language modeling. The model leverages the strengths of both MoE and SSMs: it requires 2.2 times fewer training steps than its predecessor, Mamba, while maintaining competitive performance, offering a promising avenue for scaling SSMs to tens of billions of parameters. Its design alternates Mamba and MoE layers, so that the Mamba layers efficiently integrate the entire sequence context while the MoE layers apply the most relevant expert to each token, as sketched below.[12][13]
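A schematic sketch of that alternating layer pattern, with a toy top-1 router and Identity placeholders where the Mamba sequence-mixing blocks would sit (all names and sizes here are assumptions, not the MoE-Mamba code):

```python
import torch
import torch.nn as nn

class ToyMoE(nn.Module):
    """Toy mixture-of-experts feed-forward layer with top-1 routing per token."""
    def __init__(self, d_model, num_experts):
        super().__init__()
        self.router = nn.Linear(d_model, num_experts)
        self.experts = nn.ModuleList(nn.Linear(d_model, d_model) for _ in range(num_experts))

    def forward(self, x):                              # x: (batch, length, d_model)
        choice = self.router(x).argmax(-1)             # pick one expert per token
        out = torch.zeros_like(x)
        for i, expert in enumerate(self.experts):
            mask = (choice == i).unsqueeze(-1)         # tokens routed to expert i
            out = out + mask * expert(x)
        return x + out                                 # residual connection

# Alternating pattern; nn.Identity marks where the Mamba sequence-mixing blocks would go.
d_model = 64
layers = nn.Sequential(
    nn.Identity(), ToyMoE(d_model, num_experts=8),
    nn.Identity(), ToyMoE(d_model, num_experts=8),
)
print(layers(torch.randn(2, 10, d_model)).shape)   # torch.Size([2, 10, 64])
```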
Vision Mamba
Vision Mamba (Vim) integrates SSMs with visual data processing, employing bidirectional Mamba blocks for visual sequence encoding. This method reduces the computational demands typically associated with self-attention in visual tasks. Tested on ImageNet classification, COCO object detection, and ADE20K semantic segmentation, Vim demonstrates improved performance and efficiency and can handle high-resolution images with lower computational cost, positioning it as a scalable model for future advances in visual representation learning.[14]
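The bidirectional idea can be sketched as follows; the patch shapes and the linear layers standing in for Mamba blocks are illustrative assumptions:

```python
import torch
import torch.nn as nn

def bidirectional_scan(patches, forward_layer, backward_layer):
    """patches: (batch, num_patches, d_model); layers: any causal sequence mixers."""
    fwd = forward_layer(patches)                   # scan the patch sequence left to right
    bwd = backward_layer(patches.flip(1)).flip(1)  # scan it right to left, then re-align
    return fwd + bwd                               # combine both directions

# Toy usage with linear layers standing in for Mamba blocks:
patches = torch.randn(2, 196, 64)                  # e.g. a 14x14 grid of image patch embeddings
out = bidirectional_scan(patches, nn.Linear(64, 64), nn.Linear(64, 64))
print(out.shape)                                   # torch.Size([2, 196, 64])
```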
Jamba
Jamba is a hybrid transformer and Mamba SSM architecture developed by AI21 Labs. With 52 billion parameters, it is the largest Mamba variant created so far, and it has a context window of 256k tokens.[15]
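A schematic sketch of such a hybrid stack is given below; the layer count and the one-attention-layer-in-four ratio are arbitrary placeholders rather than AI21's configuration, and the Identity layers mark where Mamba blocks would sit:

```python
import torch
import torch.nn as nn

d_model, n_heads = 64, 4
layers = nn.ModuleList()
for i in range(8):
    if i % 4 == 0:
        layers.append(nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True))
    else:
        layers.append(nn.Identity())   # placeholder where a Mamba SSM block would sit

x = torch.randn(2, 32, d_model)
for layer in layers:
    x = layer(x)
print(x.shape)                         # torch.Size([2, 32, 64])
```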
See also
Notes
References
- ^ "Albert Gu (@_albertgu) on X". X (formerly Twitter).
- ^ a b c d e f g h i j Gu, Albert; Dao, Tri (2023). "Mamba: Linear-Time Sequence Modeling with Selective State Spaces". arXiv:2312.00752 [cs.LG].
- ^ Chowdhury, Hasan. "The tech powering ChatGPT won't make AI as smart as humans. Others might". Business Insider. Retrieved 13 January 2024.
- ^ Pandey, Mohit (6 December 2023). "Mamba is Here to Mark the End of Transformers". Analytics India Magazine. Retrieved 13 January 2024.
- ^ Gu, Albert; Goel, Karan; Re, Christopher (6 October 2021). "Efficiently Modeling Long Sequences with Structured State Spaces". ICLR. arXiv:2111.00396. Retrieved 13 January 2024.
- ^ Gu, Albert; Johnson, Isys; Goel, Karan; Saab, Khaled Kamal; Dao, Tri; Rudra, A.; Ré, Christopher (26 October 2021). "Combining Recurrent, Convolutional, and Continuous-time Models with Linear State-Space Layers". NeurIPS. S2CID 239998472.
- ^ a b Tickoo, Aneesh (10 December 2023). "Researchers from CMU and Princeton Unveil Mamba: A Breakthrough SSM Architecture Exceeding Transformer Efficiency for Multimodal Deep Learning Applications". MarkTechPost. Retrieved 13 January 2024.
- ^ a b Dao, Tri; Gu, Albert (2024-05-31). "Transformers are SSMs: Generalized Models and Efficient Algorithms Through Structured State Space Duality". arXiv:2405.21060. Retrieved 2025-11-22.
- ^ "State Space Duality (Mamba-2) Part I - The Model | Tri Dao". tridao.me. Retrieved 2025-11-22.
- ^ Wang, Junxiong; Gangavarapu, Tushaar; Yan, Jing Nathan; Rush, Alexander M. (2024-01-24), MambaByte: Token-free Selective State Space Model, arXiv:2401.13660
- ^ "Let's build the GPT Tokenizer". 20 February 2024. Retrieved 2024-02-23.
- ^ Pióro, Maciej; Ciebiera, Kamil; Król, Krystian; Ludziejewski, Jan; Jaszczur, Sebastian (2024-01-08), MoE-Mamba: Efficient Selective State Space Models with Mixture of Experts, arXiv:2401.04081
- ^ Nikhil (2024-01-13). "This AI Paper Proposes MoE-Mamba: Revolutionizing Machine Learning with Advanced State Space Models and Mixture of Experts MoEs Outperforming both Mamba and Transformer-MoE Individually". MarkTechPost. Retrieved 2024-02-23.
- ^ Zhu, Lianghui; Liao, Bencheng; Zhang, Qian; Wang, Xinlong; Liu, Wenyu; Wang, Xinggang (2024-02-10), Vision Mamba: Efficient Visual Representation Learning with Bidirectional State Space Model, arXiv:2401.09417
- ^ "Introducing Jamba: AI21's Groundbreaking SSM-Transformer Model". www.ai21.com. 28 March 2024. Retrieved 2024-03-29.