
A Walkthrough of Nvidia’s Newest Multi-Modal LLM Family | by Mengliu Zhao | Oct, 2024


From LLaVA and Flamingo to NVLM

Multi-modal LLM development has been advancing rapidly in recent years.

Although commercial multi-modal models like GPT-4V, GPT-4o, Gemini, and Claude 3.5 Sonnet are the most eye-catching performers these days, open-source models such as LLaVA, Llama 3-V, and Qwen-VL have been steadily catching up in terms of performance on public benchmarks.

Just last month, Nvidia released their open-source multi-modal LLM family called NVLM. The family comprises three architectures: a) decoder-based, b) cross-attention-based, and c) hybrid. The decoder-based model feeds both the image and text tokens to a pre-trained LLM, as in the LLaVA model. The cross-attention-based model uses the image token embeddings as the keys and values while using the text token embeddings as the queries; since the attention is computed over different sources, it’s called “cross-attention” as in the original transformer decoder, rather than the self-attention used in decoder-only models. The hybrid architecture is a unique design merging the decoder and cross-attention architectures for the benefits of multi-modal reasoning, fewer training parameters, and high-resolution input. The 72B decoder-based NVLM-D model achieved impressive performance, beating state-of-the-art open-source and commercial models on tasks like natural image understanding and OCR.
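
To make the query/key/value roles concrete, here is a minimal PyTorch sketch of cross-attention between the two modalities. This is my own illustration under assumed dimensions and names, not Nvidia’s code: the text token embeddings supply the queries, while the image token embeddings supply the keys and values.

```python
import torch
import torch.nn as nn

# Hypothetical dimensions, for illustration only
d_model, n_heads = 1024, 8

cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

text_tokens = torch.randn(2, 16, d_model)   # queries: 16 text tokens
image_tokens = torch.randn(2, 64, d_model)  # keys/values: 64 image tokens

# Queries come from the text stream; keys and values come from the image
# stream, so the attention mixes information across modalities ("cross").
out, _ = cross_attn(query=text_tokens, key=image_tokens, value=image_tokens)
print(out.shape)  # torch.Size([2, 16, 1024]) -- one output per text token
```

In a decoder-only model, by contrast, a single concatenated sequence of image and text tokens plays all three roles (self-attention); that is the key architectural difference between the decoder-based and cross-attention-based designs.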

In this article, I’m going to walk through the following topics:

  • the dynamic high-resolution (DHR) vision encoder, which all the NVLM models adopt (see the tiling sketch after this list)
  • the decoder-based model, NVLM-D, compared to LLaVA
  • the gated cross-attention model, NVLM-X, compared to Flamingo
  • the hybrid model, NVLM-H
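
Before getting into the details, here is a rough sketch of the dynamic tiling idea behind a DHR-style vision encoder. It is a simplified illustration under assumed settings (448×448 tiles, a budget of at most 6 tiles, plus a global thumbnail), not NVLM’s exact implementation:

```python
from PIL import Image

TILE = 448       # assumed tile size, for illustration
MAX_TILES = 6    # assumed tile budget, for illustration

def dynamic_tiles(img: Image.Image) -> list[Image.Image]:
    """Split an image into aspect-ratio-matched TILE x TILE tiles,
    plus a thumbnail of the whole image (simplified DHR-style step)."""
    # Candidate grids (cols x rows) that fit within the tile budget
    grids = [(c, r) for c in range(1, MAX_TILES + 1)
                    for r in range(1, MAX_TILES + 1) if c * r <= MAX_TILES]
    # Choose the grid whose aspect ratio best matches the input image,
    # so high-resolution inputs are covered with minimal distortion
    ar = img.width / img.height
    cols, rows = min(grids, key=lambda g: abs(g[0] / g[1] - ar))

    resized = img.resize((cols * TILE, rows * TILE))
    tiles = [resized.crop((c * TILE, r * TILE, (c + 1) * TILE, (r + 1) * TILE))
             for r in range(rows) for c in range(cols)]
    # Append a low-resolution thumbnail as a global view of the scene
    tiles.append(img.resize((TILE, TILE)))
    return tiles  # each tile is encoded separately by the vision encoder
```

Each tile is then encoded independently, which lets a fixed-resolution vision encoder handle high-resolution inputs.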

In the end, I’ll show the NVLM-D 72B performance. Compared to state-of-the-art open-source and commercial models, the NVLM-D model shows stability on text-based tasks and superior performance on natural image understanding and OCR tasks.

Image source: https://pxhere.com/en/photo/821032
