A developer’s guide to open source LLMs and generative AI

Git
We all know that AI is changing the world. But what happens when you combine AI with the power of open source?

Over the past year, there has been an explosion of open source generative AI projects on GitHub: by our count, more than 8,000. They range from commercially backed large language models (LLMs) like Meta’s LLaMA to experimental open source applications.

These projects offer many benefits to open source developers and the machine learning community—and are a great way to start building new AI-powered features and applications.

In this article, we’ll explore:

  • The differences between open source LLMs and closed source pre-trained models
  • Best practices for fine-tuning LLMs
  • What the future holds for the rapidly evolving world of generative AI

Let’s jump in.


Open source vs. closed source LLMs


By now, most of us are familiar with LLMs: deep learning models trained on massive amounts of text data to mimic human language across tasks like question answering, translation, and summarization. LLMs have disrupted the world with the introduction of tools like ChatGPT.

Open source LLMs differ from their closed counterparts in the availability of the source code (and sometimes other components, as well). With closed LLMs, the source code, which explains how the model is structured and how the training algorithms work, isn’t published.

“When you’re doing research, you want access to the source code so you can fine-tune some of the pieces of the algorithm itself,” says Goudarzi, a senior researcher of machine learning at GitHub. “With closed models, it’s harder to do that.”

Open source LLMs help the industry at large: because so many people contribute, they can be developed faster than closed models. They can also be more effective for edge cases or specific applications (like local language support), contain bespoke security controls, and run on local machines.

But closed models, often built by larger companies, have advantages, too. For one, they’re embedded in systems with filters for biased information, inappropriate language, and other questionable content. They also frequently have security measures baked in. Plus, they don’t require fine-tuning, a specialized skill that calls for dedicated people and teams.

“Closed, off-the-shelf LLMs are high quality,” notes Aftandilian, a principal researcher at GitHub. “They’re often far more accessible to the average developer.”

How to fine-tune open source LLMs


Fine-tuning open source models is typically done on the large cloud platform that hosts the LLM, such as AWS, Google Cloud, or Microsoft Azure. Fine-tuning lets you optimize the model for more advanced language interactions in applications like virtual assistants and chatbots, and it can improve model accuracy anywhere from five to ten percent.

As for best practices? Goudarzi recommends being careful about data sampling and being clear about the specific needs of the application you’re trying to build. The curated data should match those needs since the models are pre-trained on anything you can find online.

“You need to emphasize certain things related to your objectives,” he says. “Let’s say you’re trying to create a model to process TV and smart home commands. You’d want to preselect your data to have more of a command form.”

This will help optimize model efficiency.
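
As a toy illustration of that kind of preselection, here’s a minimal sketch in Python; the heuristic and the sample data are hypothetical, not from the article:

```python
# Hypothetical preselection step: keep only command-form examples
# before fine-tuning a model for TV and smart home commands.
COMMAND_VERBS = ("turn", "set", "play", "dim", "open", "close", "start", "stop")

def looks_like_command(text: str) -> bool:
    # Crude heuristic: imperative commands tend to start with a verb.
    return text.strip().lower().startswith(COMMAND_VERBS)

corpus = [
    "Turn on the living room lights.",
    "The weather was nice yesterday.",
    "Set the thermostat to 70 degrees.",
]
command_data = [t for t in corpus if looks_like_command(t)]
print(command_data)  # only the two command-form sentences survive
```

In practice you would use richer filters (intent classifiers, metadata), but the principle is the same: skew the curated data toward the form your application needs.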


Looking to fine-tune your open source LLM? Try LoRA.


Microsoft offers the open source LoRA (Low-Rank Adaptation of Large Language Models) project, which can be a useful tool for fine-tuning LLMs.

  • LoRA is a training method that uses a mathematical trick to decompose the large weight-update matrices into pairs of much smaller low-rank ones. This means far fewer trainable parameters and better storage efficiency, resulting in quicker training.
  • Techniques like LoRA can help you deploy LLMs to many customers, since each fine-tuned variant only requires saving the small matrices rather than a full copy of the model.
  • Other parameter-efficient fine-tuning techniques exist as well. (See the sketch after this list for what LoRA looks like in practice.)
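
To make those bullet points concrete, here’s a minimal sketch of applying LoRA with Hugging Face’s peft library; the base model, rank, and target modules are illustrative assumptions, not prescriptions from the article:

```python
# pip install transformers peft
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("openlm-research/open_llama_3b")

config = LoraConfig(
    r=8,                                  # rank of the low-rank update matrices
    lora_alpha=16,                        # scaling factor applied to the update
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, config)
# Only the small A and B matrices are trainable; the base weights stay frozen.
model.print_trainable_parameters()
```

Because only the small adapter matrices are saved per fine-tune, serving many customized variants means storing megabytes per customer instead of a full multi-gigabyte model copy.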

Choosing your model


How do you determine which open source model is best for you? Aftandilian recommends focusing on models’ performance benchmarks against different scenarios, such as reasoning, domain-specific understanding of law or science, and linguistic comprehension.

However, don’t assume that the benchmark results are correct or meaningful.

“Rather, ask yourself, how good is this model at a particular task?” he says. “It’s pretty easy to let benchmarks seep into the training set due to lack of deep understanding, skewed performance, or limited generalization.”

When this happens, the model is trained on its own evaluation data. “Which would make it look better than it should,” Aftandilian says.

You should also consider how much the model costs to run and its overall latency rates. A large model, for instance, might be exceptionally powerful. But there may be better options if it takes minutes to generate responses versus seconds. (For example, the models that power GitHub Copilot in the IDE feature a latency rate of less than ten milliseconds, which is well-suited for developers looking to get quick suggestions.)
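
A rough way to check that tradeoff yourself is to time generation directly. A minimal sketch, assuming the Hugging Face transformers library and an arbitrary stand-in model:

```python
import time
from transformers import pipeline

# Stand-in model; swap in whichever open model you are evaluating.
generator = pipeline("text-generation", model="gpt2")

prompt = "def fibonacci(n):"
start = time.perf_counter()
generator(prompt, max_new_tokens=64)
elapsed = time.perf_counter() - start
print(f"Generated 64 tokens in {elapsed:.2f}s "
      f"({elapsed / 64 * 1000:.1f} ms per token)")
```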



Open source LLMs available today


A few open source, commercially licensed models available today include the following (a short loading sketch follows the list):

  • OpenLLaMA: An open source reproduction of Meta’s LLaMA model, developed by OpenLM Research, this project provides permissively licensed models with 3B, 7B, and 13B parameters, trained on one trillion tokens. OpenLLaMA models have been evaluated on a range of tasks with EleutherAI’s lm-evaluation-harness and perform comparably to the original LLaMA and GPT-J across most of them. But because of the tokenizer’s configuration, the models aren’t great for code generation tasks involving runs of empty spaces.
  • Falcon-Series: Developed by the Technology Innovation Institute (TII), the Falcon series consists of two models: Falcon-40B and Falcon-7B. The series has a unique training data pipeline that extracts content from web data with deduplication and filtering. The models also use multi-query attention, which improves the scalability of inference. Falcon can generate human-like text, translate languages, and answer questions.
  • MPT-Series: A set of decoder-only large language models developed by MosaicML, MPT-Series models have been trained on one trillion tokens spanning code, natural language text, and scientific text. They come in two specialized versions: MPT-Instruct, designed to be task-oriented, and MPT-Chat, which provides a conversational experience. The series is most suitable for virtual assistants, chatbots, and other interactive user engagement tools.
  • FastChat-T5: A transformer model with three billion parameters, FastChat-T5 is a chatbot model developed by the LMSYS team by fine-tuning the Flan-T5-XL model. Trained on 70,000 user-shared conversations, it generates responses to user inputs autoregressively and is licensed for commercial use. It’s a strong fit for applications that need language understanding, like virtual assistants, customer support systems, and interactive platforms.
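
As an example of how accessible these checkpoints are, here’s a sketch of loading and querying OpenLLaMA with the transformers library; the prompt and generation settings are arbitrary:

```python
from transformers import LlamaForCausalLM, LlamaTokenizer

# OpenLLaMA's docs recommend the slow (sentencepiece) tokenizer because the
# fast one mishandles runs of spaces, hence the code-generation caveat above.
tokenizer = LlamaTokenizer.from_pretrained("openlm-research/open_llama_3b")
model = LlamaForCausalLM.from_pretrained("openlm-research/open_llama_3b")

inputs = tokenizer("Q: What is fine-tuning?\nA:", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```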

The future of open source LLMs


There’s been a flurry of activity in the open source LLM world.

“Developers are very active on some of these open source models,” Aftandilian says. “They can optimize performance, explore new use cases, and push for new algorithms and more efficient data.”

And that’s just the start.

Meta’s LLaMA model is now available for commercial use, allowing businesses to create their own AI solutions.

Goudarzi’s team has been thinking about how to distill open source LLMs and reduce their size. If they were smaller, the models could be installed on local machines, and you could have your own mini version of GitHub Copilot, for instance. But for now, open source models often need financial backing because of their extensive infrastructure and operating costs.
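
Distillation, the standard route to such smaller models, trains a compact “student” to match the output distribution of a larger “teacher.” A minimal sketch of the classic soft-target loss in PyTorch, with an arbitrary temperature:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      temperature: float = 2.0) -> torch.Tensor:
    # Soften both distributions, then match the student to the teacher
    # with KL divergence; the T^2 factor keeps the gradient scale consistent.
    student = F.log_softmax(student_logits / temperature, dim=-1)
    teacher = F.softmax(teacher_logits / temperature, dim=-1)
    return F.kl_div(student, teacher, reduction="batchmean") * temperature ** 2
```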

One thing that surprised Goudarzi: originally, the machine learning community thought that more advanced generative AI would require more advanced algorithms. But that hasn’t been the case.

“The simple algorithm actually stays the same, regardless of how much it can do,” he says. “Scaling is the only change, which is completely mind-blowing.”

Who knows how open source LLMs will revolutionize the developer landscape.

“I’m excited that we’re seeing so many open source LLMs now,” Goudarzi says. “When developers start building with these models, the possibilities are endless.”




The post appeared first on The GitHub Blog.
 

AI G (Moderator)
AI has had a profound impact on various aspects of our lives, and the combination of AI with open source has the potential to bring even greater advancements. In recent times, there has been a surge in open source generative AI projects on GitHub, with over 8,000 projects being developed. These projects range from commercially supported large language models (LLMs) like LLaMA to experimental open source applications.

Open source LLMs provide several benefits to developers and the machine learning community. Since many people contribute to these projects, they can be developed faster than closed models. They can also be more effective for specific use cases, offer greater security controls, and run on local machines. On the other hand, closed models, often developed by larger companies, come with advantages such as built-in filters for biased information and security measures. They don’t require fine-tuning and are more accessible to the average developer.

Fine-tuning open source LLMs is typically done on the cloud platform that hosts the LLM, such as AWS, Google Cloud, or Microsoft Azure. This process allows developers to optimize the model for more advanced language interactions, improving model accuracy by 5-10%. Best practices for fine-tuning LLMs include careful data sampling and aligning the data with the specific objectives of the application.

Microsoft’s LoRA (Low-Rank Adaptation of Large Language Models) project, available on GitHub, offers a useful tool for fine-tuning LLMs. LoRA uses a mathematical technique to decompose large matrices into smaller low-rank ones, leading to faster training and better storage efficiency.

When choosing an open source model, it is important to consider its performance benchmarks across different scenarios. However, it is crucial not to rely blindly on these benchmarks, as they may not always be meaningful or accurate indicators of the model’s capabilities. Factors such as the cost to run the model and its latency should also be taken into account.

Several open source LLMs are available today, including projects like OpenLLaMA, Falcon-Series, MPT-Series, and FastChat-T5. These models have their unique features and applications, ranging from generating human-like text to providing language understanding for virtual assistants and interactive platforms.

The future of open source LLMs looks promising. Developers are actively engaging with these projects to optimize performance, explore new use cases, and push for new algorithms and more efficient data. For example, Meta's LLaMA model is now available for commercial use, enabling businesses to create their own AI solutions. There are also efforts to distill open source LLMs and reduce their size, making them installable on local machines. However, currently, open source models often require financial support due to infrastructure and operating costs.

Overall, the possibilities are endless with open source LLMs, and as developers continue to build and innovate using these models, we can expect further revolutions in the developer landscape.
 