Top Open Source Models with Commercial License

includes download links

Welcome to the 2,210 new members this week! This newsletter now has 44,573 subscribers.

The debate around open-source LLMs intensifies as AI advances. In today's newsletter, I will share the nuances, criteria, and commercialization strategies surrounding open-source LLMs.

Today, I’ll cover:

  • List of best Open Source LLMs with links to download

  • How to evaluate their performance

  • Guide on Licenses for Open Source Models

  • What to do with those models (how to tune or implement them)

  • Risks of using plain Open Source

Let’s Dive In! 🤿

List of Best Open Source LLMs with Links to Download

I curated a list of some of the most promising and capable open-source LLMs available for download and experimentation. Each entry includes a brief description, performance highlights, and a link to the official repository or download location.

List of Open Source LLMs with Commercial License

You can access the table here: link

Feel free to respond to this email to add more!

How to Evaluate Their Performance

Evaluating the performance of open-source LLMs is crucial for understanding their capabilities and limitations. Let’s look at the metrics, benchmarks, and techniques most commonly used to assess language models across tasks such as text generation, question answering, and language understanding.

  1. Composite Benchmarks: The Hugging Face Open LLM Leaderboard aggregates tasks such as ARC-Challenge, HellaSwag, MMLU, TruthfulQA, WinoGrande, and GSM8k.

  2. Programming and Mathematics: HumanEval evaluates the ability to solve programming problems, while GSM8k assesses mathematical reasoning and problem-solving skills.

  3. Multi-Task Language Understanding: Benchmarks such as MMLU measure the ability to perform a wide range of language understanding tasks, including question answering, commonsense reasoning, and coreference resolution.

  4. Long-Context Benchmarks: These benchmarks, such as KV-Pairs and HotpotQAXL, are designed to evaluate the ability of language models to handle long input sequences or contexts.

  5. Retrieval-Augmented Generation (RAG) Benchmarks: Benchmarks such as Natural Questions and HotPotQA evaluate a model's ability to leverage external information retrieved from a database or corpus when answering questions. They simulate real-world scenarios where language models must integrate external knowledge sources to answer complex queries or generate informative responses.
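To make the mechanics concrete, here is a minimal sketch of the exact-match scoring that many of the QA-style benchmarks above boil down to. The function and the toy data are illustrative assumptions, not any benchmark's official implementation (real benchmarks apply more elaborate answer normalization):

```python
def exact_match_accuracy(predictions, references):
    """Fraction of predictions that exactly match the reference answer.

    Both sides are normalized (lowercased, stripped) before comparison,
    a simplified version of the normalization most QA benchmarks apply.
    """
    if not references:
        return 0.0
    normalize = lambda s: s.strip().lower()
    hits = sum(
        1 for pred, ref in zip(predictions, references)
        if normalize(pred) == normalize(ref)
    )
    return hits / len(references)


# Toy run: two of three hypothetical model answers match the references.
preds = ["Paris", "4", "blue whale "]
refs = ["paris", "5", "Blue Whale"]
print(round(exact_match_accuracy(preds, refs), 3))  # 0.667
```

In practice, you would plug your model's generated answers into `predictions` and the benchmark's gold answers into `references`, or use an off-the-shelf harness that handles normalization and prompting for you.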

Guide on Licenses for Open Source Models

Open-source models come with different types of licenses, each with its own terms and conditions. Here is a guide to the most common open-source licenses used for LLMs, such as MIT, Apache 2.0, and GNU GPL. Understanding what these licenses imply for commercial use, distribution, and modification is important.

Common Licenses for Open-Source Commercial Models

We are seeing more vendors, such as Meta and Databricks, create custom license terms for their open-source LLMs.

What to Do with Those Models (How to Tune or Implement Them)

Once you've identified the open-source LLM that meets your needs, the next step is to integrate and fine-tune it for your specific use case. There are multiple techniques to do so; the most common ones are:

  • Prompt Engineering

  • Fine-Tuning

  • Retrieval Augmented Generation

There are multiple ways to implement each of those techniques; in many cases, you will need to combine them to get the best results.
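To make the Retrieval-Augmented Generation technique concrete, here is a minimal sketch of its two core steps: retrieving relevant documents and assembling a grounded prompt. The keyword-overlap retriever and the prompt template are illustrative assumptions; a production system would use embedding-based similarity search and a template tuned to the target model:

```python
def retrieve(query, documents, top_k=1):
    """Rank documents by word overlap with the query.

    This is a toy retriever for illustration; real RAG systems
    rank by embedding similarity instead of word overlap.
    """
    tokens = lambda s: {w.strip(".,?!") for w in s.lower().split()}
    query_words = tokens(query)
    scored = [(len(query_words & tokens(doc)), doc) for doc in documents]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for score, doc in scored[:top_k] if score > 0]


def build_prompt(query, documents):
    """Assemble a prompt that grounds the model in retrieved context."""
    context = "\n".join(retrieve(query, documents, top_k=2))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"


docs = [
    "Llama 2 is released under Meta's community license.",
    "MMLU measures multi-task language understanding.",
    "GSM8k tests grade-school math reasoning.",
]
print(build_prompt("Which license covers Llama 2?", docs))
```

The resulting prompt string would then be sent to the LLM, which is what lets the model answer from your own data instead of only its training corpus; combining this with prompt engineering (and, where needed, fine-tuning) is the usual path to good results.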

When to tune a model

Access my full guide to customize Foundation Models here:

Risks of Using Plain Open-Source

While open-source LLMs offer numerous benefits, there are also potential risks and pitfalls to be aware of. Using plain open-source models in an enterprise without the right implementation can lead to legal problems, privacy and security concerns, misuse or bias, and difficulties maintaining and updating these models over time.

Open-source tools alone are not sufficient to meet enterprise AI needs. While they drive innovation and are essential building blocks for AI, enterprises need additional capabilities and support to leverage open source effectively in a scalable, secure, and collaborative manner.

Key points to consider:

  • Open source tools are geared towards individual users and lack enterprise-grade features like scalability, connectivity, collaboration, and deployment.

  • Open source contributions often focus on novel features rather than foundational enterprise requirements like security integration, version management, and governance.

  • Enterprises need a comprehensive platform that hardens open-source components, fills functionality gaps, and provides a seamless experience for data scientists, enabling them to focus on value-generating tasks rather than maintenance and configuration.

My friend Luis Arellano wrote about this a few months ago in a guest post. You can read it here:

The open-source landscape for LLMs is ripe with potential, but it must be approached thoughtfully. I hope that today’s edition equipped you with the knowledge to evaluate open-source LLMs, understand licensing implications, and implement these models effectively while mitigating risks.

As you explore this ecosystem, prioritize responsible development practices, collaborate with the community, and drive innovation in ethical AI.

And remember:

We can shape the future of open-source language models, so let's get building.

And that’s all for today. Enjoy the Easter weekend, folks.

Armand 🐰

Whenever you're ready, learn AI with me:

The 15-day Generative AI course: Join my 15-day Generative AI email course, and learn with just 5 minutes a day. You'll receive concise daily lessons focused on practical business applications. It is perfect for quickly learning and applying core AI concepts. 15,000+ Business Professionals are already learning with it.
