Ollama

Access a diverse library of large language models, customize and create your own, and deploy across macOS, Linux, and Windows with GPU acceleration.
Language Model Integration · AI Development Tools · Cross-Platform Development

Ollama lets you run large language models such as Llama 3, Phi 3, Mistral, Gemma and several others on your own machine. You can download, run and customize them on macOS, Linux and Windows (in preview). The project offers a simple way to install and work with a variety of language models, which should be useful to developers and anyone else interested in AI.
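To give a concrete sense of what running a model locally looks like, the sketch below queries a local Ollama server over its REST API, which by default listens on port 11434. It assumes Ollama is installed and running and that a model named llama3 (used here only as an example) has already been downloaded.

```python
import requests

# Ask a locally running Ollama server (default: http://localhost:11434)
# to generate a completion. Assumes the server is running and the
# "llama3" model has already been pulled.
response = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3",   # example model name; any pulled model works
        "prompt": "Why is the sky blue?",
        "stream": False,     # return one JSON object instead of a stream
    },
    timeout=120,
)
response.raise_for_status()
print(response.json()["response"])  # the generated text
```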

Ollama offers a variety of models, including uncensored versions of popular models like Llama 2 and Mistral, and models tuned for specific tasks like generating code. The software also supports models from prominent developers like Meta, Google and Microsoft.

Among Ollama's features are:

  • Model Library: A collection of large language models, including Llama 3, Phi 3, Mistral, Gemma and others.
  • Customization: Users can modify and create their own models.
  • Cross-Platform: Works on macOS, Linux and Windows (preview).
  • GPU Acceleration: Supports AMD graphics cards in preview on Windows and Linux.
  • Initial OpenAI Compatibility: Enables use of existing tooling written for OpenAI with local models (see the sketch after this list).
  • Python and JavaScript Libraries: First versions of official libraries for Python and JavaScript/TypeScript integration.
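Because of the OpenAI-compatible endpoint, existing code written against the OpenAI client can often be pointed at a local Ollama instance with little more than a base-URL change. A minimal sketch, assuming the openai Python package, a running local server and a pulled llama3 model (example name); the API key is a placeholder required by the client library but not checked by Ollama:

```python
from openai import OpenAI

# Point the standard OpenAI client at Ollama's OpenAI-compatible endpoint.
# The API key is required by the client library but ignored by Ollama.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

completion = client.chat.completions.create(
    model="llama3",  # example model name; use any model you have pulled
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain GPU acceleration in one sentence."},
    ],
)
print(completion.choices[0].message.content)
```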

Ollama is useful for anyone who wants to use large language models in their projects, including developers, researchers and anyone curious enough to try out AI for themselves. You can get started quickly, for example by installing Ollama with a single command on Linux. The project is entirely open source, which encourages community involvement and contributions.
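For projects written in Python, the first-party library mentioned above wraps the same local API. A minimal sketch, assuming the ollama package has been installed (pip install ollama), the server is running, and a model is available locally:

```python
import ollama

# Chat with a local model through the official Python library.
# Assumes a running Ollama server and that the "llama3" model
# (an example name) has already been pulled.
reply = ollama.chat(
    model="llama3",
    messages=[{"role": "user", "content": "Suggest a name for a chess engine."}],
)
print(reply["message"]["content"])
```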

You can check out Ollama and download it at ollama.ai.

Published on June 14, 2024
