If you want a low-code foundation for building and integrating large language models (LLMs) quickly, Flowise is a great option. It lets you create your own LLM orchestration flows and AI agents through a graphical interface. With more than 100 integrations, including LangChain and LlamaIndex, you can build self-contained AI agents and run them in air-gapped environments using local LLMs. Flowise can also be deployed on AWS, Azure, or GCP, so you can use it in whatever environment you prefer.
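Flows built in the graphical editor can also be called from code over Flowise's REST API, which is how a flow typically gets wired into a larger application. The sketch below assumes a locally running Flowise instance on the default port; the base URL and chatflow ID are placeholders, and the `/api/v1/prediction/<id>` path should be checked against your installation's API docs.

```python
# Minimal sketch of calling a Flowise chat flow over its REST API.
# Assumptions: Flowise is running at base_url, and chatflow_id is a real
# flow ID copied from the Flowise UI. Both values below are placeholders.
import json
from urllib import request


def build_prediction_request(base_url: str, chatflow_id: str, question: str):
    """Assemble the URL and JSON payload for a Flowise prediction call."""
    url = f"{base_url}/api/v1/prediction/{chatflow_id}"
    payload = {"question": question}
    return url, payload


def ask_flow(base_url: str, chatflow_id: str, question: str) -> str:
    url, payload = build_prediction_request(base_url, chatflow_id, question)
    req = request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    # Network call: requires a running Flowise instance.
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["text"]


if __name__ == "__main__":
    print(ask_flow("http://localhost:3000", "your-chatflow-id", "Hello!"))
```

Because the request-building step is separated from the network call, the payload logic can be tested without a live server.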
Another interesting option is Humanloop, which is designed to oversee and optimize LLM app development. Humanloop addresses some of the pain points of working with LLMs by providing a sandboxed environment where developers and domain experts can collaborate. It includes a collaborative prompt management system with version control, an evaluation suite for debugging, and customization tools for tuning models. It's geared toward product teams and developers who want to increase productivity and collaboration in their AI work.
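To make the idea of version-controlled prompt management concrete, here is a toy sketch of the concept: each save appends an immutable version, and any earlier version can be retrieved for comparison or rollback. This is an illustration of the pattern only and does not mirror Humanloop's actual API.

```python
# Toy illustration of version-controlled prompt management, the pattern
# Humanloop offers as a hosted, collaborative service. This is NOT
# Humanloop's API; it only shows why versioning prompts is useful.
from dataclasses import dataclass, field


@dataclass
class PromptStore:
    # name -> ordered list of prompt texts (version history)
    versions: dict = field(default_factory=dict)

    def save(self, name: str, text: str) -> int:
        """Append a new immutable version and return its 1-based number."""
        history = self.versions.setdefault(name, [])
        history.append(text)
        return len(history)

    def get(self, name: str, version: int = None) -> str:
        """Fetch a specific version, or the latest if none is given."""
        history = self.versions[name]
        return history[-1] if version is None else history[version - 1]


store = PromptStore()
store.save("summarize", "Summarize the text below in one sentence.")
store.save("summarize", "Summarize the text below in three bullet points.")
print(store.get("summarize"))     # latest version
print(store.get("summarize", 1))  # roll back to version 1
```

A hosted system adds the pieces a toy cannot: shared access for non-developers, diffing, and evaluation runs pinned to specific prompt versions.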
If you're on a budget, Predibase is a platform for fine-tuning and serving LLMs. It offers free serverless inference for up to 1 million tokens per day and supports a variety of models, including Llama 2 and Mistral. Predibase uses pay-as-you-go pricing and offers enterprise-grade security through SOC 2 compliance, so it suits both small and large-scale deployments.
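To gauge what that free tier actually covers, a quick back-of-the-envelope calculation helps. The 1 million tokens per day figure comes from the pricing above; the average tokens per request (prompt plus completion) is an illustrative assumption that varies by workload.

```python
# Back-of-the-envelope sizing for a 1M-tokens/day free tier.
# The per-request token averages below are assumptions, not Predibase data.
FREE_TOKENS_PER_DAY = 1_000_000


def requests_per_day(avg_tokens_per_request: int) -> int:
    """How many requests of a given average size fit in the daily quota."""
    return FREE_TOKENS_PER_DAY // avg_tokens_per_request


print(requests_per_day(500))    # 2000 requests/day at ~500 tokens each
print(requests_per_day(2_000))  # 500 requests/day for longer exchanges
```

For prototypes and low-traffic internal tools, a few thousand short requests per day is often enough to avoid any inference bill at all.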
Finally, GPTBots is a no-code platform that lets you build AI-powered business applications by connecting LLMs to enterprise data and workflows. It supports real-time training, multimodal dialogue input, and integration with thousands of tools. GPTBots offers a range of pricing tiers and private encryption services to protect data, making it a good option for businesses that want to use AI to increase productivity.