If you're looking for an open-source foundation for building and deploying LLM apps through a graphical interface, and you want the option to self-host on cloud services like AWS and Azure, Flowise is worth a look. It's a low-code tool that lets developers assemble sophisticated LLM apps from more than 100 integrations, including LangChain and LlamaIndex. Flowise can be self-hosted on AWS, Azure and GCP, and offers prebuilt tools for generating product catalogs, describing products, querying SQL databases and more. The interface is designed to be approachable, and with an active community and plenty of documentation, it's a good choice for building and refining AI solutions.
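Once a chatflow is built in the Flowise interface, it can be consumed as a REST endpoint. The sketch below assembles a request against Flowise's documented prediction API; the host, port and chatflow ID are placeholders you'd swap for your own self-hosted instance, so treat this as a sketch rather than a copy-paste recipe.

```python
# Minimal sketch of calling a deployed Flowise chatflow over its REST
# prediction endpoint. Host, port and chatflow ID are placeholders.
import json
from urllib import request

def build_prediction_request(base_url: str, chatflow_id: str, question: str):
    """Assemble the URL and JSON body for a Flowise prediction call."""
    url = f"{base_url}/api/v1/prediction/{chatflow_id}"
    body = json.dumps({"question": question}).encode("utf-8")
    return request.Request(url, data=body,
                           headers={"Content-Type": "application/json"})

req = build_prediction_request("http://localhost:3000",
                               "your-chatflow-id",
                               "Summarize our return policy.")
# With a self-hosted instance running, you would send it with:
# with request.urlopen(req) as resp:
#     print(json.load(resp))
```

Because the chatflow's internals (models, prompts, retrieval steps) live in the visual editor, application code stays this thin regardless of how complex the flow behind the endpoint gets.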
Another strong contender is Dify, which offers a visual Orchestration Studio for building AI apps. Dify provides tools for secure data pipelines, prompt design and model tuning, along with customizable LLM agents and quick chatbot and AI assistant deployment. It can be deployed on-premises for better data security and reliability, and comes in several pricing tiers for individuals, teams and enterprises.
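Apps built in Dify are exposed through a token-authenticated HTTP API, so a deployed chatbot can be queried from any backend. The sketch below follows the shape of Dify's published chat API; the base URL, API key and user identifier are placeholders, and you should check the current Dify docs for the exact fields your app version expects.

```python
# Hedged sketch of calling a Dify chat app from Python. The path and
# Bearer-token auth follow Dify's published API; values are placeholders.
import json
from urllib import request

def build_chat_request(base_url: str, api_key: str, query: str,
                       user: str = "demo-user"):
    """Assemble an authenticated chat request for a Dify app."""
    body = {
        "inputs": {},
        "query": query,
        "response_mode": "blocking",  # or "streaming" for incremental output
        "user": user,                 # end-user identifier for Dify analytics
    }
    return request.Request(
        f"{base_url}/v1/chat-messages",
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

req = build_chat_request("https://api.dify.ai", "app-your-key",
                         "What plans do you offer?")
```

For a self-hosted deployment, only the base URL changes, which is part of what makes the on-premises option low-friction.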
If you want a platform that marries data science and ML workflows, Zerve is a good option. It combines open models, serverless GPUs and your own data to speed up workflows, and provides an integrated environment with notebook and IDE capabilities, fine-grained GPU control and language interoperability. Zerve can be self-hosted on AWS, Azure or GCP instances, giving you control over both data and infrastructure.
Finally, LLMStack has a no-code builder for chaining multiple LLMs together and connecting them to your data and business processes. It supports vector databases for storing embeddings and retrieving relevant context, plus multi-tenancy with granular permission controls. LLMStack can run in the cloud or on-premises, making it a fit for a range of AI application development needs, including chatbots and AI assistants.
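To make the vector-database piece concrete, here is a toy illustration of the retrieval step such a store performs inside an app like the ones LLMStack builds: documents and the query are turned into vectors, then ranked by cosine similarity. The tiny hand-written "embeddings" stand in for the output of a real embedding model; this is not LLMStack's internal code.

```python
# Toy cosine-similarity retrieval, the core operation a vector database
# performs. The 3-dimensional vectors below are stand-ins for real
# embedding-model output (which is typically hundreds of dimensions).
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

docs = {
    "refund policy":  [0.9, 0.1, 0.0],
    "shipping times": [0.1, 0.8, 0.2],
    "api reference":  [0.0, 0.2, 0.9],
}

# Pretend embedding of the query "how do I get my money back?"
query = [0.85, 0.15, 0.05]

ranked = sorted(docs, key=lambda name: cosine(docs[name], query),
                reverse=True)
print(ranked[0])  # best match; a chatbot would feed it into the LLM prompt
```

A production vector database adds approximate-nearest-neighbor indexing so this lookup stays fast across millions of documents, but the ranking idea is the same.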