Langfuse

Debug, analyze, and experiment with large language models through tracing, prompt management, evaluation, analytics, and a playground for testing and optimization.
Categories: Large Language Model Development, AI Model Analytics, Experimentation and Debugging Tools

Langfuse is an open-source LLM engineering platform that lets teams debug, analyze, and experiment with their large language model (LLM) projects. It packs a broad set of features for observability, analytics, and experimentation.

Several of its features help you get past the usual hassles of building complicated LLM projects:

  • Tracing: So you can record the full context of LLM invocations, including API calls, prompts, and internal system interactions (see the first sketch after this list).
  • Prompt Management: So you can manage and version prompts, making it easier to deploy and track changes (second sketch below).
  • Evaluation: So you can record and score LLM completions, a key part of judging quality and performance (the first sketch also records a score).
  • Analytics: So you can see LLM cost, latency, and quality to make more informed decisions.
  • Playground: So you can experiment with and test prompts directly in the platform.
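
To make tracing and evaluation concrete, here is a minimal sketch using the Langfuse Python SDK. It assumes a v2-style client with credentials in the standard LANGFUSE_* environment variables; the trace name, model, and score values are illustrative, and the exact interface can differ between SDK versions, so check the current docs.

```python
from langfuse import Langfuse

# Reads LANGFUSE_PUBLIC_KEY, LANGFUSE_SECRET_KEY, and LANGFUSE_HOST
# from the environment by default.
langfuse = Langfuse()

# Record one LLM invocation: a trace with a nested generation.
trace = langfuse.trace(name="qa-trace", user_id="user-123")
trace.generation(
    name="answer-question",
    model="gpt-4o-mini",  # illustrative model name
    input=[{"role": "user", "content": "What is Langfuse?"}],
    output="Langfuse is an open-source LLM engineering platform.",
)

# Attach an evaluation score to the trace, e.g. from a human rater
# or an automated judge.
langfuse.score(trace_id=trace.id, name="quality", value=0.9)

# Events are sent in the background; flush before the process exits.
langfuse.flush()
```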
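Prompt management follows a similar pattern: prompts are versioned in Langfuse and fetched at runtime. A sketch under the same assumptions, with a hypothetical prompt name and template variable:

```python
from langfuse import Langfuse

langfuse = Langfuse()

# Fetch the current production version of a managed prompt.
# "movie-critic" is a hypothetical prompt created in the Langfuse UI,
# e.g. "As a critic, what do you think of {{movie}}?".
prompt = langfuse.get_prompt("movie-critic")

# Fill in the template variables to get the final prompt text.
compiled = prompt.compile(movie="Dune: Part Two")
print(compiled)
```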

Langfuse supports a variety of integrations, including Python and JavaScript SDKs, OpenAI, LangChain, LlamaIndex and more, so it can slot into a wide range of LLM projects and use cases (see the sketch below for the OpenAI integration).
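
For example, the OpenAI integration works as a drop-in replacement for the OpenAI Python client: importing it from langfuse.openai captures each completion as a trace automatically. A sketch, assuming OpenAI and Langfuse credentials are set in the environment and an illustrative model name:

```python
# Drop-in replacement: same API as the openai package, but each call
# is recorded as a Langfuse generation automatically.
from langfuse.openai import openai

completion = openai.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": "Briefly, what is tracing?"}],
)
print(completion.choices[0].message.content)
```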

The company prioritizes security, with certifications including SOC 2 Type II and ISO 27001, and GDPR compliance.

Pricing depends on the needs of your project:

  • Hobby: Free, with all platform features, 50,000 observations per month, and community support.
  • Pro: $59 per month, with unlimited data access, higher observation limits, and dedicated support.
  • Team: Custom pricing for companies, with unlimited ingestion throughput, support SLAs, and additional security controls.

You can also host Langfuse yourself, which makes it an option for projects of any size.

Published on June 13, 2024
