Customizing LLM Output: Post-Processing Techniques

How to Make the Most Out of LLM Production Data: Simulated User Feedback

5 Ways to Serve Open Source LLMs (With Code Examples)

A Humanitarian Crisis Situation Report AI Assistant: Exploring LLMOps with Prompt Flow

LLMOps: What It Is, Why It Matters, and How to Implement It

Top Evaluation Metrics for RAG Failures

Exploring mergekit for Model Merge and AutoEval for Model Evaluation

Building an LLMOps Pipeline

Retrieval Augmented Generation (RAG) Inference Engines with LangChain on CPUs

How to Measure the Success of Your RAG-based LLM System