Fine-tune Llama 2 using QLoRA and Deploy it on Amazon SageMaker with AWS Inferentia2
Source: aws.amazon.com · Posted December 13, 2023
Tags: Amazon SageMaker, AWS Inferentia, generative-ai
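The linked AWS post covers QLoRA fine-tuning of Llama 2 and deployment on SageMaker with Inferentia2. As a rough illustration of the QLoRA setup only (not the code from the linked article), the sketch below loads a 4-bit quantized base model with bitsandbytes and attaches LoRA adapters with peft. The model ID, LoRA rank, and target modules are assumptions for illustration; access to the gated Llama 2 weights on the Hugging Face Hub is also assumed.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "meta-llama/Llama-2-7b-hf"  # assumed example; requires gated access

# 4-bit NF4 quantization config (the "Q" in QLoRA)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)

# Prepare the quantized model for training and attach LoRA adapters
model = prepare_model_for_kbit_training(model)
lora_config = LoraConfig(
    r=64,                                 # illustrative rank
    lora_alpha=16,
    lora_dropout=0.1,
    target_modules=["q_proj", "v_proj"],  # assumed subset of attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the small adapter weights are trainable

From here, training would typically run with a standard Hugging Face Trainer (or SFTTrainer) on a SageMaker training job, with the merged or adapter weights later compiled for Inferentia2 serving; see the linked article for the end-to-end workflow.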