Towards LLM explainability: why did my model produce this output?