Can LLMs Synthesize New Knowledge?

My old friend Ryan Casey posed this question recently and wrote up a nice theory with a proposed solution, which he calls Probabilistic Knowledge Synthesis (PKS). The purpose of my post here is to respond to Ryan’s theory and offer my own perspective on whether LLMs can synthesize new knowledge.

First, a quick synopsis of Ryan’s theory. He proposes to combine probabilistic databases with LLMs and graph models to first catalog knowledge statements and then synthesize and evaluate new ones. I’m unfamiliar with probabilistic databases and am far from an expert on graphs, but Ryan’s explanations of them were sufficient for me to draw my own conclusions in light of what I know about LLMs.

My assessment is that Ryan’s design could indeed allow the system to synthesize new knowledge, but below I propose a simpler solution powered entirely by LLMs, with no need for external data structures or reasoning engines.

Synthesizing New Knowledge with Only LLMs

Ryan outlined how probabilistic databases (PDBs) and graphs can be paired with LLMs to synthesize new knowledge. The role of PDBs and graphs in the system is to perform probabilistic reasoning over a corpus of uncertain and interconnected knowledge statements.

I agree that this is a key capability required to synthesize new knowledge in a fuzzy, messy world. But I would argue that LLMs themselves already possess this capability.

Experiment: What Color Is The Sky?

Here’s an example conversation with ChatGPT to prove my point.

https://chatgpt.com/share/66f4d201-6cc0-8008-9bad-74fd49cc44da

ChatGPT has knowledge about what color the sky is for a typical Earth-based user, and responds to the question “what color is the sky” with the answer “blue” (plus some nuance about sunsets and sunrises).

But if I condition the LLM with some context, in this case that I’m a sci-fi character living on a different planet with a thicker atmosphere, then the LLM happily conditions its response. It synthesizes its knowledge about how atmospheric conditions influence the color of the sky into a new statement of knowledge. It even offers multiple possible colors that might appear in different scenarios, reflecting its understanding that the knowledge it has synthesized carries uncertainty (key details needed to be 100% confident about this mystery planet’s sky color are missing).
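To make the setup reproducible outside the chat UI, here is a minimal sketch of the same experiment using the OpenAI Python client. The model name is an illustrative assumption, and the prompt wording differs from the linked session:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Unconditioned: the model falls back on its Earth-based prior.
baseline = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice, not the exact session above
    messages=[{"role": "user", "content": "What color is the sky?"}],
)
print(baseline.choices[0].message.content)  # expect "blue," plus nuance

# Conditioned: added context shifts the knowledge the model reasons over.
conditioned = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "system",
            "content": (
                "The user is a sci-fi character living on a planet "
                "with a much thicker atmosphere than Earth's."
            ),
        },
        {"role": "user", "content": "What color is the sky?"},
    ],
)
print(conditioned.choices[0].message.content)  # expect hedged, non-blue answers
```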

The example above shows that an LLM on its own has the capacity to “perform probabilistic reasoning over a corpus of uncertain and interconnected knowledge statements.” This is already an incredible feat, but it’s not yet enough to realize Ryan’s dream. An LLM can do this out of the box only for knowledge statements it was trained on.

Synthesizing Out of Sample Knowledge

If you want an LLM to synthesize new knowledge for you in some domain that it has not been trained on, such as proprietary company data, here are your options.

  1. Prompting
    • Throw all the context needed into a prompt and ask your question. This solves many problems, but the context window is limited (though large; many models accept 100k+ tokens as of fall 2024), and processing very large prompts is expensive. A minimal prompt-stuffing sketch appears after this list.
    • Problems that require synthesizing new knowledge from a large corpus of data are either impossible or impractical to solve with this simple approach.
  2. RAG
    • Retrieval Augmented Generation helps overcome the context window issue by adding a database and a retrieval step that limits context to only the most relevant information. A bare-bones retrieval sketch appears after this list.
    • But a fully-baked RAG implementation for general knowledge synthesis over highly complex and interrelated knowledge statements probably starts looking quite similar to Ryan’s PKS system.
  3. Fine-tuning
    • LLM fine-tuning has become very accessible and practical thanks to techniques like LoRA and vendors like OpenAI, Predibase, Fireworks AI, and Lamini.
    • With LLM fine-tuning, you can bake your out-of-sample knowledge into the weights of the model, allowing the model to hold the entirety of your knowledge base in its “working memory” when you ask it to synthesize new knowledge for you. A minimal LoRA sketch appears after this list.
    • Continuous fine-tuning is possible as new knowledge statements are established to allow the “knowledge base” to grow.
    • Memorization is also possible when absolute facts are needed, for example with Lamini’s memory fine-tuning: overfit on facts (on the order of 100 epochs), but do so only in a LoRA adapter so the base model doesn’t suffer catastrophic forgetting, then retrieve among millions of such adapters at inference time. If the uncertainty levels of new knowledge statements are known, the number of epochs per statement can be adjusted accordingly.
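To ground option 1, here is a minimal prompt-stuffing sketch. The documents are hypothetical stand-ins for proprietary knowledge statements the base model never saw, and the model name is an illustrative assumption:

```python
from openai import OpenAI

client = OpenAI()

# Hypothetical proprietary knowledge statements (not real data).
documents = [
    "Widget X ships with firmware 2.3 as of Q3.",
    "Firmware 2.3 drops support for the legacy pairing protocol.",
    "Most Widget X customers still run legacy pairing hubs.",
]

# Stuff the entire corpus into the prompt; viable only while it fits
# in the context window and the token cost stays acceptable.
prompt = (
    "Given these internal facts:\n"
    + "\n".join(f"- {d}" for d in documents)
    + "\n\nWhat new risks should we anticipate, and how confident are you?"
)

answer = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice
    messages=[{"role": "user", "content": prompt}],
)
print(answer.choices[0].message.content)
```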
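For option 2, the retrieval step can be as simple as embedding the corpus and the query, then keeping only the nearest statements. This sketch reuses the same hypothetical corpus and assumes an OpenAI embedding model; it is not a production RAG pipeline:

```python
import numpy as np
from openai import OpenAI

client = OpenAI()

# Hypothetical corpus of knowledge statements.
corpus = [
    "Widget X ships with firmware 2.3 as of Q3.",
    "Firmware 2.3 drops support for the legacy pairing protocol.",
    "Most Widget X customers still run legacy pairing hubs.",
    "The Widget X roadmap prioritizes battery life over radio features.",
]

def embed(texts: list[str]) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

corpus_vecs = embed(corpus)

def retrieve(query: str, k: int = 2) -> list[str]:
    # Cosine similarity between the query and every knowledge statement.
    q = embed([query])[0]
    sims = corpus_vecs @ q / (
        np.linalg.norm(corpus_vecs, axis=1) * np.linalg.norm(q)
    )
    return [corpus[i] for i in np.argsort(sims)[::-1][:k]]

# Only the retrieved statements go into the prompt, not the whole corpus.
question = "What risks does the firmware 2.3 rollout create?"
context = "\n".join(f"- {s}" for s in retrieve(question))
answer = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": f"{context}\n\n{question}"}],
)
print(answer.choices[0].message.content)
```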
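And for option 3, here is roughly what attaching a LoRA adapter looks like with Hugging Face’s transformers and peft libraries. The base model and hyperparameters are illustrative assumptions; vendors like those named above hide this setup behind managed APIs:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "meta-llama/Llama-3.1-8B"  # illustrative base model
tokenizer = AutoTokenizer.from_pretrained(base)  # needed to prepare training text
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.bfloat16)

# Low-rank adapters on the attention projections; only a fraction of a
# percent of the base weights become trainable.
config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()

# From here, train on your knowledge statements with a standard
# causal-LM loss (e.g., the Hugging Face Trainer). For memory-style
# fine-tuning, raise the epoch count on facts that must be exact.
```

Because the adapter sits apart from the frozen base weights, it can be retrained or swapped as new knowledge statements are established, which is what makes the continuous fine-tuning in point 3 practical.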

Conclusion

I first showed that LLMs on their own can perform “probabilistic reasoning over a corpus of uncertain and interconnected knowledge statements,” using an example of knowledge statements that were available during the LLM’s training.

I then outlined how this ability can be extended to out-of-sample knowledge statements using LLM fine-tuning. LLM fine-tuning allows large, complex knowledge bases to be baked into the weights of the model.

With the proposed recipe, a “Probabilistic Knowledge Synthesis” system can be built that extends proprietary knowledge bases without the need for probabilistic databases and graphs.

In conclusion, LLMs alone are enough to synthesize new knowledge.

By Jared Rand

Jared Rand is a data scientist specializing in natural language processing. He also has an MBA and is a serial entrepreneur. He is a Principal NLP Data Scientist at Everstream Analytics and founder of Skillenai. Connect with Jared on LinkedIn.
