
Fine-Tuning vs Embeddings – Which Works Best for eCommerce?

In the modern world of online shopping, built on tech stacks such as Symfony and Laravel eCommerce, the role of AI and Natural Language Processing (NLP) is growing rapidly.

When working with NLP, we often encounter the terms fine-tuning a pre-trained model and embeddings. People frequently confuse the two and are unsure which one to use when.

In this blog, we will clarify the difference between them and look at their use cases.

Embeddings: Embeddings are multi-dimensional vector representations of words. These vectors encode the meaning of a word in numerical form. They are the building blocks of Natural Language Processing and are used, directly or indirectly, in every NLP task.

  1. Need for Embeddings: Computers don’t understand text; they only understand numbers. Natural language tasks require a deep understanding of text, so words are converted into numerical vectors that let computers grasp the meaning of human language. Suppose we have a small vocabulary of words: “cat,” “dog,” “fish,” and “bird.” We can represent each of them as a 20-dimensional embedding, as in the sketch below:

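The original example vectors are not reproduced here, so the following minimal Python sketch illustrates the idea; the numbers are randomly generated placeholders, not the output of a real embedding model:

```python
import numpy as np

# Toy illustration only: real embedding values come from a trained model,
# not from a random-number generator as they do here.
vocab = ["cat", "dog", "fish", "bird"]
rng = np.random.default_rng(42)
embeddings = {word: rng.normal(size=20) for word in vocab}

print(embeddings["cat"].shape)  # (20,) -- one 20-dimensional vector per word
print(embeddings["cat"][:5])    # first five of the twenty dimensions
```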
  2. Use Cases: In e-commerce, embeddings can power many tasks such as semantic search, recommendation models, product review classification, and sentiment analysis. These tasks do not require text generation, so the embeddings can be extracted from an LLM and used directly to train a model; fine-tuning a pre-trained model is not required.
  3. How to get embeddings of words: There are pre-trained embedding sets such as GloVe and Word2Vec. You can also get embeddings of your text from open-source language models, as in the example below:

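The original snippet is not shown; one common approach is sketched below using the sentence-transformers library, where the model name all-MiniLM-L6-v2 and the sample product titles are merely example choices:

```python
from sentence_transformers import SentenceTransformer

# Example open-source embedding model; any similar model from the Hugging Face Hub works.
model = SentenceTransformer("all-MiniLM-L6-v2")

products = [
    "wireless bluetooth headphones",
    "noise cancelling over-ear headphones",
    "stainless steel water bottle",
]
embeddings = model.encode(products)  # numpy array with one 384-dimensional vector per product

print(embeddings.shape)  # (3, 384)
```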
    Or you can get embeddings from state-of-the-art proprietary models such as OpenAI’s embedding models, as in the example below:

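Again, this is a sketch rather than the original snippet: it uses the OpenAI Python SDK with the text-embedding-3-small model (an assumed choice) and ends with a cosine-similarity ranking to illustrate the semantic-search use case mentioned above:

```python
import numpy as np
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

products = ["wireless bluetooth headphones", "stainless steel water bottle"]
query = "earphones without wires"

# Embed the product titles and the shopper's query with an OpenAI embedding model.
response = client.embeddings.create(
    model="text-embedding-3-small",
    input=products + [query],
)
vectors = np.array([item.embedding for item in response.data])
product_vecs, query_vec = vectors[:-1], vectors[-1]

# Rank products by cosine similarity to the query -- the core of semantic search.
scores = product_vecs @ query_vec / (
    np.linalg.norm(product_vecs, axis=1) * np.linalg.norm(query_vec)
)
print(products[int(np.argmax(scores))])  # expected: the headphones
```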

Fine-Tuning a Pre-Trained LLM: Fine-tuning a pre-trained LLM is done for complex tasks like text generation. Building such a model from scratch would require a huge amount of data, heavy computational resources, and a dedicated team of Data Scientists and ML Engineers, so starting from an existing model is far more practical.

  1. Need for fine-tuning a pre-trained model: Text generation is a complex task that demands a lot of resources and effort, so fine-tuning a pre-trained model is the best option. For this, we import the pre-trained model and continue training it on our own data, following the model’s official documentation.
  2. Use Cases: Applications such as AI chatbots and conversational agents, where text is actually generated, are the best use cases for fine-tuning a pre-trained model. For example, in the Chatbot Module of Bagisto, we have fine-tuned an OpenAI LLM on our own data to respond to user queries.
  3. How to fine-tune a pre-trained model: Here we will see how to fine-tune an OpenAI model so that it answers from our own data, as in the sketch below:

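The original walkthrough is not reproduced here; the sketch below shows the general flow with the OpenAI Python SDK, where the training file name, base model, and fine-tuned model ID are all placeholders:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# 1. Upload a JSONL file of chat-formatted examples built from your own store data, e.g.
#    {"messages": [{"role": "user", "content": "..."},
#                  {"role": "assistant", "content": "..."}]}
training_file = client.files.create(
    file=open("store_faq.jsonl", "rb"),  # hypothetical file of your own Q&A pairs
    purpose="fine-tune",
)

# 2. Start a fine-tuning job on a tunable base model.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo",
)
print(job.id, job.status)

# 3. When the job finishes, call the resulting model like any other chat model.
completion = client.chat.completions.create(
    model="ft:gpt-3.5-turbo:acme::abc123",  # placeholder; use the name returned by the finished job
    messages=[{"role": "user", "content": "What is your return policy?"}],
)
print(completion.choices[0].message.content)
```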
In summary, the embedding approach is used for simpler tasks like semantic search or recommendation models, where we only need to fetch text similar to the user’s query rather than generate new text as an AI chatbot does. For complex NLP tasks such as text generation, we go with fine-tuning a pre-trained model.