Updated 7 September 2023
The choice between open-source and proprietary language models depends on several factors, including your specific needs, your budget, and the resources available to you, such as an in-house engineering team and the data your use case requires.
Open-Source LLMs:
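As a minimal, self-contained illustration, the snippet below uses Transformers.js (the @xenova/transformers package) to run a distilled BERT question-answering model locally in JavaScript, with no external API calls: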
```javascript
import { pipeline } from '@xenova/transformers';

let question = 'Who was Jim Henson?';
let context = 'Jim Henson was a nice puppet.';

// Download (and cache) the model weights, then build a question-answering pipeline
let answerer = await pipeline('question-answering', 'Xenova/distilbert-base-uncased-distilled-squad');

// Extract the answer span from the supplied context
let output = await answerer(question, context);
// {
//   "answer": "a nice puppet",
//   "score": 0.5768911502526741
// }
```
Considerations for Open-Source LLMs in E-commerce:
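To make this concrete, here is a hedged sketch of the same open-source pipeline answering a shopper's question from a product description. The product text and the model choice are illustrative assumptions, not a production recommendation:

```javascript
import { pipeline } from '@xenova/transformers';

// Hypothetical product copy; in a real store this would come from your catalog
const productDescription =
  'The AeroLite 2 running shoe weighs 210 grams, has a breathable mesh upper, ' +
  'and is covered by a 30-day free-return policy.';

// Same open-source QA model as above: it runs locally, so there is no per-request API cost
const answerer = await pipeline('question-answering', 'Xenova/distilbert-base-uncased-distilled-squad');

const output = await answerer('What is the return policy?', productDescription);
console.log(output.answer); // expected: something like "a 30-day free-return policy"
```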
Proprietary LLMs:
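The example below (essentially the conversational-retrieval example from the LangChain.js documentation) wires OpenAI's hosted chat model into a retrieval pipeline: a text file is split into 1,000-character chunks, embedded with OpenAI embeddings into an in-memory HNSW vector store, and queried through a chain that keeps chat history in buffer memory. Running it assumes an OpenAI API key is available in the OPENAI_API_KEY environment variable: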
```javascript
import { ChatOpenAI } from "langchain/chat_models/openai";
import { ConversationalRetrievalQAChain } from "langchain/chains";
import { HNSWLib } from "langchain/vectorstores/hnswlib";
import { OpenAIEmbeddings } from "langchain/embeddings/openai";
import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";
import { BufferMemory } from "langchain/memory";
import * as fs from "fs";

export const run = async () => {
  /* Initialize the LLM to use to answer the question */
  const model = new ChatOpenAI({});

  /* Load in the file we want to do question answering over */
  const text = fs.readFileSync("state_of_the_union.txt", "utf8");

  /* Split the text into chunks */
  const textSplitter = new RecursiveCharacterTextSplitter({ chunkSize: 1000 });
  const docs = await textSplitter.createDocuments([text]);

  /* Create the vectorstore */
  const vectorStore = await HNSWLib.fromDocuments(docs, new OpenAIEmbeddings());

  /* Create the chain */
  const chain = ConversationalRetrievalQAChain.fromLLM(
    model,
    vectorStore.asRetriever(),
    {
      memory: new BufferMemory({
        memoryKey: "chat_history", // Must be set to "chat_history"
      }),
    }
  );

  /* Ask it a question */
  const question = "What did the president say about Justice Breyer?";
  const res = await chain.call({ question });
  console.log(res);

  /* Ask it a follow up question */
  const followUpRes = await chain.call({
    question: "Was that nice?",
  });
  console.log(followUpRes);
};
```
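The BufferMemory keyed on "chat_history" is what makes the follow-up work: "Was that nice?" is meaningless on its own, so the chain condenses it against the stored conversation into a standalone question before retrieving. The trade-off relative to the open-source example is worth noting: every call sends your documents and questions to OpenAI's servers and incurs per-token charges.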
Considerations for Proprietary LLMs:
In summary, the choice between open-source and proprietary LLMs depends on your specific circumstances: your requirements, your budget, and the technical resources you can bring to bear.