From 6a4080ba92c9529a6674443f80d3543ceeca98a5 Mon Sep 17 00:00:00 2001
From: Ying Hu
Date: Mon, 19 Aug 2024 10:50:16 +0800
Subject: [PATCH 1/2] Update README.md

According to https://github.com/opea-project/GenAIExamples/issues/338:
motivation paragraph 2 is more general and should move up to become
paragraph 1, while the original paragraph 1 gets too specific about legal
documents prematurely. Remove the original paragraph 1, as it is not
related.
---
 DocSum/README.md | 5 +----
 1 file changed, 1 insertion(+), 4 deletions(-)

diff --git a/DocSum/README.md b/DocSum/README.md
index 2372df8fc9..32905908c5 100644
--- a/DocSum/README.md
+++ b/DocSum/README.md
@@ -1,8 +1,5 @@
 # Document Summarization Application
-
-In a world where data, information, and legal complexities are prevalent, the volume of legal documents is growing rapidly. Law firms, legal professionals, and businesses are dealing with an ever-increasing number of legal texts, including contracts, court rulings, statutes, and regulations. These documents contain important insights, but understanding them can be overwhelming. This is where the demand for legal document summarization comes in.
-
-Large Language Models (LLMs) have revolutionized the way we interact with text. These models can be used to create summaries of news articles, research papers, technical documents, and other types of text. Suppose you have a set of documents (PDFs, Notion pages, customer questions, etc.) and you want to summarize the content. In this example use case, we utilize LangChain to implement summarization strategies and facilitate LLM inference using Text Generation Inference on Intel Xeon and Gaudi2 processors.
+Large Language Models (LLMs) have revolutionized the way we interact with text. These models can be used to create summaries of news articles, research papers, technical documents, legal documents and other types of text. Suppose you have a set of documents (PDFs, Notion pages, customer questions, etc.) and you want to summarize the content. In this example use case, we utilize LangChain to implement summarization strategies and facilitate LLM inference using Text Generation Inference on Intel Xeon and Gaudi2 processors.
 
 The architecture for document summarization will be illustrated/described below:
 

From 3430d01c0ac6f818d66857dd3becff1f95afb76e Mon Sep 17 00:00:00 2001
From: "pre-commit-ci[bot]" <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Date: Mon, 19 Aug 2024 02:51:40 +0000
Subject: [PATCH 2/2] [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci
---
 DocSum/README.md | 1 +
 1 file changed, 1 insertion(+)

diff --git a/DocSum/README.md b/DocSum/README.md
index 32905908c5..c032a8790e 100644
--- a/DocSum/README.md
+++ b/DocSum/README.md
@@ -1,4 +1,5 @@
 # Document Summarization Application
+
 Large Language Models (LLMs) have revolutionized the way we interact with text. These models can be used to create summaries of news articles, research papers, technical documents, legal documents and other types of text. Suppose you have a set of documents (PDFs, Notion pages, customer questions, etc.) and you want to summarize the content. In this example use case, we utilize LangChain to implement summarization strategies and facilitate LLM inference using Text Generation Inference on Intel Xeon and Gaudi2 processors.
 
 The architecture for document summarization will be illustrated/described below: