
adding embedding support for CLIP based models for VideoRAGQnA example for v0.9 #538

Merged · 28 commits into main from sri-clip-embedding · Sep 4, 2024
Commits (28)
cb5b2bb
clip embedding support
srinarayan-srikanthan Aug 21, 2024
885af46
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Aug 21, 2024
c793514
test script for embedding
srinarayan-srikanthan Aug 21, 2024
d9efb3a
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Aug 21, 2024
6926c15
fix freeze workflow (#522)
XuehaoSun Aug 21, 2024
a44e936
Fix Dataprep Potential Error in get_file (#540)
letonghan Aug 21, 2024
4127ff6
Support SearchedDoc input type in LLM for No Rerank Pipeline (#541)
letonghan Aug 21, 2024
7e660a1
Add dependency for pdf2image and OCR processing (#421)
ZailiWang Aug 21, 2024
f75267d
Add local_embedding return 768 length to align with chatqna example (…
xuechendi Aug 21, 2024
389fb61
add telemetry doc (#536)
Spycsh Aug 21, 2024
bb01c19
Add video-llama LVM microservice under lvms (#495)
BaoHuiling Aug 21, 2024
7a7f377
Fix the data load issue for structured files (#505)
XuhuiRen Aug 21, 2024
73a9a12
Add finetuning component (#502)
XinyuYe-Intel Aug 21, 2024
8ca6484
add torchvision into requirements (#546)
chensuyue Aug 21, 2024
29265b2
Use Gaudi base images from Dockerhub (#526)
ashahba Aug 22, 2024
c9427ae
Add toxicity detection microservice (#338)
qgao007 Aug 22, 2024
41cc41f
rename script and use 5xxx
BaoHuiling Aug 27, 2024
64c984b
add proxy for build
BaoHuiling Aug 27, 2024
a3bb34c
fixed commit issues
srinarayan-srikanthan Aug 30, 2024
b72d2ba
Fix docarray constraint
srinarayan-srikanthan Sep 1, 2024
36620a8
updated docarray
srinarayan-srikanthan Sep 1, 2024
532741d
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Sep 1, 2024
5dc564b
Merge branch 'main' into sri-clip-embedding
srinarayan-srikanthan Sep 2, 2024
cdda846
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Sep 2, 2024
bc318c0
rm telemetry which cause error in mega
BaoHuiling Sep 3, 2024
6289315
renamed dirs
srinarayan-srikanthan Sep 3, 2024
9878ba0
renamed test
srinarayan-srikanthan Sep 3, 2024
58d83cd
Merge branch 'main' into sri-clip-embedding
srinarayan-srikanthan Sep 3, 2024
1 change: 1 addition & 0 deletions comps/cores/proto/docarray.py
@@ -64,6 +64,7 @@ class EmbedDoc(BaseDoc):
fetch_k: int = 20
lambda_mult: float = 0.5
score_threshold: float = 0.2
constraints: Optional[Union[Dict[str, Any], None]] = None


class EmbedMultimodalDoc(EmbedDoc):
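For context, a minimal sketch (not part of the diff) of how the new optional `constraints` field rides on `EmbedDoc`. The filter value mirrors the syntax emitted by `filtler_dates` in the CLIP embedding service later in this diff; the 512-zero vector and the date are placeholders.

```python
# Sketch: populating the new optional `constraints` field on EmbedDoc.
# The filter syntax matches what the CLIP embedding service below emits.
from comps.cores.proto.docarray import EmbedDoc

doc = EmbedDoc(
    text="what happened last friday",
    embedding=[0.0] * 512,                        # placeholder vector
    constraints={"date": ["==", "2024-08-30"]},   # hypothetical parsed date
)
print(doc.constraints)
```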
1 change: 1 addition & 0 deletions comps/dataprep/milvus/prepare_doc_milvus.py
@@ -133,6 +133,7 @@ def ingest_data_to_milvus(doc_path: DocPath, embedder):
)

content = document_loader(path)

if logflag:
logger.info("[ ingest data ] file content loaded")

52 changes: 52 additions & 0 deletions comps/embeddings/multimodal_clip/README.md
@@ -0,0 +1,52 @@
# Multimodal CLIP Embeddings Microservice

The Multimodal CLIP Embedding Microservice is designed to efficiently convert textual strings and images into vectorized embeddings, facilitating seamless integration into various machine learning and data processing workflows. This service utilizes advanced algorithms to generate high-quality embeddings that capture the semantic essence of the input text and images, making it ideal for applications in multi-modal data processing, information retrieval, and similar fields.

Key Features:

**High Performance**: Optimized for quick and reliable conversion of textual data and image inputs into vector embeddings.

**Scalability**: Built to handle high volumes of requests simultaneously, ensuring robust performance even under heavy loads.

**Ease of Integration**: Provides a simple and intuitive API, allowing for straightforward integration into existing systems and workflows.

**Customizable**: Supports configuration and customization to meet specific use case requirements, including different embedding models and preprocessing techniques.

Users are able to configure and build embedding-related services according to their actual needs.

## 🚀1. Start Microservice with Docker

### 1.1 Build Docker Image

#### Build Langchain Docker

```bash
cd ../../..
docker build -t opea/embedding-multimodal:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/embeddings/multimodal_clip/docker/Dockerfile .
```

### 1.2 Run Docker with Docker Compose

```bash
cd comps/embeddings/multimodal_clip/docker
docker compose -f docker_compose_embedding.yaml up -d
```

## 🚀2. Consume Embedding Service

### 2.1 Check Service Status

```bash
curl http://localhost:6000/v1/health_check \
-X GET \
-H 'Content-Type: application/json'
```

### 2.2 Consume Embedding Service

```bash
curl http://localhost:6000/v1/embeddings \
-X POST -d '{"text":"Sample text"}' \
-H 'Content-Type: application/json'

```
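For reference, a minimal Python client sketch (not part of this PR) that exercises the endpoint. The host and port follow the compose file above; the response fields follow the `EmbedDoc` schema extended in `comps/cores/proto/docarray.py`, the 512-dimensional vector assumes the default `openai/clip-vit-base-patch32` checkpoint, and the constraint value shown is hypothetical.

```python
# Minimal client sketch for the CLIP embedding microservice.
import requests

resp = requests.post(
    "http://localhost:6000/v1/embeddings",
    json={"text": "funny dog clips from last friday"},
    timeout=30,
)
doc = resp.json()
print(len(doc["embedding"]))   # 512 for clip-vit-base-patch32
print(doc["constraints"])      # e.g. {"date": ["==", "2024-08-30"]} (hypothetical)
```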
2 changes: 2 additions & 0 deletions comps/embeddings/multimodal_clip/__init__.py
@@ -0,0 +1,2 @@
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
29 changes: 29 additions & 0 deletions comps/embeddings/multimodal_clip/docker/Dockerfile
@@ -0,0 +1,29 @@
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0

FROM langchain/langchain:latest

ARG ARCH="cpu"

RUN apt-get update -y && apt-get install -y --no-install-recommends --fix-missing \
libgl1-mesa-glx \
libjemalloc-dev \
vim

RUN useradd -m -s /bin/bash user && \
mkdir -p /home/user && \
chown -R user /home/user/

USER user

COPY comps /home/user/comps

RUN pip install --no-cache-dir --upgrade pip && \
if [ ${ARCH} = "cpu" ]; then pip install torch torchvision --index-url https://download.pytorch.org/whl/cpu; fi && \
pip install --no-cache-dir -r /home/user/comps/embeddings/multimodal_clip/requirements.txt

ENV PYTHONPATH=$PYTHONPATH:/home/user

WORKDIR /home/user/comps/embeddings/multimodal_clip

ENTRYPOINT ["python", "embedding_multimodal.py"]
22 changes: 22 additions & 0 deletions comps/embeddings/multimodal_clip/docker/docker_compose_embedding.yaml
@@ -0,0 +1,22 @@
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0

version: "3.8"

services:
embedding:
image: opea/embedding-multimodal:latest
container_name: embedding-multimodal-server
ports:
- "6000:6000"
ipc: host
environment:
no_proxy: ${no_proxy}
http_proxy: ${http_proxy}
https_proxy: ${https_proxy}
LANGCHAIN_API_KEY: ${LANGCHAIN_API_KEY}
restart: unless-stopped

networks:
default:
driver: bridge
86 changes: 86 additions & 0 deletions comps/embeddings/multimodal_clip/embedding_multimodal.py
@@ -0,0 +1,86 @@
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0

import datetime
import os
import time
from typing import Union

from dateparser.search import search_dates
from embeddings_clip import vCLIP

from comps import (
EmbedDoc,
ServiceType,
TextDoc,
opea_microservices,
register_microservice,
register_statistics,
statistics_dict,
)


def filtler_dates(prompt):

base_date = datetime.datetime.today()
today_date = base_date.date()
dates_found = search_dates(prompt, settings={"PREFER_DATES_FROM": "past", "RELATIVE_BASE": base_date})

if dates_found is not None:
for date_tuple in dates_found:
date_string, parsed_date = date_tuple
date_out = str(parsed_date.date())
time_out = str(parsed_date.time())
hours, minutes, seconds = map(float, time_out.split(":"))
year, month, day_out = map(int, date_out.split("-"))

rounded_seconds = min(round(parsed_date.second + 0.5), 59)
parsed_date = parsed_date.replace(second=rounded_seconds, microsecond=0)

iso_date_time = parsed_date.isoformat()
iso_date_time = str(iso_date_time)

if date_string == "today":
constraints = {"date": ["==", date_out]}
elif date_out != str(today_date) and time_out == "00:00:00": ## exact day (example last friday)
constraints = {"date": ["==", date_out]}
elif (
date_out == str(today_date) and time_out == "00:00:00"
):  ## when search_dates interprets words as dates, the output is today's date + time 00:00:00
constraints = {}
else:  ## Interval of time: last 48 hours, last 2 days, ...
constraints = {"date_time": [">=", {"_date": iso_date_time}]}
return constraints

else:
return {}


@register_microservice(
name="opea_service@embedding_multimodal",
service_type=ServiceType.EMBEDDING,
endpoint="/v1/embeddings",
host="0.0.0.0",
port=6000,
input_datatype=TextDoc,
output_datatype=EmbedDoc,
)
@register_statistics(names=["opea_service@embedding_multimodal"])
def embedding(input: TextDoc) -> EmbedDoc:
start = time.time()

if isinstance(input, TextDoc):
# Handle text input
embed_vector = embeddings.embed_query(input.text).tolist()[0]
res = EmbedDoc(text=input.text, embedding=embed_vector, constraints=filtler_dates(input.text))

else:
raise ValueError("Invalid input type")

statistics_dict["opea_service@embedding_multimodal"].append_latency(time.time() - start, None)
return res


if __name__ == "__main__":
embeddings = vCLIP({"model_name": "openai/clip-vit-base-patch32", "num_frm": 4})
opea_microservices["opea_service@embedding_multimodal"].start()
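To make the date-filter behavior concrete, an illustrative sketch (not in the PR) of the constraint dictionaries `filtler_dates` returns. Importing `embedding_multimodal` pulls in the `comps` package and downloads the CLIP weights, and the concrete dates are hypothetical since they depend on the day the code runs.

```python
# Hypothetical outputs of filtler_dates for three prompt classes; actual
# values depend on today's date. Requires the comps package on PYTHONPATH.
from embedding_multimodal import filtler_dates

print(filtler_dates("what happened last friday"))
# e.g. {"date": ["==", "2024-08-30"]}                          -- exact past day
print(filtler_dates("clips from the last 48 hours"))
# e.g. {"date_time": [">=", {"_date": "2024-09-01T14:30:00"}]} -- time interval
print(filtler_dates("man wearing a red shirt"))
# {}                                                           -- no date found
```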
50 changes: 50 additions & 0 deletions comps/embeddings/multimodal_clip/embeddings_clip.py
@@ -0,0 +1,50 @@
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0

import torch
import torch.nn as nn
from einops import rearrange
from transformers import AutoProcessor, AutoTokenizer, CLIPModel

model_name = "openai/clip-vit-base-patch32"

clip = CLIPModel.from_pretrained(model_name)
processor = AutoProcessor.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)


class vCLIP(nn.Module):
def __init__(self, cfg):
super().__init__()

self.num_frm = cfg["num_frm"]
self.model_name = cfg["model_name"]

def embed_query(self, texts):
"""Input is list of texts."""
text_inputs = tokenizer(texts, padding=True, return_tensors="pt")
text_features = clip.get_text_features(**text_inputs)
return text_features

def get_embedding_length(self):
return len(self.embed_query("sample_text"))

def get_image_embeddings(self, images):
"""Input is list of images."""
image_inputs = processor(images=images, return_tensors="pt")
image_features = clip.get_image_features(**image_inputs)
return image_features

def get_video_embeddings(self, frames_batch):
"""Input is list of list of frames in video."""
self.batch_size = len(frames_batch)
vid_embs = []
for frames in frames_batch:
frame_embeddings = self.get_image_embeddings(frames)
frame_embeddings = rearrange(frame_embeddings, "(b n) d -> b n d", b=len(frames_batch))
# Normalize, mean aggregate and return normalized video_embeddings
frame_embeddings = frame_embeddings / frame_embeddings.norm(dim=-1, keepdim=True)
video_embeddings = frame_embeddings.mean(dim=1)
video_embeddings = video_embeddings / video_embeddings.norm(dim=-1, keepdim=True)
vid_embs.append(video_embeddings)
return torch.cat(vid_embs, dim=0)
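A usage sketch (not part of this PR) for `vCLIP`, assuming PIL images as frames: per-frame CLIP features are normalized, mean-pooled per video, and re-normalized, so one vector comes back per video.

```python
# vCLIP usage sketch with dummy frames (assumes PIL images as input).
import torch
from PIL import Image

from embeddings_clip import vCLIP

model = vCLIP({"model_name": "openai/clip-vit-base-patch32", "num_frm": 4})

with torch.no_grad():
    text_emb = model.embed_query(["a person riding a bike"])   # (1, 512)
    frames = [Image.new("RGB", (224, 224)) for _ in range(4)]  # dummy frames
    video_emb = model.get_video_embeddings([frames])           # (1, 512)

# Dot-product score; normalize text_emb first for a true cosine similarity.
score = (text_emb @ video_emb.T).item()
```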
14 changes: 14 additions & 0 deletions comps/embeddings/multimodal_clip/requirements.txt
@@ -0,0 +1,14 @@
dateparser
docarray[full]
einops
fastapi
huggingface_hub
langchain
open_clip_torch
opentelemetry-api
opentelemetry-exporter-otlp
opentelemetry-sdk
prometheus-fastapi-instrumentator
sentence_transformers
shortuuid
uvicorn
6 changes: 6 additions & 0 deletions comps/finetuning/README.md
@@ -92,10 +92,12 @@ Assuming a training file `alpaca_data.json` is uploaded, it can be downloaded in

```bash
# upload a training file

curl http://${your_ip}:8015/v1/finetune/upload_training_files -X POST -H "Content-Type: multipart/form-data" -F "files=@./alpaca_data.json"

# create a finetuning job
curl http://${your_ip}:8015/v1/fine_tuning/jobs \

-X POST \
-H "Content-Type: application/json" \
-d '{
@@ -104,18 +106,22 @@ curl http://${your_ip}:8015/v1/fine_tuning/jobs \
}'

# list finetuning jobs

curl http://${your_ip}:8015/v1/fine_tuning/jobs -X GET

# retrieve one finetuning job
curl http://localhost:8015/v1/fine_tuning/jobs/retrieve -X POST -H "Content-Type: application/json" -d '{

"fine_tuning_job_id": ${fine_tuning_job_id}}'

# cancel one finetuning job


curl http://localhost:8015/v1/fine_tuning/jobs/cancel -X POST -H "Content-Type: application/json" -d '{
"fine_tuning_job_id": ${fine_tuning_job_id}}'

# list checkpoints of a finetuning job
curl http://${your_ip}:8015/v1/finetune/list_checkpoints -X POST -H "Content-Type: application/json" -d '{"fine_tuning_job_id": ${fine_tuning_job_id}}'


```
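As a companion to the curl calls above, a hypothetical Python sketch of the job-management flow. The list-response shape (`"data"`/`"id"`) is an assumption based on the OpenAI fine-tuning API that these endpoints mirror (see handlers.py below); only the endpoint paths and request bodies come from the README.

```python
# Hypothetical Python rendering of the job-management curl calls above.
import requests

base = "http://localhost:8015"

# List jobs and pick one id (response shape assumed OpenAI-compatible).
jobs = requests.get(f"{base}/v1/fine_tuning/jobs").json()
job_id = jobs["data"][0]["id"]

# Retrieve, then cancel, that job.
detail = requests.post(
    f"{base}/v1/fine_tuning/jobs/retrieve",
    json={"fine_tuning_job_id": job_id},
).json()
requests.post(
    f"{base}/v1/fine_tuning/jobs/cancel",
    json={"fine_tuning_job_id": job_id},
)
```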
Empty file.
7 changes: 7 additions & 0 deletions comps/finetuning/handlers.py
@@ -53,7 +53,9 @@ def update_job_status(job_id: FineTuningJobID):
status = str(job_status).lower()
# Ray status "stopped" is OpenAI status "cancelled"
status = "cancelled" if status == "stopped" else status

logger.info(f"Status of job {job_id} is '{status}'")

running_finetuning_jobs[job_id].status = status
if status == "finished" or status == "cancelled" or status == "failed":
break
@@ -105,7 +107,9 @@ def handle_create_finetuning_jobs(request: FineTuningJobsRequest, background_tas
)
finetune_config.General.output_dir = os.path.join(JOBS_PATH, job.id)
if os.getenv("DEVICE", ""):

logger.info(f"specific device: {os.getenv('DEVICE')}")

finetune_config.Training.device = os.getenv("DEVICE")

finetune_config_file = f"{JOBS_PATH}/{job.id}.yaml"
@@ -120,6 +124,7 @@ def handle_create_finetuning_jobs(request: FineTuningJobsRequest, background_tas
# Path to the local directory that contains the script.py file
runtime_env={"working_dir": "./"},
)

logger.info(f"Submitted Ray job: {ray_job_id} ...")

running_finetuning_jobs[job.id] = job
@@ -172,7 +177,9 @@ async def save_content_to_local_disk(save_path: str, content):
content = await content.read()
fout.write(content)
except Exception as e:

logger.info(f"Write file failed. Exception: {e}")

raise Exception(status_code=500, detail=f"Write file {save_path} failed. Exception: {e}")


Empty file.
12 changes: 12 additions & 0 deletions comps/finetuning/lanuch.sh
@@ -0,0 +1,12 @@
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0

if [[ -n "$RAY_PORT" ]];then
export RAY_ADDRESS=http://127.0.0.1:$RAY_PORT
ray start --head --port $RAY_PORT
else
export RAY_ADDRESS=http://127.0.0.1:8265
ray start --head
fi

python finetuning_service.py