fix(readme): use new demo server #819

Merged 4 commits on Sep 13, 2022
README.md: 31 changes (25 additions, 6 deletions)
CLIP-as-service is a low-latency high-scalability service for embedding images and text.

## Try it!

An always-online server `api.clip.jina.ai` loaded with `ViT-L/14-336px` is there for you to play & test.
Before you start, make sure you have created an access token from our [console website](https://console.clip.jina.ai/get_started),
or via the CLI as described in [this guide](https://github.com/jina-ai/jina-hubble-sdk#create-a-new-pat):

```bash
jina auth token create <name of PAT> -e <expiration days>
```

Then set the created token in the HTTP request header `Authorization` as `<your access token>`,
or configure it via the `credential` parameter of the client in Python.
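
The two authentication paths above can be sketched as follows. This is an illustrative snippet, not part of the official client: the variable names are ours, and only the `Authorization` header name and the `credential` dict shape come from the instructions above.

```python
# Illustrative sketch only: variable names are ours; the header name
# `Authorization` and the `credential` dict shape come from the README text.
token = "<your access token>"  # placeholder; create a real token first

# HTTP path: the raw token is the value of the `Authorization` header.
headers = {
    "Content-Type": "application/json",
    "Authorization": token,
}

# Python-client path: the same token goes into the `credential` parameter,
# i.e. Client(..., credential={'Authorization': token}).
credential = {"Authorization": token}
```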

⚠️ Our previous demo server `demo-cas.jina.ai` has been sunset and is no longer available after **15th of Sept 2022**.


### Text & image embedding


```bash
curl \
-X POST https://api.clip.jina.ai:8443/post \
-H 'Content-Type: application/json' \
-H 'Authorization: <your access token>' \
-d '{"data":[{"text": "First do it"},
{"text": "then do it right"},
{"text": "then do it better"},
...
```

```python
# pip install clip-client
from clip_client import Client

c = Client('grpcs://demo-cas.jina.ai:2096')
c = Client(
'grpcs://api.clip.jina.ai:2096', credential={'Authorization': '<your access token>'}
)

r = c.encode(
[
...
```

There are four basic visual reasoning skills: object recognition, object counting, ...

```bash
curl \
-X POST https://api.clip.jina.ai:8443/post \
-H 'Content-Type: application/json' \
-H 'Authorization: <your access token>' \
-d '{"data":[{"uri": "https://picsum.photos/id/1/300/300",
"matches": [{"text": "there is a woman in the photo"},
{"text": "there is a man in the photo"}]}],
...
```

```bash
curl \
-X POST https://api.clip.jina.ai:8443/post \
-H 'Content-Type: application/json' \
-H 'Authorization: <your access token>' \
-d '{"data":[{"uri": "https://picsum.photos/id/133/300/300",
"matches": [
{"text": "the blue car is on the left, the red car is on the right"},
...
```

```bash
curl \
-X POST https://api.clip.jina.ai:8443/post \
-H 'Content-Type: application/json' \
-H 'Authorization: <your access token>' \
-d '{"data":[{"uri": "https://picsum.photos/id/102/300/300",
"matches": [{"text": "this is a photo of one berry"},
{"text": "this is a photo of two berries"},
...
```

Fun time! Note, unlike the previous example, here the input is an image and the ...
</table>



### Rank image-text matches via CLIP model

From `0.3.0`, CLIP-as-service adds a new `/rank` endpoint that re-ranks cross-modal matches according to their joint likelihood under the CLIP model. For example, given an image Document with some predefined sentence matches as below:
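A request against `/rank` can be sketched by reusing the document-plus-matches structure from the curl examples above. This is a hedged guess, not taken from official documentation: the `execEndpoint` field name and its `"/rank"` value are our assumptions inferred from the endpoint name in the text.

```python
import json

# Hypothetical /rank request body. The uri/matches structure mirrors the
# curl examples above; `execEndpoint` and its "/rank" value are assumptions.
payload = {
    "data": [
        {
            "uri": "https://picsum.photos/id/1/300/300",
            "matches": [
                {"text": "there is a woman in the photo"},
                {"text": "there is a man in the photo"},
            ],
        }
    ],
    "execEndpoint": "/rank",
}

# Serialize for use as the -d body of a curl call like those above.
body = json.dumps(payload)
```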