Disclaimer: This project is released only on GitHub, under the MIT License, free of charge, for open-source learning purposes.

More features: chatgpt-web-plus
- [ChatGPT Web](#chatgpt-web)
	- [Introduction](#introduction)
	- [Roadmap](#roadmap)
	- [Prerequisites](#prerequisites)
		- [Node](#node)
		- [PNPM](#pnpm)
		- [Fill in the key](#fill-in-the-key)
	- [Install dependencies](#install-dependencies)
		- [Backend](#backend)
		- [Frontend](#frontend)
	- [Run in a test environment](#run-in-a-test-environment)
		- [Backend service](#backend-service)
		- [Frontend page](#frontend-page)
	- [Environment variables](#environment-variables)
	- [Packaging](#packaging)
		- [Using Docker](#using-docker)
			- [Docker parameter example](#docker-parameter-example)
			- [Docker build & run](#docker-build--run)
			- [Docker compose](#docker-compose)
			- [Prevent crawlers from crawling](#prevent-crawlers-from-crawling)
		- [Deploy with Railway](#deploy-with-railway)
			- [Railway environment variables](#railway-environment-variables)
		- [Manual packaging](#manual-packaging)
			- [Backend service](#backend-service-1)
			- [Frontend page](#frontend-page-1)
	- [FAQ](#faq)
	- [Contributing](#contributing)
	- [Sponsorship](#sponsorship)
	- [License](#license)
Supports dual models, providing two unofficial ChatGPT API methods:

Method | Free? | Robust? | Quality? |
---|---|---|---|
`ChatGPTAPI` | ❌ No | ✅ Yes | ✅️ Real ChatGPT models + GPT-4 |
`ChatGPTUnofficialProxyAPI` | ✅ Yes | ❌ No | ✅ ChatGPT webapp |

Comparison:

- `ChatGPTAPI` uses `gpt-3.5-turbo` to call ChatGPT through the official OpenAI API
- `ChatGPTUnofficialProxyAPI` uses an unofficial proxy server to access ChatGPT's backend API, bypassing Cloudflare (depends on third-party servers and has a rate limit)
Warning:

- You should prefer the `API` method.
- When using the `API` method, if the network is not reachable, your country is blocked and you need to build your own proxy. Never use someone else's public proxy; it is dangerous.
- When using the `accessToken` method, the reverse proxy exposes your access token to a third party. This should not have any adverse effect, but please weigh the risk before using this method.
- When using `accessToken`, a proxy is used whether your machine is domestic or foreign. The default proxy is pengzhile's `https://ai.fakeopen.com/api/conversation`, which is not a backdoor or monitoring; unless you are able to pass the Cloudflare verification yourself, please be aware of this before using it. Community proxy (note: only these two are recommended; for other third-party sources, please judge for yourself).
- When publishing the project to the public network, you should set the `AUTH_SECRET_KEY` variable to require a password for access, and you should also modify the `title` in `index.html` to prevent it from being found by keyword search.
Switching method:

- Enter the `service/.env.example` file and copy its contents to a `service/.env` file
- To use the `OpenAI API Key`, fill in the `OPENAI_API_KEY` field (get apiKey)
- To use the `Web API`, fill in the `OPENAI_ACCESS_TOKEN` field (get accessToken)
- When both exist, `OPENAI_API_KEY` takes precedence
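The precedence rule above can be sketched as a small shell check. The variable names come from `service/.env`; the branching logic is an illustration of the documented behavior, not the project's actual code:

```shell
# Illustrative only: mirrors the documented precedence, not the service's real code.
OPENAI_API_KEY="sk-test"        # pretend both credentials are configured
OPENAI_ACCESS_TOKEN="tok-test"

if [ -n "$OPENAI_API_KEY" ]; then
  mode="ChatGPTAPI"             # the official OpenAI API wins when both are set
elif [ -n "$OPENAI_ACCESS_TOKEN" ]; then
  mode="ChatGPTUnofficialProxyAPI"
else
  mode="none"
fi
echo "using $mode"
```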
Environment variables:

For all parameter variables, check [here](#environment-variables) or see `/service/.env.example`.
- [✓] Dual model
- [✓] Multi-session storage and context logic
- [✓] Formatting and beautification of message types such as code
- [✓] Access control
- [✓] Data import and export
- [✓] Save messages as local images
- [✓] Multilingual interface
- [✓] Interface themes
- [✗] More...
### Node

`node` requires version `^16 || ^18 || ^19` (`node >= 14` needs the fetch polyfill installed); use nvm to manage multiple local `node` versions.

```shell
node -v
```
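A quick way to check whether an installed version falls in the supported range. The parsing below is a convenience sketch; the `16|18|19` list comes from the requirement above:

```shell
# Extract the major version from `node -v`-style output and compare.
version="v18.16.0"              # substitute the real output of `node -v`
major=${version#v}              # strip the leading "v"
major=${major%%.*}              # keep only the major number

case "$major" in
  16|18|19) status="supported" ;;
  *)        status="unsupported" ;;
esac
echo "node $version is $status"
```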
### PNPM

If you don't have `pnpm` installed:

```shell
npm install pnpm -g
```
Get an `OpenAI API Key` or `accessToken` and fill it into the local environment variables:

```
# service/.env file

# OpenAI API Key - https://platform.openai.com/overview
OPENAI_API_KEY=

# change this to an `accessToken` extracted from the ChatGPT site's `https://chat.openai.com/api/auth/session` response
OPENAI_ACCESS_TOKEN=
```
To reduce the learning burden for back-end developers, the front end does not use `workspace` mode but lives in its own folder. If you only need to do secondary development on the front-end pages, just delete the `service` folder.
Go to the `/service` folder and run the following command:

```shell
pnpm install
```
Run the following command in the root directory:

```shell
pnpm bootstrap
```
Go to the `/service` folder and run the following command:

```shell
pnpm start
```
Run the following command in the root directory:

```shell
pnpm dev
```
`API` available:

- `OPENAI_API_KEY` and `OPENAI_ACCESS_TOKEN`: choose one
- `OPENAI_API_MODEL`: sets the model, optional, default: `gpt-3.5-turbo`
- `OPENAI_API_BASE_URL`: sets the API address, optional, default: `https://api.openai.com`
- `OPENAI_API_DISABLE_DEBUG`: disables the API debug log, optional, default: empty (not disabled)

`ACCESS_TOKEN` available:

- `OPENAI_ACCESS_TOKEN` and `OPENAI_API_KEY`: choose one; when both exist, `OPENAI_API_KEY` takes precedence
- `API_REVERSE_PROXY`: sets the reverse proxy, optional, default: `https://ai.fakeopen.com/api/conversation`, [Community](https://github.com/transitive-bullshit/chatgpt-api#reverse-proxy) (note: only these two are recommended; for other third-party sources, please judge for yourself)
General:

- `AUTH_SECRET_KEY`: access authorization key, optional
- `MAX_REQUEST_PER_HOUR`: maximum number of requests per hour, optional, unlimited by default
- `TIMEOUT_MS`: timeout in milliseconds, optional
- `SOCKS_PROXY_HOST`: works with `SOCKS_PROXY_PORT`, optional
- `SOCKS_PROXY_PORT`: works with `SOCKS_PROXY_HOST`, optional
- `HTTPS_PROXY`: supports `http`, `https`, `socks5`, optional
- `ALL_PROXY`: supports `http`, `https`, `socks5`, optional
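As an example, a `service/.env` that routes traffic through a local proxy might look like this (the host and port values are placeholders, not project defaults):

```
# service/.env — proxy settings (placeholder values)
SOCKS_PROXY_HOST=127.0.0.1
SOCKS_PROXY_PORT=7890

# or, alternatively, an HTTP(S)/SOCKS5 proxy URL
# HTTPS_PROXY=http://127.0.0.1:7890
# ALL_PROXY=socks5://127.0.0.1:7890
```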
```shell
docker build -t chatgpt-web .

# run in the foreground
docker run --name chatgpt-web --rm -it -p 127.0.0.1:3002:3002 --env OPENAI_API_KEY=your_api_key chatgpt-web

# run in the background
docker run --name chatgpt-web -d -p 127.0.0.1:3002:3002 --env OPENAI_API_KEY=your_api_key chatgpt-web

# access address
# http://localhost:3002/
```
```yml
version: '3'

services:
  app:
    image: chenzhaoyu94/chatgpt-web # always use latest; re-pull the image when updating
    ports:
      - 127.0.0.1:3002:3002
    environment:
      # choose one of the two
      OPENAI_API_KEY: sk-xxx
      # choose one of the two
      OPENAI_ACCESS_TOKEN: xxx
      # API base URL, optional, effective when OPENAI_API_KEY is set
      OPENAI_API_BASE_URL: xxx
      # API model, optional, effective when OPENAI_API_KEY is set, https://platform.openai.com/docs/models
      # gpt-4, gpt-4-0314, gpt-4-32k, gpt-4-32k-0314, gpt-3.5-turbo, gpt-3.5-turbo-0301, text-davinci-003, text-davinci-002, code-davinci-002
      OPENAI_API_MODEL: xxx
      # reverse proxy, optional
      API_REVERSE_PROXY: xxx
      # access permission key, optional
      AUTH_SECRET_KEY: xxx
      # maximum number of requests per hour, optional, unlimited by default
      MAX_REQUEST_PER_HOUR: 0
      # timeout in milliseconds, optional
      TIMEOUT_MS: 60000
      # SOCKS proxy, optional, takes effect together with SOCKS_PROXY_PORT
      SOCKS_PROXY_HOST: xxx
      # SOCKS proxy port, optional, takes effect together with SOCKS_PROXY_HOST
      SOCKS_PROXY_PORT: xxx
      # HTTPS proxy, optional, supports http, https, socks5
      HTTPS_PROXY: http://xxx:7890
```
- `OPENAI_API_BASE_URL`: optional, effective when `OPENAI_API_KEY` is set
- `OPENAI_API_MODEL`: optional, effective when `OPENAI_API_KEY` is set
nginx

Add the following configuration to your nginx configuration file; you can refer to the `docker-compose/nginx/nginx.conf` file for how to add the anti-crawler rule:

```
# Prevent crawlers from crawling
if ($http_user_agent ~* "360Spider|JikeSpider|Spider|spider|bot|Bot|2345Explorer|curl|wget|webZIP|qihoobot|Baiduspider|Googlebot|Googlebot-Mobile|Googlebot-Image|Mediapartners-Google|Adsbot-Google|Feedfetcher-Google|Yahoo! Slurp|Yahoo! Slurp China|YoudaoBot|Sosospider|Sogou spider|Sogou web spider|MSNBot|ia_archiver|Tomato Bot|NSPlayer|bingbot")
{
    return 403;
}
```
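You can sanity-check the idea behind the rule with `grep -Ei` (a case-insensitive extended regex, like nginx's `~*` operator). The pattern below is a shortened subset of the full one above, used only for illustration:

```shell
# Shortened subset of the nginx user-agent pattern, for illustration only.
ua_pattern="Spider|spider|bot|Bot|curl|wget|Googlebot|bingbot"

check() {
  if printf '%s' "$1" | grep -Eiq "$ua_pattern"; then
    echo blocked
  else
    echo allowed
  fi
}

check "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
check "Mozilla/5.0 (Windows NT 10.0; Win64; x64) Gecko/20100101 Firefox/115.0"
```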
env variable name | required | remark |
---|---|---|
`PORT` | required | default `3002` |
`AUTH_SECRET_KEY` | optional | access key |
`MAX_REQUEST_PER_HOUR` | optional | maximum number of requests per hour, unlimited by default |
`TIMEOUT_MS` | optional | timeout, in milliseconds |
`OPENAI_API_KEY` | `OpenAI API` (pick one) | `apiKey` required to use `OpenAI API` (get apiKey) |
`OPENAI_ACCESS_TOKEN` | `Web API` (pick one) | `accessToken` required to use `Web API` (get accessToken) |
`OPENAI_API_BASE_URL` | optional, `OpenAI API` only | API base URL |
`OPENAI_API_MODEL` | optional, `OpenAI API` only | API model |
`API_REVERSE_PROXY` | optional, `Web API` only | `Web API` reverse proxy address (details) |
`SOCKS_PROXY_HOST` | optional, takes effect with `SOCKS_PROXY_PORT` | SOCKS proxy |
`SOCKS_PROXY_PORT` | optional, takes effect with `SOCKS_PROXY_HOST` | SOCKS proxy port |
`SOCKS_PROXY_USERNAME` | optional, takes effect with `SOCKS_PROXY_HOST` | SOCKS proxy username |
`SOCKS_PROXY_PASSWORD` | optional, takes effect with `SOCKS_PROXY_HOST` | SOCKS proxy password |
`HTTPS_PROXY` | optional | HTTPS proxy, supports http, https, socks5 |
`ALL_PROXY` | optional | all-purpose proxy, supports http, https, socks5 |
Note: modifying environment variables on `Railway` will restart the deployment.
If you don't need the `node` backend of this project, you can skip the following steps.

Copy the `service` folder to a server that has a `node` service environment.

```shell
# install
pnpm install

# build
pnpm build

# run
pnpm prod
```

PS: you can also run `pnpm start` directly on the server without building.
- Modify `VITE_GLOB_API_URL` in the `.env` file in the root directory to your actual backend API address
- Run the following command in the root directory, then copy the files in the `dist` folder to the root directory of your web server

```shell
pnpm build
```
Q: Why do `Git` commits always report an error?

A: Because commit messages are validated; please follow the Commit Guide.
Q: If I only use the front-end pages, where do I change the request endpoint?

A: The `VITE_GLOB_API_URL` field in the `.env` file in the root directory.
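For example, a root-directory `.env` pointing the front end at a self-hosted backend could look like this (the URL is a placeholder; use your actual backend address):

```
# .env in the project root (placeholder URL)
VITE_GLOB_API_URL=http://your-server:3002/api
```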
Q: Why do errors appear everywhere when saving a file?

A: For `vscode`, please install the plugins recommended by the project, or manually install the `Eslint` plugin.
Q: Why is there no typewriter effect on the front end?

A: One possible reason is that when the Nginx reverse proxy has buffering enabled, Nginx tries to buffer a certain amount of data from the backend before sending it to the browser. Try adding `proxy_buffering off;` after the reverse-proxy parameters, then reload Nginx. Other web servers can be configured similarly.
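A minimal sketch of the fix, assuming a typical nginx reverse-proxy location block (the upstream address is a placeholder):

```
location / {
    proxy_pass http://127.0.0.1:3002;   # placeholder upstream
    proxy_buffering off;                # let streamed responses reach the browser immediately
}
```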
Please read the Contributing Guidelines before contributing.

Thanks to everyone who has contributed!

If you find this project helpful, and circumstances permit, you can give me a little support. In short, thank you very much for your support~
MIT © ChenZhaoYu