docs/design: add a proposal for full charsets support #27325
Conversation
[REVIEW NOTIFICATION] This pull request has been approved by:
To complete the pull request process, please ask the reviewers in the list to review. The full list of commands accepted by this bot can be found here. Reviewers can indicate their review by submitting an approval review.
The check_dev_2 test failed, created an issue: #27327
docs/design/2021-08-18-charsets.md
Outdated
- Supports the use of `CAST`, `CONVERT`, and other functions to convert gbk characters.
- Supports SQL statements such as `SET CHARACTER SET GBK`, `SET NAMES GBK`, `SHOW CHARSET`, etc.
- Supports comparison between strings in the gbk character set, as well as comparison between gbk strings and strings in other character sets.
- There is no mapping between (U+D800, U+DFFF) and gbk. Attempting to convert code points in this range will return "?".
If this is talking about converting the surrogate code points from UTF8MB4 to GBK, I don't know why they need to be singled out here. There are plenty of real characters that are valid in UTF8MB4 but invalid in GBK, e.g.
select
column_0 utf8mb4,
convert(column_0 using gbk) gbk,
convert(column_0 using gb18030) gb18030
from (values row('🤔'), row('ς'), row('σ')) _;
+---------+------+---------+
| utf8mb4 | gbk | gb18030 |
+---------+------+---------+
| 🤔 | ? | 🤔 |
| ς | ? | ς |
| σ | σ | σ |
+---------+------+---------+
3 rows in set (0.00 sec)
This description item refers to the gb18030 character set supported by MySQL:
https://dev.mysql.com/doc/refman/8.0/en/charset-gb18030.html#:~:text=There%20is%20no%20mapping%20between%20(U%2BD800%2C%20U%2BDFFF)%20and%20GB18030.%20Attempted%20conversion%20of%20code%20points%20in%20this%20range%20returns%20%27%3F%27.
GB18030 is not GBK though
Yes, I found that GB18030 has a special note about this range: https://en.wikipedia.org/wiki/GB_18030#:~:text=0080%20%E2%80%93%20FFFF%20except%20D800%20%E2%80%93%20DFFF%5B.
Done.
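(For reference, a minimal sketch of the statements the bullet list in this file covers, assuming a GBK-enabled MySQL-compatible session; the literal values are illustrative only, and the '?' result for an unmappable character follows the behavior described in the document and shown in the example output above.)

```sql
-- Explicit conversion with CONVERT / CAST.
SELECT CONVERT('中文' USING gbk)              AS via_convert,
       CAST('中文' AS CHAR CHARACTER SET gbk) AS via_cast;

-- A character with no GBK mapping is expected to come back as '?'.
SELECT CONVERT('🤔' USING gbk);

-- Session-level statements mentioned in the proposal.
SET NAMES gbk;
SET CHARACTER SET gbk;
SHOW CHARSET LIKE 'gbk';
```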
docs/design/2021-08-18-charsets.md
Outdated
- TiKV
  - Coprocessor-related builtin functions need to be processed, and collation-related functions need to be implemented.
- TiCDC
  - Data needs to be converted to the gbk character set encoding when replicating transactions to the downstream or when outputting data according to the [TiCDC Open Protocol](https://docs.pingcap.com/zh/tidb/dev/ticdc-open-protocol).
The low-level storage of data is always utf8mb4, so when generating SQL it is OK to just always write `insert into x values (_utf8mb4'文字');`.
For OpenProtocol it is up for debate, as JSON and Avro support Unicode strings only (utf8mb4).
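(To make the inline example above concrete, a sketch assuming a hypothetical downstream table x with a GBK column; the point is that a literal carrying the utf8mb4 introducer is converted to the column's character set by the server on insert.)

```sql
-- Hypothetical downstream table with a GBK column.
CREATE TABLE x (c VARCHAR(16) CHARACTER SET gbk);

-- The generated SQL can always carry the value as utf8mb4; the server
-- converts the literal to the column's character set when storing it.
INSERT INTO x VALUES (_utf8mb4'文字');

-- Stored bytes are GBK; converting back to utf8mb4 recovers the same text.
SELECT HEX(c) AS gbk_bytes, CONVERT(c USING utf8mb4) AS txt FROM x;
```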
So we don't need any special treatment in TiCDC?
IMO no. But someone from the TiCDC team and/or business group should clarify if they want UTF-8-encoded data or native-charset-encoded data downstream for OpenProtocol.
The TiCDC group will discuss this problem, and @overvenus will help reply later.
I found Maxwell supports GBK (encoded in JSON) by outputting charset info and UTF-8 data. https://github.com/zendesk/maxwell/blob/master/src/test/resources/sql/json/test_charsets
> But someone from the TiCDC team and/or business group should clarify if they want UTF-8-encoded data or native-charset-encoded data downstream for OpenProtocol.
@leoppro What do you think? Could you gather requirements from customers/business?
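(For concreteness, a sketch of what the two options mean at the byte level; the hex values differ because utf8mb4 and gbk encode the same characters with different byte sequences.)

```sql
-- The same text as utf8mb4 bytes (what a JSON-based protocol would carry)
-- versus native GBK bytes (what a native-charset downstream would expect).
SELECT HEX(CONVERT('文字' USING utf8mb4)) AS utf8mb4_bytes,
       HEX(CONVERT('文字' USING gbk))     AS gbk_bytes;
```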
@overvenus the `gbk`, `big5` and `ujis` in the test are just column names?
Current discussion result: GBK support is not required for all JSON-based protocols, but the Canal protocol may need GBK support (@leoppro will help explain this after investigating).
According to the current conclusion, TiCDC may need special support for OpenProtocol, and the other functions basically need no special handling.
The third stage
- Basically supports all string-related functions already supported by TiDB.
- Supports the use of the ALTER statement to modify the charset of a column.
Does this include modifying columns that are part of an index?
It needs to be included.
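(A sketch of the kind of ALTER statements this stage would have to handle, using a hypothetical table t; changing the charset of an indexed column implies rebuilding the index, since its key values are compared under the new collation.)

```sql
CREATE TABLE t (
  a VARCHAR(20) CHARACTER SET utf8mb4,
  KEY idx_a (a)
);

-- Change a single column's charset; idx_a has to be rebuilt accordingly.
ALTER TABLE t MODIFY a VARCHAR(20) CHARACTER SET gbk;

-- Or convert the whole table, including its default charset.
ALTER TABLE t CONVERT TO CHARACTER SET gbk;
```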
PTAL @tangenta @xiongjiwei
/merge
This pull request has been accepted and is ready to merge. Commit hash: 9ac2433
@zimulala: Your PR was out of date; I have automatically updated it for you. At the same time, I will also trigger all tests for you: /run-all-tests. If a CI test fails, just re-trigger the failed test and the bot will merge the PR for you after CI passes. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the ti-community-infra/tichi repository.
/run-check_dev_2
What problem does this PR solve?
Issue Number: close #xxx
Problem Summary:
What is changed and how it works?
Add a proposal for full charsets support in TiDB.
Related issue: #26812
Check List
Tests
Release note