
How to migrate data from a pika 3.3.6 standalone master-slave multi-db deployment to the codis+pika sharding version #1195

Closed

ahern88 opened this issue Sep 22, 2022 · 10 comments

ahern88 commented Sep 22, 2022

We currently run pika 3.3.6 in production in classic mode, deployed as one master with two slaves. As the data volume has grown, we want to migrate to a codis+pika sharding cluster.

The codis+pika sharding cluster is already deployed, but moving the data out of the old cluster has proven difficult.

We tried pika_port; it failed, since it does not support pika 3.x and above.

We looked at the pika_migrate version, but it apparently does not support multiple dbs.

Is there any other way to migrate the data? If not, our only option is to dig into the code and modify it ourselves.
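
For reference, when no off-the-shelf tool fits, one fallback is a client-side copy over the Redis protocol, which both Pika and the Codis proxy speak. Below is a minimal sketch using Python and redis-py; the hosts and ports are placeholders, a write freeze on the source is assumed (the copy carries no binlog/incremental data), and Pika's allowance of the same key name across data structures is not handled.

```python
# Minimal one-db copy sketch: SCAN the source Pika, rewrite each key
# into the Codis proxy by type. Endpoints below are placeholders.
import redis

src = redis.Redis(host="10.9.47.139", port=9221, db=0, decode_responses=True)
dst = redis.Redis(host="codis-proxy.example", port=19000, decode_responses=True)

def copy_key(key: str) -> None:
    """Copy one key by its (first reported) type; Codis serves only db 0."""
    t = src.type(key)
    if t == "string":
        dst.set(key, src.get(key))
    elif t == "hash":
        dst.hset(key, mapping=src.hgetall(key))
    elif t == "list":
        dst.rpush(key, *src.lrange(key, 0, -1))
    elif t == "set":
        dst.sadd(key, *src.smembers(key))
    elif t == "zset":
        dst.zadd(key, dict(src.zrange(key, 0, -1, withscores=True)))
    ttl = src.ttl(key)
    if ttl > 0:  # -1 means no expiry, -2 means the key vanished mid-copy
        dst.expire(key, ttl)

for key in src.scan_iter(count=1000):
    copy_key(key)
```

This is only viable when a maintenance window is acceptable; anything that must stay online needs a replication-based tool such as pika_port or pika_migrate.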

kernelai (Collaborator) commented

As I recall, the pika_migrate version does support multiple dbs.


ahern88 commented Sep 22, 2022

> As I recall, the pika_migrate version does support multiple dbs.

OK, I'll give it a try. Though the docs (pika_migrate.md) say multi-db is not supported:

> Notes
> - Pika allows the same key name across different data structures, but Redis does not. Where duplicate keys exist, the first structure migrated to Redis wins, and the other structures under that key are lost.
> - The tool only supports hot migration from a standalone, single-DB Pika; in cluster mode, or in multi-DB scenarios, it reports an error and exits.
> - To avoid triggering repeated full syncs (after the master's binlog is purged) and writing dirty data to Redis, the tool protects itself by erroring out and exiting on the second full-sync trigger.
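
To make the first caveat concrete: Pika keeps a separate keyspace per data structure, so one key name can simultaneously hold, say, a string and a list; vanilla Redis rejects the second write. A minimal sketch, assuming redis-py and placeholder endpoints:

```python
import redis

pika = redis.Redis(host="pika.example", port=9221)
pika.set("k", "v")       # string "k"
pika.lpush("k", "x")     # list "k" under the same name -- legal in Pika

rds = redis.Redis(host="redis.example", port=6379)
rds.set("k", "v")
try:
    rds.lpush("k", "x")  # Redis raises WRONGTYPE here
except redis.ResponseError as e:
    print(e)
```

After migration, only whichever structure reached Redis first survives under the shared name.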


ahern88 commented Sep 23, 2022

@kernelai Hi, syncing a single db works fine, but with multiple dbs I get the following error. What could be the cause?

```
I0923 11:38:56.164340 5321 pika.cc:187] Server at: conf/pika.conf
I0923 11:38:56.164978 5321 pika_server.cc:191] Using Networker Interface: ens192
I0923 11:38:56.165097 5321 pika_server.cc:234] host: 10.9.47.139 port: 9226
I0923 11:38:56.165114 5321 pika_server.cc:89] Worker queue limit is 20100
I0923 11:38:56.165226 5321 pika_server.cc:100] target redis port: 19000 !!!
I0923 11:38:57.260337 5321 pika_partition.cc:92] db0 DB Success
I0923 11:38:57.260406 5321 pika_binlog.cc:106] Binlog: Find the exist file.
I0923 11:38:57.462963 5321 pika_partition.cc:92] db1 DB Success
I0923 11:38:57.463006 5321 pika_binlog.cc:89] Binlog: Manifest file not exist, we create a new one.
I0923 11:38:57.677038 5321 pika_partition.cc:92] db2 DB Success
I0923 11:38:57.677081 5321 pika_binlog.cc:89] Binlog: Manifest file not exist, we create a new one.
I0923 11:38:57.888043 5321 pika_partition.cc:92] db3 DB Success
I0923 11:38:57.888101 5321 pika_binlog.cc:89] Binlog: Manifest file not exist, we create a new one.
I0923 11:38:57.889081 5376 redis_sender.cc:169] Start redis sender 0 thread...
I0923 11:38:57.889101 5377 redis_sender.cc:169] Start redis sender 1 thread...
I0923 11:38:57.889685 5378 redis_sender.cc:169] Start redis sender 2 thread...
I0923 11:38:57.889984 5379 redis_sender.cc:169] Start redis sender 3 thread...
I0923 11:38:57.890050 5380 redis_sender.cc:169] Start redis sender 4 thread...
I0923 11:38:57.891098 5381 redis_sender.cc:169] Start redis sender 5 thread...
I0923 11:38:57.891461 5382 redis_sender.cc:169] Start redis sender 6 thread...
I0923 11:38:57.891505 5383 redis_sender.cc:169] Start redis sender 7 thread...
I0923 11:38:57.891677 5384 redis_sender.cc:169] Start redis sender 8 thread...
I0923 11:38:57.891831 5321 pika_server.cc:294] Pika Server going to start
I0923 11:38:57.892037 5385 redis_sender.cc:169] Start redis sender 9 thread...

I0923 11:43:41.246526 5375 pika_repl_client.cc:145] Try Send Meta Sync Request to Master (10.9.47.139:9225)
I0923 11:43:41.248464 5332 pika_server.cc:573] Mark try connect finish
I0923 11:43:41.248629 5332 pika_repl_client_conn.cc:129] Finish to handle meta sync response
F0923 11:43:41.347460 5375 pika_rm.cc:1354] we only allow one DBSync action to avoid passing duplicate commands to target Redis multiple times
*** Check failure stack trace: ***
I0923 11:43:41.348256 5333 pika_repl_client_conn.cc:166] Partition: (db3:0) Need Wait To Sync
@ 0x7f3f42b8ee6d (unknown)
@ 0x7f3f42b90ced (unknown)
@ 0x7f3f42b8ea5c (unknown)
@ 0x7f3f42b9163e (unknown)
@ 0x6b94fa PikaReplicaManager::SendPartitionDBSyncRequest()
@ 0x6b9847 PikaReplicaManager::RunSyncSlavePartitionStateMachine()
@ 0x69e340 PikaAuxiliaryThread::ThreadMain()
@ 0x6f28ec pink::Thread::RunThread()
@ 0x7f3f4265cdd5 start_thread
@ 0x7f3f41166ead __clone
path : conf/pika.conf
-----------Pika server 3.2.7 ----------
-----------Pika config list----------
1 port 9226
2 thread-num 1
3 thread-pool-size 12
4 sync-thread-num 6
5 log-path ./log/
6 db-path ./db/
7 write-buffer-size 268435456
```
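
The fatal line above is the tool's own guard: it aborts rather than run a second DBSync, and with four dbs every db requests its own full sync, so the second request trips the check. A hedged workaround sketch under that constraint: copy each source db in its own pass over the Redis protocol, outside the tool's replication path. Endpoints are placeholders; the "db{n}:" prefix is purely illustrative, needed only because Codis serves a single db 0 and key names could collide when the four source dbs are merged.

```python
import redis

dst = redis.Redis(host="codis-proxy.example", port=19000, decode_responses=True)

for n in range(4):  # the failing setup above has db0..db3
    src = redis.Redis(host="10.9.47.139", port=9225, db=n, decode_responses=True)
    for key in src.scan_iter(count=1000):
        if src.type(key) == "string":  # extend per type as in the earlier sketch
            # Prefix keys to keep the four source dbs apart inside db 0;
            # drop it if key names are known to be globally unique.
            dst.set(f"db{n}:{key}", src.get(key))
```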


ahern88 commented Sep 28, 2022

@kernelai Why doesn't the migrate version support multiple dbs? Was there a specific reason behind that? And if we modify it ourselves to add multi-db support, is there anything we should watch out for?


Gjj455 commented Sep 28, 2022

Hi, a quick question: in your codis+pika setup, do you use the default 1024 slots? And how many machines in total?


ahern88 commented Sep 28, 2022

> Hi, a quick question: in your codis+pika setup, do you use the default 1024 slots? And how many machines in total?

Yes, and 1024 is enough for our workload. Right now we are stuck on migrating the data from the standalone deployment into the cluster. Do you have any ideas?


Gjj455 commented Sep 28, 2022

Our situation is the opposite: we want to migrate data from codis into pika. We considered pika's sharding mode earlier; since our data volume is not that large we only have a few machines, but pairing them with codis's 1024 slots would put over a hundred slots on each machine, and each pika slot spins up 5 RocksDB instances, which seems like heavy pressure for a single machine. Have you run into anything similar?


ahern88 commented Oct 21, 2022

> Our situation is the opposite: we want to migrate data from codis into pika. We considered pika's sharding mode earlier; since our data volume is not that large we only have a few machines, but pairing them with codis's 1024 slots would put over a hundred slots on each machine, and each pika slot spins up 5 RocksDB instances, which seems like heavy pressure for a single machine. Have you run into anything similar?

Just saw your message. I changed the default slot count in codis and pika: with 5 physical machines, I lowered it to 128 slots, and it has been running stably so far. When we started out with 1024 slots, performance was poor and we frequently hit timeouts.
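
A back-of-the-envelope check of the instance counts behind this exchange, assuming the numbers given here (5 machines, 5 RocksDB instances per sharding slot):

```python
machines, rocksdb_per_slot = 5, 5

for total_slots in (1024, 128):
    slots_per_machine = total_slots / machines
    instances = slots_per_machine * rocksdb_per_slot
    print(f"{total_slots:4d} slots -> ~{slots_per_machine:.0f} slots/machine, "
          f"~{instances:.0f} RocksDB instances/machine")

# 1024 slots -> ~205 slots/machine, ~1024 RocksDB instances/machine
#  128 slots -> ~26 slots/machine, ~128 RocksDB instances/machine
```

Dropping from 1024 to 128 slots cuts the per-machine RocksDB instance count by 8x, which is consistent with the stability improvement reported above.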

AlexStocks (Contributor) commented

刘振: the reply above is questionable. Whether pika_migrate supports multiple dbs is not confirmed; if it does, this issue does not exist.

AlexStocks (Contributor) commented

* User: how to migrate data from a pika 3.3.6 standalone master-slave multi-db deployment to the codis+pika sharding version. "I changed the default slot count in codis and pika: with 5 physical machines, I lowered it to 128 slots and it has been running stably; with the initial 1024 slots, performance was poor and timeouts were frequent."
* 0527: the classic+codis cluster mode for Pika is under development.
* 0603: development finished; testing the adapted Pika commands. https://github.com/pikiwidb/pika/pull/29 is pending code review and functional testing.
* 0701: load testing.
