Performance of add when doing a large amount of writes #90
Thanks for finding this optimization possibility. Sounds like a possible solution to use. We should remove the return statement, though, so the caller won't expect the promise to complete. I'm hoping to find some more time to do optimization in general with the library and see how your idea could fit in or give hints on how to optimize the lib.
Good idea with the return statement. Please let me know if I can help test your changes; we have a good test case here :)
Thanks. Would a bulk add/put method be of any help in your case? Or are you dependent on having to do it in your own controlled loop?
Bulk add is what we are after, so that would be perfect :)
We're interested in a bulk operation, too. We're inserting ~15K objects, which takes several minutes and peaks at over 1 GB of memory for parsing/inserting 50 MB of JSON.
I am also interested in a bulk add operation...
+1
+1. Being able to give a complete array of JSON objects and have it sped up would be wonderful. I found that lots of obvious stuff makes a huge difference, e.g. writing to the console on each successful insert will clearly slow things right down, especially if you have the dev tools open :) Worth double-checking your code for such things, but still... something faster than a 'for in'/forEach loop adding each item would be nice. Maybe make use of more cores with workers.
+1 @arj03 What did you attach this method to? Was it faster than using the standard 'add' with no transaction?
@nbiton Just below add. It was much faster than add, even in a transaction, because add does a lot of crazy stuff we don't need. Maybe it makes sense to merge it, given all the +1s?
In response to issue #90: a bulk add method. It supports hook('creating') if used. This method takes special care not to keep any closure references to the added objects, since many users have had memory consumption and performance issues when adding lots of items. In the long term I would also like to create a generic bulk() method that takes an array of operations mixing add/put/update/delete, but that will require a bit more complexity. Starting with this method to get some feedback on whether it improves performance as intended.
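For reference, hook('creating') is Dexie's CRUD hooks API; a minimal sketch of a hook that a bulk insert would honor (the table name is illustrative):

```js
// Runs once for each object being inserted into the 'items' table.
db.items.hook('creating', function (primKey, obj, transaction) {
    obj.createdAt = Date.now(); // stamp every inserted object
});
```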
I added a new bulkAdd() method on WriteableTable. Not documented yet. Please help me test this out. I would assume it is as fast as, or faster than, the example at the top of this issue, and it supports CRUD hooks if used. It works as follows:
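The original usage example was not preserved in this thread; a minimal sketch of the intended usage, with an assumed schema and database name:

```js
var db = new Dexie('testdb');
db.version(1).stores({ items: '++id,name' });
db.open();

// Add a whole array of objects in one call instead of looping add():
db.transaction('rw', db.items, function () {
    db.items.bulkAdd([
        { name: 'first' },
        { name: 'second' },
        { name: 'third' }
    ]);
}).then(function () {
    console.log('All items added');
}).catch(function (err) {
    console.error('Transaction aborted: ' + err);
});
```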
Hello. Sorry for the late reply; been quite busy here. I managed to try your code today but got the following error: "bulkAdd() only support inbound keys". It seems to happen on one of our tables with an auto-incremented primary key. I'm afraid it won't be a solution for us without that. Thanks!
Any news on this?
Unfortunately not (yet). Thanks for the reminder!
@arj03, are you sure you get this error on an auto-incremented table? The unit test for bulkAdd uses an auto-increment table.
Aha, I suppose you have a non-inbound auto-incremented table. That should work now; I added a unit test and support for it. Now I'm working on a design issue with bulkAdd(): it resolves with an array of failures, which is quite non-intuitive since it resolves even when all operations fail. This isn't documented either. I need to rewrite it so that it rejects with an Error containing a list of the rejections.
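For context, inbound versus non-inbound keys come down to Dexie's schema syntax; a minimal sketch (table names are illustrative):

```js
db.version(1).stores({
    friends: '++id,name', // inbound: the auto-incremented key is stored on obj.id
    logs: '++'            // non-inbound: the key lives outside the object
});
```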
…mmitted, but if the error is not caught, the successful operations should abort.
OK, so now bulkAdd() will fail if any of the operations fails. A new error type, Dexie.BulkError, will be thrown that holds the failures. If the caller wants to ignore failed operations, they should catch the call to bulkAdd() explicitly; then the transaction won't abort, and the successful additions will be persisted even though some failed.
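A sketch of the two failure-handling modes described above (the sample data is assumed, and the exact shape of Dexie.BulkError, e.g. an err.failures list, is an assumption here):

```js
var manyItems = [{ name: 'a' }, { name: 'b' }]; // assumed sample data

// Not catching: a failed add rejects the transaction with
// Dexie.BulkError and everything is rolled back.
db.transaction('rw', db.items, function () {
    db.items.bulkAdd(manyItems);
}).catch(Dexie.BulkError, function (err) {
    console.error(err.failures.length + ' additions failed');
});

// Catching the bulkAdd() call itself keeps the transaction alive, so
// the successful additions are persisted even though some failed:
db.transaction('rw', db.items, function () {
    db.items.bulkAdd(manyItems).catch(Dexie.BulkError, function (err) {
        console.warn('Ignoring ' + err.failures.length + ' failures');
    });
});
```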
I can confirm that it works and that it is about 20% faster than my addInTransaction version. Great work @dfahlander!
Thanks for testing and reporting this good news! bulkAdd() utilizes the fact that it is only necessary to listen to the last success event from the backend add() operations instead of listening to every success event. This makes the main thread less "hammered on" while adding the objects. To save memory, it also avoids creating request-specific closures for the error events.
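A sketch of that optimization against the raw IndexedDB API (this is illustrative, not Dexie's actual internals): one shared error handler, no per-request closures, and a success listener on the last request only.

```js
function bulkAddSketch(objectStore, items) {
    return new Promise(function (resolve) {
        var failures = [];
        var lastReq = null;
        function onError(ev) {
            ev.preventDefault();            // don't abort the transaction
            failures.push(ev.target.error); // remember what failed
            if (ev.target === lastReq) resolve(failures);
        }
        for (var i = 0; i < items.length; i++) {
            lastReq = objectStore.add(items[i]);
            lastReq.onerror = onError;      // one shared handler, no closure per request
        }
        if (lastReq) {
            // Success events fire in request order, so when the last one
            // arrives, every earlier add has already succeeded.
            lastReq.onsuccess = function () { resolve(failures); };
        } else {
            resolve(failures);
        }
    });
}
```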
Hello
We are writing an application where we insert quite a lot of objects into the database in one big transaction (around 200,000 objects). We have had trouble with performance; both memory use and execution time were quite bad. I investigated and found the main problem to be the table add function in Dexie. The thing is that we are doing everything in a transaction, so we are not that concerned with reject and success; we just want to insert a whole lot of objects, or not do anything at all. I stripped add down to the following:
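The original snippet was stripped from this thread; a minimal sketch of the same idea, assuming direct access to the IDBObjectStore of the open read-write transaction:

```js
// Fire the add and move on: no per-object success listener and no
// stored copy of obj for a friendly reject message. A failure simply
// aborts the surrounding transaction, which is fine here (all or nothing).
function strippedAdd(idbObjectStore, obj) {
    idbObjectStore.add(obj);
}
```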
This runs twice as fast and uses half the memory, which is not that strange when you consider that a normal add stores the object to produce a nice reject message. Do you have any advice on how to improve this situation?
Thanks for a great library :)