[Feature request] A way to get the timestamp that a delayed job is supposed to run #1728
Comments
This is an extremely ancient issue. It never worked quite right from the beginning, and has become worse since the addition of … Try this instead.
Shouldn’t this be addressed by bull here, since it’s choosing to store the delay as a millisecond value, instead of resorting to a fallback on the Redis client? I’m just a casual first-time user of bullmq and the delay aspect is confusing, to be honest. I guess I initially expected it to be stored as a date to run at, or at least exposed back to the consumer.
@wenq1 I'm aware of using the score of the job in the delayed zset (where …). Additionally, your approach is incomplete/incorrect -- there isn't one specific …:

```js
let score = await redisClient.zscore(queue.keys.delayed, jobId);
let runAt = Math.floor(parseFloat(score || '0') / 0x1000);
```
As I said yesterday, the design was never quite right from the very beginning. This is due to the way … So basically the design problem is that it is much better to use a … As for your question 1, I think if you want additional information for instrumentation you need to store it somewhere else; in our use cases we just do an ugly hack to … As for your question 2, yes, I agree with you that something like this score should be part of bull. However, before a PR is submitted your best bet is still the hack as mentioned.
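For anyone who wants the workaround in runnable form, here is a rough sketch. It assumes a BullMQ queue, a dedicated ioredis client, and the same score encoding that the snippet above divides out (run-at timestamp times 0x1000); the queue name and the helper `getDelayedRunAt` are made-up examples, not part of the library.

```ts
// Sketch of the zscore workaround discussed above (not an official BullMQ API).
// Assumes the delayed set score encodes the run-at time as `timestamp * 0x1000`,
// which is what dividing by 0x1000 in the snippet above implies.
import { Queue } from 'bullmq';
import IORedis from 'ioredis';

const connection = new IORedis(); // separate client used for the raw lookup
const queue = new Queue('emails', { connection }); // 'emails' is just an example name

// Hypothetical helper: returns the ms timestamp a delayed job should run at,
// or null if the job is not currently in the delayed set.
async function getDelayedRunAt(jobId: string): Promise<number | null> {
  const score = await connection.zscore(queue.keys.delayed, jobId);
  if (score === null) return null;
  return Math.floor(parseFloat(score) / 0x1000);
}
```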
I guess there's a PR now 😀
I guess there is one more pending PR now.
Yes, I am sorry, but I am a bit reluctant to increase the complexity so much for quite a small feature addition... complexity keeps adding up and then it is just more work for us long-time maintainers.
@manast Would you be open to accepting the change if it also drops the …? Alternatively, how about storing the timestamp of the last change to …?
I think what @manast probably meant is just the addition of something like …
@wenq1 Yes, I know that, and since that doesn't solve use case 1 from my first post for me, I was asking if there were any other acceptable approaches. And while I do appreciate you proposing the wild hack of abusing the …
What now?
Is your feature request related to a problem? Please describe.
I'm trying to find out the real "runAt" timestamp of a job, for the following purposes:

1. Instrumentation: how long a job spends in the `waiting` state, or how long after a job is supposed to run that it actually begins execution.
2. When fetching delayed jobs with `queue.getJobs('delayed')`, I would like to be able to tell when a delayed job is supposed to run.
Describe the solution you'd like
A `Job.runAt` property that contains the timestamp that a job is supposed to run at, potentially stored on the job key in Redis so it would be part of the `JobJsonRaw` interface. This needs to match the timestamp that actually makes it to the job's score in the delayed set and stay accurate through all the job state transitions.
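To make the request concrete, here is a sketch of how such a property might be consumed. `runAt` does not exist in BullMQ today, so everything below (the casts, the queue name) is illustrative rather than the library's actual API.

```ts
// Illustrative sketch only: `runAt` is the requested property and does not exist
// in BullMQ today, hence the casts; queue and job names are made up.
import { Queue, Worker, Job } from 'bullmq';

const queue = new Queue('emails');

// Use case 2: list delayed jobs and show when each one is due to run.
async function listDelayedRunTimes(): Promise<void> {
  const jobs = await queue.getJobs('delayed');
  for (const job of jobs) {
    const runAt = (job as any).runAt as number; // hypothetical field
    console.log(job.id, new Date(runAt).toISOString());
  }
}

// Use case 1: measure how late a delayed job actually starts executing.
const worker = new Worker('emails', async (job: Job) => {
  const runAt = (job as any).runAt as number; // hypothetical field
  console.log(`job ${job.id} started ${Date.now() - runAt} ms after it was due`);
});
```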
Describe alternatives you've considered
A clear and concise description of any alternative solutions or features you've considered.
Here's what I've tried so far:
- `job.timestamp + job.delay` -- doesn't really give any meaningful result once the job has sat around in redis for a while (why does it make sense to store `delay` on the job key in redis without actually storing the timestamp it was calculated from?):
  - `job.delay` is set to 0 when a job is promoted to waiting
  - `job.delay` is updated in `moveToDelayed` and `changeDelay`, but not `job.timestamp`, so we can't get the job `runAt` time from that
- `job.timestamp + job.opts.delay` -- works for some use cases, falls apart when `job.changeDelay()` or `job.moveToDelayed()` has happened at least once on the job, because `job.opts.delay` isn't updated by either of those calls.

At this point I'm considering just storing this timestamp as a special key on the job payload, but I think that solution's a bit of a hack.
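For completeness, a minimal sketch of that payload workaround, assuming every producer goes through one helper; the `runAt` key inside `job.data`, the queue name, and the job name are purely this example's conventions, not something BullMQ reads.

```ts
// Sketch of the "special key on the job payload" workaround: stash the intended
// run-at time in job.data ourselves, since BullMQ does not expose it directly.
import { Queue } from 'bullmq';

const queue = new Queue('emails'); // example queue name

async function addDelayedEmail(payload: object, delayMs: number) {
  const runAt = Date.now() + delayMs;
  // `runAt` inside the payload is purely our own convention.
  return queue.add('send', { ...payload, runAt }, { delay: delayMs });
}

// The stored value must also be kept in sync by hand whenever the delay changes
// (e.g. after job.changeDelay(...) or job.moveToDelayed(...)), which is the
// book-keeping burden that makes this feel like a hack.
```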