Invalid job status RECEIVED, expected DONE #199

Open
IAkumaI opened this issue Jan 19, 2022 · 1 comment
Comments


IAkumaI commented Jan 19, 2022

I get this error with no apparent cause, sometimes after 5 minutes, sometimes after 30.
The error occurs only in production under heavy load. I have many jobs, but I do not use any external datastore.

I think it happens when I call limiter.jobStatus(key).
Bottleneck version 2.19.5 (latest)

// Bottleneck options
let options = {
    maxConcurrent: 5,
    minTime: 22,
    reservoir: 12,
    reservoirIncreaseAmount: 3,
    reservoirIncreaseInterval: 250,
    reservoirIncreaseMaximum: 12,
};
/opt/source/image-process/node_modules/bottleneck/lib/Job.js:77
      throw new BottleneckError(`Invalid job status ${status}, expected ${expected}. Please open an issue at https://github.com/SGrondin/bottleneck/issues`);
            ^
Error: Invalid job status RECEIVED, expected DONE. Please open an issue at https://github.com/SGrondin/bottleneck/issues
    at Job._assertStatus (/opt/source/image-process/node_modules/bottleneck/lib/Job.js:77:13)
    at /opt/source/image-process/node_modules/bottleneck/lib/Job.js:198:18
    at Generator.next (<anonymous>)
    at asyncGeneratorStep (/opt/source/image-process/node_modules/bottleneck/lib/Job.js:3:103)
    at _next (/opt/source/image-process/node_modules/bottleneck/lib/Job.js:5:194)
    at runNextTicks (node:internal/process/task_queues:59:5)
    at listOnTimeout (node:internal/timers:526:9)
    at processTimers (node:internal/timers:500:7)
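
A minimal sketch of how this usage might look. The key value, the processImage() helper, and the schedule-with-id pattern are assumptions for illustration, not taken from the report; only jobStatus(key) is mentioned above.

const Bottleneck = require('bottleneck');

const limiter = new Bottleneck(options);

async function enqueue(key) {
    // jobStatus(id) returns the job's current status string,
    // or null if Bottleneck is not tracking that id.
    const status = limiter.jobStatus(key);

    if (status === null) {
        // Schedule under an explicit id so the job can be looked up later.
        await limiter.schedule({ id: key }, () => processImage(key));
    }
}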

IAkumaI commented Jan 19, 2022

I think I found the reason.

I use limiter.on('failed', async () => {...}) to retry jobs. After the last attempt, the job is marked as "error" in an external key-value store so that it is not pushed onto the queue again.
But sometimes, when many jobs arrive in a short time, a job with the same ID is added to the queue while the async on-failed callback is still running. In that situation there is a job ID that has not finished failing but has already been added again.

In other words, if you add a job with the same id before the on-failed handler has finished, the error occurs.
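
A hedged sketch of the race described above; MAX_RETRIES, markJobAsErrored(), processImage(), and key are hypothetical placeholders, not names from the actual code.

limiter.on('failed', async (error, jobInfo) => {
    if (jobInfo.retryCount < MAX_RETRIES) {
        return 1000; // returning a number of ms tells Bottleneck to retry the job
    }
    // Final attempt: record the failure in the external key-value store
    // so the same job is not pushed onto the queue again.
    await markJobAsErrored(jobInfo.options.id);
});

// Under heavy load, a producer can schedule a new job with the same id
// while the handler above is still awaiting markJobAsErrored():
await limiter.schedule({ id: key }, () => processImage(key));
// => Error: Invalid job status RECEIVED, expected DONE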
