Minion backend idle connection issue #2229
Unanswered
guillomovitch asked this question in Q&A
Replies: 3 comments
-
Your strategy #2 is exactly what we use - a simple Mojo::IOLoop->recurring(...) should do nicely.
/Lars
…On Wed, 29 Jan 2025 at 16:35, Guillaume Rousse wrote:
[quoted original question, see the full text at the bottom of this thread]
-
Given there is no backend-agnostic method available yet, this would probably be something similar to:
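A minimal sketch of the recurring-ping heartbeat, assuming the Minion::Backend::mysql backend exposes its Mojo::mysql handle via the `mysql` attribute (the DSN, interval, and accessor chain are illustrative and backend-specific, which is exactly the backend-agnosticism problem mentioned above):

```perl
# Hypothetical sketch: keep the frontend's Minion database connection
# from going idle by pinging it on a recurring timer, well under the
# firewall's idle timeout.
use Mojolicious::Lite -signatures;

plugin Minion => {mysql => 'mysql://user@back-host/minion_jobs'};

# Every 60 seconds, ping the backend's database handle; the accessor
# chain below assumes Minion::Backend::mysql's Mojo::mysql-style API
Mojo::IOLoop->recurring(60 => sub ($loop) {
  eval { app->minion->backend->mysql->db->ping }
    or app->log->warn("Minion heartbeat failed: $@");
});

app->start;
```

The timer only runs while the event loop does, so under a preforking server each worker keeps its own connection warm.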
-
The following PR has been opened in the Minion repository.
-
Hello. My application is actually composed of two parts:
- the frontend is a mojolicious web application, running on the "front" host
- the backend is another mojolicious web application, running on the "back" host
The database itself, as well as the minion runner, both run on the "back" host.
The problem comes from the fact that both hosts are on different networks, and the front -> back database connection is established through a firewall. As this connection is only used when there is an actual job to enqueue, it regularly exceeds the firewall's idle connection timeout and is then silently closed: no TCP reset packet is sent, and any subsequent packet is just dropped. As a result, any attempt to enqueue a job from the frontend is bound to fail. And the idle connection detection mechanism built into the Minion backend doesn't work either, as it relies on the DBI ping() method, which also results in a timeout (at least for the MariaDB minion backend; I haven't tried any other one yet).
The root cause here is clearly the firewall's behaviour when closing connections, which doesn't allow graceful reconnection. We have opened a support ticket about it, but I don't expect much from it, so I'm looking for a workaround. Here are the various strategies I'm currently considering.
Strategy #1 would be to stop using a permanent database connection and instead create the connection only when needed, at request handling time. I guess manually creating a Minion object in the relevant Mojolicious controllers, instead of relying on the Minion plugin, would be enough. As we don't really have performance issues, that would be relatively easy.
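Strategy #1 could be sketched roughly as below; the controller name, task name, and DSN are illustrative, and the `Minion->new(mysql => ...)` form assumes the Minion::Backend::mysql backend:

```perl
# Hypothetical sketch of strategy #1: build a short-lived Minion object
# per request instead of holding a permanent connection via the plugin.
package MyApp::Controller::Jobs;
use Mojo::Base 'Mojolicious::Controller', -signatures;

use Minion;

sub enqueue ($self) {
  # Fresh connection for this request only; it is torn down afterwards,
  # so the firewall's idle timeout never comes into play
  my $minion = Minion->new(mysql => 'mysql://user@back-host/minion_jobs');
  my $id = $minion->enqueue(resize_image => [$self->param('file')]);
  $self->render(json => {job_id => $id});
}

1;
```

The cost is one connection handshake per enqueue, which is only acceptable because, as noted above, performance is not a concern here.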
Strategy #2 would be to have some kind of heartbeat mechanism on the frontend, the same way the minion runner already does, which would prevent the connection from ever turning idle. That's probably just a matter of calling the DBI ping() method on the backend's DBD object at some point in the Mojo::IOLoop event loop, but my understanding of Mojolicious internals is clearly insufficient here.
Strategy #3 would be to make the idle connection detection mechanism in Minion backends more robust against silly network issues, but that would probably be heavily backend-dependent.
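One possible shape for strategy #3 is to guard the backend's ping() with an alarm, so that a firewall silently dropping packets can't hang it until the TCP timeout; the helper below is a hypothetical sketch, not existing Minion code:

```perl
# Hypothetical sketch of strategy #3: wrap DBI's ping() in an alarm so a
# silently dropped connection fails fast instead of hanging.
use strict;
use warnings;

sub ping_with_timeout {
    my ($dbh, $timeout) = @_;
    $timeout //= 5;
    my $ok = eval {
        local $SIG{ALRM} = sub { die "ping timeout\n" };
        alarm $timeout;
        my $alive = $dbh->ping;    # may block on a dead connection
        alarm 0;
        $alive;
    };
    alarm 0;    # make sure the alarm is always cleared, even on failure
    return $ok ? 1 : 0;
}
```

One caveat: Perl's deferred signals may not interrupt a call blocked inside the driver's C code, so driver-level read/write timeout connection attributes, where the driver supports them, may be more reliable than an alarm.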
I've also experimented with having the database server close idle connections instead of the firewall, and with extending the idle timeout, with mixed results, hence my request for advice here.
If that matters, I'm using Mojolicious 9.36, Minion 10.29, Minion::Backend::mysql 1.004 and DBD::MariaDB 1.23.
Thanks to anyone brave enough to have read so far :)