Run ensure L2 flow dependently on both devices #285
base: stable/yoga-m3
Conversation
Very nice, and much appreciated. As usual, I reviewed basic syntax only; the actual code review is to be done by @BenjaminLudwigSAP and @notandy.
octavia_f5/tests/unit/controller/worker/test_l2_sync_manager.py
Thanks for this PR! I left a few annotations and questions.
ensure_l2_flow = self._f5flows.make_ensure_l2_flow(selfips, store=store)
e = self.taskflow_load(ensure_l2_flow, store=store)
def _do_ensure_l2_flow(self, data: dict):
    ensure_l2_flow = graph_flow.Flow('ensure-l2-flow-from-all-devices')
I don't see the point in using a graph flow here. The way I understand it, a graph flow is for throwing multiple tasks/subflows into a bucket, which are then resolved according to their interdependencies. However, here we only have two subflows, one for each BigIP device. As far as I can see, those can be parallelized, so an unordered_flow would suffice. Or is there another good reason to use graph_flow?
The main difference is that with a graph flow you can run the flows on both devices in parallel, but if one of them fails, the other one will also be reverted. You can achieve that with an unordered flow only if you put all tasks for both devices into one unordered flow, and then they would run one by one for a really long time.
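The semantics described here (run both device subflows in parallel, but revert both if either fails) can be sketched without taskflow at all. The following is a minimal stdlib illustration of that behavior, not the actual taskflow engine; the device names and (execute, revert) task pairs are hypothetical.

```python
# Minimal sketch of "parallel execution, joint revert": run each device's
# task list in its own thread; if any device fails, revert every task that
# completed on EITHER device, then reraise. This is NOT the taskflow API.
from concurrent.futures import ThreadPoolExecutor

def run_l2_flows(device_tasks: dict) -> list:
    """device_tasks maps a device name to a list of (execute, revert) pairs."""
    completed = []  # (device, revert_fn) for every task that finished

    def run_device(device, tasks):
        for execute, revert in tasks:
            execute(device)
            completed.append((device, revert))

    with ThreadPoolExecutor(max_workers=len(device_tasks)) as pool:
        futures = [pool.submit(run_device, d, t) for d, t in device_tasks.items()]
        errors = [f.exception() for f in futures if f.exception() is not None]

    if errors:
        # one device failed -> undo completed work on BOTH devices, in reverse
        for device, revert in reversed(completed):
            revert(device)
        raise errors[0]
    return list(device_tasks)
```

With an unordered flow alone, getting the same joint-revert behavior would indeed require putting all tasks for both devices into one flow, as the comment above notes.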
""" | ||
|
||
# make SelfIP creation subflow | ||
ensure_selfips_subflow = unordered_flow.Flow('ensure-selfips-subflow') | ||
ensure_selfips_subflow = unordered_flow.Flow( | ||
f'ensure-selfips-subflow-{store["bigip"].hostname}') |
store["bigip"].hostname
is used so often in this function (and reevaluated each time), that I prefer it to be put into a dedicated variable. But it's not very important, I'm also okay with this happening in some future refactoring commit or something.
Makes sense, will change that. BTW, getting a value from a hash map (dict) costs almost nothing.
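Both points are right: the dict lookup is cheap, so the suggested refactor is mainly about readability. A sketch of the change, with a hypothetical helper name and a stand-in for the real store contents:

```python
# Hypothetical sketch of the refactor suggested above: hoist the repeated
# store["bigip"].hostname lookup into one local variable so each flow/task
# name reads the same way. The performance gain is negligible; the point
# is readability.
def build_flow_names(store: dict, selfip_ids: list) -> list:
    hostname = store["bigip"].hostname  # evaluated once, reused below
    names = [f"ensure-selfips-subflow-{hostname}"]
    names += [f"ensure-selfip-{sid}-{hostname}" for sid in selfip_ids]
    return names
```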
for selfip_port in selfips:
    ensure_selfip_task = f5_tasks.EnsureSelfIP(
        name=f"ensure-selfip-{selfip_port.id}", inject={'port': selfip_port})
        name=f'ensure-selfip-{selfip_port.id}-{store["bigip"].hostname}',
I think it's better to swap these two interpolations (i.e. {store["bigip"].hostname}-{selfip_port.id}), because the SelfIP is subordinate to the device. But that is a style choice, so I'm fine if you don't address this.
Makes sense, will change.
ensure_subnet_route_task = f5_tasks.EnsureSubnetRoute(name=f"ensure-subnet-route-{subnet_route_name}",
                                                      inject={'subnet_id': subnet_id})
ensure_subnet_route_task = f5_tasks.EnsureSubnetRoute(
    name=f'ensure-subnet-route-{subnet_route_name}-{store["bigip"].hostname}',
I think it's better to swap these two interpolations (i.e. {store["bigip"].hostname}-{subnet_route_name}), because the route is subordinate to the device. But that is a style choice, so I'm fine if you don't address this.
inject={
    'bigip': store['bigip'],
    'network': store['network']
}
Please move all these identical inject dictionaries into a variable. It can also include subnet_id, since AFAIK it's okay to inject that into tasks that don't take it as an argument.
Hm, good point. Maybe I can just do inject=store; will test.
if res and not res.ok:
    LOG.warning("%s: Failed removing route domain for network_id=%s vlan_id=%s: %s",
                bigip.hostname, network.id, network.vlan_id, res.content)
Please reraise the error, like this: res.raise_for_status().
network: f5_network_models.Network):
    device_response = bigip.get(path=f"/mgmt/tm/net/vlan/~Common~vlan-{network.vlan_id}?expandSubcollections=true")
    if device_response.status_code == 404:
        return None
Please either reraise any error (device_response.raise_for_status()) or remove the RaisesIControlRestError annotation.
We do not need to do that here, because if the resource does not exist, that's OK; we will create it in the next task.
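Both positions can be reconciled: treat 404 as "resource absent, return None so the next task creates it", but still reraise every other error status so real failures are not masked. A self-contained sketch, with Response/HTTPError again as hypothetical stand-ins for the real client:

```python
# Sketch of the get-or-none pattern under discussion: 404 means the resource
# simply is not there yet (fine, the next task creates it); any other error
# status is a genuine failure and is reraised.
class HTTPError(Exception):
    pass

class Response:
    def __init__(self, status_code: int, body=None):
        self.status_code = status_code
        self.body = body

    def raise_for_status(self):
        if self.status_code >= 400:
            raise HTTPError(f"HTTP {self.status_code}")

def get_or_none(response: Response):
    if response.status_code == 404:
        return None              # absent resource: next task will create it
    response.raise_for_status()  # 5xx, 403, etc. are real failures
    return response.body
```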
device_response = bigip.get(path=path)

if device_response.status_code == 404:
    return None
Please either reraise any error (device_response.raise_for_status()) or remove the RaisesIControlRestError annotation.
We do not need to do that here, because if the resource does not exist, that's OK; we will create it in the next task.
}
# self.manager._do_ensure_l2_flow(data=data)
self.assertRaises(Exception, self.manager._do_ensure_l2_flow, data=data)
# check that both devices were called and REVERT tasks were not called
You mean "were also called" ;)
@BenjaminLudwigSAP please take a look again. I added an availability check for BigIP devices.
This PR changes ensure_l2_flow to synchronize both F5 devices dependently. This change implements:
- EnsureVLAN
- EnsureRouteDomain

The same changes (using graph flow) are coming soon for remove_l2_flow.