TransientSolution.Solve sometimes produces invalid solutions. #177
I expect that using doubles can sometimes produce slower computer algebra (because the rationals get huge), but it's surprising to see NaN instead of a slightly different number. This seems like a bug. In general though, it's pretty helpful for sampling rates (or their inverses) to be integers (or at least rationals).
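The NaN is consistent with a naive rational-to-double conversion: if the numerator and denominator are each converted to a double first, both can overflow to infinity, and in IEEE 754 arithmetic inf/inf is NaN. A minimal Python sketch of that failure mode (the project's `BigRational` is C#; this only illustrates the arithmetic, and Python raises `OverflowError` rather than saturating, so the saturation is emulated):

```python
import math
from fractions import Fraction

# A rational whose value is harmless (~1/3) but whose numerator and
# denominator are both far outside double range. The +1 keeps the gcd
# trivial so Fraction doesn't normalize the huge terms away.
ratio = Fraction(10**400 + 1, 3 * 10**400)

def to_double_naive(r):
    """Convert each side to a double first, emulating the saturating
    behavior of C-style conversions (huge value -> +inf)."""
    def sat(n):
        try:
            return float(n)
        except OverflowError:
            return math.inf
    return sat(r.numerator) / sat(r.denominator)  # inf / inf -> nan

print(to_double_naive(ratio))  # nan
print(float(ratio))            # Fraction divides exactly: 0.3333333333333333
```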
I've been looking into #170 and I think it may be the same as this issue: when …
I think the above commit very likely fixes this issue. It makes perfect sense: using a double with a near-irrational value for the timestep makes the rationals a lot bigger, and more likely to overflow a double. By handling this case, the issue should be fixed. Please confirm whether this is the case for you. BTW, I wonder if this will have a big impact on some of the other experiments you were doing. It makes sense that reordering the components would make certain rationals bigger or smaller; that may have been working around this issue too!
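I don't know the exact shape of the commit, but one standard way to handle the overflow case is to shift the numerator and denominator down by a common power of two until both fit in double range before dividing; the common shift leaves the ratio unchanged. A hypothetical Python sketch of that idea (`rational_to_double` is an illustrative name, not the project's API; it assumes positive inputs):

```python
def rational_to_double(num, den):
    """Convert a big rational num/den to a double without overflowing
    either part: shift both right together until they fit comfortably
    in double range (2**1000 << DBL_MAX ~ 2**1024), then divide."""
    shift = max(num.bit_length(), den.bit_length()) - 1000
    if shift > 0:
        num >>= shift  # drops only bits far below double precision
        den >>= shift
    return float(num) / float(den)

# Both parts alone would overflow a double, but the value is ~1/3.
val = rational_to_double(10**400 + 1, 3 * 10**400)
print(val)  # ~0.3333333333333333, not NaN
```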
After rebasing, everything seems to work OK, except one circuit (which was always problematic). But doing some …
There's a pretty easy tie-breaker that implements that idea: choose the pivot with the fewest non-zero entries when adding it to the remaining rows. I already did something like that in full pivoting; I think we can do the same thing in partial pivoting. I'm going to try it now.
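As a sketch of that tie-breaker (not the ComputerAlgebra implementation, just the idea, in Python over exact rationals): among candidate pivots of maximal magnitude, prefer the sparsest row, which tends to limit fill-in during elimination.

```python
from fractions import Fraction

def solve(A, b):
    """Gaussian elimination with partial pivoting; ties in pivot
    magnitude are broken by choosing the row with the fewest
    non-zero entries (a Markowitz-style sparsity heuristic)."""
    n = len(A)
    M = [[Fraction(x) for x in row] + [Fraction(bi)]
         for row, bi in zip(A, b)]  # augmented matrix, exact arithmetic
    for col in range(n):
        rows = [r for r in range(col, n) if M[r][col] != 0]
        best = max(abs(M[r][col]) for r in rows)
        # Tie-breaker: among rows achieving the largest pivot magnitude,
        # pick the one contributing the fewest non-zeros.
        pivot = min((r for r in rows if abs(M[r][col]) == best),
                    key=lambda r: sum(1 for x in M[r] if x != 0))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            if M[r][col] != 0:
                f = M[r][col] / M[col][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    x = [Fraction(0)] * n
    for i in reversed(range(n)):
        x[i] = (M[i][n] - sum(M[i][j] * x[j]
                              for j in range(i + 1, n))) / M[i][i]
    return x
```

For example, `solve([[2, 1], [1, 3]], [3, 4])` returns `[1, 1]`.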
I've pushed a branch that pulls in a refactor to pivoting in ComputerAlgebra. At the very least, it should be easier to experiment with than before. I haven't been able to find a pivoting tie-breaker that is consistently good, though I have had some surprising wins (e.g. MXR Phase 90 solves 10% faster and simulates 30% faster). I'm also not sure there aren't bugs in this code yet.
I've run a quick test and it's quite CPU-heavy, I must say. For the more complicated circuits I didn't notice a better success rate than before. I really have to create some circuit-permutation examples for you for A/B testing; maybe it'll help find the issue.
Can you clarify what you mean by CPU-heavy? On that branch, I'm seeing a geomean change of +2% solve time and +4% sim speed. So, a slight improvement in simulation speed at the cost of a slight regression in solve time.

edit: Also, it seems like it tends to be the bigger circuits that improve sim speed the most, while the circuits that were already super fast (10x+ real time) get slower.
Also, should we mark this particular bug as fixed, because of the huge-rational issue? This pivoting issue seems separate.
There's a discussion on #130 for pivot-selection improvements now.
I think we should consider the original issue here fixed :) |
Today I've learned that you have to be very careful when calculating a time step, because it's implicitly converted to a `BigRational`. The debugger will show you a nice-looking double like `2,0833333333333333E-05`, but under the hood it could be `1/48000` or `6148914691236517/295147905179352825856` 😒

This shouldn't make a difference, and for some circuits that's the case: just small differences in the generated numbers. But somehow, for some circuits I'm getting this:

That could also explain why, in some cases, a circuit performs just fine without any oversampling and I can't get it to run with oversampling applied, which is generally counterintuitive, as a smaller time step should make things better, not worse 😉
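This behavior is easy to reproduce with Python's `fractions.Fraction` as a stand-in for the C# `BigRational`: the exact value held by the double `1/48000` has a 21-digit power-of-two denominator (the same numbers quoted in this report), even though a debugger displays it as a tidy `2,0833333333333333E-05`.

```python
from fractions import Fraction

dt = 1.0 / 48000      # displays as ~2.0833333333333333e-05
exact = Fraction(dt)  # the value the double actually holds

print(exact)              # 6148914691236517/295147905179352825856
print(exact.denominator == 2**68)      # True: exact binary expansion
print(exact.limit_denominator(48000))  # 1/48000, the intended step
```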