I am currently trying to create a control system that needs to find a solution to a series of complex linear equations that does not have a unique solution.
My problem arises because there will only ever be six equations, while there may be upwards of 20 unknowns (usually far more than six). Of course, this will not yield a unique solution through standard Gaussian elimination or by reducing the corresponding matrix to reduced row echelon form.
However, I think I may be able to optimize things further and get a more accurate solution, because I know that each unknown cannot take a value smaller than zero or greater than one, but is free to take any value in between.
Of course, I am trying to create code that finds a correct solution, but in the case that multiple combinations yield satisfactory results, I would want to minimize the sum of (value of unknown × efficiency constant) over all unknowns, i.e. minimize Σ xᵢ·eᵢ for i = 0…n; still, finding an accurate solution takes priority over that.
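To make the shape of what I'm after concrete, here is a rough sketch of the kind of solve I have in mind (this uses SciPy's linprog purely for illustration, and the matrix A, right-hand side b, and efficiency constants e below are made-up stand-ins for my real data):

    import numpy as np
    from scipy.optimize import linprog

    # Stand-in data: 6 equations, 20 unknowns (my real A, b, and e come from the controller).
    rng = np.random.default_rng(0)
    A = rng.random((6, 20))      # coefficients of the six linear equations
    x_true = rng.random(20)      # a known point in [0, 1]^20, just to guarantee feasibility
    b = A @ x_true               # right-hand side
    e = rng.random(20)           # efficiency constants, one per unknown

    # Minimize sum(e_i * x_i) subject to A @ x == b and 0 <= x_i <= 1.
    result = linprog(c=e, A_eq=A, b_eq=b, bounds=[(0.0, 1.0)] * 20, method="highs")

    if result.success:
        print("solution:", result.x)
        print("objective:", result.fun)
    else:
        print("no feasible solution:", result.message)

If the equations can't always be satisfied exactly, I imagine the same idea could be done in two stages: first minimize the residual of A·x = b under the 0..1 bounds, then minimize the weighted sum among the solutions that achieve that residual.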
Performance is also important, since this algorithm may need to run several times per second.
So, does anyone have any ideas on how to implement this?
Genetic algorithms are capable of quickly finding fairly good solutions for these sorts of problems, but depending on the shape of your fitness function they may get stuck in local optima that may be arbitrarily less optimal than the global optimum. You can mitigate this effect by increasing the mutation rate, which makes the algorithm more exploratory at the cost of taking longer to converge on a solution.
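A bare-bones version of that idea in Python/NumPy might look something like the sketch below; the fitness simply penalizes the residual of A·x = b plus a small multiple of the Σ xᵢ·eᵢ term, and the population size, mutation rate, and penalty weight are placeholder values you would have to tune for your problem:

    import numpy as np

    def genetic_solve(A, b, e, pop_size=200, generations=500,
                      mutation_rate=0.1, objective_weight=0.01, seed=0):
        """Evolve vectors x in [0, 1]^n that approximately satisfy A @ x = b
        while keeping sum(e * x) small. All parameters are illustrative."""
        rng = np.random.default_rng(seed)
        n = A.shape[1]
        pop = rng.random((pop_size, n))                       # random initial population

        def fitness(pop):
            residual = np.linalg.norm(pop @ A.T - b, axis=1)  # how badly A @ x = b is violated
            objective = pop @ e                               # secondary goal: sum(e_i * x_i)
            return -(residual + objective_weight * objective) # higher is better

        for _ in range(generations):
            scores = fitness(pop)
            # Tournament selection: keep the better of two randomly chosen individuals.
            i, j = rng.integers(pop_size, size=(2, pop_size))
            parents = np.where((scores[i] > scores[j])[:, None], pop[i], pop[j])
            # Uniform crossover between consecutive parents.
            mask = rng.random((pop_size, n)) < 0.5
            children = np.where(mask, parents, np.roll(parents, 1, axis=0))
            # Mutation: occasionally replace a gene with a fresh random value in [0, 1].
            mutate = rng.random((pop_size, n)) < mutation_rate
            children = np.where(mutate, rng.random((pop_size, n)), children)
            # Elitism: carry over the current best individual unchanged.
            children[0] = pop[np.argmax(scores)]
            pop = children

        return pop[np.argmax(fitness(pop))]

Raising mutation_rate makes the search more exploratory (less prone to getting stuck in a local optimum) at the cost of slower convergence, which is the trade-off mentioned above.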
When I did controls algorithms, it was often, but not always, the case that if you solve some awful thing like this for a few cases, you may find that the optimal solutions are reached in a small number of ways (e.g. the best answer always has variable 10 maxed, variable 18 minimized, and variable 7 smaller than variable 18, or some silly set of rules like that). It may pay off richly to solve this thing on real data a great many times and do a statistical study of the variables to see which ones have the greatest effect on the solution, and what patterns you can find in the answers. That in turn could let you eliminate some of the variables entirely, at least as far as solving goes (they may still need a value to set a control or something, even if they have little to no effect on the actual solution). This may not handle unexpected data well, though: if you go this route, be sure your system either won't see unexpected data or has a backup plan for what to do when the data does not match a known pattern (and the backup plan likely puts you right back where you are now...)
Solving it several times a second should be no big deal on anything remotely computerish; we ran equally complex stuff at 50-75 Hz on computers with 300 MHz or so processors. What kind of hardware will you have?
Also, if the results are satisfactory, what do you get from a 'better' answer?