I have made a function that takes a matrix I input, computes its determinant and its inverse, and then multiplies the inverse with the original matrix to give an identity matrix.
I cannot seem to figure out what is wrong with my code. It almost gives me the identity matrix; however, instead of exact zeros where there should be zeros, I get ridiculously small numbers.
Is this just an inherent issue with multiplying doubles in C++? I have listed my function below in case anyone wants to take a look at it. Keep in mind this is for a basic (2x2) matrix.
Matrix Multiply(const Matrix& A, const Matrix& B)
{
    if (A.getcols() != B.getrows())
    {
        cout << "Better luck next time, sucker!" << endl;
        cout << "In order to multiply two matrices, the number of columns of A must equal the number of rows of B." << endl;
        exit(1);
    }
Also, the function runs and computes some sort of value. I created a class called Matrix with several member functions, some of which are used in the code above. More or less, I'm just begging someone with linear algebra and coding knowledge to come in and let me know whether my loops are set up to perform the computations in the correct order.
+= is preferred, e.g. value +=.
But compute directly into the result: result.at(?,?) += a(etc)*b(etc). The value temporary is not useful; get rid of it, or make value a loop-local reference to result(...).
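A minimal sketch of what accumulating straight into the result looks like. The asker's Matrix class isn't shown, so a plain vector-of-vectors (the Mat alias below is hypothetical) stands in for it:

```cpp
#include <cstddef>
#include <vector>

using Mat = std::vector<std::vector<double>>;

// Accumulate each product term directly into the result cell with +=,
// with no separate 'value' temporary. Mat stands in for the asker's
// Matrix class (getrows/getcols/at are not shown in the question).
Mat multiply(const Mat& a, const Mat& b)
{
    std::size_t rows  = a.size();
    std::size_t cols  = b[0].size();
    std::size_t inner = b.size();          // a's cols == b's rows
    Mat result(rows, std::vector<double>(cols, 0.0)); // zero-initialised
    for (std::size_t i = 0; i < rows; ++i)
        for (std::size_t k = 0; k < inner; ++k)   // i-k-j order is cache-friendly
            for (std::size_t j = 0; j < cols; ++j)
                result[i][j] += a[i][k] * b[k][j]; // no temporary needed
    return result;
}
```

Because the result is zero-initialised up front, += is all the inner loop needs.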
As far as I can tell, it's right. There is a sub-N^3 algorithm (e.g. Strassen's), but it's a royal pain to set up and not necessary for small problems. It is generally more efficient to transpose B first so you iterate over rows in both matrices, but here again, that matters for larger problems.
Have you tested it on a known problem with a known answer? (There are plenty on the web!)
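One way to do that known-answer check for the 2x2 case: pick a matrix whose exact inverse you can write down by hand. A = [[4,7],[2,6]] has determinant 10, so its inverse is [[0.6,-0.7],[-0.2,0.4]], and A times that inverse should be the identity to within floating-point tolerance. (The std::array types here are just for this sketch, not the asker's Matrix class.)

```cpp
#include <array>
#include <cmath>

using M2 = std::array<std::array<double, 2>, 2>;

// 2x2 product, accumulating into a zero-initialised result.
M2 mul2(const M2& a, const M2& b)
{
    M2 c{};
    for (int i = 0; i < 2; ++i)
        for (int j = 0; j < 2; ++j)
            for (int k = 0; k < 2; ++k)
                c[i][j] += a[i][k] * b[k][j];
    return c;
}

// True if m is the identity to within tol; exact == comparison against
// 0.0 and 1.0 is the wrong test for floating-point results.
bool nearlyIdentity(const M2& m, double tol = 1e-12)
{
    for (int i = 0; i < 2; ++i)
        for (int j = 0; j < 2; ++j)
            if (std::fabs(m[i][j] - (i == j ? 1.0 : 0.0)) > tol)
                return false;
    return true;
}
```

The tolerance comparison is the important part: "did I get exactly 0?" is the wrong question for doubles.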
Near-zeros are normal, but not so much for a 2x2; that is some 'bad' scaling to hit an issue so early! You can look up 'condition number' and see if your data is just 'difficult'. You are doing a lot of stuff (determinant, inverse, etc.) and errors compound in each step. It's tied to doubles (not to C++) and to the algorithm you are using (you don't have any numerical methods applied). You can help this somewhat if you normalize the matrix (divide all elements by the largest element) and multiply that back afterwards. Or you can pick a cutoff and assign 0 to any result that is too small. There are many other numerical tricks to improve the answer; each one costs time to do and code to add.

Generally, if you want better answers, don't do this yourself: get a library like Eigen. Each thing you want to do will spawn two more, and each one needs numerical tricks to ensure the best answer... it explodes into a lot of work at an alarming rate. I had to solve AX + XB = C for X, and ended up writing an entire matrix library with eigenvalues, LU factorizations, pseudoinverses and all that junk for one blasted equation (some of it I didn't have to have, but I was trying to find a way to solve that wretched thing).
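The cutoff idea from above can be sketched in a few lines. The threshold here (1e-10) is an arbitrary illustration; you'd choose it relative to the magnitude of your data:

```cpp
#include <cmath>
#include <vector>

// After computing A * inv(A), snap entries smaller than tol to exactly
// 0.0 so the near-zeros print as zeros. The default tol is a guess for
// this sketch, not a universally right value.
void snapToZero(std::vector<std::vector<double>>& m, double tol = 1e-10)
{
    for (auto& row : m)
        for (auto& x : row)
            if (std::fabs(x) < tol)
                x = 0.0;
}
```

This only cleans up the display; it does nothing about the underlying round-off, which is why normalization or a proper library is the better long-term answer.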
Unrelated:
After 3x3, and arguably even then, finding a determinant is as costly as, or costlier than, just doing the work, especially if you find it the hard way (cofactor expansion). If you find it the easy way, you will see that computing the pieces you need for that has already solved whatever you wanted the determinant for in the first place. If there is a super-efficient way to find it without first computing a bunch of tangentially useful stuff, I never ran across it.
"Ridiculously small numbers" instead of exactly 0 is possible with floating-point round-off. There is no way of knowing for sure, because you haven't shown testable code.
Your innermost loop (on j) would be wrong for non-square matrices: it should be
j < A.getcols()
(or you could obviously use B.getrows()).
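To see why that bound matters, here is a sketch on a non-square case: for a 2x3 times 3x2 product, the summation index must run over A's column count (which equals B's row count). Mat below is a hypothetical stand-in for the asker's Matrix class, with j as the summation index to match the loop naming above:

```cpp
#include <cstddef>
#include <vector>

using Mat = std::vector<std::vector<double>>;

// Non-square multiply: the innermost (summation) loop runs j < A.getcols(),
// which is the same as B.getrows(). Using A's row count there instead
// would read out of bounds or truncate the sum for rectangular inputs.
Mat multiplyRect(const Mat& a, const Mat& b)
{
    std::size_t rows  = a.size();     // A.getrows()
    std::size_t cols  = b[0].size();  // B.getcols()
    std::size_t inner = a[0].size();  // A.getcols() == B.getrows()
    Mat c(rows, std::vector<double>(cols, 0.0));
    for (std::size_t i = 0; i < rows; ++i)
        for (std::size_t k = 0; k < cols; ++k)
            for (std::size_t j = 0; j < inner; ++j) // j < A.getcols()
                c[i][k] += a[i][j] * b[j][k];
    return c;
}
```

For the square 2x2 case in the question the wrong bound happens to give the same numbers, which is why the bug hides until you try a rectangular input.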
Of course, there could be errors elsewhere in the code that you haven't shown...