All in all, we will implement the following formula, whose maximum error is below 10⁻⁵:

For 1 <= x <= 2:

ln(x) ≈ -1.941142 + (3.529580 + (-2.461605 + (1.130888 + (-0.2888280 + 0.03111568 * x) * x) * x) * x) * x

This formula is very simple, using only addition, subtraction and multiplication. Each coefficient has 6 to 7 significant digits (not counting trailing zeros), so they can even be memorized.

The final implementation on the soroban will have the following costs:

- Assignments: 6
- Additions: 1
- Subtractions: 5
- Multiplications: 6
- Divisions: 1

- Method for producing efficient algorithms in Maple
- Implementing the formula on the soroban
  - Normalizing the value
  - Calculation
  - Full example

There are not many resources available on the internet for logarithm calculation on the soroban. One of the best methods is detailed on this website: http://webhome.idirect.com/~totton/soroban/Logarithms/

It is still quite complicated, and it yields an accuracy of only 3 digits.

After searching for a while, I found this idea from Prof. Robert Israel: https://math.stackexchange.com/a/61347/403539

In the comments he gives a short explanation of how that result was produced, which I was able to reproduce in Maple. The commands are the following:

with(numapprox):
Digits := 7
minimax(ln(x), x = 1..2, 5, 1, 'maxerror')
maxerror

Press Enter after each line; don't paste the whole text at once, but enter it line by line.


We don't get exactly the same results, because a different precision (the "Digits" setting) was chosen.

The important part is the “minimax” command which uses the Remez algorithm to find the best approximating polynomial for a function.

If we right-click on the expression and select “expand”, then we get the canonical form of the polynomial:

-1.941142 + 3.529580*x - 2.461605*x² + 1.130888*x³ - 0.2888280*x⁴ + 0.03111568*x⁵

This makes it clear why we call these numbers coefficients. This expanded form is not useful on the soroban, though, so we will use the expression with the parentheses.

The parameters of the "minimax" command are the following:

- "ln(x)": The first parameter is the expression to be approximated. It can be complicated, e.g. "sin(x² + 3*x)".
- "x = 1..2": The second parameter restricts the approximation to the interval 1.0 <= x <= 2.0. If this interval is widened to 1..10, the maxerror also increases, roughly 100-fold.
- "5": The third parameter gives the maximum degree of the resulting polynomial, which is also the maximum number of multiplications. Increase this if you want to reduce the error.
- "1": The fourth parameter is a weight function, with which one can ask for more precision on some subintervals and less on others. We used "1" here, which asks for equal precision at all points of the 1..2 interval.
- "'maxerror'": In the fifth parameter we can specify the name of a variable; the minimax command returns the maximal error in that variable.

Feel free to play with the parameters and with the other commands. Notice that increasing the “Digits” makes only a subtle difference in the accuracy. The maximal error can be decreased mostly by allowing polynomials of higher degrees. Also notice that Maple will return an error message if you try to generate a high-degree polynomial with a low “Digits” setting.

Since the formula works only on the 1..2 interval, we will have to do something with larger numbers.

Basically we will always use the following decomposition, which makes use of the two most useful properties of the logarithm: it converts multiplication into addition and exponentiation into multiplication.

log(y) = log(2ⁿ * x) = n*log(2) + log(x)

Example value: 724.552 (We want to calculate the logarithm of this.)

Since it is outside of the 1..2 interval, we will have to normalize it.

Now we have to find the largest power of 2 which is still smaller than (or equal to) our number:

Have a look at a list of powers of 2:

1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048, 4096, 8192, …

As can be seen, our value is between these two powers: 512 <= 724.552 < 1024

So the power we were searching for is: 512 = 2⁹

We can get the x value by dividing 724.552 by 512:

724.552 / 512 ≈ 1.415

log(724.552) = log(512 * 1.415) = log(2⁹ * 1.415) = 9*log(2) + log(1.415)

We will show how to calculate log(1.415) in the next paragraph. The value of log(2) can be memorized: ln(2) = 0.69315, log₁₀(2) = 0.30103

Note that there's another way to do the decomposition: by repeatedly dividing the value by 2 until we reach the 1..2 interval. In this case we have to remember (or store on the other side of the soroban) the number of times we divided by 2; that number will be the power.

This is the approximating formula already shown:

For 1 <= x <= 2:

ln(x) ≈ -1.941142 + (3.529580 + (-2.461605 + (1.130888 + (-0.2888280 + 0.03111568 * x) * x) * x) * x) * x

The x value must already be normalized into the range 1..2.

The raw algorithm of that formula is the following:

x should already be on the soroban as the result of the normalization. Then:

1. * 0.03111568
2. - 0.288828
3. * x
4. + 1.130888
5. * x
6. - 2.461605
7. * x
8. + 3.529580
9. * x
10. - 1.941142

But this can’t be used directly because of the negative values involved. Since using complementary values would be too complicated we will divide the soroban into two halves in the next chapter.

We want to calculate the natural logarithm of 724.552.

Divide the soroban into 2 halves:

- The left side stores the negative results
- The right side stores the positives

Always round the results to 6 decimal places after the decimal point.

1. Set up the value on the right side of the soroban: Right = 724.552
2. Search for the largest power of 2 which is still smaller than 724.552: 2⁹ = 512
3. Right / 512 = 1.415 (write this number and 2⁹ down on paper)

Now the value is normalized and we can start to evaluate the polynomial:

4. Right * 0.03111568 = 0.044029 (coefficient of x⁵)
5. Left = 0.288828 (coefficient of x⁴)
6. Left - Right = 0.244799
7. Left * 1.415 = 0.346391
8. Right = 1.130888 (coefficient of x³)
9. Right - Left = 0.784497
10. Right * 1.415 = 1.110063
11. Left = 2.461605 (coefficient of x²)
12. Left - Right = 1.351542
13. Left * 1.415 = 1.912432
14. Right = 3.529580 (coefficient of x¹)
15. Right - Left = 1.617148
16. Right * 1.415 = 2.288265
17. Right - 1.941142 = 0.347123 (coefficient of x⁰)

That is the result of ln(1.415), accurate to 4 decimal places.

Now we use the left side to store 9·ln(2) (although it is a positive value):

18. Left = 0.693147 (this is ln(2))
19. Left * 9 = 6.238323 (because 2⁹ = 512)

To get the final result:

20. Right + Left = 6.585446

The exact value is ln(724.552) ≈ 6.585553, so our result is accurate to within about 10⁻⁴; the small loss comes mostly from rounding x to 1.415.

Notice that during the polynomial evaluation we used only subtraction. That is because one operand of each addition was always a negative value.


Although in theory the Simplex method is quite simple (even economists learn it), its implementation is much harder and "dirtier" than one would think. I coded only its simplest form, the one using dense matrices, and still found myself in the middle of complications and rare cases, none of which were mentioned in the books and articles I read. Originally I wanted to undertake the sparse-matrix version, which maintains an LU decomposition, but after reading the complaints of developers with 10-20 years of experience in this field, I quickly changed my mind. Still, this little trip gave me a deeper insight into the concept of linear programming.

The Simplex Solver class consists of these two files. I’ve put both under the GNU General Public License:

It has two dependencies:

- This exception class
- And the Eigen library

Here you can check out this example program:

- Source code: main.cpp
- Executable (64 bit, Windows, console): Simplex.zip

Two example problems are solved in the main.cpp above. Both are 2-dimensional, although the class can work with any number of variables and any matrix size that fits into memory. (I might do some benchmarks later to determine the variable/constraint count at which it starts to slow down.) One of the problems is a maximization, the other a minimization.

The maximization problem can be formulated in matrix form like:

where **x** contains the variables, which together with the coefficients of **A** and **b** form a set of linear inequalities called constraints. The **c** matrix contains the coefficients of the objective function **cx**, which we want to maximize. This is only a 2-dimensional problem, so a simple graphical representation is possible (with the renaming x1 = **x** and x2 = **y**):

The blue area is defined by the linear inequalities; the task is to find the topmost point where the red line touches the blue area if we move it downwards without changing its slope. It is easy to see that the solution is the point (5, 8).

Now it's time for the minimization problem:

And the geometric representation is: (Renamed again: x1 = **x** and x2 = **y**)

There is one important thing to mention about the usage: the **A** and **b** matrices are not passed separately to the solver class (as is usual in most simplex implementations); they form one big constraint matrix in which **b** is the rightmost column, just like in the algebraic inequality form.

SimplexSolver *solver1 = NULL;
MatrixXd constraints(3, 3);
VectorXd objectiveFunction(2);
try {
    objectiveFunction << 1, 2;        // maximize x + 2y
    constraints << 2, 3, 34,          // 2x + 3y <= 34
                   1, 5, 45,          //  x + 5y <= 45
                   1, 0, 15;          //  x       <= 15
    solver1 = new SimplexSolver(SIMPLEX_MAXIMIZE, objectiveFunction, constraints);
    if (solver1->hasSolution()) {
        cout << "The maximum is: " << solver1->getOptimum() << endl;
        cout << "The solution is: " << solver1->getSolution().transpose() << endl;
    } else {
        cout << "The linear problem has no solution." << endl;
    }
} catch (FException *ex) {
    ex->Print();
    delete ex;
}

where X is a row matrix containing all the variables, e.g. X = [x y z], and M is a square matrix of order **n** if X has **n** variables. Let's see an example:

So this 2 variable quadratic expression can be represented by a square matrix of order 2.

We will see in the examples below how to take advantage of some properties of these matrices by using them for specific kinds of factorization, and how to reach algebraic manipulation by the rearrangement of the factors.

The first example has theoretical significance only, because the efforts required to reach the matrix form are nearly the same as factoring the expression directly by noticing a pattern. The method described in the second example is used regularly in mechanics.

Every rank-1 matrix can be written as the product of a column matrix and a row matrix. In addition, a symmetric matrix of rank 1 can be expressed as the product of a column matrix and its transpose. Example:

The 3×3 matrix in the example above has two properties:

- symmetric
- its rank is 1

So we can factor it as explained before. The elements of the factor matrix can be obtained as the square roots of the values in the diagonal of the 3×3 matrix: N = [2 6 7]. With this, the following steps are possible:

So we have factored the algebraic expression using the properties of the matrix M.

Every quadratic expression that can be written in matrix form with a symmetric matrix can be transformed into a sum of squares, using the spectral theorem of linear algebra. Example:

Notice that this 2×2 matrix is symmetric, but its rank is 2, not 1, so we could not use the method from the first example. But we don't want to do that anyway. The manipulation in this example requires only one property:

- symmetry

Then we have to calculate the eigenvalues, and the eigenvectors of the matrix:

- The eigenvalues are: 9, 1
- And the eigenvectors:

Now it is possible to factor the matrix by the spectral theorem, which states:

where Q is the matrix whose columns are the eigenvectors divided by their lengths, and Λ (upper-case lambda) is the eigenvalue matrix, which contains all eigenvalues in its diagonal while its other entries are zero.

Now the following steps are possible:

**Note:** In the final step we could easily move the coefficients under the squares, but leaving it this way reflects the original use of this theorem: getting the axes of an ellipse.

An ellipse defined by this equation:

This is why the spectral theorem is also called the principal axis theorem, which was discovered by James Joseph Sylvester.


With it, you can draw custom graphs (including arbitrary loops, parallel edges, directed edges, etc.) and generate code for these computer algebra systems: Maple, MATLAB and Maxima. The generated code can be used immediately after a copy-paste.

You can download the program here:

- GraphMatrixGenerator.zip (made in C#, requires .NET 3.5)


- Vertices: select "Vertex" mode. Click on the canvas to create one, **right-click** on a vertex to remove it.
- Edges / arcs: select "Edge" or "Arch" mode. Click on a vertex to select it, then click on a second vertex to draw an edge or an arc. To remove edges/arcs, just **right-click** on the **second vertex**.
- Loops: select "Loop" mode. Click on a vertex, **keep the left button pressed**, and move the pointer to set the direction of the loop, then release it. You can add **multiple** loops. **Right-click** to remove a loop.
- Moving vertices or the canvas: select the proper mode, click on a vertex or on the canvas, and move the pointer before you release it.

This example is made for the Maxima system, which is free, open-source software, so anybody can download and use it.

I've drawn this simple, disconnected, undirected graph:

And generated a Laplacian matrix for Maxima:

m: matrix (
  [2, -1, 0, -1, 0, 0, 0, 0, 0],
  [-1, 2, -1, 0, 0, 0, 0, 0, 0],
  [0, -1, 3, -1, -1, 0, 0, 0, 0],
  [-1, 0, -1, 2, 0, 0, 0, 0, 0],
  [0, 0, -1, 0, 2, -1, 0, 0, 0],
  [0, 0, 0, 0, -1, 1, 0, 0, 0],
  [0, 0, 0, 0, 0, 0, 2, -1, -1],
  [0, 0, 0, 0, 0, 0, -1, 2, -1],
  [0, 0, 0, 0, 0, 0, -1, -1, 2]
);

Copy this text into Maxima and press Shift+Enter. It will display the matrix:

Now let's calculate the eigenvalues of the matrix to get the spectrum of the graph:

float(eigenvalues(m));

The response is (after a shift-enter):

[[0.43844718719117, 4.561552812808831, 2.0, 3.0, 0.0],[1.0, 1.0, 2.0, 3.0, 2.0]]

We got two lists: the first contains the distinct eigenvalues, and the second their multiplicities. So the full, ordered list of eigenvalues is:

0, 0, 0.43844718719117, 2, 2, 3, 3, 3, 4.561552812808831

The second smallest eigenvalue is called the algebraic connectivity; if it is zero, the graph is not connected, which is indeed the case here.


So if deadlocks are so common, why not study their properties? This post proves a simple property of most database systems using the tools of graph theory.

Databases are accessed by multiple clients at the same time. Inside the database, a separate process exists for every client, and these processes carry out the clients' instructions. The processes access resources (tables, records, indexes, etc.), and only one process can access a resource at a time; the others have to wait until it finishes. When a process X has to wait for another process Y, we say that X is dependent on Y. This dependency ends when Y frees the resource.

These things can be beautifully represented in the terms of graph theory. Let’s see a comparison table between the different terms of these two fields:

| Database system | Graph theory |
| --- | --- |
| Process | Vertex |
| Dependency | Directed edge (arc) |
| Deadlock | Directed cycle |
| Maximum number of dependencies of the processes | The maximum outdegree of the vertices |

Here is a picture of a typical deadlock:

We will prove this property:

- In a transactional database system, two or more deadlock cycles are never connected to each other. They are always separate.

This statement is based on the fact that in most transactional database systems, a process can be dependent on at most one other process. So processes wait, instead of trying to carry out other tasks in the meantime.

Now let’s translate these statements into the language of graph theory:

**In a connected, directed graph (digraph) G, if the maximum outdegree of the vertices is 1, then it can contain maximum one directed cycle.**

Here is a summary of the used symbols:

So we will prove the original statement by deducing a contradiction from its negation. Suppose we have two directed cycles inside G, named C and D.

[1] states formally that the maximum outdegree is 1: outdeg(v) ≤ 1 for every vertex v of G.

The vertices in the two cycles have outdegree at least 1, because every vertex points to the next vertex in the cycle sequence. But because of [1], the vertices in the cycles must have outdegree exactly 1, as shown in [2] and [3]:

Because our graph G is connected, a simple path P must exist between the two cycles, connecting them. The path is also a subgraph, and a well-known fact for subgraphs is that the sum of the outdegrees of the vertices gives the number of edges [4]:

And for simple paths we also have a specific relation between its vertices and edges:

If we substitute the right side of [5] into the right side of [4], we get:

Which means: The sum of the outdegrees of all vertices inside a path is always equal to the number of vertices minus one.

For the final step we will use the pigeonhole principle.

The path P connects the two cycles, so its two endpoints are inside the cycles. As a result, the endpoints already have outdegree 1, and they cannot have more because of [1]. So we have n_P - 2 free vertices, and we have to distribute n_P - 1 outdegree values among them. By the pigeonhole principle, at least one vertex must have outdegree at least 2.

This contradicts [1], so the original statement is proved.

The usual proof of this theorem is built on induction, but I later found a proof that uses a basic inequality.

I am sure I'm not the first one to have seen this solution, although I couldn't find it on the net yet.

Used variables:

- n: number of vertices
- m: number of edges
- k: number of partitions over the vertices
- a_i: number of vertices inside partition i

All are natural numbers

The sum of the vertices inside the partitions gives the full number of vertices. We will use this many times.

The number of edges in a complete k-partite graph is:

Using (1), we can transform (2) into a form that contains a sum of squares.

Now let's write down the inequality between the arithmetic and the quadratic mean over the partition sizes. This inequality can be derived from the Cauchy-Schwarz inequality.

Again (1) is used, and at the end we made the quadratic sum explicit on the right side.

Before I found this, I thought a lot about how to connect the sum of squares of the partition sizes, which appears in (3), to a constant in a way that would give a lower bound, with equality when the partitions have equal sizes. We need a lower bound because the quadratic sum in (3) comes after a minus sign, and we need a maximum for m.

Now we only have to multiply (4) by -1, add n², and then divide by 2, and we get an upper bound for the number of edges, because the left side now equals m (see (3)).
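The equations (1)-(5) referenced above were images in the original post; here is a LaTeX transcription reconstructed from the surrounding text (my reconstruction, so treat it as a sketch of the derivation rather than the original figures):

```latex
% (1) the partition sizes sum to the number of vertices
a_1 + a_2 + \dots + a_k = n

% (2) edge count of the complete k-partite graph: one edge
%     between every pair of vertices in different partitions
m = \sum_{1 \le i < j \le k} a_i a_j

% (3) rewriting (2) with (1), since n^2 = \left(\sum_i a_i\right)^2
m = \frac{1}{2}\left(n^2 - \sum_{i=1}^{k} a_i^2\right)

% (4) arithmetic mean <= quadratic mean, applied to the a_i,
%     squared and multiplied by k, using (1)
\frac{n^2}{k} \le \sum_{i=1}^{k} a_i^2

% (5) substituting (4) into (3) gives the upper bound
m \le \frac{1}{2}\left(n^2 - \frac{n^2}{k}\right)
    = \left(1 - \frac{1}{k}\right)\frac{n^2}{2}
```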

There is equality in (5) when the partitions have equal numbers of vertices, because the inequality between the arithmetic and the quadratic mean has this property.

Equality of the partition sizes is only possible when k | n; in other cases they can only be nearly equal. On the one hand this can be seen as a flaw of the proof, but on the other hand it suggests that the inequality between the arithmetic and the quadratic mean is also sensitive to "nearly equal" values, which could lead to further investigations.
