Monday, April 17, 2017

P = NP Problem 3

Let me restate the problem with P = NP: as usually posed, it asks for the largest amount of information, rather than the smallest amount. If you ask for information which you are not going to check, then obviously one side of the equation will get exponentially larger. It is a matter of, if you like, infinitesimals. The side which has to produce needless information will rapidly become larger. Therefore it is the smallest amount of information which is of interest to P = NP.
Then there is the larger point that P = NP is recursive - that is, one must delve down into the matter until the actual information is defined by one of the most basic units of logic.

These 2 points together, as well as a certain amount of confusion, are all that is necessary to resolve the problem. Of course I do not believe that you will understand this - and if you do, you will reject all of it. That is fine; either I will be forgotten, or the next generation of mathematicians will see this as obvious. The only point that I can raise is the matter of Gödel - Turing - Nash as an analog to the almost immediately repeated Ampère - Volt - Ohm battery. No one has yet thought of it, but I did. Since this small thing is at least nominally important, the larger thing must be at least looked at. And then forgotten, until such time as a new generation of mathematicians and computer scientists are willing to take it seriously.
So how does this work? I will again tell you to look up pages 117 and 118 in "Set Theory and the Continuum Hypothesis", by Paul J. Cohen. The text that I have is filled with some truly obvious mistakes, and I hope that someone will correct the edition, and perhaps clean up the type. I can give you 2 names at present.
The 1st step is to realize that each proof will come down to one of these forcing situations. What you have to do is determine which forcing situation is involved, and then call a function with the correct formal parameters, with the intent of reducing the parameters to pointers. Then, at the next level, reduce again until you are down to one of the forcing scenarios. A toy sketch of that recursive shape is below.
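Here is a minimal sketch of that shape in Python. The tuple encoding and the handler table are illustrative assumptions of mine, not anything from Cohen; it shows only the recursion bottoming out at the most basic units of logic, not an actual forcing argument.

    # Toy recursion: keep reducing a formula until every piece is one of
    # the most basic units of logic (here, the constants True and False).
    HANDLERS = {
        "and": lambda a, b: a and b,
        "or":  lambda a, b: a or b,
        "not": lambda a: not a,
    }

    def reduce_down(node):
        if isinstance(node, bool):   # a basic unit of logic: stop here
            return node
        op, *args = node             # determine which situation is involved
        return HANDLERS[op](*(reduce_down(a) for a in args))

    print(reduce_down(("and", True, ("or", False, ("not", False)))))  # True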
Let us take an example - an easy one, since it is used in Wikipedia. I refer of course to the subset sum problem. Of course it fits with the 1st problem of the type involved: it asks for as much information as possible, rather than as little information as possible. But think about Turing's Bombe machine - it had as little information as possible, and it knew only 2 things: the information was sent out, and it was acted upon by recipients. This going-out, going-in behavior was key to knowing that there was information at all. Another country may well have sent out non-information - the British would do so, and did so with gay abandon, for example the army of men commanded by Gen. Patton before the invasion. There was no such army, but they faked it really well.
With as much information as possible, the subset sum problem needs to generate information which has no use to the receiver - he can simply take the few pieces of information he does use, sum them, and he is done. But with as little information as possible, it is an entirely different picture. Now the checker needs to sum up all of the information, while the subset sum problem only needs to give out the minimum amount.
So what is the minimum amount?
The minimum amount is yes or no to the question - "Do any of the numbers sum to a number which is of interest?" That way, the checker needs to do only a sum of comparable size to the one that the subset solver needs to figure out.
But how do we do this?
We could do this in real time, and solve everything by doing Turing equations - but that would still be slower. There is a better method - instead of solving each equation, we just need to find out whether the equation could be solved in its current form. In essence we ask "could I solve this equation in its current form?" This is either true or false - but we do not need to know what the number actually is. So for example let us take {1,3,10} as our set, and {4} as our comparison. The greedy algorithm wants us to give back {1,3} and {4}, and then it needs only to add 3 and 1 together to get 4. Good work if you can get it.
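Here is what that greedy, witness-producing reading looks like as a minimal sketch. The function name and the exhaustive strategy are mine, for illustration only:

    from itertools import combinations

    def subset_sum_witness(numbers, target):
        # The greedy reading: the answerer must hand back the actual
        # subset, not just a verdict.
        for r in range(1, len(numbers) + 1):
            for combo in combinations(numbers, r):
                if sum(combo) == target:
                    return combo
        return None

    print(subset_sum_witness([1, 3, 10], 4))  # (1, 3)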
However, the parsimonious parser does not want to give anything but "yes", because it found a number which was equal to 4. And it did so not by testing every number, which would be slow, but by asking whether a group of numbers could be added up at all - in other words, "if I asked you whether these numbers added up to what I am looking for, could you add them up and produce a result?" In this case the parser is not interested in whether the numbers add up to anything in particular, but whether they could add up at all - or would there be an error? And it does so by declaring 4 to be "undecidable", and no other number. It then goes off to do the work.
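For contrast, here is the textbook decision-only formulation, which answers yes or no and never reveals the subset. This is not the "undecidability" device described above - just the standard way of stating the minimal-information version:

    def subset_sum_decision(numbers, target):
        # Build up the set of reachable sums; return only a verdict,
        # never the subset itself.
        reachable = {0}
        for n in numbers:
            reachable |= {s + n for s in reachable}
        return target in reachable

    print(subset_sum_decision([1, 3, 10], 4))  # True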
This requires 2 iterations, because if it added up it would produce 0,0 or 1,1 - no error. It would only respond with an error in the case of 0,1 or 1,0 - that is, it would be bouncing back and forth between the 2 answers (+, -). It would then keep bouncing around the error until either it produced one, which would mean it had found a match, or it had run out of numbers in its binary search tree.
In the case of an error, we know that the number is equal to - in this case - 4. Because it would return an error, which would be caught and shipped back to the code which called it - which will attach the numbers to the error code. In this case 1, 3, and 4. It returns with: move to the left, move to the right, or stop.
At which point this code would say that there was a number which added up to 4, and ship this result back to main. And we would be done without actually adding up anything; just the attempt would suffice.
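As a sketch of that control flow: the "error" can be modeled as an exception that is caught by the calling code, and "move to the left, move to the right, or stop" become the two recursive branches and the cut-off. To stay runnable this sketch does perform additions along the way, which the argument above says can be avoided; the names are mine, and it assumes positive numbers:

    class MatchFound(Exception):
        # The "error" that gets shipped back to the calling code on a hit.
        def __init__(self, subset):
            self.subset = subset

    def probe(numbers, target, partial=(), total=0):
        # Depth-first probe: raise instead of returning on a hit, so the
        # caller learns the answer from the exception alone.
        if total == target and partial:
            raise MatchFound(partial)
        if not numbers or total > target:
            return                                      # "stop"
        n, rest = numbers[0], numbers[1:]
        probe(rest, target, partial + (n,), total + n)  # "move left": take n
        probe(rest, target, partial, total)             # "move right": skip n

    try:
        probe((1, 3, 10), 4)
        print("no match")
    except MatchFound as hit:
        print("match:", hit.subset)  # match: (1, 3)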
At this point you would have to stop yourself, because you want to add up the numbers. But in a computer, adding up the numbers is actually more strenuous than checking whether the numbers could be added up, or whether there would be an error. Because the check comes before the addition.
Now we get back to page 117, because of course we need to force P. But fortunately, the very 1st finite forcing is the thing that we are looking for. Again, we do not need to know whether every forced number exists, just a particular number. And we can do this as many times as we like, because it is finite.
QED, as the saying goes - any number can be forced in this way. Yes, this is a Gödelian proof, involving 3 numbers, not 2, and a Turing line to resolve whether it can be done. Most of the heavy lifting was done by Cohen - he just became confused about biological - or silicon - vs. God-like beings - which again you do not need to believe in, just hypothesize the difference.
So let us review. The P = NP problem resolves to P != NP because the recipient can exercise a greedy algorithm, eventually forcing the giver to start solving problems which are going to be ignored. But in a more parsimonious mode, the giver can be Cohenesque about what it gives out. An example was shown.
This is where Nash comes in (and all of the sub-variants, such as side payments), because the 2 sides are going to search out their best algorithm, which in the simplest case is the prisoner's dilemma. This will be a later post. That is where both end up frustrating each other's desires. Think of it as a race. GTN.

Next, however, is the Cohen paradox.