Monday, September 16, 2019

Off Topic | Yet On | The Limits of Computing and Heuristics

Team. Numerous textbooks have been written that describe the limits of computing. One was written by one of the author's college mentors and instructors while he studied for a year at a well-ranked liberal arts college in the American heartland. The textbook describes some of the bounds established during the 1970s and 1980s.

Also, classic textbooks in the theory of computation from this time describe the notions spanning computability and intractability. Yet, a brief overview of Michael Sipser's work, which is more conceptual and qualitative and less quantitative, might make one question some of the results in the "well-established" traditional works.

When working with mathematical concepts, one must keep an architectural view, the big picture, while working in the details of the equations or, in the case of computing, the imperatives of the system under development. One can become so mired in the numbers that he loses his path along the way, seeing the trees and forgetting that he is in Sherwood Forest. And, the eagle-eye view lets one see the start of the trail, the finish, and the possible routes betwixt these.

And, this is ever so true in the field of algorithms and their analysis. Cormen, Leiserson, Rivest, and Stein wrote a classic, comprehensive text on the subject. Yet, it presents heuristics as computational problems for which algorithms that are efficient in space and time cannot be found. These are placed in the class of "nondeterministic polynomial-time"-complete problems, or NP-complete problems.

In terms of algorithm analysis, a procedure is said to be efficient if the number of steps required to carry it out is a polynomial function of the size of its input and the amount of memory required for performing the computation is also a polynomial function of the input size.

In practice, it is best that this be a quadratic polynomial or better, such as a logarithmic or linear function of the input. In other words, the number of computations required for processing the input should grow as one of these functions as the size of the input increases.
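For a rough feel of these growth rates, the following short Python sketch (an illustrative count only, not a measurement of any particular algorithm) tabulates how many steps a linear, a quadratic, and an exponential procedure would take as the input size doubles.

# Illustrative step counts only: how linear, quadratic, and exponential
# growth compare as the input size n doubles.
for n in (10, 20, 40, 80):
    print(f"n={n:3d}  linear={n}  quadratic={n*n}  exponential=2^{n} ({2**n})")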

Yet, as taught, these problems were presented as part of an early doctoral dissertation during the earlier years of graduate computer science programs. Seeing that the first computer science departments in the United States were only established in the 1960s, these programs are rather new. And, not absolutely everything novel presented among the theses and dissertations prepared during these early years of computing has stood the test of time. For one, it is said that one should never be a respecter of personages; anyone is fallible. As such, one should not be in awe of program names such as Cambridge, Oxford, Harvard, Princeton, Brown, Harvey Mudd, Stanford, Yale, MIT, or CalTech. Furthermore, the greats in computing have limitations, human foibles, and weaknesses in the midst of their mistakes. So, simply because Lamport, Berners-Lee, Turing, Gosling, von Neumann, Cerf, Diffie, Hellman, Goldwasser, Naur, Kay, Karp, Hopcroft, Tarjan, Hilbert, Dirac, Nash, or any other "heavy" in computing and mathematics states it as such, it might not be the case.

Professor Stephen Cook is quite famous for his work with "heuristics"; yet, the connotation of this term in computer science is suggestive of "a problem which does not have an efficient optimal solution, one that must be solved approximately". Yet, as we excel in certain areas, we often have deficiencies, minor and gross, in others. Most mathematicians and computer scientists are not known for the strength of their vocabulary, on average, although they are quite "bright". They simply do not focus on such topics. One cannot excel in all areas. And, finding a student, even at the graduate level, who will check every reference in a research paper which he is reading or define every term in a problem specification is exceedingly rare. Most students simply "fill in" meaning from the context. At times, this might misrepresent the actual meaning of the passage and occasionally present an opposite semantic. And, in some languages, such as English, which can have quite a spin on it at times, words with similar sounds might be antonyms, such as timidity and temerity.

And, on the topic of computing and heuristics, the following are three of the traditional definitions of the term.

from: www.dictionary.com

heuristic
[ hyoo-ris-tik or, often, yoo- ]
adjective
serving to indicate or point out; stimulating interest as a means of furthering investigation.
encouraging a person to learn, discover, understand, or solve problems on his or her own, as by experimenting, evaluating possible answers or solutions, or by trial and error: a heuristic teaching method.
of, relating to, or based on experimentation, evaluation, or trial-and-error methods.
Computers, Mathematics. pertaining to a trial-and-error method of problem solving used when an algorithmic approach is impractical.
noun
a heuristic method of argument.
the study of heuristic procedure.
Origin of heuristic
1815–25; < New Latin heuristicus, equivalent to Greek heur(ískein) to find out, discover + Latin -isticus -istic

from: the Webster's Collegiate New World Dictionary

heuristic
adjective

    The definition of heuristic refers to techniques, activities or lessons that allow someone to discover something for himself or by finding solutions through experiments or loosely defined rules.

    A process whereby you are asked questions to discover answers on your own and learn more about yourself on your own is an example of a process that would be described as heuristic.

noun

    Heuristics are defined as ways of finding out the answer to a question.

    An example of heuristics are common sense and trial-and-error.


from: the Merriam-Webster Dictionary

heuristic

helping to discover or learn; specif., designating a method of education or of computer programming in which the pupil or machine proceeds along empirical lines, using rules of thumb, to find solutions or answers

Origin of heuristic
from German heuristisch from Classical Greek heuriskein, to invent, discover: see eureka

heuristic adjective
heu·​ris·​tic | \ hyu̇-ˈri-stik

Definition of heuristic (Entry 1 of 2)

: involving or serving as an aid to learning, discovery, or problem-solving by experimental and especially trial-and-error methods // heuristic techniques // a heuristic assumption // also : of or relating to exploratory problem-solving techniques that utilize self-educating techniques (such as the evaluation of feedback) to improve performance // a heuristic computer program

heuristic noun
heu·​ris·​tic | \ hyu̇-ˈri-stik

Definition of heuristic (Entry 2 of 2)
1 : the study or practice of heuristic (see heuristic entry 1) procedure
2 : heuristic (see heuristic entry 1) argument
3 : a heuristic (see heuristic entry 1) method or procedure

German heuristisch, from New Latin heuristicus, from Greek heuriskein to discover; akin to Old Irish fo-fúair he found

So, as the above three definitions show, a teaching heuristic is a learning aid that lets the student investigate a challenging problem and develop a method of solution without direct guidance and "hand-holding" from an instructor. It should result in a "Eureka!" moment when one solves the problem.

As such, by definition, heuristics are solvable. Yet, let us examine this "popular" connotative, and not denotative, definition from Wikipedia:

A heuristic technique (/hjʊəˈrɪstɪk/; Ancient Greek: εὑρίσκω, "find" or "discover"), often called simply a heuristic, is any approach to problem solving or self-discovery that employs a practical method, not guaranteed to be optimal, perfect, or rational, but instead sufficient for reaching an immediate goal. Where finding an optimal solution is impossible or impractical, heuristic methods can be used to speed up the process of finding a satisfactory solution. Heuristics can be mental shortcuts that ease the cognitive load of making a decision.[1]:94 Examples that employ heuristics include using a rule of thumb, an educated guess, an intuitive judgment, a guesstimate, profiling, or common sense.

Yet, often those who excel in computing and mathematics do not do the same in language arts.

However, one does not say "Eureka!" when he is near gold, but when he has it in hand.

In terms of the NP-complete problems, it is Professor Cook who is credited with determining that they are mutually reducible. In other words, one problem in the class NP can be cast in the light of another. They are convertible.

So, what might be a procedure for resolving an NP-complete problem? One such class of problem is that of the subset-sum. This problem says that, given a set of costs, find a subset whose total cost equals a certain amount.

Consider the following cost set, where we want a subset of cost 15:

S = { 3, 5, 7, 11, 13 }, C = 15

Try solving the problem with a Diophantine Equation.

 C = 15

 F(a,b,c,d,e) = 3a + 5b + 7c + 11d + 13e = 15

 where a, b, c, d, and e are each in {0, 1}.

This can be solved by inspection; however, such is unsatisfactory in the general case of the problem.
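To make the formulation concrete, here is a small Python sketch (an exhaustive search, essentially "inspection by machine") that tries every 0/1 assignment of the indicators; being exponential in the number of items, it is exactly the naive approach that becomes unsatisfactory in general.

from itertools import product

# Subset-sum instance from above: S = {3, 5, 7, 11, 13}, target C = 15.
costs = [3, 5, 7, 11, 13]
target = 15

# Try every 0/1 assignment of the indicators (a, b, c, d, e) -- 2^5 cases here,
# but 2^n in general, which is why brute force does not scale.
for indicators in product((0, 1), repeat=len(costs)):
    if sum(i * c for i, c in zip(indicators, costs)) == target:
        print("F = 15 at (a,b,c,d,e) =", indicators)   # (1, 1, 1, 0, 0) for this instance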

Yet, let us use Miller-Kovarik's Secondary Method.

G(a,b,c,d,e) = F(a,b,c,d,e) - 15 = 0

0 = H(a,b,c,d,e) = G(a,b,c,d,e)^2

0 = ( 3a + 5b + 7c + 11d + 13e - 15 )^2 = H(a,b,c,d,e)

The roots of G(a,b,c,d,e) and H(a,b,c,d,e) coincide; yet, H is a paraboloid-like surface. The Miller-Kovarik Secondary Method, which is also described in these notes, will address this multidimensional case of a root-finding or minimum-finding problem.

The solution, and the minimum of the surface composed with the parabola, would be at (a,b,c,d,e) = (1,1,1,0,0), since 3 + 5 + 7 = 15.
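The Miller-Kovarik Secondary Method itself is described elsewhere in these notes and is not reproduced here; purely as a stand-in, the following Python sketch runs ordinary gradient descent on H to show that driving H toward zero drives G toward a root. Note that plain descent lands on some point of the hyperplane F = 15, not necessarily the binary solution (1,1,1,0,0).

# Stand-in demonstration only: ordinary gradient descent on H, not the
# Miller-Kovarik Secondary Method itself.
weights = [3, 5, 7, 11, 13]
target = 15

def g(v):                                   # G(a,b,c,d,e) = F(v) - 15
    return sum(w * x for w, x in zip(weights, v)) - target

v = [0.5] * 5                               # start in the middle of the unit hypercube
step = 0.001
for _ in range(200):
    r = g(v)
    v = [x - step * 2 * w * r for x, w in zip(v, weights)]   # dH/dx_i = 2 * w_i * G

print("point:", [round(x, 3) for x in v], " H =", round(g(v) ** 2, 9))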

In fact, such a minimum-finding procedure is effective in factoring integers of arbitrary size when one realizes that for every odd composite N:

N = (2x+1)(2y+1)
N = 4xy + 2x + 2y + 1
N = 2( 2xy + x + y ) + 1
N = 2C + 1, where C = 2xy + x + y
C = (N-1)/2 and F(x,y) = (N-1)/2 = 2xy + x + y
G(x,y) = F(x,y) - (N-1)/2 = 2xy + x + y - (N-1)/2 = 0
H(x,y) = G(x,y)^2 = ( 2xy + x + y - (N-1)/2 )^2 = 0
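A brute-force reading of this identity can be sketched in Python: search for positive integers x and y with 2xy + x + y = (N-1)/2 and recover the odd factors 2x+1 and 2y+1. (This sketch is exhaustive search over x, not the minimum-finding procedure itself, and its running time still grows with the smaller factor.)

def odd_factor_pair(n):
    """Find x, y with 2xy + x + y = (n - 1)/2 for an odd composite n (brute force)."""
    c = (n - 1) // 2
    x = 1
    while 2 * x * x + 2 * x <= c:           # only need x up to where 2x+1 <= sqrt(n)
        # Solve 2xy + x + y = c for y: y = (c - x) / (2x + 1)
        if (c - x) % (2 * x + 1) == 0:
            y = (c - x) // (2 * x + 1)
            return (2 * x + 1, 2 * y + 1)   # the two odd factors of n
        x += 1
    return None                             # no such pair: n is an odd prime (or 1)

print(odd_factor_pair(91))                  # (7, 13), since 91 = 7 * 13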

This is also an application of Sundaram's Theorem. This does not bode well for "EFT" and "HTTPS", which each use public-key ciphering systems and the traditional Diffie-Hellman key exchange. It also suggests that a blockchain might be vulnerable and corruptible if each "node" were attacked concurrently.

Yet, on the topic of NP-completeness, based upon the work of Stephen Cook, if one problem falls, then they all do. However, it has been taught that some ciphering protocols depend upon these problems, which often result in combinatorial, factorial, or exponential growth in the number of steps or memory locations required in their solution when solved naively.

Polynomial-time algorithms exist for the solution of the clique problem, which is resolvable using unique prime numbers mapped to each node, the highest-common-factor algorithm of Euclid, and an iterative pairwise comparison of the adjacency lists of the graph until new maximal sub-cliques are no longer found. It should be noted that these sub-cliques might overlap.
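The following Python sketch follows that description on a small, made-up example graph: each node gets a unique prime, a clique is encoded as the product of its members' primes, Euclid's gcd detects whether two cliques share a node, and pairs of overlapping cliques are merged whenever their union is still a clique. Whether this scales polynomially in the worst case is not argued here; it is only a sketch of the procedure as described.

from math import gcd, prod
from itertools import combinations

# Small example graph: nodes 0..4, adjacency given as a set of edges.
edges = {(0, 1), (0, 2), (1, 2), (2, 3), (2, 4), (3, 4)}

def adjacent(u, v):
    return (u, v) in edges or (v, u) in edges

def is_clique(nodes):
    return all(adjacent(u, v) for u, v in combinations(nodes, 2))

# Map each node to a unique prime; a clique is then encoded as the product
# of its members' primes, and Euclid's gcd detects shared nodes.
label = {0: 2, 1: 3, 2: 5, 3: 7, 4: 11}
encode = lambda nodes: prod(label[n] for n in nodes)

# Start from the edges (2-cliques) and merge overlapping cliques pairwise
# whenever the union is still a clique; stop when no new cliques appear.
cliques = {frozenset(e) for e in edges}
grew = True
while grew:
    grew = False
    for a, b in combinations(sorted(cliques, key=encode), 2):
        if gcd(encode(a), encode(b)) > 1:            # the two cliques overlap
            union = a | b
            if union not in cliques and is_clique(union):
                cliques.add(union)
                grew = True

maximal = [c for c in cliques if not any(c < d for d in cliques)]
print(sorted(tuple(sorted(c)) for c in maximal))     # [(0, 1, 2), (2, 3, 4)]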

During the Spring of 1987, when the procedure known as the Miller-Kovarik Secondary Method was first jotted down in the author's recreational math textbook, he shared these ideas with his trigonometry teacher, for whom it is named. She shared them with the enrichment mathematics teacher who, about a month earlier, had hosted a mathematics seminar where a blind student surnamed Miller presented a poster that inspired the approach. Attending this seminar were a couple of United States naval intelligence officers. The enrichment mathematics teacher stated that the United States military network had been securing the non-public Internet at the time with the difficult problem of factoring large integers, yet they had recently decided that they would secure it with other "secret" Diophantine equations. For, if the equation for public-key exchange is known, then the network is breachable.

Yet, consider a public key K together with the knowledge that it might have three parts, A, B, and C. The equation
F(A,B,C) = LA + MB + NC + OAB + PAC + QBC + RABC = K and an application of the Miller-Kovarik method would produce candidate private keys. If these private keys do not change over time, the application of F(A,B,C) and Miller-Kovarik on a series of public keys exchanged betwixt the same computing nodes would result in differing sets of potential private keys (A,B,C). It would be the aggregate intersection of these sets that would produce the actual (A,B,C).
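As a toy illustration only: the coefficients and key sizes below are made up, exhaustive search over a small box stands in for the Miller-Kovarik method, and each "session" is given its own invented coefficient set so that the same hidden triple yields several different public keys whose candidate sets can be intersected.

from itertools import product

# Toy illustration only. The per-session coefficients below are invented so
# that the same hidden triple yields several different public keys; real
# key-exchange protocols do not publish such an equation.
sessions = [
    (2, 3, 5, 7, 11, 13, 17),
    (3, 1, 4, 1, 5, 9, 2),
    (5, 8, 1, 6, 7, 2, 3),
]

def F(coeffs, a, b, c):
    L, M, N, O, P, Q, R = coeffs
    return L*a + M*b + N*c + O*a*b + P*a*c + Q*b*c + R*a*b*c

def candidates(coeffs, k, bound=12):
    # Exhaustive search over a small box, standing in for the Miller-Kovarik method.
    return {t for t in product(range(bound), repeat=3) if F(coeffs, *t) == k}

secret = (4, 9, 2)                               # the fixed private triple
keys = [F(c, *secret) for c in sessions]         # one observed public key per session

pool = candidates(sessions[0], keys[0])
for coeffs, k in zip(sessions[1:], keys[1:]):
    pool &= candidates(coeffs, k)                # aggregate intersection of candidate sets

print(pool)                                      # narrows toward {(4, 9, 2)}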

Yet, it would be unfortunate if the key exchange were actually this weak; if so, this would justify the old joke that states military intelligence is an oxymoron. Otherwise, our goose is Cooked.

OPEN-VM | General-Purpose Protocol Handler-Interpreter As a Kernel | Houston Embryo

Team. It seems that plenty of interest was shown in the post describing OPEN-VM, a pattern for an actionable overlay that will "open up" any set of language libraries provided an OPEN-VM is written for such a language. The author should mention that he has written a simple general-purpose protocol handler that uses reflection and dynamic invocation. This basic system for handling protocol messages might also process language imperatives (commands). The source might be found on this page of the NuevoArchitect www-site.
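That source is not reproduced here; purely as a sketch of the reflection-and-dynamic-invocation pattern (the verb names and methods below are invented for illustration), a minimal handler in Python might look like the following.

class ProtocolHandler:
    """Minimal sketch of a general-purpose handler: each message names a verb,
    and reflection (getattr) dispatches to a method of the same name."""

    def handle(self, message):
        verb, _, payload = message.partition(" ")
        method = getattr(self, "do_" + verb.lower(), None)   # dynamic invocation
        if method is None:
            return f"ERROR unknown verb '{verb}'"
        return method(payload)

    # Example verbs; a fuller handler would expose whole language libraries this way.
    def do_echo(self, payload):
        return payload

    def do_add(self, payload):
        return str(sum(int(x) for x in payload.split()))

handler = ProtocolHandler()
print(handler.handle("ECHO hello"))    # hello
print(handler.handle("ADD 2 3 5"))     # 10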