Monday, September 16, 2019

Off Topic | Yet On | The Limits of Computing and Heuristics

Team. Numerous textbooks have been written that describe the limits of computing. One was written by one of the author's college mentors and instructors while he studied for a year at a well-ranked liberal arts college in the American heartland. The textbook describes some of the bounds established during the 1970s and 1980s.

Also, classic textbooks in the theory of computation from this time describe the notions spanning from computability to intractability. Yet, a brief overview of Michael Sipser's work, which is more conceptual and qualitative and less quantitative, might make one question some of the results in the "well-established" traditional works.

When working with mathematical concepts, one must keep an architectural view, the big picture, while working in the details of the equations, or, in the case of computing, the imperatives of the system under development. One can become so mired in the numbers that he loses his path along the way, seeing the trees and forgetting that he is in Sherwood Forest. And, the eagle-eye view lets one see the start of the trail, the finish, and the possible routes in-betwixt these.

And, this is ever so true in the field of algorithms and their analysis. Cormen, Leiserson, Rivest, and Stein wrote a classic comprehensive text on the subject. Yet, it presents heuristics as computational problems for which algorithms that are efficient in space and time cannot be found. These are placed in the class of "nondeterministic polynomial-time" complete problems, or NP-complete problems.

In terms of algorithm analysis, a procedure is said to be efficient if the number of steps required to carry it out is a polynomial function of the size of its input and the amount of memory space required for performing the computation is also a polynomial function of that size.

In practice, it is best that this be a quadratic polynomial or better, such as a logarithmic or linear function of the input. In other words, the number of computations required for processing the input should grow as one of these functions as the size of the input increases.

Yet, as taught, these problems were presented as part of an early doctoral dissertation in the earlier years of graduate computer science programs. Seeing that one of the oldest computer science programs in the States was established at Carnegie Mellon in the mid-1960s, these programs are rather new. And, not everything novel presented among the theses and dissertations prepared during these early years of computing has stood the test of time. For one, it is said that one should never be a respecter of personages; anyone is fallible. As such, one should not be in awe of program names such as Cambridge, Oxford, Harvard, Princeton, Brown, Harvey Mudd, Stanford, Yale, MIT, or CalTech. Furthermore, the greats in computing have limitations, human foibles, and weaknesses in the midst of their mistakes. So, simply because Lamport, Berners-Lee, Turing, Gosling, von Neumann, Cerf, Diffie, Hellman, Goldwasser, Naur, Kay, Karp, Hopcroft, Tarjan, Hilbert, Dirac, Nash, or any other "heavy" in computing and mathematics states it as such, it might not be the case.

Professor Stephen Cook is quite famous for his work with "heuristics"; yet, the connotation of this term in computer science is suggestive of "a problem which does not have an efficient optimal solution, one that must be solved approximately". Yet, as we excel in certain areas, we often have deficiencies, minor and gross, in others. Most mathematicians and computer scientists are not known for the strength of their vocabulary, on average, although they are quite "bright". They simply do not focus on such topics. One cannot excel in all areas. And, finding a student, even at the graduate level, who will check every reference in a research paper that they are reading or define every term in a problem specification is exceedingly rare. Most students simply "fill in" meaning from the context. At times, this might misrepresent the actual meaning of the passage and occasionally present an opposite semantic. And, in some languages, such as English, which can have quite a spin on it at times, words with similar sounds might be antonyms, such as timidity and temerity.

And, on the topic of computing and heuristics, the following are three of the traditional definitions of the term.

from: www.dictionary.com

heuristic
[ hyoo-ris-tik or, often, yoo- ]
adjective
serving to indicate or point out; stimulating interest as a means of furthering investigation.
encouraging a person to learn, discover, understand, or solve problems on his or her own, as by experimenting, evaluating possible answers or solutions, or by trial and error: a heuristic teaching method.
of, relating to, or based on experimentation, evaluation, or trial-and-error methods.
Computers, Mathematics. pertaining to a trial-and-error method of problem solving used when an algorithmic approach is impractical.
noun
a heuristic method of argument.
the study of heuristic procedure.
Origin of heuristic
1815–25; < New Latin heuristicus, equivalent to Greek heur(ískein) to find out, discover + Latin -isticus -istic

from: the Webster's New World College Dictionary

heuristic
adjective

    The definition of heuristic refers to techniques, activities or lessons that allow someone to discover something for himself or by finding solutions through experiments or loosely defined rules.

    A process whereby you are asked questions to discover answers on your own and learn more about yourself on your own is an example of a process that would be described as heuristic.

noun

    Heuristics are defined as ways of finding out the answer to a question.

    An example of heuristics are common sense and trial-and-error.

Heuristic. (n.d.).

from: the Merriam-Webster Dictionary

heuristic

helping to discover or learn; specif., designating a method of education or of computer programming in which the pupil or machine proceeds along empirical lines, using rules of thumb, to find solutions or answers

Origin of heuristic
from German heuristisch from Classical Greek heuriskein, to invent, discover: see eureka

heuristic adjective
heu·​ris·​tic | \ hyu̇-ˈri-stik

Definition of heuristic(Entry 1 of 2)

: involving or serving as an aid to learning, discovery, or problem-solving by experimental and especially trial-and-error methods heuristic techniques a heuristic assumption also : of or relating to exploratory problem-solving techniques that utilize self-educating techniques (such as the evaluation of feedback) to improve performance a heuristic computer program

heuristic noun
heu·​ris·​tic | \ hyu̇-ˈri-stik

Definition of heuristic (Entry 2 of 2)
1 : the study or practice of heuristic (see heuristic entry 1) procedure
2 : heuristic (see heuristic entry 1) argument
3 : a heuristic (see heuristic entry 1) method or procedure

German heuristisch, from New Latin heuristicus, from Greek heuriskein to discover; akin to Old Irish fo-fúair he found

So, as the above three definitions show, a teaching heuristic is a learning aid that lets the student investigate a challenging problem, developing a method of solution without direct guidance and "hand-holding" from an instructor. It should result in a "Eureka!" moment when one solves the problem.

As such, by definition, heuristics are solvable. Yet, let us examine this "popular" connotative and not denotative definition from Wikipedia:

A heuristic technique (/hjʊəˈrɪstɪk/; Ancient Greek: εὑρίσκω, "find" or "discover"), often called simply a heuristic, is any approach to problem solving or self-discovery that employs a practical method, not guaranteed to be optimal, perfect, or rational, but instead sufficient for reaching an immediate goal. Where finding an optimal solution is impossible or impractical, heuristic methods can be used to speed up the process of finding a satisfactory solution. Heuristics can be mental shortcuts that ease the cognitive load of making a decision.[1]:94 Examples that employ heuristics include using a rule of thumb, an educated guess, an intuitive judgment, a guesstimate, profiling, or common sense.

Yet, often those who excel in computing and mathematics do not do the same in language arts.

However, one does not say "Eureka!" when he is near gold, but when he has it in hand.

In terms of the NP-complete problems, it was Professor Cook who is credited with determining that they are mutually reducible. In other words, one problem in the class can be cast in the light of another. They are convertible.

So, what might be a procedure for resolving an NP-complete problem? One such class of problem is the subset-sum problem, which asks: given a set of costs, find a subset whose total cost equals a certain amount.

Consider the following cost set, where we want a subset of cost 15:

S = { 3, 5, 7, 11, 13 } C = 15

Try solving the problem with a Diophantine Equation.

 C = 15

 F(a,b,c,d,e) = 3a + 5b + 7c + 11d + 13e = 15

 where {a,b,c,d,e} are in [0..1], for example.

This can be solved by inspection; however, such is unsatisfactory in the general case of the problem.
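A naive brute-force attack on this Diophantine form, assuming each unknown is restricted to [0..1], is sketched below in Python; it simply enumerates all thirty-two binary assignments, which is fine at this size but illustrates why the naive approach grows exponentially with the size of the set.

from itertools import product

costs = [3, 5, 7, 11, 13]
target = 15

# Enumerate every binary assignment of (a, b, c, d, e) and keep those
# satisfying 3a + 5b + 7c + 11d + 13e = 15.
solutions = [assignment
             for assignment in product((0, 1), repeat=len(costs))
             if sum(x * c for x, c in zip(assignment, costs)) == target]

print(solutions)  # [(1, 1, 1, 0, 0)], that is, 3 + 5 + 7 = 15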

Yet, let us use Miller-Kovarik's Secondary Method.

G(a,b,c,d,e) = F(a,b,c,d,e)-15 = 0

0 = H(a,b,c,d,e) = G(a,b,c,d,e)^2

0 = ( 3a + 5b + 7c + 11d + 13e - 15 )^2 = H(a,b,c,d,e)

The roots of G(a,b,c,d,e) and H(a,b,c,d,e) coincide; yet, H is a near-parabolic surface. The Miller-Kovarik Secondary Method, which is also described in these notes, will address this multidimensional case of a root- or minimum-finding problem.

The solution, and the minimum of the surface composed with a parabola, would be at (1,1,1,0,0).

In fact, such a minimum-finding procedure is effective in factoring integers of arbitrary size when one realizes that for every odd composite, N:

N = (2x+1)(2y+1)
N = 4xy + 2x + 2y + 1
N = 2( 2xy + x + y ) +1
N = 2C + 1, where C = 2xy + x + y
C = (N-1)/2 and F(x,y) = (N-1)/2 = 2xy + x + y
G(x,y) = F(x,y) - (N-1)/2 = 2xy + x + y - (N-1)/2 = 0
H(x,y) = G(x,y)^2 = (2xy + x + y - (N-1)/2)^2 = 0

This is also an application of Sundaram's Theorem. This does not bode well for EFT and HTTPS, which each use public-key ciphering systems and the traditional Diffie-Hellman key exchange. It also suggests that a blockchain might be vulnerable and corruptible if each "node" were attacked concurrently.
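As a sanity check on the algebra above, note that C = 2xy + x + y rearranges to y = (C - x)/(2x + 1), so a naive search over x recovers the factors of an odd composite. The following Python sketch performs only that trial search; it is not the minimization method described above.

def factor_odd_composite(n):
    """Recover factors of an odd composite n = (2x + 1)(2y + 1)
    by searching x in C = 2xy + x + y, where C = (n - 1) / 2."""
    c = (n - 1) // 2
    x = 1
    while (2 * x + 1) ** 2 <= n:
        # From 2xy + x + y = C, y = (C - x) / (2x + 1) must be an integer.
        if (c - x) % (2 * x + 1) == 0:
            y = (c - x) // (2 * x + 1)
            return 2 * x + 1, 2 * y + 1
        x += 1
    return 1, n  # no factorization found; n is prime

print(factor_odd_composite(91))  # (7, 13)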

Yet, on the topic of NP-completeness, based upon the work of Stephen Cook, if one problem falls, then they all do. However, it has been taught that some ciphering protocols depend upon these problems, which often result in combinatoric, factorial, or exponential growth in the number of steps or memory locations required for their solution when approached naively.

Polynomial-time algorithms exist for the solution of the clique problem, which is resolvable using unique prime numbers mapped to each node, the highest-common-factor algorithm of Euclid, and an iterative pairwise comparison of the adjacency lists of a graph until no new maximal sub-cliques are found. It should be noted that these sub-cliques might overlap.
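The paragraph above gives that procedure only in outline. The following Python fragment is a speculative illustration of the prime-labelling and greatest-common-factor bookkeeping it mentions, on a small hypothetical graph; it shows that intersecting two encoded vertex sets is a single gcd computation, not that the overall procedure is polynomial-time or that it finds a maximum clique.

from math import gcd

# A small hypothetical graph, given as adjacency sets.
graph = {0: {1, 2}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {1, 2}}

def first_primes(count):
    """Return the first `count` prime numbers."""
    primes, candidate = [], 2
    while len(primes) < count:
        if all(candidate % p for p in primes):
            primes.append(candidate)
        candidate += 1
    return primes

# Map a unique prime to each node.
labels = dict(zip(sorted(graph), first_primes(len(graph))))

def encode(vertices):
    """Encode a vertex set as the product of its prime labels."""
    code = 1
    for v in vertices:
        code *= labels[v]
    return code

def decode(code):
    """Recover the vertex set from an encoded product."""
    return {v for v, p in labels.items() if code % p == 0}

# Any clique containing the edge (u, v) must lie inside the intersection
# of the closed neighborhoods of u and v; that intersection is one gcd.
for u in graph:
    for v in graph[u]:
        if u < v:
            candidate = decode(gcd(encode(graph[u] | {u}), encode(graph[v] | {v})))
            print((u, v), candidate)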

During the Spring of 1987, when the procedure known as the Miller-Kovarik Secondary Method was first jotted down in the author's recreational math textbook, he shared these ideas with his trigonometry teacher, for whom it is named. She shared them with the enrichment mathematics teacher, who, about a month earlier, had hosted a mathematics seminar where a blind student surnamed Miller presented a poster that inspired the approach. Attending this seminar were a couple of United States naval intelligence officers. The enrichment mathematics teacher stated that the United States military had been securing the non-public Internet at the time with the difficult problem of factoring large integers, yet it had recently been decided that the network would be secured with other "secret" Diophantine equations. For, if the equation for public-key exchange is known, then the network is breachable.

Yet, given a public key K and the knowledge that it might have three parts, A, B, and C, the equation
F(A,B,C) = LA + MB + NC + OAB + PAC + QBC + RABC = K
and an application of the Miller-Kovarik method would produce candidate private keys. If these private keys do not change over time, the application of F(A,B,C) and the Miller-Kovarik method to a series of public keys exchanged betwixt the same computing nodes would result in differing sets of potential private keys (A,B,C). It would be the aggregate intersection of these sets that would produce the actual (A,B,C).
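For illustration only, the intersection step might look like the following Python fragment; the candidate sets here are invented, and how they would actually be produced from F(A,B,C) is not shown.

# Hypothetical sets of candidate private keys (A, B, C), one set per
# observed public-key exchange; the triples are invented purely to
# illustrate the aggregate-intersection step.
candidates_per_exchange = [
    {(2, 5, 7), (3, 4, 9), (1, 8, 6)},
    {(3, 4, 9), (2, 5, 7), (5, 5, 5)},
    {(2, 5, 7), (9, 1, 1)},
]

surviving = set.intersection(*candidates_per_exchange)
print(surviving)  # {(2, 5, 7)} -- the only triple consistent with every exchange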

Yet, it would be unfortunate if the key exchange were actually this weak; if so, this would justify the old joke that states military intelligence is an oxymoron. Otherwise, our goose is Cooked.

OPEN-VM | General-Purpose Protocol Handler-Interpreter As a Kernel | Houston Embryo

Team. It seems that plenty of interest was shown in the post describing OPEN-VM, a pattern for an actionable overlay that will "open up" any set of language libraries, provided an OPEN-VM is written for such a language. The author should mention that he has written a simple general-purpose protocol handler that uses reflection and dynamic invocation. This basic system for handling protocol messages might also process language imperatives (commands). The source might be found on this page of the NuevoArchitect www-site.
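For the curious, the general shape of such a handler might resemble the Python sketch below, which uses getattr for reflection and dynamic invocation; this is only an illustrative guess, not the author's actual source, which remains on the NuevoArchitect www-site.

class ProtocolHandler:
    """Dispatch incoming protocol messages to handler methods by name,
    using reflection (getattr) rather than a hard-coded switch."""

    def handle(self, message):
        # A message such as "PING host1" maps to the method do_ping("host1").
        verb, *arguments = message.split()
        method = getattr(self, "do_" + verb.lower(), None)
        if method is None:
            return "ERROR unknown command: " + verb
        return method(*arguments)

    def do_ping(self, host):
        return "PONG " + host

    def do_echo(self, *words):
        return " ".join(words)

handler = ProtocolHandler()
print(handler.handle("PING host1"))       # PONG host1
print(handler.handle("ECHO hello team"))  # hello team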

Monday, July 8, 2019

Subject Mastery

While working with a student in a freshman computing course, this advice was given:

A first programming course can be quite a challenge, or it can be very easy. Much of your success in such a first course depends upon your previous coursework and preparation.

If you are not as successful as you would like in this course, do not quit. Keep trying and work on filling in "gaps in your learning and preparation".

You can do this. Anyone can.

This is true of any academic subject. Academic fields are based upon "rules", applying them, and deducing them.

If you can understand and follow the legal code in your city, province, or country, you have all of the reasoning power needed for understanding or studying any subject. Learn the rules well and how one applies them. Make flashcards, if you must. That is all that it takes.

Similar advice was shared with a classmate who was rather amazed at the author's work as an NIH scientific apprentice while in high school. She used this simple advice, including the flashcards, and earnt a doctorate in organic chemistry.

She now has a significant role in an international pharmaceutical company. She even proudly displayed her set of homemade flashcards with benzene, aromatic rings, and aliphatic chains, when she met with the author briefly one evening when he studied at Grinnell College. Great Job, KC!

Genius is 95% practice, 4.9999999999% chance, and 0.0000000001% natural gifting.

Communicating With a Computer - The Programmer's Perspective

The author has the pleasure of teaching an online course in Python at this time for the University of the People. This is a low-cost international distance education opportunity for many across the globe.

Note: UoPeople could use some more computing instructors, if you have a masters or doctorate degree in a computing discipline. Simply apply at www.uopeople.edu.

While working with these students, many of whom had not taken a previous computer programming course, he sought a way of expressing the simplicity of computing.

And, simply put, one is communicating with a computer when programming. One simply gives the computer a list of commands that could easily be spoken in this day of speech recognition. Then, the computer performs these tasks the same way that one would if he were checking the commands for accuracy.

The following passage was shared with the author's students:

Python Programming Explained In Terms Of English.

This is five typed pages and 1083 words, nearly a short paper. You already have plenty of reading for the week, but some of the comments in this passage might address some of your confusion, if you have been struggling with Python. You can read it, if you have time. It is not required.

For some of the class, this will be overly simplified. For others, it will be “spot on”. Summarizing all of the basic abstractions concerning computer programming in a few paragraphs is a near impossibility. But, I will try. Hopefully, this will shine a light on the darkness in which some of you find yourselves.

Electronic computers can complete some amazing feats of computation, but they can only do what they are told. They cannot reason outside those bounds. When we communicate with computers, we do so with an intermediary language. It is not their mother tongue, which is a series of “low-level” commands most naturally expressed with zeros and ones. And, it is not a “natural spoken” language such as Hindi. It is somewhere in between. It is called a “high-level” language, because it is human-readable and not written in a cryptic string of zeros and ones.

“High-level” computer programming languages come in differing classes. The most common class of such languages comprises those based upon giving the computer “commands” that it must perform. These are called “imperative” languages. In the English language, an “imperative” sentence would be, “Open the door”, a direct “command”. In this sentence, the subject, “You”, preceding “open” is implied.

So, the following Python command:

>>>print("Hello World")

Could be “mentally” read as:

Computer, print the phrase “Hello World” on your screen.

The following commands for calculating the area of a circle:

>>>radius = 5

>>>approximatePI = 3.14

>>>circleArea = approximatePI * radius**2

Could be “read” as:

Computer, give the storage location called “radius” a value of five.

Computer, store the value of 3.14 in the data location called “approximatePI”.

Computer, compute the value produced by what is in the storage location called “approximatePI” multiplied by the square of the value in the data location called “radius” and place the result in a position within memory named “circleArea”.

It must be noted that if you tell the computer that it should do the wrong thing, it will do it “faithfully” every time.

These commands, also called imperatives, that we give a computer follow a format and have certain rules of structure as do spoken languages. Natural languages often have subjects, explicit or implicit, and predicates with verbs plus indirect and direct objects. Also, they contain certain modifiers for the nouns and verbs used in them. And, they use punctuation for signaling certain parts of the sentence, such as the end, with a period or question mark. In Python, the end of each command is simply the end of the line on which it sits. This is marked by a couple of non-printing characters, the carriage return and the line feed. Also, punctuation such as ( ) { } [ ] , “ ' # and others have significance and express something important about the current Python statement. Plus, the punctuation must be used in a certain way so it is meaningful and correctly expresses what the programmer intended.

One of Python's unique features that is not found among many other programming languages is the importance of the program's “indenting” pattern. In most other languages, indenting is simply added for readability. But, in Python, it affects how the interpreter “parses” and understands the commands that it is given. A “missing” or “misplaced” indentation can result in a syntax or run-time error, plus it can produce erroneous output even though a program completes successfully.
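As a small illustration of this point, consider a hypothetical greet() function. With its body left unindented, the interpreter stops with an IndentationError; correctly indented, the same two lines run as intended.

# The body of a function must be indented. Written like this, the code
# fails with "IndentationError: expected an indented block":
#
#     def greet(name):
#     print("Hello, " + name)

# Correctly indented, it runs as expected:
def greet(name):
    print("Hello, " + name)

greet("World")  # prints: Hello, World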

Also, from some of the discussion comments during the second unit, it seems that some of the class struggles with the difference between a simple series of instructions in a script and a “well-defined” function.

For instance, if you had the instructions above that calculate the area of a circle, your assignment required that you calculate three areas for radii of length 5, 10, and 20, and you did not use a “well-defined” function, then you would have to list those instructions three times. Such a program would appear as below:

>>>radius = 5

>>>approximatePI = 3.14

>>>circleArea = approximatePI * radius**2

>>>print( "The area for a radius of " + str(radius) + " is " + str(circleArea) )

>>>radius = 10

>>>approximatePI = 3.14

>>>circleArea = approximatePI * radius**2

>>>print( "The area for a radius of " + str(radius) + " is " + str(circleArea) )

>>>radius = 20

>>>approximatePI = 3.14

>>>circleArea = approximatePI * radius**2

>>>print( "The area for a radius of " + str(radius) + " is " + str(circleArea) )

This is somewhat redundant, since the value of “approximatePI” does not change and could be set once at the beginning of the list of instructions and then used throughout. Yet, the point is that this usage of Python is not “well-structured” or practical. If, for some reason, you needed the surface area of a sphere with that given radius, the calculation of the area and its associated variable name would have to change in a number of places.

This is why we use the keyword “def”, which stands for “define” in Python, and create modular functions with it. The modules give our programs structure. Plus, they support the “structured programming” policy of having a “single point” of modification when corrections or enhancements must be made. A program with a function for determining these circle areas would be as follows:

import math

def measureCircleArea( radius ):

     areaString = str( math.pi * radius**2 )

     return areaString

print( "The area of a circle with a radius of 5 is " + measureCircleArea( 5 ))

print( "The area of a circle with a radius of 10 is " + measureCircleArea( 10 ))

print( "The area of a circle with a radius of 20 is " + measureCircleArea( 20 ))

This program could be “mentally” read as follows:

Computer, search the files in the Python libraries on this computer for a module called “math” and “import” all of its functions and predefined values for use in this program.

Computer, next define a function with the name “measureCircleArea” that accepts one input, a formal parameter called “radius”.

Computer, when the “measureCircleArea” function is called with an actual argument that should be a number, complete all of the steps that are indented by at least one tab character, until you reach a return statement or until the indenting that started after the function header prefixed with “def” stops.

Computer, the first step that you should complete for “measureCircleArea” is to multiply a value for the number pi, “math.pi”, which came from the imported library module, by the square of the formal parameter “radius”, take the numeric result and make a string from it with the str() type-conversion function, and store this in a temporary data location called “areaString”.

Computer, next “return” the value in “areaString” for use by the calling program.

Computer, this represents the end of the module “measureCircleArea” as the return has been reached and the program source outdents again.

Computer, for the first line of the programming script that you will execute when this program is called, print on a single line the following phrase, “The area of a circle with a radius of 5 is “ suffixed with the result of performing the “measureCircleArea” function with an input of 5.

Computer, for the next line of the programming script that you will execute when this program is called, print on a single line the following phrase, “The area of a circle with a radius of 10 is “ suffixed with the result of performing the “measureCircleArea” function with an input of 10.

Computer, for the final line of this programming script that you will execute when it is called, print on a single line the following phrase, “The area of a circle with a radius of 20 is “ suffixed with the result of performing the “measureCircleArea” function with an input of 20.

We could have used a variable like “approximatePI” whose value we defined as 3.14, but we used the value of pi stored in a special Python library for mathematical operations. This was done so one concept could be reinforced: the importance of “structure” and modules, such as functions or subroutines. Someone has written a series of mathematical routines for use by any Python programmer and placed them in a module called “math”. These sub-modules, such as sin() for sine, cos() for cosine, and atan() for arctangent, are available for use by anyone who imports math. Plus, the module contains a number of predefined values that cannot change, such as pi.

The important lesson here is that when you learn enough about Python you can write your own modules such as “math” and compile libraries of them. This means that you only must write a function once. Then, you can reuse it many times.
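For instance, the circle function above could live in its own file, named geometry.py here purely as an example, and any other script on the same path could then import it just as it imports the math module.

# --- geometry.py : a small user-written module (the file name is only an example)
import math

def measureCircleArea(radius):
    """Return the area of a circle with the given radius, as a string."""
    return str(math.pi * radius**2)

# --- main.py : any other script can now reuse the module
import geometry

print("The area of a circle with a radius of 5 is " + geometry.measureCircleArea(5))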

Also, it seems that some of you still have a struggle with the concept of a variable. These are like the variables in algebra, in that they can take on a wide array of values. But, they are a “little” different in that they represent a label for a location in the computer's memory. If you must use data in your program, you must store it somewhere. Plus, you must have a way of referencing this information. Python is really flexible with variables, in that they can accept data of any type. This is called “dynamic” typing. In many other languages, one must specify the type of data which a certain variable can accept. This specification occurs when you declare the variable. This is called “static” typing. Python is much different. One can simply create a meaningful name for a variable. This name is called an identifier. After creating a variable's identifier, one can just store a value in it without specifying any limits on the type of data that it can hold. For instance, the following variable receives a string value, is printed, is assigned an integer, and then is printed again.

>>>courseNumber = "CS1101"

>>>print( courseNumber ) #prints CS1101

>>>courseNumber = 1101

>>>print( courseNumber ) #prints 1101

In most cases, Python variables must have something stored in them before they are used or an error will occur. Something is stored in them when one uses the assignment operator, =.
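For instance, using a hypothetical variable named score:

>>>print( score )   # NameError: name 'score' is not defined
>>>score = 95       # the assignment operator creates and fills the variable
>>>print( score )   # prints 95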

Hopefully, that was helpful and not a burden. You already have plenty of reading for the week. If you have any specific questions about any particular Python topic, please send them. Also, please take the time and focus your questions. General questions often get general answers which might not meet your needs.

Note: When this passage was shared in an early term, one student who had some programming experience mentioned that this was a very simplistic view of what was occurring. Plus, he said that much was left out, such as the interaction of the program with the computer's memory. But, these details are not absolutely essential for understanding enough that one can write programs with variables, functions, and other programming structures.

But, most of all, remember that one should hunt, peck, and think while programming.

Wednesday, March 13, 2019

Everything that Rises Must Converge

Team. This phrase, "Everything That Rises Must Converge", is all that the author remembers from a text read in English 104W at Vanderbilt University during the Spring of 1989, taught by adjunct professor Robert Bacon. It was the work's title, a work written by Flannery O'Connor, a celebrated writer from the American South. In this title is a plethora of philosophies.

In the Arts, the concept of perspective is commonplace. A landscape's rendering without this would be seemingly strange in appearance. And, with all things tangible, one's life has a perspective and its own unique beauty. A pleasantness in form that is much like the simultaneously transient and timeless loveliness that an artist tries capturing when recording a moment in time on canvas. This is perspective in the Arts. In the Sciences, based on ratio and measure, the bi-stable cornerstone of reason, perspective is often synonymous with one's philosophical standing resting at an advantageous vantage-point, a position that provides a lucid view of the elegant simplicity of a subject or concept while letting one be conversant concerning its grandest complexities. Yet, in Art and Science, parallels exist in terms of the notion of "perspective".

It has been said that "in higher-order geometries parallel lines meet at infinity".
So, whether one is a Cambridge graduate student in mathematics studying the finest nuances of advanced topology in the damp airiness and silent hum of an historic English library or a kindergartner working with finger-paints shortly before Ms. Tschetter places a graham cracker and a carton of milk in the right-hand corner of your work area, resident in the recesses of your mind is that aforementioned simple concept. And, this abstraction guides your thoughts and actions, ultimately shaping your work-product. Perspective is subliminal. It is etched in the subconscious.

If one has the proper "perspective", one shall perform his work tasks well. This also is true in computing. One must understand the simplest of abstractions surrounding the work one does, before he can deal with the intricacy of the concrete details well.

The same is true of programming a computer. The task of programming best fits within the realm of engineering, whether formal or informal. And, engineering processes are used in the resolution of problems. So, in terms of abstractions, the field of problem-solving is a super-set of engineering. In other words, certain tasks in problem-solving are outside the realm of engineering. For instance, satisfying the growl in your stomach is a problem that must be solved. Yet, its solution does not and should not require the disciplined use of engineering concepts. And, engineering as a field is a super-set of computing and programming. Certain engineering tasks will not require the use of computer programming in their resolution. This includes tasks such as erecting steel outbuildings for agriculturalists.

Also, before one can consider himself an accomplished, or "professional", computer programmer, he should have mastery in certain sub-disciplines of computing. These include data structures, algorithms, automata, and the theory of computation. And, from the standpoint of abstractions, that last list is backward. Yet, based upon the level of mathematical sophistication required for mastering the theory of computation and its abstractions, it unfortunately is usually taught after basic courses in programming and its concrete concepts. It really should precede these for the "best learning outcomes". So, the path of mastery in computer programming is palindromic: programming, data structures, algorithms, automata, and the theory of computation, followed by the reversed path from theory back through programming.

That is a lot of learning. And, one need not know everything about each subject. He just must master the ten or so fundamental learning objectives in each course. And, as he mulls over these concepts and builds interconnections between them, he shall converge upon new innovative insights that will rise up in his inner thoughts. Ultimately, everything which converges must rise.

And, with only a basic understanding of Pascal, elementary data structures, simplistic algorithmic analysis and approximation algorithms, and the briefest description of a Turing machine, this phenomenon of intellectual convergence occurred one sunny weekend afternoon in a freshman quadrangle at Vanderbilt University in the Spring of 1989. These thoughts eventually rose producing JAVA, objects, and much more that is universal, ubiquitous, and useful in modern everyday computing.

Sub-Titled : In The Quadrangle

Team. It seems that we have at least a pair of outstanding tasks which this weblog has promised. The first item is a book on programming fundamentals. The second is an open virtual machine, with a base of primitive instructions, that one can port between differing language application programming interfaces. The next "few" entries in this web-history should address each of these goals.

We will be working on a book that discusses programming starting with the abstractions found in general problem solving, seguing through the concepts in basic engineering, such as processes for building and recycling resources, and ending upon the foundational concepts in computer programming: algorithms, data structures, and automatons.  In the process, the text's capstone project will be the "open" virtual machine.

The text will be sub-titled, "In The Quadrangle".

A previous "web-post" outlines some of the author's experiences while briefly studying at Vanderbilt University during the Spring semester of 1989. In summary, while taking an introductory computing elective in Pascal, he spent one sunny weekend afternoon daydreaming and brainstorming in his dorm-room which lay near West End Avenue and Twenty-First Street during those years. These thoughts intermingled with memories of his freshman sweetheart and bus-riding companion, resulted in some of the most influential concepts in modern computing after they were shared with his instructor and classmates at Vanderbilt plus, most importantly, family members who worked in executive leadership at Sun Microsystems, during the 1980s through the millennium.

Concepts which have taken hold in the modern era were quite easily conceived by an insightful computing novice and amateur, yet they required the skills of professional engineers and computer scientists before they became part of everyday life. These concepts include object-orientation, architectures, programming-by-contract (rebranded "design-by-contract"), prefabricated structures (frameworks, patterns, templates, and stencils), generics, and the basic feature set of JAVA as a language with a "C-like" syntax.

Tons of low-hanging fruit and fallen fruit existed in the computing world of the late 1980s. Much still exists to this day in the areas of concern partitioning, the use of general-purpose structures, and other subareas of computing and software engineering.

The goal of the introductory section of this text will be putting it all in perspective: problem-solving, engineering, and computer programming.
The author will develop some excerpts of this "planned" text on-line in this web-history.

Tuesday, February 12, 2019

Duke-Orientation by Boonie the Ace

Team. It is said that as we age we forget some of the basic fundamentals learnt as children. This is one reason that researchers in advanced fields often redraft complex research problems in simple elementary terms and then see if a K-12 student can solve one or more of them.

Well, how is your natural Duke-Orientation, in light of your advanced degrees in computing, information, and the sciences, or your numerous years of experience with the most advanced concepts in this field? Can you find the upper-left corner of the screen, and now the lower-right?

Let us ask Duke.


Did you do an about-face in your desk chair, once you were properly oriented?

The following portable document file should help you relearn what you have forgotten. Share it with a friend in computing. Maybe this will help us all sort out the RHS := LHS inversion in formal languages. It will not change the nature of the production, but one might find that he can think more freely and follow blindly less often.

Subconsciously, when one's left side is on the left of the entity with which he is facing and communicating, he is necessarily behind the entity and following. When one is oriented left-on-right and right-on-left, he can communicate and think freely, breaking away from the discussion for a moment of reasoning and returning later so he might add a few salient points.

A similar note was placed earlier in this weblog, yet this fact about "modern lateral disorientation" bears repeating. It seems that this legacy is slowly seeing adoption among the future generations as "natural" and is deemed healthy and acceptable. Yet, it is simply one of many "new normals" that was one of yesteryear's "gross dysfunctions".

Sayings such as "righty-tighty" and "lefty-loosey" were for those who had trouble telling time on an analog clock. "Clockwise" tightens a threaded bolt and "Counter-clockwise" loosens. And, considering the rise in digital time-pieces, the "new normal" for orienting one with what he is facing, and that classic freshman Pascal-dilemma ( lhs := rhs ), our modern zeitgeist does not speak well of the current international pool of free thinkers. This pool includes the author who was inverted for about a decade after taking a course in data structures that discussed left and right sub-trees that, at first, seemed out of place and then, gradually over time, appeared properly-oriented.


Hunt. Peck. Think. It works quite well for the birds...