
Tuesday, March 2, 2021

Variations on mergesort. Part I. MIT Scheme

I'll be demonstrating some sample code for variations on the merge sort algorithm in several programming languages. The breakdown into parts will be the following.
Part I. Scheme using MIT Scheme.
Part II. Common Lisp using CLISP
Part III. Python 2.7.18
Part IV. C++ using clang, gnu g++ (or maybe even also cl on Windows)
Part V. Fortran using gnu gfortran.

The approach I shall take in the Scheme version is to break up an unsorted list into its already sequentially ordered sublists. These sublists are often referred to as 'runs'. Given an unsorted list
such as
(-9901 -588 4611 19329 -15665 -2951 -28140 -21067 -30182 26262 12038 18460 11128 17996 19257 -28634 31310 -32161 -17620 -12520)
one can partition such a list into its subsections that are already in ascending, or descending, order.
In this case such a partition into ascending order sublists would yield
(
(-9901 -588 4611 19329) 
(-15665 -2951) 
(-28140 -21067) 
(-30182 26262) 
(12038 18460) 
(11128 17996 19257)
(-28634 31310)
(-32161 -17620 -12520)
)

I do this with the Scheme function extract-runs below.

(define (extract-runs ls fn)
  (define (iter in run out)
    (cond ((null? (cdr in))
           (if (null? run)
             (cons (list (car in)) out)
             (cons (cons (car in) run) out)))
          ((fn (car in) (cadr in))
           (iter (cdr in) (cons (car in) run) out))
          (#t
           (iter (cdr in) '() (cons (cons (car in) run) out)))))
  (reverse! (map reverse! (iter ls '() '()))))

Scheme is very friendly to functions as first-class citizens.

Since an operator such as < or >= is treated as a function in Scheme, it is easy to write procedures that can deal with either ascending or descending order. For example, at the command line in MIT/GNU Scheme one can type
> (extract-runs ls <=)
given a list ls, to generate its partition into ascending runs, similar to how the built-in sort procedure is invoked, i.e.
> (sort ls <=)
These 'runs' become the raw material for further merge operations, eventually leading to the building up of a sorted list. This is done from the bottom up, so to speak, as the successive runs become longer and less numerous.
Merging the sorted runs two at a time, and leaving any dangling odd run at the end to be dealt with in a later pass of merge operations, yields the following from the raw material above.
(
(-15665 -9901 -2951 -588 4611 19329)
(-30182 -28140 -21067 26262)
(11128 12038 17996 18460 19257)
(-32161 -28634 -17620 -12520 31310)
)
And so on until we are left with only one 'run'.
(
(-32161 -30182 -28634 -28140 -21067 -17620 -15665 -12520 -9901 -2951 -588 4611 11128 12038 17996 18460 19257 19329 26262 31310)
)
The car (first list) of our list of runs has become the sorted list we wanted to build, and so it is what the merge-sort function below returns.

(define (merge-runs x y fn)
  (define (iter x y z)
    (cond 
          ((null? x)
           (reverse! (append! (reverse! y) z)))
          ((null? y)
           (reverse! (append! (reverse! x) z)))
          ((fn (car x) (car y))
           (iter (cdr x) y (cons (car x) z)))
          (#t
           (iter x (cdr y) (cons (car y) z)))))
  (iter x y '()))
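
As a quick illustration, merging the first two runs from the partition above should give one longer sorted run. Here is a sketch of such a REPL interaction (the prompt is shown schematically as >, and fresh lists are built with list rather than quote because merge-runs uses the destructive reverse! and append! internally):

> (merge-runs (list -9901 -588 4611 19329) (list -15665 -2951) <=)
(-15665 -9901 -2951 -588 4611 19329)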

(define (merge-sort ls fn)
  (define (aux in out)
    (cond ((and (null? in) (equal? (length out) 1))
           (car out))
          ((and (equal? (length in) 1) (null? out))
           (car in))
          ((null? in)
           (aux out '()))
          ((null? (cdr in))
           (aux (cons (merge-runs (car in) (car out) fn) (cdr out)) '()))
          (#t
           (aux (cddr in) (cons (merge-runs (car in) (cadr in) fn) out)))))
  (aux (extract-runs ls fn) '()))

There are some who say that Scheme is an elegant programming language, conducive to a functional paradigm and to tail-call-optimized recursive procedures. I have found it to be so. Lexical scope is another concept that becomes clearer as one attempts to program in Scheme. One can use the following to generate random lists as raw material for testing the above procedures.

(define (random-integer) (- (random 65536) 32768))
(define (random-real) (random 1.0))

(define (random-list n f)
  (define (iter k ls)
    (if (equal? k 0)
      ls
      (iter (- k 1) (cons (f) ls))))
  (iter n '()))
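
Putting the pieces together, one can sanity-check merge-sort against the built-in sort at the REPL. A minimal sketch (prompt again shown schematically):

> (define ls (random-list 20 random-integer))
> (equal? (merge-sort ls <=) (sort ls <=))
#t

Neither extract-runs nor merge-sort mutates the pairs of the original list ls, so the comparison with the non-destructive built-in sort is a fair one.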

Topics touched upon, but not really covered, in this essay include the following.
Scheme programming language.

Merge sort algorithm.

Tail call optimization.

Structure and Interpretation of Computer Programs. An all-time great computer science text and set of associated lectures.

Monday, November 27, 2017

Modeling Random Samples from Normal Distributions with OpenOffice Calc & C++ Part II

Previously I explained how the samples console program tabulated statistical data for random samples of size 2 through n from a given normal distribution. I also noted that the errors program tabulated the differences between the analogous statistical values for each sample and the given normal distribution. I shall present some charts below illustrating the behavior of the error from sample to population as the size of the sample approaches that of the given normal distribution considered as a population. The normal distribution in this test run had a population size of 500. 

Behavior of the difference of the sample mean and median from population analogs as sample size approaches population size.

Behavior of the difference of the sample standard deviation, median deviation, and mean deviations from population analogs as sample size approaches population size.


Behavior of the difference of the sample variance from the population variance as sample size approaches population size.

Behavior of the difference of the sample skewness and sample median skewness from the population skewness and population median skewness as sample size approaches population size.


The C++ code can be found at:

https://github.com/joshroybal/cpp-random-samples-tables

Modeling Random Samples from Normal Distributions with OpenOffice Calc & C++ Part I


I shall present some observations on modeling the taking of random samples from given normal distributions and on how well those samples approximate the general statistical summaries of said normal distributions. The given normal distributions were generated in an OpenOffice Calc spreadsheet using the function NORMINV(num; mean; stddev). I chose RAND() for num, a mean of 100, and a standard deviation of 34.135 to generate normal distributions of sizes 1,000, 10,000, and 350,000. I also chose RAND(), a mean of 134.135, and a standard deviation of 34.135 to generate normal distributions of sizes 500, 5,000, and 50,000. All normal distributions were of real-valued numbers.
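
For instance, assuming the semicolon argument separator of a default OpenOffice Calc installation, a single cell of one of the mean-100 distributions could be filled with a formula along the lines of =NORMINV(RAND(); 100; 34.135) and then copied down a column of the desired length.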

The C++ programs samples and errors read the normal distributions, which have been generated by OpenOffice and saved in the comma-delimited .csv text format, into a standard C++ vector of floating point values. Once the given normal distribution has been read into the vector, it is loaded into a custom C++ statistical calculations class I have implemented, and a general statistical summary report for the entire population is printed at the beginning of a text file. After this is accomplished, random samples from size 2 up to the size of the population of the given normal distribution are taken, and each sample's general statistical summary is added as a row to a table in the text file, all of which can of course be compared to the initial general statistical summary of the given normal distribution considered as an entire population. Below is an example of the text file output for a particular run of ./samples 500.csv at the bash terminal command line, or samples 500.csv at the Windows command prompt.

 population
 n = 500
 mean = 101.367
 median = 100.863
 variance = 1032.66
 standard deviation = 32.135
 mean deviation = 25.7281
 median deviation = 22.2732
 skewness = -0.10289
 median skewness = 0.0470683
 random samples
  size     mean   median      var      std  meandev   mdndev      skw   mdnskw
     2  108.461  108.461 2996.376   54.739   38.706   38.706   -0.000    0.000
     3   97.253   92.826 1545.216   39.309   27.557   32.481    0.111    0.338
     4   89.384   87.200  967.988   31.113   25.465   23.281    0.095    0.211
     5  111.371  116.794  366.891   19.154   14.484   15.326   -0.105   -0.849
     6   99.451   98.133  798.427   28.256   23.507   20.010    0.116    0.140
     7  107.015  117.532 2015.066   44.889   33.987   29.466   -0.178   -0.703
     8   85.060   94.501 2182.623   46.719   31.297   20.638   -1.102   -0.606
                                   . . .
   492  101.266  100.863 1045.855   32.340   25.884   22.710   -0.097    0.037
   493  101.477  100.967 1017.018   31.891   25.506   22.102   -0.097    0.048
   494  101.216  100.659 1038.744   32.230   25.741   22.283   -0.096    0.052
   495  101.336  100.967 1034.720   32.167   25.718   22.268   -0.111    0.034
   496  101.572  100.991 1033.031   32.141   25.724   22.381   -0.106    0.054
   497  101.343  100.967 1026.704   32.042   25.636   22.268   -0.113    0.035
   498  101.290  100.863 1021.343   31.958   25.577   22.188   -0.127    0.040
   499  101.314  100.759 1035.407   32.178   25.726   22.259   -0.099    0.052
   500  101.367  100.863 1034.730   32.167   25.728   22.273   -0.103    0.047

The C++ code can be found at: 
 

Monday, September 11, 2017

Structuralism, Individualism & The First World War

Nine European Sovereigns at Windsor for the funeral of King Edward VII in May of 1910, four years before the war began. Standing, from left to right: King Haakon VII of Norway, Tsar Ferdinand of Bulgaria, King Manuel II of Portugal, Kaiser Wilhelm II of the German Empire, King George I of Greece and King Albert I of Belgium. Seated, from left to right: King Alfonso XIII of Spain, King-Emperor George V of the United Kingdom and King Frederick VIII of Denmark.
 Two divergent and conflicting theoretical perspectives have come to dominate center stage in international politics. The first of these is the structuralist perspective; the second is the individualist perspective. My first task shall be to present briefly the major assumptions of both theoretical perspectives; first the structuralist, and then the individualist, perspective will be presented. Taking World War One as my specific case I shall argue that four factors make the structuralist perspective the more useful in explaining the outbreak of that war; namely, colonialism, nationalism, monarchic imperialism on the continent, and the schedules of military mobilization.
There are three relevant conceptual constructs underlying the basic assumptions of the realist account of state behavior (Hughes 1990:111). First, human nature makes the struggle for power a constant in international relations. Second, power can be countered effectively only with counterpower. Third, international peace and stability are maintained via the equilibrium of power and counterpower whereby states restrain each other from undue aggression. Realism also assumes that the nation state is both the basic unit of analysis and the highest authority in a world state system lacking centralized control (Hughes 1990: 52). In contrast an individualist perspective focuses on the conscious decisions and personalities of the leaders of the combatant nations when examining the outbreak of war (Stoessinger 1990:209).
Kaiser Wilhelm II of the German Empire
In the case of World War One Kaiser Wilhelm II has been vilified by many for precipitating the war with unduly aggressive brinksmanship (as evidenced by the Moroccan crises) and paranoid delusions of German encirclement by the other great European powers (Stoessinger 1990: 12-14). Ironically the U.S., a great power a continent away, was in fact going to be the ultimate undoing of the German Empire. According to an individualist account, abdication of personal responsibility for decision making and overall mediocrity were displayed on the part of all the leaders of the great European powers at the outset of the First World War (Stoessinger 1990: 21-24). The leaders in question failed on many counts. For instance, diabolical images of the enemy were prevalent in the minds of said leaders. Furthermore, there was in evidence a complete lack of empathy on the part of decision makers; in the jargon of intelligence analysts, there was a failure of "mirror imaging" by policy makers that might otherwise have co-opted the process of system-wide structural failure.
My argument that the structuralist perspective is the more useful in explaining the outbreak of World War One is partially one of context; i.e. I do not think that structuralist accounts are in general more useful in explaining any war. Still, realpolitik alliance building and its attendant secret diplomacy, nationalism, mobilization, colonialism abroad and monarchical autocracy on the continent were systemically cancerous in a way that outweighed the merely personal shortcomings of Kaiser Wilhelm II and Czar Nicholas II in precipitating the Great War.
Two cousins ~ Kaiser Wilhelm II of Germany and Tsar Nicholas II of Russia
Of particular relevance to the issue of root causes of the outbreak of World War One is the alliance structure that had developed in Europe during the nineteenth century. Specifically, in the jockeying to maintain systemic political-military equilibrium two fairly rigid alliance structures had arrayed themselves against one another (Hughes 1990:112). The alliances in question were the Triple Alliance and the Triple Entente consisting of Germany, Austria-Hungary and Italy on the one hand, Britain, France and Russia on the other; we have here almost exactly the opposing camps that eventually squared off in Nineteen fourteen. Also, Belgium with its strategic location between the three great powers of Britain, France and Germany, an issue central to Britain’s entry into the war, was declared perpetually neutral by treaty in 1839 (Hughes 1990:111).
Kaiser Wilhelm II and British Lord of the Admiralty Winston Churchill
 Another important structural concept in the explanation of the outbreak of the First World War is that of nationalism (Hughes 1990:197). At the time of the outbreak of World War One many European ethnic groups seeking national self-determination and sovereignty were living under the occupation of the German, Austro-Hungarian and Russian empires. The assassination of the Austro-Hungarian Archduke Francis Ferdinand by Serbian nationalist Gavrilo Princip, the event most often cited as the spark that made the Balkan “tinderbox of Europe” explode, occurred in Bosnia-Herzegovina, a region with a large Serbian population that had recently been annexed by Austria-Hungary. Interestingly, the structural problem of nationalism in the Balkans is still evident in the newspapers of today. Needless to say, subjugation of nationalities by the European monarchies was by no means limited to the continent (Wittkopf 1989: 97-99). Competition for foreign markets at that time took on a decidedly more mercantilist character than it presently does, which in turn complicated the balance of power equation for the kings and secret diplomats who played chess with human lives.
In Nineteen fourteen mobilization was a dreaded word almost synonymous with declaration of war (Holsti 1992: 242-243). The decision making process regarding negotiations to cease hostile actions at the outset of the war was hampered by the relentless schedules of mobilization. In fact, even on the field of battle at Tannenberg, one of the war's initial battles, Russian and German corps commanders found military decision making hampered by the structure of mobilization imposed from on high by the general staffs (Liddell Hart 1930: 104-105).
Furthermore, two of the principal combatants, the German and Russian Empires, could quite arguably be characterized as reactionary monarchies. In light of the track record democracies have of not going to war with one another, one sees the converse in the relations of the demonstrably expansionist empires of Germany and Russia prior to Nineteen fourteen (Hughes 1990: 188).
In his conclusion to Why Nations Go to War John G. Stoessinger uses sickness as a metaphor for the human condition of war (Stoessinger 1990: 205-206). I shall employ Mr. Stoessinger’s metaphor in a counter argument against his individualist conclusion regarding the outbreak of World War One. I agree that war is a disease, but it is my contention that the disease is rooted in structural degradation rather than the decisions leaders make of their own “free will”. When suffering from a physical disease the doctor diagnoses the physical causes, which like structures human beings do not control. The seeming irrationality of the kings and diplomats who played chess with human beings is the symptom rather than the cause of the systemic failure that precipitated World War One. With the structural forces that were gathering on the horizon at the turn of the century I do not see how the outcome could have been other than the great catastrophe that took fifteen million lives. In seeking to place the blame on a few elites who after all are only human beings the individualist account fails to recognize a dependence of which we are not conscious (Tolstoy).
Gen. John "Black Jack" Pershing visits Arlington National Cemetery in 1925.

Friday, September 8, 2017

Hume, Induction & Bayes's Theorem

Hume’s paradox is the result of an interrelationship between two distinct logical processes; namely, deductive logic and inductive logic.
David Hume 1711-1776
David Hume’s historical context was one where Descartes and other rationalists believed deductive, or a priori knowledge of the ultimate nature of reality could be attained through philosophical investigation. Empiricists such as Hume reacted by asserting deductive arguments were built into the structure of language and were strictly trivial; for example, “there are no married bachelors” or “I think, therefore I am”. The empiricists also stressed the inability of human reason to extend the deductive process to ontological concepts such as cause and effect or the uniformity of nature. According to Hume these concepts were arrived at through a process of induction.
Taking cause and effect as an example, Hume demonstrated the operation of enumerative induction. For instance, every time up until now I have wound my wristwatch it has run. I assume this pattern will hold whenever I wind my wristwatch. The justification of this process is the problem of induction.
Deductive logic is utilized by scientific reasoning to confirm or disconfirm hypotheses. This is because initial boundary conditions of hypotheses are treated as premises which are true only if observable phenomena predicted by them obtain. The paradox here is that while scientific models are confirmed deductively, they are themselves arrived at inductively before being tested for predictive accuracy. The underpinning of testability of hypotheses as laws lies in the putative uniformity of nature. However, appeal to the uniformity of nature fails to resolve the problem of induction. There is an inductive process inherent in assuming that natural processes will be the same tomorrow as yesterday.

Bayes's theorem of conditional probability states that evidence which is antecedently improbable, but which is to be expected if the hypothesis is true, does the most to raise the probability of the hypothesis. The probability of the hypothesis is boosted by the evidence. This bears on an argument that locates the source of the problem of induction in a linguistic confusion, and which runs along the following lines. Probability given the evidence is the standard of rational belief. The conclusions of inductive inferences are probable when probability is viewed in this light.
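For reference, Bayes's theorem in its usual form, writing H for the hypothesis and E for the evidence, is P(H|E) = P(E|H) P(H) / P(E); evidence that is antecedently improbable (small P(E)) yet to be expected if the hypothesis is true (large P(E|H)) gives the greatest boost to P(H|E).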
The problem of induction remains despite probabilistic theories. Inductive inference determines the rules of inductive evidence. Conclusions supported by inductive evidence must derive from rules known to be correct. This can only be done inductively by inferring the probable correctness of the rules. Reformulating the problem as one concerning degrees of rational belief fails to resolve the basic problem concerning justification.

Thursday, September 7, 2017

Alternatives in Computability Research

Turing Machine

April 27, 1997
I plan to examine the status quo in the field of computability and unsolvability in pure mathematics and theoretical computer science. One aspect of the status quo seems to be the use of high performance computing to facilitate the study of the subject, which leads us to a consideration of possible changes to come.
With the increasing efficiency and speed of computing equipment the old high performance computing research model may need to be relegated to only the most expensive research in terms of computer time. Already in some fields network communications and parallel processing are performing computing tasks that once were the sole province of research supercomputers and mainframes. This phenomenon has occurred in applications as disparate as filtering in radio astronomy and chess programs used in grandmaster versus machine matches. Many of the IT lab computer accounts used at the University of Minnesota are maintained on networks of workstations and not on mainframes or supercomputers. The continuing trend toward use of distributed and parallel computing systems is apparent and inevitable.
With limited resources and increasing research demands the use of networked, distributed and parallel systems will become necessary, thus making now an ideal time to implement this process in computability and unsolvability research. In fact, this process has already been occurring for some time in the field. As more and more specialists from computer science become involved in automata theory the situation will only become more favorable with regard to innovative research. Computer science is a young field in which paradigms of software development and hardware implementation have changed many times, and in which different approaches to these activities have managed to exist simultaneously.
One of the central concepts of Computability and Unsolvability is the algorithm. In essence an algorithm is an effective computational procedure. Intuitively one may think of an algorithm as purely mechanical sets of instructions which will provide the answer to any one of a class of questions. These instructions are devoid of any non-deterministic features, such as the ones often attributed to human creativity. In theory it will always be possible to fabricate a machine or program a computer which carries out these instructions. For example, as school children we were taught algorithms for solving simple sums. This was done in the past by human clerks who applied these algorithms to sets of data given them by their overseers. Today we have adding machines which perform these tasks.i
Consider a given problem that one wishes to solve. It may not be the case that one has already at hand an algorithm which will solve it. Now we ask: is there an algorithm which will solve this problem? We now have what is known as a decision problem. An affirmative solution to a decision problem consists in demonstrating the existence of an algorithm which solves it; a negative solution in showing that no such algorithm exists. In the latter case the problem is said to be unsolvable.ii Showing that a decision problem is unsolvable can save much needless effort on the part of researchers in mathematics and computer science.
In the development of this theory the Turing machine emerged, an important concept in both mathematics and computer science. A Turing machine is an abstract model of an automaton that applies algorithmic sets of instructions until it reaches a solution, at which point it halts. It may be visualized as an infinite strip of tape with infinitely many boxes subdividing it, similar to the manner in which a strip of film is subdivided into frames. There is a scanner or read-write tape head that can move to the left or to the right along the tape and scans the information contained in one of the tape subdivisions. The information contained on each subdivision of the tape is the data manipulated by the instructions and the instructions themselves. A digital computer can be thought of as operating in a similar way. Data is stored in memory locations or cells that contain information which can be instructions or data, and these locations are scanned and written to, in a certain sense, by registers.
The primary task of the computer programmer is the implementation of algorithms. A computer program is in essence an algorithm. The practice of computer programming has shown that essentially the same algorithm may be implemented in diverse forms. This is because many different and successful models of program development are in existence. These different models are motivated just as much (those more disposed to the point of view of engineering would say more) by the possibilities of actual technology as by the results of mathematical logic. One of the main points I covered in my interview with Dr. Ravi Jarnardan of the University of Minnesota Computer Science faculty was that mathematical analysis provides the answers to questions about the performance that can be expected of algorithms. Consider the problem of sorting n names into alphabetical order. Analysis has shown that both the bubble sort and straight insertion sort algorithms will perform on the order of n x n operations in worst-case situations. They are expensive algorithms in terms of computer time. However, it is known widely from experience that the straight insertion sort algorithm is often much more efficient than the bubble sort algorithm. Thus an average case analysis may be more relevant in assessing the relative performances of different algorithms. In the case of the heapsort algorithm, which is much more efficient than the aforementioned, a probabilistic analysis which in essence simulates a coin flip at each stage of the algorithm can be very helpful. For example, program A takes action X or action Y at stage Z of its execution. Probabilistic analysis would make the result of the coin flip correspond with the action taken by program A. Probabilistic analysis tries to account for the conditional probability inherent in the execution of many programs.
On the software side we have the different approaches of modular programming and object oriented programming. Modular programming has been described as top down. In this model the programmer strives to subdivide the given problem into as many smaller, simpler subproblems as possible. A module is the program division that contains the algorithm used to solve one of these simpler subproblems. A simple main program calls its functions or modules, which then perform the requisite data processing. This has the advantage that the given modules can be used many times in the original program without the need to write the entire set of instructions again and thereby waste computer memory (however inexpensive memory has become). In the black box approach that is the ideal of modular programming many different modules, the black boxes, are written for use as they may be needed at later times by various main programs. In object oriented programming a class or object which contains its own modules and data is instantiated, dynamically allocated, implemented and then destroyed. All in all contemporary computer science students are expected to be conversant with both software development paradigms. In part this is because object oriented implementations would make little sense without the use of modules in them. An example of the interplay between the two models is the use of object oriented programming packages to write the compilers used by modular programming packages. C++, which is considered an object oriented programming language, was used by Microsoft to develop Visual Basic, a programming package which is more modular in nature.
That different problems lend themselves to different solution strategies should be readily apparent. However, these differences in solution strategy may lie more in what is used to solve the problem, viz. software and hardware, than in the actual form of the algorithm used to solve it. I hold that a research team given a set of problems to resolve could optimize the software and hardware options to be implemented. Thus an algorithm that needed to bounce back and forth checking and comparing integers thirty octillion times would be well suited for FORTRAN code and compilers. Research that pertained to more complicated recursive functions or decision procedures that need to evaluate whole classes of objects, and not just cheap integers, would be more appropriate for parallel computing using object oriented code or a language package used in computer science or artificial intelligence such as LISP, Ada, Smalltalk, Prolog, etc.
Several alternative approaches to problems in the field of computability and unsolvability have already been outlined in the preceding paragraphs. For example, supercomputer and mainframe processing of expensive numerical algorithms, and niche languages and specialty items optimized for use in restricted applications, such as the Prolog computer language. On the hardware side we have parallel or distributed computing projects among the many alternatives. Where alternatives exist one of course always has the opportunity to fuse them together into a hybrid project that is well fitted to those aspects of the problem in question which are themselves of a hybrid nature.
Among the benefits of such an integrated and synthetic approach would be the mutual contributions that computer science and mathematics have to give each other. In utilizing the tools of computer science to further the mathematical endeavor computer scientists would see their software design and hardware implementation in action. Mathematicians would see the fruits of their theoretical labors in mathematics applied to and come to fruition in computer science. At times it is possible to have the best of both worlds. On an intuitive level one should be able to see how the possible project optimization proposal I gave above could be useful. However, as the old cliché says, too many cooks can spoil the broth. One would have to be careful that the proliferation of alternative approaches did not result in a concomitant proliferation of problems. At other times, one has the worst of both worlds. Also there are those schooled in the old ways who have much experience to offer the inquirer. Standard and time honored approaches to certain problems are often the most accommodating for the greatest number.
i Davis, Martin, Computability and Unsolvability, New York 1982, pp. xv, 3

ii Davis, Martin, Computability and Unsolvability, New York 1982, p. xvi

Wednesday, September 6, 2017

Overview of Computability and Unsolvability

Burroughs B5500 stack architecture system 1960s.

April 21, 1997
Computability and Unsolvability concerns itself with the existence of exclusively mechanical procedures that solve diverse problems. Automata Theory is a branch of pure mathematics that holds interest for nonmathematicians because of its utility in philosophy and theoretical computer science. The Gödel Incompleteness Theorem and the existence of ultimately unsolvable problems are results of computability and unsolvability research which have philosophical implications. Another important result of the field is the existence of the universal Turing Machine, which has served fruitfully as an abstract model of the digital computer; this result is one of the origins of automata theory as an important field of theoretical computer science. In particular this result has served to model those problems which could conceivably be solved by any deterministic computing device. The unsolvability of the halting problem is another noteworthy result of research in these areas. The halting problem asks whether there is a general procedure that can determine whether any given set of program instructions contains non terminating sub procedures. The programmer who has tried to debug code containing a hard to find infinite loop can continue to despair, as no foolproof debugging tool for such situations will ever be found.
The primary work of contemporary mathematicians consists in determining the truth or falsehood of various propositions concerning mathematical objects: real numbers, rational numbers, continuously differentiable functions, etc. However, computability and unsolvability has as its main subject of investigation the existence of algorithms, or effective computational procedures, which are able to solve various problems. The theory of computability and unsolvability is also known as the theory of recursive functions. A recursive procedure or function is defined as one which can repeat itself indefinitely until a given condition is met, e.g. finding a solution to the problem it was to solve.
For thousands of years questions of what is logically possible and how that knowledge can be applied to outstanding problems have perplexed and encouraged mathematicians and scientists. Developments in theoretical mathematics have often pointed the way to later developments in the sciences and especially to the models used to analyze the data those sciences are concerned with. The classic example of the preceding is Isaac Newton’s contemporaneous development of the calculus and Newtonian physics. It is also the case that the prior existence of hyperbolic geometry facilitated Einstein’s development of his relativity theories. One such modeling application is seen in the automation of algorithms that determine the computability of a given problem or that attempt to automate the search for the solution to an outstanding problem. This is seen as particularly valuable in modeling other complex scientific problems and extensive and complicated software projects. Artificial intelligence and mathematical combinatorics are just two of the areas where computability research has played a role.
One example of a long standing problem only recently solved by the electronic computer is Robin’s Theorem. Complex iterative procedures involving binary data trees in computer science are in the province of the field. These data structures seem particularly well suited to the demands of computability and unsolvability research and vice versa; recursive and iterative algorithms can be used to traverse these structures and the nodes of the tree can be used to store relevant data for later retrieval by these processes. The automated generation of theorems and their proofs by effective computational procedures is the main area of research that I will be investigating. The axioms of a given mathematical logic can be viewed as the building blocks of proofs, really sequences of consequences of the given axioms, from which an effective computational procedure can derive a valid theorem by stringing together the consequences of the logical axioms into its proof.
In conclusion, the investigation of the field will center on two main questions. 1) What are the principal recent results and encouraging areas of current research in the field? 2) How is this research being implemented, in particular with regard to the use of high performance computing devices?
