Harvard Mark I 1940s

Monday, September 11, 2017

Structuralism, Individualism & The First World War

Nine European Sovereigns at Windsor for the funeral of King Edward VII in May of 1910, four years before the war began. Standing, from left to right: King Haakon VII of Norway, Tsar Ferdinand of Bulgaria, King Manuel II of Portugal, Kaiser Wilhelm II of the German Empire, King George I of Greece and King Albert I of Belgium. Seated, from left to right: King Alfonso XIII of Spain, King-Emperor George V of the United Kingdom and King Frederick VIII of Denmark.
Structuralism, Individualism & The First World War
Two divergent and conflicting theoretical perspectives have come to dominate center stage in international politics: the structuralist perspective and the individualist perspective. My first task shall be to present briefly the major assumptions of each, first the structuralist and then the individualist. Taking World War One as my specific case, I shall argue that four factors make the structuralist perspective the more useful in explaining the outbreak of that war; namely, colonialism, nationalism, monarchic imperialism on the continent, and the schedules of military mobilization.
There are three relevant conceptual constructs underlying the basic assumptions of the realist account of state behavior (Hughes 1990:111). First, human nature makes the struggle for power a constant in international relations. Second, power can be countered effectively only with counterpower. Third, international peace and stability are maintained via the equilibrium of power and counterpower, whereby states restrain each other from undue aggression. Realism also assumes that the nation state is both the basic unit of analysis and the highest authority in a world state system lacking centralized control (Hughes 1990: 52). In contrast, an individualist perspective focuses on the conscious decisions and personalities of the leaders of the combatant nations when examining the outbreak of war (Stoessinger 1990:209).
Kaiser Wilhelm II of the German Empire
In the case of World War One, Kaiser Wilhelm II has been vilified by many for precipitating the war with unduly aggressive brinksmanship (as evidenced by the Moroccan crises) and paranoid delusions of German encirclement by the other great European powers (Stoessinger 1990: 12-14). Ironically, the U.S., a great power a continent away, was in fact to be the ultimate undoing of the German Empire. According to an individualist account, the leaders of all the great European powers displayed an abdication of personal responsibility for decision making and an overall mediocrity at the outset of the First World War (Stoessinger 1990: 21-24). The leaders in question failed on many counts. For instance, diabolical images of the enemy were prevalent in the minds of said leaders. Furthermore, there was a complete lack of empathy on the part of decision makers; in the jargon of intelligence analysts, policy makers failed to "mirror image" their adversaries, and so did nothing to interrupt the process of system-wide structural failure.
My argument that the structuralist perspective is the more useful in explaining the outbreak of World War One is partially one of context; i.e., I do not think that structuralist accounts are in general more useful in explaining any war. Still, realpolitik alliance building and its attendant secret diplomacy, nationalism, mobilization, colonialism abroad and monarchical autocracy on the continent were systemically cancerous in a way that outweighed the merely personal shortcomings of Kaiser Wilhelm II and Tsar Nicholas II in precipitating the Great War.
Two cousins ~ Kaiser Wilhelm II of Germany and Tsar Nicholas II of Russia
Of particular relevance to the issue of the root causes of the outbreak of World War One is the alliance structure that had developed in Europe during the nineteenth century. Specifically, in the jockeying to maintain systemic political-military equilibrium, two fairly rigid alliance structures had arrayed themselves against one another (Hughes 1990:112). The alliances in question were the Triple Alliance and the Triple Entente, consisting of Germany, Austria-Hungary and Italy on the one hand and Britain, France and Russia on the other; we have here almost exactly the opposing camps that eventually squared off in 1914. Also, Belgium, whose strategic location between the three great powers of Britain, France and Germany was an issue central to Britain's entry into the war, had been declared perpetually neutral by treaty in 1839 (Hughes 1990:111).
Kaiser Wilhelm II and British Lord of the Admiralty Winston Churchill
Another important structural concept in the explanation of the outbreak of the First World War is that of nationalism (Hughes 1990:197). At the time of the outbreak of World War One, many European ethnic groups seeking national self-determination and sovereignty were living under the occupation of the German, Austro-Hungarian and Russian empires. The assassination of the Austro-Hungarian Archduke Francis Ferdinand by the Serbian nationalist Gavrilo Princip, the event most often cited as the spark that made the Balkan “tinderbox of Europe” explode, occurred in Bosnia-Herzegovina, a formerly Ottoman territory with a large Serb population that Austria-Hungary had annexed in 1908. Interestingly, the structural problem of nationalism in the Balkans is still evident in the newspapers of today. Needless to say, subjugation of nationalities by the European monarchies was by no means limited to the continent (Wittkopf 1989: 97-99). Competition for foreign markets at that time took on a decidedly more mercantilist character than it presently does, which in turn complicated the balance of power equation for the kings and secret diplomats who played chess with human lives.
In 1914 mobilization was a dreaded word, almost synonymous with a declaration of war (Holsti 1992: 242-243). The decision making process regarding negotiations to cease hostile actions at the outset of the war was hampered by the relentless schedules of mobilization. In fact, even on the field of battle at Tannenberg, one of the war's initial battles, Russian and German corps commanders found military decision making hampered by the structure of mobilization imposed from on high by the general staffs (Liddell Hart 1930: 104-105).
Furthermore, two of the principal combatants, the German and Russian Empires, could quite arguably be characterized as reactionary monarchies. In light of the track record democracies have of not going to war with one another, one sees the converse in the relations of the demonstrably expansionist empires of Germany and Russia prior to 1914 (Hughes 1990: 188).
In his conclusion to Why Nations Go to War, John G. Stoessinger uses sickness as a metaphor for the human condition of war (Stoessinger 1990: 205-206). I shall employ Mr. Stoessinger's metaphor in a counterargument against his individualist conclusion regarding the outbreak of World War One. I agree that war is a disease, but it is my contention that the disease is rooted in structural degradation rather than in the decisions leaders make of their own “free will”. When a patient suffers from a physical disease, the doctor diagnoses its physical causes, which, like structures, human beings do not control. The seeming irrationality of the kings and diplomats who played chess with human beings is the symptom rather than the cause of the systemic failure that precipitated World War One. Given the structural forces that were gathering on the horizon at the turn of the century, I do not see how the outcome could have been other than the great catastrophe that took fifteen million lives. In seeking to place the blame on a few elites, who after all are only human beings, the individualist account fails to recognize a dependence of which we are not conscious (Tolstoy).
Gen. John "Black Jack" Pershing visits Arlington National Cemetery in 1925.

Friday, September 8, 2017

Hume, Induction & Bayes's Theorem

Hume, Induction & Bayes's Theorem
Hume’s paradox is the result of an interrelationship between two distinct logical processes; namely, deductive logic and inductive logic.
David Hume 1711-1776
David Hume's historical context was one where Descartes and other rationalists believed deductive, or a priori, knowledge of the ultimate nature of reality could be attained through philosophical investigation. Empiricists such as Hume reacted by asserting that deductive arguments were built into the structure of language and were strictly trivial; for example, "there are no married bachelors" or "I think, therefore I am". The empiricists also stressed the inability of human reason to extend the deductive process to ontological concepts such as cause and effect or the uniformity of nature. According to Hume these concepts were arrived at through a process of induction.
Taking cause and effect as an example, Hume demonstrated the operation of enumerative induction. For instance, every time up until now that I have wound my wristwatch it has run, and I assume this pattern will hold whenever I wind my wristwatch. The problem of induction is the problem of justifying this inference.
Scientific reasoning uses deductive logic to confirm or disconfirm hypotheses: a hypothesis together with its initial boundary conditions is treated as a set of premises, and the hypothesis is sustained only if the observable phenomena it predicts actually obtain. The paradox here is that while scientific models are confirmed deductively, they are themselves arrived at inductively before being tested for predictive accuracy. The underpinning of the testability of hypotheses as laws lies in the putative uniformity of nature. However, appeal to the uniformity of nature fails to resolve the problem of induction, for there is an inductive step inherent in assuming that natural processes will be the same tomorrow as they were yesterday.
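To make the deductive step explicit, the testing procedure can be sketched schematically; the symbols H (hypothesis), C (initial boundary conditions) and O (predicted observation) are my own shorthand, not drawn from the essay:

\[ (H \wedge C) \rightarrow O \]

If O fails to obtain, modus tollens yields \(\neg(H \wedge C)\), so either the hypothesis or the boundary conditions must be given up; if O does obtain, the hypothesis is sustained but not deductively proven, which is exactly where the inductive element enters.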

Bayes's theorem of conditional probability implies that evidence which is antecedently improbable, but which is probable if the hypothesis is true, does the most to raise the probability of the hypothesis; the probability of the hypothesis is boosted by the evidence. This is apropos of an argument that traces the problem of induction to a linguistic confusion, and which runs along the following lines: probability given the evidence is the standard of rational belief, and the conclusions of inductive inferences are therefore probable when probability is viewed in this light.
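For reference, the theorem itself can be stated compactly in standard notation; the symbols H (hypothesis) and E (evidence) are mine, added only for illustration:

\[ P(H \mid E) = \frac{P(E \mid H)\, P(H)}{P(E)} \]

When the antecedent probability of the evidence \(P(E)\) is low but \(P(E \mid H)\) is high, the ratio raises \(P(H \mid E)\) well above the prior \(P(H)\), which is the sense in which antecedently improbable evidence confirms most strongly.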
The problem of induction remains despite such probabilistic theories. Conclusions supported by inductive evidence must derive from rules of inductive inference that are known to be correct, yet the correctness of those rules can only be established inductively, by inferring their probable correctness from their past success. Reformulating the problem as one concerning degrees of rational belief thus fails to resolve the basic problem of justification.

Thursday, September 7, 2017

Alternatives in Computability Research

Turing Machine

Alternatives in Computability Research
April 27, 1997
I plan to examine the status quo in the field of computability and unsolvability in pure mathematics and theoretical computer science. One aspect of the status quo seems to be the use of high performance computing to facilitate the study of the subject, which leads us to a consideration of possible changes to come.
With the increasing efficiency and speed of computing equipment, the old high performance computing research model may need to be relegated to only the most expensive research in terms of computer time. Already in some fields network communications and parallel processing are performing computing tasks that were once the sole province of research supercomputers and mainframes. This phenomenon has occurred in applications as disparate as signal filtering in radio astronomy and the chess programs used in grandmaster versus machine matches. Many of the IT lab computer accounts used at the University of Minnesota are maintained on networks of workstations and not on mainframes or supercomputers. The continuing trend toward the use of distributed and parallel computing systems is apparent and inevitable.
With limited resources and increasing research demands, the use of networked, distributed and parallel systems will become necessary, making now an ideal time to implement this process in computability and unsolvability research. In fact, this process has already been occurring for some time in the field. As more and more specialists from computer science become involved in automata theory, the situation will only become more favorable with regard to innovative research. Computer science is a young field in which paradigms of software development and hardware implementation have changed many times, and in which different approaches to these activities have managed to exist simultaneously.
One of the central concepts of Computability and Unsolvability is the algorithm. In essence an algorithm is an effective computational procedure. Intuitively, one may think of an algorithm as a purely mechanical set of instructions which will provide the answer to any one of a class of questions. These instructions are devoid of any non-deterministic features, such as the ones often attributed to human creativity. In theory it will always be possible to fabricate a machine or program a computer which carries out these instructions. For example, as school children we were taught algorithms for solving simple sums. In the past this work was done by human clerks who applied these algorithms to sets of data given them by their overseers; today we have adding machines which perform these tasks.i
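As an illustration of the "simple sums" example, here is a minimal sketch in Python of the column-addition algorithm taught to school children; the function name and the digit-list representation are my own choices, made only for the sake of the example.

def add_decimal_digits(a_digits, b_digits):
    """Add two natural numbers given as lists of decimal digits,
    least significant digit first, using the school algorithm."""
    result = []
    carry = 0
    for i in range(max(len(a_digits), len(b_digits))):
        a = a_digits[i] if i < len(a_digits) else 0
        b = b_digits[i] if i < len(b_digits) else 0
        total = a + b + carry          # add the column plus any carry
        result.append(total % 10)      # write down the units digit
        carry = total // 10            # carry the tens digit to the next column
    if carry:
        result.append(carry)
    return result

# 47 + 85 = 132, with digits listed least significant first:
print(add_decimal_digits([7, 4], [5, 8]))   # [2, 3, 1]

The point is simply that every step is fixed in advance: a clerk, an adding machine or a program can follow the same instructions and must arrive at the same answer.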
Consider a given problem that one wishes to solve. It may not be the case that one has already at hand an algorithm which will solve it. Now we ask: is there an algorithm which will solve this problem? We now have what is known as a decision problem. An affirmative solution to a decision problem consists in demonstrating the existence of an algorithm which solves it; a negative solution in showing that no such algorithm exists. In the latter case the problem is said to be unsolvable.ii Showing that a decision problem is unsolvable can save much needless effort on the part of researchers in mathematics and computer science.
In the development of this theory the Turing machine emerged, a concept important in both mathematics and computer science. A Turing machine is an abstract model of an automaton that applies an algorithmic set of instructions until it reaches a solution, at which point it halts. It may be visualized as an infinite strip of tape subdivided into infinitely many boxes, much as a strip of film is subdivided into frames. A scanner, or read-write tape head, can move to the left or to the right along the tape and scans the information contained in one tape subdivision at a time. The information contained on the subdivisions of the tape comprises both the data manipulated by the instructions and the instructions themselves. A digital computer can be thought of as operating in a similar way: data is stored in memory locations, or cells, whose contents can be instructions or data, and these locations are scanned and written to, in a certain sense, by registers.
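A minimal sketch of such a machine can be written in a few lines of Python; the tape here is a dictionary indexed by position, and the particular transition table (one that just flips bits and halts at the first blank) is purely illustrative, not anything drawn from the literature.

def run_turing_machine(program, tape, state="start", head=0):
    """Simulate a single-tape Turing machine.
    program maps (state, symbol) -> (new_symbol, move, new_state);
    move is -1 (left), +1 (right) or 0 (stay); the machine stops in state 'halt'."""
    tape = dict(enumerate(tape))                 # sparse tape, blank = ' '
    while state != "halt":
        symbol = tape.get(head, " ")             # scan the current cell
        new_symbol, move, state = program[(state, symbol)]
        tape[head] = new_symbol                  # write
        head += move                             # move the head
    return "".join(tape[i] for i in sorted(tape))

# Illustrative program: invert a string of 0s and 1s, halting on the first blank.
flip_bits = {
    ("start", "0"): ("1", +1, "start"),
    ("start", "1"): ("0", +1, "start"),
    ("start", " "): (" ", 0, "halt"),
}
print(run_turing_machine(flip_bits, "0110"))     # prints "1001 "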
The primary task of the computer programmer is the implementation of algorithms. A computer program is in essence an algorithm. The practice of computer programming has shown that essentially the same algorithm may be implemented in diverse forms. This is because many different and successful models of program development are in existence. These different models are motivated just as much, those more disposed to the engineering point of view would say more, by the possibilities of actual technology as by the results of mathematical logic. One of the main points I covered in my interview with Dr. Ravi Janardan of the University of Minnesota computer science faculty was that mathematical analysis provides the answers to questions about the performance that can be expected of algorithms. Consider the problem of sorting n names into alphabetical order. Analysis has shown that both the bubble sort and straight insertion sort algorithms will perform on the order of n × n operations in worst case situations. They are expensive algorithms in terms of computer time. However, it is widely known from experience that the straight insertion sort algorithm is often much more efficient than the bubble sort algorithm, so an average case analysis may be more relevant in assessing the relative performance of different algorithms. In the case of the heapsort algorithm, which is much more efficient than either of the aforementioned, a probabilistic analysis, which in essence simulates a coin flip at each stage of the algorithm, can be very helpful. For example, suppose program A takes action X or action Y at stage Z of its execution. Probabilistic analysis would make the result of the coin flip correspond to the action taken by program A. Probabilistic analysis thus tries to account for the conditional probability inherent in the execution of many programs.
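The worst-case versus average-case point can be illustrated with a small instrumented sketch in Python; the function names and the comparison counters below are mine, introduced only to show how the two quadratic sorts differ in practice on random data.

import random

def bubble_sort_ops(data):
    """Bubble sort; returns the number of comparisons performed."""
    a, ops = list(data), 0
    for i in range(len(a) - 1):
        for j in range(len(a) - 1 - i):
            ops += 1
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return ops

def insertion_sort_ops(data):
    """Straight insertion sort; returns the number of comparisons performed."""
    a, ops = list(data), 0
    for i in range(1, len(a)):
        key, j = a[i], i - 1
        while j >= 0:
            ops += 1
            if a[j] <= key:
                break
            a[j + 1] = a[j]   # shift larger elements to the right
            j -= 1
        a[j + 1] = key
    return ops

data = [random.randint(0, 999) for _ in range(200)]
# Both are quadratic in the worst case, but on typical random input the
# insertion sort performs noticeably fewer comparisons than the bubble sort.
print(bubble_sort_ops(data), insertion_sort_ops(data))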
On the software side we have the different approaches of modular programming and object oriented programming. Modular programming has been described as top down. In this model the programmers strive to subdivide the problem they are given into as many smaller, simpler subproblems as they can. A module is the program division that contains the algorithm used to solve one of these simpler subproblems. A simple main program calls its functions or modules, which then perform the requisite data processing. This has the advantage that a given module can be used several times by the original program without the need to write the entire set of instructions again, thus avoiding the waste of (increasingly not so valuable) computer memory. In the black box approach that is the ideal of modular programming, many different modules, the black boxes, are written for use as they may be needed at later times by various main programs. In object oriented programming a class or object which contains its own modules and data is instantiated, dynamically allocated, used and then destroyed. All in all, contemporary computer science students are expected to be conversant with both software development paradigms, in part because object oriented implementations would make little sense without the use of modules within them. An example of the interplay between the two models is the use of object oriented programming packages to write the compilers used by modular programming packages: C++, which is considered an object oriented programming language, was used by Microsoft to develop Visual Basic, a programming package which is more modular in nature.
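The contrast between the two paradigms can be sketched in a few lines of Python; the stack example and all the names used here are hypothetical, chosen only to show the same small algorithmic idea packaged first as free-standing modules (functions) and then as an object.

# Modular ("top down") style: small reusable functions, data passed in explicitly.
def make_stack():
    return []

def push(stack, item):
    stack.append(item)

def pop(stack):
    return stack.pop()

# Object oriented style: the same data and operations bundled into one class.
class Stack:
    def __init__(self):
        self._items = []          # the object owns its own data

    def push(self, item):
        self._items.append(item)  # the operations travel with the data

    def pop(self):
        return self._items.pop()

# Either way the calling code exercises the same underlying algorithm.
s1 = make_stack(); push(s1, 1); push(s1, 2); print(pop(s1))   # 2
s2 = Stack();      s2.push(1);  s2.push(2);  print(s2.pop())  # 2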
That different problems lend themselves to different solution strategies should be readily apparent. However, these differences in solution strategy may lie more in what is used to solve the problem, viz. the software and hardware, than in the actual form of the algorithm used to solve it. I hold that a research team given a set of problems to resolve could optimize the software and hardware options to be implemented. Thus an algorithm that needed to bounce back and forth checking and comparing integers thirty octillion times would be well suited to FORTRAN code and compilers. Research that pertained to more complicated recursive functions, or to decision procedures that need to evaluate whole classes of objects and not just computationally cheap integers, would be more appropriate for parallel computing using object oriented code or a language package used in computer science or artificial intelligence such as LISP, Ada, Smalltalk, Prolog, etc.
Several alternative approaches to problems in the field of computability and unsolvability have already been outlined in the preceding paragraphs: for example, supercomputer and mainframe processing of expensive numerical algorithms, and niche languages and specialty tools optimized for use in restricted applications, such as the Prolog computer language. On the hardware side we have parallel or distributed computing projects among the many alternatives. Where alternatives exist one of course always has the opportunity to fuse them together into a hybrid project that is well fitted to those aspects of the problem in question which are themselves of a hybrid nature.
Among the benefits of such an integrated and synthetic approach would be the mutual contributions that computer science and mathematics have to offer each other. In utilizing the tools of computer science to further the mathematical endeavor, computer scientists would see their software designs and hardware implementations in action, and mathematicians would see the fruits of their theoretical labors applied to and brought to fruition in computer science. At times it is possible to have the best of both worlds; on an intuitive level one should be able to see how the project optimization proposal I gave above could be useful. However, as the old cliché says, too many cooks can spoil the broth. One would have to be careful that the proliferation of alternative approaches did not result in a concomitant proliferation of problems; at other times one has the worst of both worlds. Also, there are those schooled in the old ways who have much experience to offer the inquirer, and standard, time honored approaches to certain problems are often the most accommodating for the greatest number.
i. Davis, Martin, Computability and Unsolvability, New York, 1982, pp. xv, 3.

ii. Davis, Martin, Computability and Unsolvability, New York, 1982, p. xvi.

Wednesday, September 6, 2017

Overview of Computability and Unsolvability

Burroughs B5500 stack architecture system 1960s.

Overview of Computability and Unsolvability
April 21, 1997
Computability and unsolvability concerns itself with the existence of exclusively mechanical procedures that solve diverse problems. Automata theory is a branch of pure mathematics that holds interest for nonmathematicians because of its utility in philosophy and theoretical computer science. The Gödel incompleteness theorem and the existence of ultimately unsolvable problems are results of computability and unsolvability research which have philosophical implications. Another important result of the field is the existence of the universal Turing machine, which has served fruitfully as an abstract model of the digital computer; this result is one of the origins of automata theory as an important field of theoretical computer science, since it models every problem that could conceivably be solved by any deterministic computing device. The unsolvability of the halting problem is another noteworthy result of research in these areas. The halting problem asks whether there is a general procedure that can determine, for any given program and its input, whether that program will ever terminate. The programmer who has tried to debug code containing a hard to find infinite loop can continue to despair, as no foolproof debugging tool for such situations will ever be found.
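The unsolvability claim can be sketched as the classic diagonal argument, here written as hypothetical Python: suppose some function halts(f, x) could always decide whether f(x) terminates; the contrary() function below then leads to a contradiction. Both names are mine and the oracle is assumed only for the sake of argument; it cannot actually be written.

# Suppose, for contradiction, that a total decision procedure existed:
def halts(program, argument):
    """Hypothetical oracle: return True iff program(argument) eventually halts.
    No such total, always-correct function can exist."""
    raise NotImplementedError

def contrary(program):
    # Do the opposite of what the oracle predicts about a program run on itself.
    if halts(program, program):
        while True:        # loop forever if the oracle says it halts
            pass
    return "done"          # halt if the oracle says it loops

# Now consider contrary(contrary):
# - if halts(contrary, contrary) is True, then contrary(contrary) loops forever;
# - if it is False, then contrary(contrary) halts.
# Either way the oracle is wrong, so no such oracle can exist.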
The primary work of contemporary mathematicians consists in determining the truth or falsehood of various propositions concerning mathematical objects such as real numbers, rational numbers, continuously differentiable functions, and so on. Computability and unsolvability, however, has as its main subject of investigation the existence of algorithms, or effective computational procedures, which are able to solve various problems. The theory of computability and unsolvability is also known as the theory of recursive functions. A recursive procedure or function is one which can invoke itself repeatedly until a given condition is met, e.g. finding a solution to the problem it was meant to solve.
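A small, standard example of such a recursive procedure, sketched here in Python, is Euclid's algorithm for the greatest common divisor; the function calls itself on a smaller instance until the terminating condition (a zero remainder) is met.

def gcd(a, b):
    """Euclid's algorithm: recurse until the remainder is zero."""
    if b == 0:              # terminating condition
        return a
    return gcd(b, a % b)    # the function repeats itself on a smaller instance

print(gcd(1071, 462))       # 21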
For thousands of years questions of what is logically possible, and how that knowledge can be applied to outstanding problems, have perplexed and encouraged mathematicians and scientists. Developments in theoretical mathematics have often pointed the way to later developments in the sciences, and especially to the models used to analyze the data those sciences are concerned with. The classic example of the preceding is Isaac Newton's contemporaneous development of the calculus and Newtonian physics. It is also the case that the prior existence of non-Euclidean geometry facilitated Einstein's development of his relativity theories. One such modeling application is seen in the automation of algorithms that determine the computability of a given problem or that attempt to automate the search for the solution to an outstanding problem. This is seen as particularly valuable in modeling other complex scientific problems and extensive and complicated software projects. Artificial intelligence and mathematical combinatorics are just two of the areas where computability research has played a role.
One example of a long standing problem only recently solved with the aid of the electronic computer is the Robbins conjecture. Complex iterative procedures involving binary data trees in computer science are also in the province of the field. These data structures seem particularly well suited to the demands of computability and unsolvability research and vice versa; recursive and iterative algorithms can be used to traverse these structures, and the nodes of the tree can be used to store relevant data for later retrieval by these processes. The automated generation of theorems and their proofs by effective computational procedures is the main area of research that I will be investigating. The axioms of a given mathematical logic can be viewed as the building blocks of proofs, which are really sequences of consequences of the given axioms, from which an effective computational procedure can derive a valid theorem by stringing together the consequences of the logical axioms into its proof.
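The point about binary trees and recursive traversal can be made concrete with a short Python sketch; the node class and the in-order traversal below are generic illustrations of my own, not anything specific to the theorem-proving research described above.

class Node:
    """A binary tree node holding a piece of data and two subtrees."""
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def inorder(node):
    """Recursively visit the left subtree, then the node, then the right subtree."""
    if node is None:                 # base case: empty subtree
        return []
    return inorder(node.left) + [node.value] + inorder(node.right)

# A small tree:      2
#                   / \
#                  1   3
tree = Node(2, Node(1), Node(3))
print(inorder(tree))   # [1, 2, 3]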
In conclusion, the investigation of the field will center around two main questions: 1) What are the principal recent results and encouraging areas of current research in the field? 2) How is this research being implemented, in particular with regard to the use of high performance computing devices?

Saturday, September 2, 2017

Sundry Reflections on Computer Science


Sundry Reflections on Computer Science 
 April 14, 1997
The subject I intend to explore has several facets. First is the automated generation of valid mathematical theorems. Second are investigations of just which theorems may be generated by the computer. Finally, we have the actual practice of generating such theorems or unresolved problems. I still have to develop the relevant definitions of the subject matter, such as ‘unresolved problems’, ‘computability and unsolvability’ and other terms central to the area under study. The field has a long history, with its origins in the ancient discipline of mathematical logic. However, modern computing hardware capabilities are increasingly up to the task of putting theory into practice.
For thousands of years questions of what is logically possible and how that knowledge can be applied to outstanding problems have perplexed and encouraged mathematicians and scientists. Developments in theoretical mathematics have often pointed the way to later developments in the sciences and especially to the models used to analyze the data those sciences are concerned with. One such modeling application is seen in the automation of algorithms that determine the computability of a given problem or that attempt to automate the search for the solution to an outstanding problem. This is seen as particularly valuable in modeling other complex scientific problems and extensive and complicated software projects. Artificial intelligence and mathematical combinatorics are just two of the areas where computability research has played a role.
What follows are several examples of my present, still developing knowledge of the achievements and day to day work of this research. One example of a long standing problem only recently solved with the aid of the electronic computer is the Robbins conjecture. I also know that complex iterative procedures involving binary data trees in computer science are in the province of the field. These data structures seem particularly well suited to the demands of solvability research and vice versa. At the University of Minnesota Professor John Baxter works in this area.
In investigating this subject further there are at least two questions that I hope to answer, or at least gain greater insight into the possibility of answering. First, what big project, goal or dream research problem do researchers in this area most want to see resolved or investigated further? Second, what is the actual practice involved in achieving this, and what efforts are underway to ensure that the area continues to resolve and gain insight into new problems and continues to merit research?

The Reeducation of Richard Rodríguez and Oscar Zeta Acosta


The Reeducation of Richard Rodríguez and Oscar Zeta Acosta
With musical canon and dialectical process as my models I will attempt to synthesize strains in Hunger of Memory: The Education of Richard Rodríguez and The Autobiography of a Brown Buffalo. My thesis is that, standing at opposite ends of the Chicano literary periphery, thesis and antithesis as in dialectic, point and counterpoint as in canon, they together are the foundation whereby the center holds.
Mr. Rodríguez espouses an openly assimilationist posture. He seeks full ‘Americanization’. He develops the thesis that it is a disservice to children speaking the ‘private’ language of their family not to be taught the ‘public’ language of the republic. Mr. Acosta, on the other hand, uses his literary work to express grievance and rage against the United States government, as in the references to FDR and the incident where the protagonist of Autobiography of a Brown Buffalo spits on the stars and stripes. Furthermore, the spitting incident takes place in the context of the triumphalist totalitarianism of the Second World War. In the context of the civil unrest of the sixties, Mr. Rodríguez is lukewarm in his opposition to the Vietnam conflict. At the end of Autobiography of a Brown Buffalo Mr. Acosta wants to break asunder the bonds cast for him by Uncle Sam and forge ahead establishing a new and independent Chicano identity.
A contrapuntal progression can be fashioned from the structural antagonisms between the two works. Just as I have chosen the canon of music, serene and abstract, over the dialectic of philosophy, discursive and polemical, so too Mr. Rodríguez and Mr. Acosta have, in my view, superimposed over their respective works their own respective superstructures.
The model superimposed by Mr. Rodríguez, that of the man of the west, the individual prevailing against the opposing currents of sloth and superstition, is just the model whose authenticity and sincerity are questioned by the Marxist literary criticism of Lauro Flores. Mr. Flores points out the irony whereby Mr. Rodríguez sets himself up as a socially disadvantaged child, only for us to find later that he never was quite, in his view, socially disadvantaged. States Lauro Flores:
On a first level it could be argued that this device is perhaps intended to operate as a play on irony, inasmuch as we subsequently find out that he never really was underprivileged.1
Still, I am critical of Mr. Flores and do not find many of his crypto-socialist arguments compelling, all the while noting the irony that Mr. Flores seeks to structure the domain that Mr. Rodríguez defends as his private realm. There is heartfelt irony in the work of Mr. Rodríguez, as in the plaintive realization, à la Thomas Wolfe, that ‘You Can’t Go Home Again’, and yet in the final chapter Mr. Rodríguez returns to the private intimacy of his family without the rancor that characterized the homecoming of Mr. Wolfe. Much of the book is more reminiscent of Mr. Wolfe’s Look Homeward, Angel in its yearning for the literary development of the protagonist’s lived life. Mr. Rodríguez does in fact have the basic human right to place himself on what many Chicanos consider the assimilationist and bourgeois extreme of the periphery.
In his own way Mr. Acosta can be said to have staked a claim on his sector of the periphery. However, the journey of Mr. Acosta is the inverse of that of Mr. Rodríguez. Until the very end the Brown Buffalo does not despair that ‘You Can’t Go Home Again’, and here there is irony when one considers how many times throughout the course of the book the protagonist has indeed left home, leaving anew ever to embark on what Francisco R. Alvarez has described as the ‘Bios’: Oscar’s double ethnic-existentialist journey.
The loss of cultural identity is accentuated by, and also reflected in, the motive forces behind the impotence (although this may be read as a criticism of a universe of hispanos that have often presented protagonists of exaggerated virility).2 Translation is mine.
The impotence of Acosta’s protagonist is thus seen as a symbol of this loss of cultural identity. Like the work of Mr. Rodríguez, Autobiography of a Brown Buffalo has been found to be rife with ironies great and small. Jeanne Thwaites has enumerated these for the curious literary investigator. For example:
When a writer pretends to be ignorant of something the readers know, it is dramatic irony, a device used in stage plays: the audience knows what the actors do not. The reader wants him to be happy - is glad he is happy - but knows all cannot go well. Acosta is severely addicted to drugs and alcohol, has ulcers and both vomits and passes blood.3
As Thwaites has pointed out, when Mr. Acosta’s protagonist dedicates himself to the Brown Berets, the Chicano triumphalism obscures the drug and alcohol addiction, among other health problems, lurking about and casting its specter over the dreams of our hero. This is deadly irony.
Reflection on experience, rather than relation of facts, seems to be the operative principle of Hunger of Memory: The Education of Richard Rodríguez and Autobiography of a Brown Buffalo.
...the central problem of autobiography is that the author relates experiences and not facts. 4 Translation is mine.
Alvarez, after quoting Paul Jay on “the ever present ontological gap between the self who is writing and the self-reflexive protagonist of the work” (Being in the Text 29), cannot help but remind one of Mr. Rodríguez’s reflections concerning the pseudo-public distance he felt from his own words as they passed before his eyes at the typewriter (Hunger of Memory, 182).
In conclusion we note the comparative conclusion of Lauro Flores concerning Ernesto Galarza’s Barrio Boy and Richard Rodríguez’s Hunger of Memory.
As we have seen in the previous pages, Ernesto Galarza and Richard Rodríguez elaborate self-portraits that convey two opposed manners of perceiving the self in its relation with the human group to which they belong. In a broader sense, the contrast between these two distinct perceptions encapsulate the ideological contradiction which lies at the heart of the dialectics of Chicano culture.5
It would seem that one could, in the same comparative spirit, draw this conclusion regarding Hunger of Memory and Autobiography of a Brown Buffalo as well.

1. Flores, Lauro, "Chicano Autobiography: Culture, Ideology and the Self," The Americas Review: A Review of Hispanic Literature & Art, vol. 18, no. 2, 1990, p. 86.
2. Alvarez, Francisco R., "The Autobiography of a Brown Buffalo de Oscar Zeta Acosta: Escritura, Ser e Ideología en la autobiografía chicano de los 70," Monographic Review/Revista Monográfica, vol. 9, 1993, p. 167.
3. Thwaites, Jeanne, "The Use of Irony in Oscar Zeta Acosta's Autobiography of a Brown Buffalo," The Americas Review, vol. 20, no. 1, pp. 80-81.
4. Alvarez, Francisco R., "The Autobiography of a Brown Buffalo de Oscar Zeta Acosta: Escritura, Ser e Ideología en la autobiografía chicano de los 70," Monographic Review/Revista Monográfica, vol. 9, 1993, p. 164.
5. Flores, Lauro, "Chicano Autobiography: Culture, Ideology and the Self," The Americas Review: A Review of Hispanic Literature & Art, vol. 18, no. 2, 1990, p. 89.

The Television Will Not Be Revolutionized

The Television Will Not Be Revolutionized
The opening credits of The Cosby Show feature each cast member dancing to a funky popular beat. The colorful and kinetic opening seems designed to flow very well into slickly produced high budget prime time commercials. The opening reminded me of some of the peppier Pepsi Cola commercials that aired during the program.
The main plot of the episode I watched revolved around the fact that Elvin, Dr. Huxtable's son-in-law, and his wife were moving into their first private home. This was an occasion for many members of the family to get together for a house warming party. Elvin's grandfather, among others, was present at the house warming party. Several times during the course of the episode family was held forth as a valued ideal.
A subplot was constructed around the expected marriage announcement of Vanessa, Dr. Huxtable's daughter, and her fiancé Dabnis. The grandfather had passed on the rumor that Vanessa and Dabnis would announce at the house warming party that they were getting married. To the surprise of the Huxtables, Vanessa and Dabnis declare that they intend to separate.
Family is central to the stress that Mr. Huxtable and Dabnis experience regarding the parting. Engaged to Vanessa, Dabnis had become a valued member of the “extended family”. In the end Mr. Huxtable and Dabnis agree that they will still be friends, but that the family ties are no longer the same.
Over the years the network situation comedy has developed its own peculiar conventions. Henry Louis Gates, in his article TV's Black World Turns - But Stays Unreal, argues that the genre is necessarily limited not only by its own conventions but also by other factors, which in turn shape those conventions. The Cosby Show exemplifies the empty gestures of tokenism, pabulum sanitized to satisfy profit driven network executives who dare not portray social realism during prime time, lest the Nielsen ratings fall and the sponsors withdraw.
The Cosby Show portrays a relatively large and extraordinarily cohesive upper middle class family. Both of these portrayals are well established conventions of the genre. These conventions reflect values that are popularly held to be ideal not only by the African-American community, but by the majority culture as well. By choosing to emphasize these themes the program articulates a consensus of cultural values; these values represent a consensus of values held by the mainstreams of both majority and minority cultures. For example, the plot of the show I analyzed revolved around a house warming party. At the house warming party several generations of the Huxtable family were to gather in order to foster a sense of family continuity and cohesion. Cliff Huxtable, his father and his son were central characters in the story, presenting a patrilinear view of family continuity that is representative of the dominant cultural ideology.
The Cosby Show implicates individual viewers in the culture's values through the very situations it portrays. Taking Dr. Huxtable as an example, we have a character whose profession is that of physician, a profession viewed as noble and old money by both majority and minority cultures. The Huxtable family is presented as thriving in the brownstone neighborhood of Brooklyn Heights, ensconced as it is in immaculate landscaping. The plot of many episodes revolves around tension between the articulated consensus values and the encroaching influences of socially undesirable phenomena such as alcohol use or pre-marital sex. These tensions of the society are noticed by, and produce confusion in, many of us. Implicated in this conflict between core values and moral decline, the viewer's consciousness easily flows into the conflicting messages of commercials that play to core values on the one hand and to moral decline on the other.
In The Cosby Show viewers have a celebration and a justification of their culture. This show in particular has as its star a celebrity; that is, a person celebrated. During the episodes flashy high budget commercials are commonplace, further contributing to the air of celebration surrounding the ritual of prime time television viewing. Returning to the house warming party, we see a depiction of a celebration of the recent acquisition of property by Cliff Huxtable's son-in-law. The house warming party depiction should serve to comfort viewers, and to reinforce the mainstream faith in the bootstrap capitalist fantasy analyzed by Henry Louis Gates in TV's Black World Turns - But Stays Unreal. From their bleak and tangled lives the very victims of the reactionary turn taken by American politics, insofar as affirmative action is concerned, look in through the television looking glass onto the reassuring illusions of The Cosby Show.
The Cosby Show assures viewers of the importance of their beliefs through each episode's successful resolution of the situations that drive the comedy. The successfully achieving and acquiring family presented in the show naturally reinforces the idealization of the core value "family" held by the dominant cultural ideology. The viewer, who has already been implicated in the culture's core values, sees again and again the successes of a strong and solid family. The Huxtable family also happens to be African-American, further extending the audience to whom reassurance is conveyed. Tragically, viewers from the majority culture may hold forth the situation portrayed by the program as their hypothetical foil for the advocate of affirmative action, characterized by remarks to the effect, "if only they would pull themselves up by the bootstraps like the Huxtables, everything would be fine."
Reactionary ignorance of the historical roots of minority disenfranchisement thrives on the comfort and reassurance given to the masses via the idiot box. This comfort and reassurance goes a long way toward convincing viewers of their status as individuals. Both the disenfranchised African-American viewer and an affluent viewer of the majority culture are comforted and reassured, in paradoxically divergent ways, by a show such as Cosby's. The disenfranchised African-American may feel that Cosby is indicative of their own potential for success if only they apply themselves with diligence to that end; the mirror image of this consideration is the reassurance the reactionary viewer receives as to the validity of their social ideology.
A sense of belonging to the culture, at the cost of attaining a mainstream bourgeois lifestyle, is transmitted to the African-American community. However, viewers of the majority culture may also feel a sense of belonging as they are comforted by the fantasies presented to them by the television networks. The very fact that a large portion of the consumers to whom commercials are meant to sell products are people who sorely need comfort and reassurance seems to preclude any move by network executives toward greater social realism on prime time television.
The networks' use of an easily identifiable celebrity such as Cosby is yet another indication of the need of a commercial medium to grab the public's attention with the public figures most readily apprehensible by all. Cosby himself frequently appears in commercials, so the viewer's identification of situation comedy and commercial with one another becomes closer than ever. In its turn the profit driven bias of prime time network television crystallizes, further reinforcing the tendency of network executives to dish up pabulum like Cosby. As long as network executives remain caught in the cycle of celebrity and profit there is little hope that The Cosby Show will amount to anything more than a cynical token gesture on their part.

Variations on mergesort. Part I. MIT Scheme

I'll be demonstrating some sample code for variations on the merge sort algorithm in various programming languages. The breakdown into...