Computer Science


Daniel Dennett's Darwin's Dangerous Idea is a very interesting book that presents Charles Darwin's theory of evolution from a computational standpoint.

The following quote is from Chapter 7. In my opinion it helps clarify some misunderstandings about the nature and research goals of computer science.

Now, some scientific problems are not amenable to solution-by-simulation, and others are probably only amenable to solution-by-simulation, but in between there are problems that can in principle be addressed in two different ways, reminiscent of the two different ways of solving the train problem given to von Neumann — a "deep" way via theory, and a "shallow" way via brute-force simulation and inspection.

It would be a shame if the many undeniable attractions of simulated worlds drowned out our aspirations to understand these phenomena in the deep ways of theory. I spoke with Conway once about the creation of the Game of Life, and he lamented the fact that explorations of the Life world were now almost exclusively by "empirical" methods — setting up all the variations of interest on a computer and letting her rip to see what happens. Not only did this usually shield one from even the opportunity of devising a strict proof of what one found, but, he noted, people using computer simulations are typically insufficiently patient; they try out combinations and watch them for fifteen or twenty minutes, and if nothing of interest has happened, they abandon them, marking them as avenues already explored and found barren.

This myopic style of exploration risks closing off important avenues of research prematurely. It is an occupational hazard of all computer simulators, and it is simply their high-tech version of the philosopher's fundamental foible: mistaking a failure of imagination for an insight into necessity. A prosthetically enhanced imagination is still liable to failure, especially if it is not used with sufficient rigor.
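
As a concrete illustration of the "empirical" method Conway describes, here is a minimal Python sketch (illustrative code only, not taken from the book) that encodes the two Life rules, sets up a configuration, and lets it rip to see what happens. The pattern and names are merely placeholders.

    from collections import Counter

    def step(live):
        # One generation of Conway's Game of Life.
        # `live` is a set of (x, y) coordinates of live cells.
        counts = Counter(
            (x + dx, y + dy)
            for (x, y) in live
            for dx in (-1, 0, 1)
            for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)
        )
        # Birth: exactly 3 live neighbours; survival: 2 or 3.
        return {cell for cell, n in counts.items()
                if n == 3 or (n == 2 and cell in live)}

    # A glider, the classic pattern: "let her rip" and inspect the result.
    state = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    for generation in range(20):
        state = step(state)
    print(sorted(state))  # after 20 generations the glider has drifted diagonally

Watching such runs for a few minutes is exactly the shallow exploration Dennett warns about; proving what a pattern must eventually do requires the "deep" way of theory.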


The following is from E. W. Dijkstra's "On the cruelty of really teaching computing science". It was written in 1988 by one of the most respected pioneers of Computer Science, with (by then) a track record of more than 25 years of teaching and research in academia and industry, in Europe and the US.

The problem with educational policy is that it is hardly influenced by scientific considerations derived from the topics taught, and almost entirely determined by extra-scientific circumstances such as the combined expectations of the students, their parents and their future employers, and the prevailing view of the role of the university: is the stress on training its graduates for today's entry-level jobs or to providing its alumni with the intellectual bagage and attitudes that will last them another 50 years? [...] Do the universities provide for society the intellectual leadership it needs or only the training it asks for?

[...]

So, if I look into my foggy crystal ball at the future of computing science education, I overwhelmingly see the depressing picture of "Business as usual". The universities will continue to lack the courage to teach hard science, they will continue to misguide the students, and each next stage of infantilization of the curriculum will be hailed as educational progress.


Writing scientific articles like a native English speaker: top ten tips for Portuguese speakers


Citation-based statistics, such as the impact factor, are often used to assess scientific research, but are they the best measures of research quality?

Three international mathematics organizations have released the report Citation Statistics, on the use of citations in assessing research quality, a topic of increasing interest throughout the world's scientific community.

The report is written from a mathematical perspective and strongly cautions against the over-reliance on citation statistics such as the impact factor and h-index. These are often promoted because of the belief in their accuracy, objectivity, and simplicity, but these beliefs are unfounded.

Among the report's key findings:

  1. Statistics are not more accurate when they are improperly used; statistics can mislead when they are misused or misunderstood.
  2. The objectivity of citations is illusory because the meaning of citations is not well-understood. A citation's meaning can be very far from "impact".
  3. While having a single number to judge quality is indeed simple, it can lead to a shallow understanding of something as complicated as research. Numbers are not inherently superior to sound judgments.

The report promotes the sensible use of citation statistics in evaluating research and points out several common misuses. It is written by mathematical scientists about a widespread application of mathematics. While the authors of the report recognize that assessment must be practical and that easily-derived citation statistics will be part of the process, they caution that citations provide only a limited and incomplete view of research quality. Research is too important, they say, to measure its value with only a single coarse tool.

The report was commissioned by the International Mathematical Union (IMU) in cooperation with the International Council for Industrial and Applied Mathematics (ICIAM) and the Institute of Mathematical Statistics (IMS).

It draws upon a broad literature on the use of citation data to evaluate research, including articles on the impact factor (the most common citation-based statistic) and the h-index along with its many variants. The work was also based on practices reported by mathematicians and other scientists from around the world.
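
Both statistics the report scrutinizes are simple to define, which is much of their appeal. The following Python sketch shows the usual definitions; the citation counts are invented purely to illustrate the arithmetic.

    def h_index(citations):
        # Largest h such that h papers have at least h citations each.
        counts = sorted(citations, reverse=True)
        h = 0
        for i, c in enumerate(counts, start=1):
            if c >= i:
                h = i
            else:
                break
        return h

    def impact_factor(cites_this_year, citable_items):
        # Citations received this year to items published in the previous
        # two years, divided by the number of citable items in those years.
        return cites_this_year / citable_items

    # Hypothetical author with seven papers:
    print(h_index([25, 8, 5, 4, 3, 1, 0]))   # -> 4
    # Hypothetical journal: 210 citations to 150 citable items
    print(impact_factor(210, 150))            # -> 1.4

Their very simplicity is the report's point: a single coarse number is easy to compute, but it compresses away most of what matters about research quality.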


Beall's List of Predatory Journals and Publishers is a list of questionable scholarly open-access publishers and standalone journals.


"Profession" é um conto de Isaac Asimov publicado originalmente na revista Astounding Science Fiction (julho/1957) e posteriormente recolhido na coletânea Nine Tomorrows (1959).

I very enthusiastically recommend it to all students. You can read it here

