MIT 600 part 1: Mapping World to Something Computational

How do you start teaching/learning Computer Science and Programming? Is “Hello World” the right place to start? But what is computation? What is programming? Relax. It’s easier than it sounds. I’ll use the usual recipe metaphor to approach these questions. The recipe analogy is used in a lot of places, including the first lecture of MIT’s “600 Introduction to Computer Science and Programming” course (this post is a summary of my notes from that lecture). Why recipes? For several reasons. A recipe is a sequence of steps. So is a computer program. We can map (solutions to) real-world problems to a series of steps, just as we can describe the actions necessary to transform ingredients into meals. And computation is what we do when we “take a description of a problem and map it into something computational”. Computation is not about computers, but about a certain way of thinking. It is about dividing a problem into a series of steps that even a machine can run. This, along with some basic computing concepts, is explained well in the first MIT 600 lecture, and if you want to know more, just click “Continue reading”.

(On this page of the course displaying the video you can also find the transcripts.)

MIT OpenCourseWare gives anyone with enough Internet access the opportunity to follow this course, with videos, transcripts, assignments etc. Be sure to visit the MIT OCW page, where you can find many interesting courses to choose from, not only in the field of Electrical Engineering and Computer Science, even though not all offer as much material as this one, with videos of lectures and all.

MAPPING THE WORLD INTO SOMETHING COMPUTATIONAL

When working on computer programs we are working on models of real-world problems. Even a basic “Hello World” program. We are taking a complex situation, choosing the relevant aspects, and putting them in “words” that a computer can understand. Like when we have to tell a story of some event – we have to find the right words and put them in the right sequence. Or when we compose recipes. We look at the ingredients, think of the desired result and the steps required to reach it, and describe the procedures in simple terms, so that anyone who has those ingredients can repeat the steps and get those results any time, if the right conditions are met.
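
As a toy illustration (my own sketch, not from the lecture), a recipe really can be written down as nothing more than an ordered sequence of steps that get followed one after another:

    # A recipe written as an ordered list of steps (a toy example).
    pancake_recipe = [
        "mix flour, milk and eggs into a batter",
        "heat the pan",
        "pour a ladle of batter",
        "flip when bubbles appear",
        "serve",
    ]

    # "Executing" the recipe just means following the steps in order.
    for number, step in enumerate(pancake_recipe, start=1):
        print(number, step)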

Computers are good at working with imperative knowledge. The lecture distinguishes two categories of knowledge:

1) “Declarative knowledge” (see the Wikipedia article on descriptive knowledge)
2) “Imperative knowledge” (see the Wikipedia article on procedural knowledge)

Declarative is about facts (what something is), imperative is about sequences of steps (how to do something). I won’t get too far into the subject, since I am way behind in Epistemology and Cognitive Psychology (topics I am interested in and will read about in the future). The thing is, even a machine can “learn” to execute steps. How does this happen? Look at the drawing on the blackboard.
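
To make the distinction concrete, the classic example is the square root: declarative knowledge tells us that the square root of x is the y such that y * y = x, which says nothing about how to find y; imperative knowledge is a recipe for finding it, like successive approximation. A minimal sketch in Python (the code is mine, not copied from the lecture):

    # Imperative knowledge: a recipe for approximating the square root of x
    # by successive approximation (guess, check, improve the guess).
    def square_root(x, epsilon=0.0001):
        guess = x / 2.0
        while abs(guess * guess - x) >= epsilon:
            guess = (guess + x / guess) / 2.0   # average the guess with x/guess
        return guess

    print(square_root(16))   # roughly 4.0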

MIT 600 lecture video screenshot explaining computation

THE PROGRAM COUNTER

This paragraph is rather technical but not essential for understanding the basic principles. If you find it fun, read it; if it starts confusing you, or if it’s stuff you already know, just skip it. So, computers are devices that take input, process the data and then output or store the results. This processing is defined in steps. The program counter is what keeps track of which step (instruction) is being executed. A series of steps can make up an action that may involve input (ask the user’s name), output (printing “Hello World” to the screen) or processing (check a condition, add x to z, if something then do this else do that). If we just print onto the screen, the computer executes the instruction and moves on to the next one. If we are executing a conditional, like if x then y else z, the program evaluates the condition x and, based on that, changes the value of ‘next step’ to point to instruction y or z. But enough of that.
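
Here is a toy model of such a machine (mine, not from the lecture): a program counter steps through a list of numbered instructions, and a conditional jump is simply an instruction that changes which step comes next:

    # A toy model of a program counter (pc) stepping through numbered
    # instructions; "jump_if_zero" shows how a conditional changes which
    # step comes next. (My own sketch, not code from the course.)
    program = [
        ("set", 3),            # 0: accumulator = 3
        ("print", None),       # 1: print the accumulator
        ("decrement", None),   # 2: accumulator = accumulator - 1
        ("jump_if_zero", 5),   # 3: if accumulator == 0, jump to step 5
        ("jump", 1),           # 4: otherwise go back to step 1
        ("halt", None),        # 5: stop
    ]

    pc = 0           # the program counter: index of the next instruction
    accumulator = 0
    while True:
        operation, argument = program[pc]
        if operation == "set":
            accumulator = argument
        elif operation == "print":
            print(accumulator)
        elif operation == "decrement":
            accumulator -= 1
        elif operation == "jump_if_zero" and accumulator == 0:
            pc = argument
            continue         # jump: skip the normal move to the next step
        elif operation == "jump":
            pc = argument
            continue
        elif operation == "halt":
            break
        pc += 1              # default: move on to the next instruction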

THE “PRIMITIVES” AND TURING COMPLETENESS

We don’t really have to worry about the computer that much. Most of the time we don’t have to think about how the computer does things, and we certainly don’t have to teach it how to do every little thing every single time. We can compose our programs using building blocks called “primitives”. Primitives are basic computer operations we use as building blocks to define more complex behaviors. To quote Prof. Grimson,

“In 1936, that same guy, Alan Turing, showed that with six simple primitives, anything that could be described in a mechanical process, it’s actually algorithmically, could be programmed just using those six primitives.”

Any computer language that can handle those six primitives is considered Turing complete (do not confuse this with the Turing test; see Wikipedia on “Turing completeness” for explanations). Basically, if a problem can be expressed algorithmically, it can be solved with a Turing complete programming language. That is why we can do the same things in different computer languages, even though some are better at some things, some are more generalist and others more specialized. But we won’t really be thinking about programming at the Turing machine level. As Prof. Grimson explains:

“OK. Now, fortunately we’re not going to start with Turing’s six primitives, this would be really painful programming, because they’re down at the level of, ‘take this value and write it onto this tape.’ (…) What we’re going to see with programming language is that we’re going to use higher-level abstracts. A broader set of primitives, but nonetheless the same fundamental thing holds. With those six primitives, you can do it.”

Computer programming even lets us define our own “primitives”, and some can be at a very high level of abstraction. So, if we are making a web application for a blog, we can define data types like author and post, and instructions such as publish, edit and delete. That way we are mapping a real-life problem – publishing content – into computable steps.
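
A minimal sketch of what that might look like in Python (the names Post, publish and so on are hypothetical, just to illustrate the idea of building our own higher-level primitives):

    # Hypothetical higher-level "primitives" for a blog application; the
    # names (Post, publish, edit, delete) are made up for illustration.
    class Post:
        def __init__(self, author, title, body):
            self.author = author
            self.title = title
            self.body = body
            self.published = False
            self.deleted = False

        def publish(self):
            self.published = True

        def edit(self, new_body):
            self.body = new_body

        def delete(self):
            self.deleted = True

    post = Post("apprenticecoder", "MIT 600 part 1", "Mapping the world...")
    post.publish()
    print(post.title, "published:", post.published)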

FIXED-PROGRAM COMPUTERS

Some of the earliest computers were fixed-program computers, which were specialized for specific tasks and had all the steps hard-wired into the circuits (we’ll get a feel for how to play with hardware in one of the next posts, on the topic of Arduino). An example of this is the Atanasoff-Berry Computer from 1941, which solved systems of linear equations.

The 1997 replica of the Atanasoff-Berry Computer at Durham Center, Iowa State University, from Wikipedia, Creative Commons license, by Manop, http://en.wikipedia.org/wiki/File:Atanasoff-Berry_Computer.jpg

Programming gives us the luxury of modifying the computer’s behavior without touching the circuits. It is as if the computer took the circuit diagram as input and then modified itself accordingly.

THE INTERPRETER

How can we modify computer behavior without little robots that move circuits around? Thanks to the interpreter. The interpreter understands the text of the program and can “explain” it to the machine (the exact way of doing this depends on the context).
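
As a very rough sketch (my own toy example, not how a real interpreter is implemented), an interpreter reads a piece of text and maps each instruction to an action the machine already knows how to perform:

    # A toy "interpreter" (my own sketch): it reads lines of text and turns
    # each one into an action the machine already knows how to perform.
    def interpret(source):
        for line in source.splitlines():
            command, _, argument = line.partition(" ")
            if command == "say":
                print(argument)
            elif command == "twice":
                print(argument, argument)
            else:
                print("I do not understand:", command)

    interpret("say Hello World\ntwice echo")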

PROGRAMMING LANGUAGES

Towards the end of the lecture there is a brief overview of how programming languages are categorized. The categories are:

1) High-level and low-level programming languages (the level of abstraction: how close they are to the machine or to the real-world problems they model)
2) General and targeted languages (more or less specialized)
3) Compiled and interpreted languages

SYNTAX AND SEMANTICS

Like with any other language, we have to distinguish between syntax and semantics. Syntax is about how something is written and whether the individual pieces are legal when written in a certain way. Semantics (static and full) is about the meaning of the whole thing, whether the program makes sense and does what we intend. The bugs that have to do with semantics are harder to find and fix. Good programming style and methods, like the unit testing that is taught in later lectures, can help us feel much less lost inside our code and find those types of bugs.
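
A few Python-flavoured examples of the difference (my own, to make the distinction concrete): a syntax error is rejected before anything runs, a static semantic error is legal syntax whose pieces do not fit together, and a full semantic (logic) bug runs happily but does the wrong thing:

    # A syntax error: not legal Python at all, rejected before anything runs.
    #     print("Hello World"        <- missing closing parenthesis

    # A static semantic error: legal syntax, but the pieces do not fit
    # together; Python raises a TypeError when this line is executed.
    #     "3" + 4

    # A full semantic (logic) bug: the program runs without complaint,
    # the result is simply wrong.
    def average(a, b):
        return a + b / 2     # missing parentheses: computes a + (b / 2)

    print(average(2, 4))     # prints 4.0 instead of the intended 3.0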

Towards the end of the lecture, there is also an explanation of some basic programming concepts like data types, expressions, statements, commands, operands and some simple Python examples.
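
For reference, a few lines in the spirit of those examples (these particular lines are mine, not copied from the lecture):

    # Data types, expressions and statements in miniature.
    x = 3                                # an assignment statement: bind x to an int
    y = x * 2 + 1                        # an expression built from operands and operators
    greeting = "Hello" + " " + "World"   # strings are another data type

    if y > 5:                            # a conditional statement
        print(greeting, y)               # prints: Hello World 7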

TEACHING THE BASICS

MIT’s “600 Introduction to Computer Science and Programming” course has a very interesting approach. It is meant for students with little or no experience in programming, but that doesn’t mean it’s boring or trivial. On the contrary. Personally I am a fan of introductory courses, and find that a proper explanation of the fundamentals is indispensable for any further acquisition of knowledge. Proper teaching is not about stringing together tips and tricks and technical terms, but about offering a good overview of the subject that combines declarative and imperative knowledge, theory and practice, one step at a time, without taking important issues for granted. The idea is to familiarize the student with the “frame of mind” of the topic and the related terminology, to present the basic theory and techniques, and to give a motivating glimpse of the advanced topics that lie ahead.

The 600 course I have followed on-line is the Fall 2007 edition. It was taught by Prof. Eric Grimson and Prof. John Guttag. They start by explaining the strategic goals of the course and some administrative matters. The objectives of the course are, in essence, to:

  1. Teach the students to think like computer scientists and map problems to something computational
  2. Give them a basic vocabulary of tools needed to read and write short computer programs
  3. Help them understand capabilities and limits of computation – what can and can’t be solved computationally

Consider that taking a look at an introductory course like this may even help an advanced practitioner. It may help her or him put into words some of the concepts that she or he has already grasped at a more intuitive level, and maybe even clarify some principles. And if done right, it might also be fun and motivating.

P.S. PROF. ERIC GRIMSON NAMED CHANCELLOR OF MIT (EFFECTIVE MARCH 1, 2011)

“Eric Grimson, Professor of Medical Engineering and Head of the Department of Electrical Engineering and Computer Science, has been named Chancellor of MIT. Grimson’s Introduction to Computer Science and Programming is one of the most popular courses on MIT OpenCourseWare.”
