
Richard Feynman on Understanding: Introduction to Computers

19 Jun 2013

Yesterday, I read the first chapter of the Feynman Lectures on Computation. In it he talks about what makes a computer, exploring its essential building blocks. He explains that from a few simple primitives extraordinary complexity can be derived, and that any computer, given the right primitives, can accomplish the same tasks as more sophisticated ones, given enough time. The fact is, a computer is actually quite dumb. But it is very fast at doing dumb things. By building abstractions on top of these dumb but fast things, you can achieve amazing feats and make the computer look smart. As more layers of abstraction are built, we can express ideas concisely and let the abstractions convert them into the dumb things the computer understands. But every abstraction separates us further from the machine. We can express more with less, yet we start to forget that the computer is dumb. Occasionally, it’s worth reminding ourselves of this fact: that the complex can arise from the combination of the simple. It is in that journey that we might rediscover the beauty of computers, or find a better way to do things. Heck, sometimes it’s worth revisiting the basics just to make sure we haven’t lost sight of the forest for the trees. And sometimes, you just do it for fun!
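To make that concrete, here is a toy sketch of the idea (my own illustration, nothing from the book): the only primitive the machine is allowed is "add one," and by layering a few dumb loops on top of it we get addition, then multiplication, then exponentiation. Each layer looks a little smarter than the one below, but underneath it is all the same fast, dumb thing.

# Toy illustration (not from the book): the machine's only primitive is "add one".
# Everything above it is an abstraction built out of that one dumb, fast operation.

def increment(n):
    """The single dumb primitive."""
    return n + 1

def add(a, b):
    """Addition: increment a, b times."""
    for _ in range(b):
        a = increment(a)
    return a

def multiply(a, b):
    """Multiplication: add a to a running total, b times."""
    total = 0
    for _ in range(b):
        total = add(total, a)
    return total

def power(a, b):
    """Exponentiation: multiply a running product by a, b times."""
    result = 1
    for _ in range(b):
        result = multiply(result, a)
    return result

print(power(3, 4))  # 81, computed out of nothing but "add one"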

That brings me to my favorite part about the lecture. No, it’s not him saying that computer science is not a science. No, it’s not his analogy of a computer to a clerk and filing system. No, it’s not his insight into the basic operations a computer needs to support. It’s his eager attitude towards the topic. Here, we have Richard Feynman. A great mind. Highly respected by so many. A man who has furthered the field of science in so many profound ways. And yet he is giving an “Intro to Computers” lecture like a kid who just discovered how computers work. I’m reading words on a page and yet his excitement is palpable.

But my absolute favorite part comes after he explains the basic operations of a computer and decides to assign the reader some problems to work out. Before issuing the problems, he gives advice on how to go about learning and understanding computer science. It’s so good that I have no choice but to quote all three paragraphs.

Now there are two ways in which you can increase your understanding of these issues. One way is to remember the general ideas and then go home and try to figure out what commands you need and make sure you don’t leave one out. Make the set shorter or longer for convenience and try to understand the tradeoffs by trying to do problems with your choice. This is the way I would do it because I have that kind of personality! It’s the way I study—to understand something by trying to work it out or, in other words, to understand something by creating it. Not creating it one hundred percent, of course; but taking a hint as to which direction to go but not remembering the details. These you work out for yourself. The other way, which is also valuable, is to read carefully how someone else did it. I find the first method best for me, once I have understood the basic idea. If I get stuck I look at a book that tells me how someone else did it. I turn the pages and then I say “Oh, I forgot that bit”, then close the book and carry on. Finally, after you’ve figured out how to do it you read how they did it and find out how dumb your solution is and how much more clever and efficient theirs is! But this way you can understand the cleverness of their ideas and have a framework in which to think about the problem. When I start straight off to read someone else’s solution I find it boring and uninteresting, with no way of putting the whole picture together. At least, that’s the way it works for me!

Throughout the book, I will suggest some problems for you to play with. You might feel tempted to skip them. If they’re too hard, fine. Some of them are pretty difficult! But you might skip them thinking that, well, they’ve probably already been done by somebody else; so what’s the point? Well, of course they’ve been done! But so what? Do them for the fun of it. That’s how to learn the knack of doing things when you have to do them. Let me give you an example. Suppose I wanted to add up a series of numbers, 1 + 2 + 3 + 4 + 5 + 6 + 7 … up to, say, 62. No doubt you know how to do it; but when you play with this sort of problem as a kid, and you haven’t been shown the answer…it’s fun trying to figure out how to do it. Then, as you go into adulthood, you develop a certain confidence that you can discover things; but if they’ve already been discovered, that shouldn’t bother you at all. What one fool can do, so can another, and the fact that some other fool beat you to it shouldn’t disturb you: you should get a kick out of having discovered something.

Most of the problems I give you in this book have been worked over many times, and many ingenious solutions have been devised for them. But if you keep proving stuff that others have done, getting confidence, increasing the complexities of your solutions—for the fun of it—then one day you’ll turn around and discover that nobody actually did that one! And that’s the way to become a computer scientist.

Feynman, Richard P. Feynman Lectures on Computation. Eds. Tony Hey and Robin W. Allen. Boulder: Westview Press, 2000. pp. 15-16.

This is beautiful. A man truly not caught up in his ego. Perhaps it was this attitude alone that allowed him to accomplish so much. Nothing was beneath him, and therefore nothing was beyond him. I very much look forward to the rest of the lectures, not just for the insights on computing, but for the insights on life as well.
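In that spirit, I couldn’t resist playing with his little warm-up sum myself. A quick sketch (mine, not the book’s): first adding 1 through 62 the dumb, fast way a machine would, then the clever way, pairing 1 with 62, 2 with 61, and so on, which gives the familiar n(n + 1)/2.

# Feynman's warm-up: 1 + 2 + 3 + ... + 62.

# The dumb-but-fast way: grind through every number.
total = 0
for n in range(1, 63):
    total += n
print(total)  # 1953

# The clever way: 31 pairs that each sum to 63, i.e. n * (n + 1) / 2.
print(62 * 63 // 2)  # also 1953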