Which programming language should I learn?

The most difficult choice for someone who is new to programming is what language to learn.

You are in the worst position to choose for yourself because you don’t have enough knowledge yet to assess the pros and cons.

If you ask people who do know, you will get tons of different responses.

Most important things:

  • There are enough learning materials.
  • The community is active and helpful to newcomers.
  • The language is difficult enough to challenge you, but easy enough not to discourage you.

Why it doesn’t matter:

  • Over the course of the next few years you will learn many languages (if you plan on being any good).
  • People say that if you pick an easy language (Python or Javascript) you will not learn the important concepts. This can be true, but only if you stop learning a few months in.
  • You’re going to do it wrong in the beginning no matter what language you pick.

The most important thing when learning to program is staying motivated. The easiest way to stay motivated is to work on projects that you think are cool and actually complete them. If you pick a language that is too hard, you are likely to give up in frustration. Nothing is stopping you from learning the harder languages after you pick up the basics of the easier ones.

Pros and Cons of Specific Languages

Here they are, in order from best to worst first language, IMHO:

Python

Pros: Beginner friendly, lots of community support for newbies, lots of libraries to get you going quickly.
Cons: If you don’t continually push yourself to learn, you can miss out on a lot of important computer science concepts that Python abstracts away for you.

Javascript

Pros: Very easy to start with, lots of libraries and code in the wild for you to study.
Cons: The community is full of non-programmers, so you are likely to pick up bad habits.

Scheme

Pros: Beautiful language, great books for intermediate-level study.
Cons: Small community, not very beginner friendly, few books that start at the ultra-noob level.

C

Pros: Will force you to learn all of the difficult concepts.
Cons: It will be hard. There are many traps and pitfalls that await you.

Java

Pros: Huge community and lots of libraries.
Cons: A sizable portion of the community only knows Java (the perils of Java schools), and the language forces you into a single way of thinking about programming. I strongly recommend staying away from Java as a first language.

More Tips

You should spend about a year getting to know your first language. After that you will want to start learning other languages. In my opinion you can’t call yourself a programmer until you are at least familiar with 3 languages, meaning that you have written a small- to medium-sized program (~3,000 lines) in each.

But that’s not all. Programming isn’t really about languages at all! Programming is about being familiar with data structures, algorithms and other fundamental computer science concepts, and how they apply to solving real problems. In the end it’s all about experience: build applications, solve problems, and eventually you will look back on what you have done and say, “Now I think I’m a programmer”.

But wait! That’s not all! Next year you will look back on the year before and say, “I had absolutely no idea what I was doing last year.” The year after you’ll say the same, and the year after that, and the year after that, and so on. Learning to program is a never-ending process that will keep challenging you until you decide to hang up your hat.

Also read Tips for a New Programmer.


What In The Hell Are Errors

GOTO All What In The Hell Articles

Almost every program you will ever write will have errors.

As soon as we started programming, we found out to our surprise that it wasn’t as easy to get programs right as we had thought. Debugging had to be discovered. I can remember the exact instant when I realized that a large part of my life from then on was going to be spent in finding mistakes in my own programs.
-Maurice Wilkes


Syntax Errors

Syntax errors are probably the most common errors in programming. Luckily they are usually also the easiest to fix. Most languages will point out syntax errors as soon as you try to run or compile your program. Usually you will be provided with a file and line number of the offending syntax. Some IDEs check your syntax as you type, providing a sort of spell checker for syntax.


Python

def add(x, y)
  return x + y

File "./test.py", line 2
  def add(x, y)
SyntaxError: invalid syntax

Chicken Scheme

(define add (x y) (+ x y))
Error: during expansion of (define ...) - in `define' - too many arguments: (define add (x y) (+ x y))

        Call history:

        <syntax>          (define add (x y) (+ x y))


Type Errors

Type errors occur when your code tries to do things like adding an integer and a string together. Depending on the language you use you may be notified of type errors when compiling your program or while your program is running. Type errors are also very common and are a little bit harder to fix.


Python

def add(x, y):
  return x + y
add(1, "a")

Traceback (most recent call last):
  File "./test.py", line 4, in <module>
    add(1, "a")
  File "./test.py", line 3, in add
    return x + y
TypeError: unsupported operand type(s) for +: 'int' and 'str'

Chicken Scheme

(+ 1 "a")
Error: (+) bad argument type: "a"

        Call history:

        <syntax>          (+ 1 "a")
        <eval>    (+ 1 "a")     <--


Logical Errors

Logical errors occur when you write code that performs correctly, but does not give the output that you desire. Logical errors can be the worst kind of bugs. There is rarely any support for detecting them built into the language, as there is technically nothing wrong with the code. These bugs happen somewhat frequently and can range from minor inconvenience to major disruption.

Below is an example of a logical error that would fall into the minor inconvenience category. We’re trying to define a function named add that takes two arguments, adds them together and returns the result. The code below does not have any syntax or type errors in it; it is perfectly valid code. However, instead of adding the two arguments it subtracts them, and we get the answer 0 when we expected the answer to be 4.


Python

def add(x, y):
  return x - y
add(2, 2)
#>>> 0

Chicken Scheme

(define (add x y)
  (- x y))
(add 2 2)
;>>> 0


Errors and Error Handling

Each language will have its own way of representing errors. Some languages provide a mechanism called error handling, which lets you control what happens when an error occurs. We’ll get deeper into errors and error handling in an upcoming WITH article.
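
As a small taste before that article, here’s a minimal sketch of what error handling looks like in Python (try/except); the exact mechanism varies by language. It catches the TypeError from the earlier add() example instead of letting it crash the program.

Python

def add(x, y):
    try:
        return x + y
    except TypeError:
        # This branch runs only when the + above raises a TypeError.
        print("can't add", repr(x), "and", repr(y))
        return None

print(add(1, 2))    # 3
print(add(1, "a"))  # prints the complaint, then returns None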

GOTO Table of Contents

What In The Hell Is Big O

GOTO All What In The Hell Articles

Big O is a notation for describing the worst-case performance of an algorithm or procedure. This post will show real-world examples of big O notation.


O(1) – Constant Time

A constant time algorithm always takes the same amount of time to run, no matter how large the input is.

Finding a book in a library.

There are many methods of finding a book in a library, but I’m going to stick with the simplest: ask the librarian. Assuming the librarian is familiar with their library, it should take about the same amount of time to find a book about mathematics as it would a book about poetry.

For another example see indexing an array.
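
To put the librarian example in code, here’s a hedged Python sketch: a dict maps a topic straight to a shelf, so a lookup takes (on average) the same time however large the catalogue grows. The catalogue contents are made up for illustration.

Python

# The "librarian" as a Python dict: looking up a key is O(1) on
# average -- the time doesn't grow with the size of the catalogue.
catalogue = {"mathematics": "shelf 12", "poetry": "shelf 7"}

def find_book(topic):
    # One lookup, whether the catalogue holds 2 entries or 2 million.
    return catalogue.get(topic)

print(find_book("poetry"))  # shelf 7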


O(n) – Linear Time

A linear algorithm runs once for each item (n).

Finding a CD in a stack of CDs

The simplest method of finding a CD in a stack of CDs is to just look at the one on top. If that CD is the one you’re looking for, you’ve found it. If it is not, look at the next one. Repeat until there are no CDs left. If the CD we’re looking for is the last one in the stack, we had to look at every CD to find it.

But what about all the times that you look for a CD and the CD you’re looking for is in the middle or on the top?  Big O notation describes the worst-case running time of your algorithm.  It assumes that every search will be for the last CD (or a CD that doesn’t exist).  Big O is not a statistical model of how often a given condition will occur in an algorithm!
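
The CD search above translates directly into a linear scan. A sketch in Python (the stack of titles is my own invention) that counts how many CDs we look at:

Python

# Linear search through a stack of CDs: look at the top CD, then the
# next, and so on -- in the worst case we touch all n CDs.
def find_cd(stack, title):
    looked_at = 0
    for cd in stack:
        looked_at += 1
        if cd == title:
            return looked_at
    return looked_at  # looked at every CD and didn't find it

cds = ["OK Computer", "Kid A", "Amnesiac", "In Rainbows"]
print(find_cd(cds, "In Rainbows"))  # 4: the worst case, it was last
print(find_cd(cds, "OK Computer"))  # 1: the lucky case, it was on top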


O(log n) – Logarithmic Time

A logarithmic algorithm runs log2(n) times.

Let’s play a guessing game.  I’m going to pick a number between 0 and 100 and you have to guess it.  You have to try to guess the number in as few guesses as possible.  Every time you guess I will tell you if my number is higher or lower.

You could devise a number of strategies for playing this game, however one strategy is particularly good at this type of game: binary search. Each time we guess, we guess the number right in the middle of the remaining range, halving the number of possibilities with every guess. Let’s play out the worst case, with the secret number being 100: we guess 50 (higher), 75 (higher), 88 (higher), 94 (higher), 97 (higher), 99 (higher), and finally 100.

That game plays out the worst-case performance of an O(log n) algorithm like binary search. Instead of making 100 guesses at worst, we only make 7!
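
The guessing game is exactly binary search. Here’s a small Python sketch (mine, not the article’s) that plays the game, counts the guesses, and confirms the worst case over 0 to 100 really is 7:

Python

# Binary search as the guessing game: halve the remaining range on
# every guess until we hit the secret number.
def guesses_needed(secret, low=0, high=100):
    guesses = 0
    while low <= high:
        guesses += 1
        mid = (low + high) // 2  # guess the middle of the range
        if mid == secret:
            return guesses
        elif mid < secret:
            low = mid + 1   # "higher" -- discard the lower half
        else:
            high = mid - 1  # "lower" -- discard the upper half

# The worst case over every possible secret number:
print(max(guesses_needed(n) for n in range(101)))  # 7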


O(n²) – Quadratic Time

A quadratic time algorithm runs n² times, or n times for each of the n items.

Check if there are any duplicates in a deck of cards.

The simplest way to check if there are any duplicate cards in a deck is to pick the first card from the deck. Then compare that card to every other card in the deck. If there are no matches take the second card from the deck. Then compare the second card to every other card in the deck. Continue until you have checked all the cards.

By the time we are done we have compared each of the 52 cards against the whole deck, for roughly 52 × 52 = 2704 comparisons (a card is never compared with itself, so strictly it’s 52 × 51 = 2652, but big O doesn’t care about that difference). There are ways to lessen this number, like not looking through cards that you have already compared. However the most important thing to remember with big O notation is that it always describes the worst-case time, not the average.
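
The card-by-card check described above is just two nested loops. A hedged Python sketch that also counts the comparisons:

Python

# Compare every card against every card in the deck. With no
# duplicates present we make all 52 * 52 = 2704 comparisons.
def has_duplicate(deck):
    comparisons = 0
    for i, card in enumerate(deck):
        for j, other in enumerate(deck):
            comparisons += 1
            if i != j and card == other:
                return True, comparisons
    return False, comparisons

deck = list(range(52))      # a stand-in for 52 distinct cards
print(has_duplicate(deck))  # (False, 2704)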

Here’s what the process looks like if you don’t re-check cards you’ve already compared: the first card is compared against the 51 after it, the second against the 50 after it, and so on, for 51 + 50 + … + 1 = 1326 comparisons.

I’ll come back later and add some more big O notations, so check back. For now I’ll leave you with a question to ponder. What is the big O notation of that improved process?


Sorting (this will be the topic of another WITH article, but for now…)

  • Visual Representation of Sorting Algorithms in Javascript
  • Visualizing Sorting Algorithms
  • Parallel GPU Sorting (pdf)
  • What different sorting algorithms sound like

GOTO Table of Contents