What In The Hell Are Errors

GOTO All What In The Hell Articles

Almost every program you will ever write will have errors.

As soon as we started programming, we found out to our surprise that it wasn’t as easy to get programs right as we had thought. Debugging had to be discovered. I can remember the exact instant when I realized that a large part of my life from then on was going to be spent in finding mistakes in my own programs.
-Maurice Wilkes


Syntax Errors

Syntax errors are probably the most common errors in programming. Luckily they are usually also the easiest to fix. Most languages will point out syntax errors as soon as you try to run or compile your program. Usually you will be provided with a file and line number of the offending syntax. Some IDEs check your syntax as you type, providing a sort of spell checker for syntax.


Python

def add(x, y)
  return x + y

  File "./test.py", line 1
    def add(x, y)
                 ^
SyntaxError: invalid syntax

Chicken Scheme

(define add (x y) (+ x y))
Error: during expansion of (define ...) - in `define' - too many arguments: (define add (x y) (+ x y))

        Call history:

        <syntax>          (define add (x y) (+ x y))


Type Errors

Type errors occur when your code tries to do things like adding an integer and a string together. Depending on the language you use you may be notified of type errors when compiling your program or while your program is running. Type errors are also very common and are a little bit harder to fix.


Python

def add(x, y):
  return x + y
add(1, "a")

Traceback (most recent call last):
  File "./test.py", line 3, in <module>
    add(1, "a")
  File "./test.py", line 2, in add
    return x + y
TypeError: unsupported operand type(s) for +: 'int' and 'str'

Chicken Scheme

(+ 1 "a")
Error: (+) bad argument type: "a"

        Call history:

        <syntax>          (+ 1 "a")
        <eval>    (+ 1 "a")     <--


Logical Errors

Logical errors occur when you write code that performs correctly, but does not give the output that you desire. Logical errors can be the worst kind of bugs. There is rarely any support for detecting them built into the language, as there is technically nothing wrong with the code. These bugs happen somewhat frequently and can range from minor inconvenience to major disruption.

Below is an example of a logical error that would fall into the minor inconvenience category. We’re trying to define a function named add that takes two arguments, adds them together and returns the result. The code we have below does not have any syntax or type errors in it and it is perfectly valid code. However instead of adding the two arguments it subtracts them and we get the answer 0 when we expected the answer to be 4.


Python

def add(x, y):
  return x - y
add(2, 2)
#>>> 0

Chicken Scheme

(define (add x y)
  (- x y))
(add 2 2)
;>>> 0


Errors and Error Handling

Each language will have its own way of representing errors. Some languages provide a mechanism called error handling which lets you control what happens when an error occurs. We’ll get deeper into errors and error handling in an upcoming WITH article.
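As a quick taste, here is a minimal sketch of error handling in Python using try/except. This is just one language's mechanism; the details differ from language to language.

```python
def add(x, y):
  return x + y

try:
  result = add(1, "a")
except TypeError as error:
  # Instead of crashing, the program catches the error and recovers.
  result = None
  print("caught:", error)
```

The TypeError from the type errors section above is caught here, and the program keeps running instead of printing a traceback and exiting.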

GOTO Table of Contents


What In The Hell Is Big O

GOTO All What In The Hell Articles

Big O is a notation for describing an upper bound on how an algorithm’s running time grows with the size of its input, most commonly its worst-case performance. This post will show real-world examples of big O notation.


O(1) – Constant Time

A constant time algorithm will always take the same amount of time to run.

Finding a book in a library.

There are many methods for finding a book in a library, but I’m going to stick with the simplest: ask the librarian.  Assuming the librarian is familiar with their library, it should take about the same amount of time to find a book about mathematics as it would a book about poetry.

For another example see indexing an array.
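In code, indexing an array is the classic constant-time operation: the index tells us exactly which slot to look in, so the lookup takes the same time no matter how big the array is. A small Python sketch:

```python
library = ["mathematics", "poetry", "history", "cooking"]

# Indexing jumps straight to a slot; neither lookup
# searches through the rest of the list.
first_book = library[0]
last_book = library[-1]
```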


O(n) – Linear Time

A linear algorithm runs once for each item (n).

Finding a CD in a stack of CDs

The simplest method of finding a CD in a stack of CDs is to just look at the one on top.  If that CD is the one you’re looking for, you’ve found it.  If it is not, look at the next one.  Repeat until there are no CDs.  If the CD we’re looking for is the last one in the stack, we had to look at every CD to find it.
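The search described above can be sketched in a few lines of Python (the album titles are just made-up stand-ins for the CDs):

```python
def find_cd(stack, title):
  # Look at each CD from the top of the stack down.
  for position, cd in enumerate(stack):
    if cd == title:
      return position  # found it
  return None          # looked at every CD; it isn't here

stack = ["OK Computer", "Abbey Road", "Kind of Blue"]
```

Searching for the bottom CD, or for one that isn’t in the stack at all, has to look at all n CDs.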

But what about all the times that you look for a CD and the CD you’re looking for is in the middle or on the top?  Big O notation describes the worst-case running time of your algorithm.  It assumes that every search will be for the last CD (or a CD that doesn’t exist).  Big O is not a statistical model of how often a given condition will occur in an algorithm!


O(log n) – Logarithmic Time

A logarithmic algorithm runs about log2(n) times: each step cuts the remaining work in half.

Let’s play a guessing game.  I’m going to pick a number between 0 and 100 and you have to guess it.  You have to try to guess the number in as few guesses as possible.  Every time you guess I will tell you if my number is higher or lower.

You could devise a number of strategies for playing this game, but one strategy is particularly good at this type of game: binary search.  Each time we guess, we guess the number right in the middle.  By doing this we halve the number of possibilities every time.

Playing the game this way gives the worst-case performance of an O(log n) algorithm like binary search.  Instead of making up to 100 guesses, we make at most 7!
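Here is a minimal sketch of that guessing strategy in Python, counting how many guesses it takes (the function name is mine, not a standard library API):

```python
def count_guesses(secret, low=0, high=100):
  # Binary search: always guess the middle of the remaining range.
  guesses = 0
  while low <= high:
    guesses += 1
    middle = (low + high) // 2
    if middle == secret:
      return guesses
    elif middle < secret:
      low = middle + 1    # "my number is higher"
    else:
      high = middle - 1   # "my number is lower"
```

Guessing a secret of 50 wins on the first try; the worst secrets, such as 100, take 7 guesses, since repeatedly halving 101 possibilities runs out after about log2(101) ≈ 6.7 steps.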


O(n²) – Quadratic Time

A quadratic time algorithm runs n² times, or n times for each of the n items.

Check if there are any duplicates in a deck of cards.

The simplest way to check if there are any duplicate cards in a deck is to pick the first card from the deck. Then compare that card to every other card in the deck. If there are no matches take the second card from the deck. Then compare the second card to every other card in the deck. Continue until you have checked all the cards.
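That procedure is a loop within a loop, which is where the n² comes from. A sketch in Python, using numbers to stand in for cards:

```python
def has_duplicates(deck):
  # For every card, compare it against every other card.
  for i in range(len(deck)):
    for j in range(len(deck)):
      if i != j and deck[i] == deck[j]:
        return True
  return False
```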

By the time we are done we have looked at each card in the deck 52 times, for a total of 52 * 52 = 2704 comparisons. There are ways to lessen this number, like not looking through cards that you have already compared. However, the most important thing to remember about big O notation is that it always describes the worst-case time, not the average.

Here’s what the process looks like if you don’t re-check cards you’ve already compared.

I’ll come back later and add some more big O notations, so check back. For now I’ll leave you with a question to ponder. What is the big O notation of the diagram above?


Sorting (this will be the topic of another WITH article, but for now…)

Visual Representation of Sorting Algorithms in Javascript

Visualizing Sorting Algorithms


Parallel GPU Sorting (pdf)

What different sorting algorithms sound like

GOTO Table of Contents

What In The Hell Are Trees

GOTO All What In The Hell Articles

A tree is a hierarchical data structure that contains a set of nodes. The node is the basic building block of a tree, and to understand trees you have to understand nodes.

A node is a data structure that has a parent, can contain a value and can have 0 or more child nodes.  A node with no child nodes is called a leaf node, or more commonly just a leaf.

A node with no parent is referred to as a root node.

Each node can have a child node of its own.

A node can have more than one child, too.

By now you should grasp the basic structure of a tree, but what are they good for and where are they used?  Well the good news is that trees are all around you.  Your operating system’s directory structure is a tree, HTML and XML are both trees.  Your filing cabinet could even be considered a tree.  A linked list is also a form of tree, where each node has exactly one child node.  Let’s take a look at some more trees.

You belong to a tree too!



Implementation of Trees

Programming languages have varying levels of support for tree-like data structures.  The abstract idea of a tree and the implementation of one are often very different things.  Pretty much any nestable data structure can suffice for an ad-hoc tree implementation.  For example here’s one of the simplest trees you can implement in Python.

class Node:
  def __init__(self, value=None, *args):
    self.value = value      # the value this node holds
    self.children = args    # zero or more child nodes

# A tree can then be defined as
tree = Node('root', Node(1), Node(2))
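Once you have a Node class like the one above, the usual way to work with the tree is a short recursive walk. Here’s a sketch that collects every value depth-first (the Node class is repeated so the example stands alone):

```python
class Node:
  def __init__(self, value=None, *args):
    self.value = value
    self.children = args

def all_values(node):
  # Visit this node, then recurse into each of its children.
  values = [node.value]
  for child in node.children:
    values.extend(all_values(child))
  return values

tree = Node('root', Node(1), Node(2))
```

all_values(tree) walks the whole tree and returns ['root', 1, 2].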

Scheme makes it quite a bit easier: both code and data are represented using trees (actually lists).

(define tree (cons 1 2))
;>>> (1 . 2)

(define nested-tree (cons 1 (cons 2 3)))
;>>> (1 2 . 3)

If you’re wondering why the nested-tree looks like a list, remember that lists are a subset of trees.

I didn’t get too heavily into how trees are implemented in any particular language.  Mostly this is because the implementation of tree data structures varies widely from one language to the next, and sometimes there are even multiple implementations of trees in a single language.

Many decisions go into a tree implementation and the tools that work with it.  Because of this there are many different types of trees, each with its own strengths, weaknesses, and tools for manipulating it.  Future WITH articles will go deeper into trees, their implementations, and their characteristics.

GOTO Table of Contents