Why Go feels like a balloon boy

One of my friends likes to use the expression “balloon boy” for everything that gets a lot of attention but turns out to be a lot less interesting in the end.

Go is a new language created by Google that recently went open source and generated a lot of buzz in the interpipes.

As someone who has been working as a programmer for almost 20 years, has worked with almost a dozen languages and, on top of that, has a blog, I think I’m entitled to give my biased opinion about it.

One of the first things that put me off was the video pointing out that the language is faster. Or that the compiler is. Honestly, claiming that you become more productive because your compiler is fast is utterly wrong. If you’re aiming for a new language and you want people to be productive with it, make it easier to write correct code the first time. If you need to keep compiling your code over and over again until it does the right thing, you should probably check whether there isn’t some impairment in the language itself that prevents correct code from being written in the first place.

Which brings us to my second peeve about Go: the syntax, as presented in the tutorial. Syntax, in my opinion, is the biggest feature any programming language has to offer. If the syntax is straightforward and easy to understand, it’s easier to have multiple developers working on the same code; if the language allows multiple “dialects” (ways of writing the same code), each developer may be inclined to use a different approach to write what is basically the same thing, and you end up with such a mess of a codebase that most developers would rather rewrite it than fix a bug or add a feature.

The first thing that caught my eye was the “import” statement, which at some point takes a name before the package path and, in the second example, a whole block. Why two different ways (well, three, if you count that the name is apparently optional, in the middle of the statement, no less!) to import other packages with the same command?
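To make it concrete, here’s a small sketch of all three spellings doing the same job (based on my reading of the tutorial, so take the details with a grain of salt; the “format” alias is my own made-up name):

    package main

    // The standalone form would be just: import "fmt"
    // Below, the block form, with an optional name before one of the paths:
    import (
        format "fmt" // aliased: "format" now stands for "fmt"
        "os"         // the plain path, just sitting inside the block
    )

    func main() {
        format.Fprintln(os.Stdout, "three spellings for one idea")
    }

Three spellings, one effect: pick your favorite and watch each of your coworkers pick a different one.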

Variable declaration also feels weird. “A variable p of type string” is longer to read than “a string p” (compare Go’s var p string = "" with the C way, char *p = "";). And it goes on: if you keep reading the statements in their long form (expanding them into natural English), all the commands start to feel awkward, adding unnecessary cruft to the code, stuff that could easily be dropped to save people some typing.
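Here’s the long form next to the short := form, which at least cuts some of the typing (a trivial sketch, with made-up variable names):

    package main

    import "fmt"

    func main() {
        // Long form: "a variable p of type string, set to the empty string".
        var p string = ""

        // Short form: the type is inferred from the right-hand side.
        q := ""

        fmt.Println(p, q)
    }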

The “object” interface seems derived from JavaScript, which is a bad idea, since JavaScript has absolutely no way to create objects in the proper sense. And, because object attributes and methods can be spread around instead of staying grouped together like in C++ and Python, you can simply declare methods far away from the type they belong to. OK, it works a bit like duct-taping methods onto existing objects, but it can still make a mess: if you define two types in one file and people decide to add methods at the end of the file, you end up with a bunch of methods about different objects spread all over your code, when you could “force” them to stay together.
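A sketch of what I mean (the Point type and its methods are made up): the type declaration holds only the data, while the methods can land anywhere in the package, as far from the type as the author feels like:

    package main

    import "fmt"

    // The type declaration holds only the fields...
    type Point struct {
        X, Y int
    }

    // ...while methods are bolted on from the outside, anywhere in the
    // package (even in another file).
    func (p Point) Sum() int {
        return p.X + p.Y
    }

    // Hundreds of lines later, another method for the same type,
    // nowhere near the first one.
    func (p Point) String() string {
        return fmt.Sprintf("(%d, %d)", p.X, p.Y)
    }

    func main() {
        fmt.Println(Point{1, 2}, Point{1, 2}.Sum())
    }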

So far, those were my first impressions of the language and, as you can see, they weren’t good ones. Focusing on compile speed instead of the ease of writing correct code seems out of place for current IT necessities, and the language seems to pick some of the worst aspects of the languages around it.

The multiple faces of nothing

[… or “C, variants and the NULL”]

In C, you have a way to represent nothing: NULL (all caps). NULL points to nowhere and is defined as 0 (or, more precisely, as a null pointer constant like ((void *)0)). Why would someone use it? Well, if you have a list and some of the elements aren’t valid, you make them NULL. Since NULL is not a valid pointer, your application will crash if you try to access it. The whole point of NULL is to provide a way to represent the nothing. There is also a “nothing” type, void: you can’t declare a variable of type void, but you can declare a pointer to one. Since all pointers have the same size, a “void pointer” is, basically, a pointer to anything.
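A minimal C sketch of both ideas, the NULL holes in a list and the pointer to anything (the names are made up, of course):

    #include <stdio.h>

    int main(void) {
        /* A list where some slots hold "nothing": NULL marks the holes. */
        const char *names[] = {"alice", NULL, "bob"};

        for (int i = 0; i < 3; i++) {
            if (names[i] == NULL)
                printf("slot %d: nothing\n", i);
            else
                printf("slot %d: %s\n", i, names[i]);
        }

        /* A void pointer can point to anything; you cast it back to use it. */
        int answer = 42;
        void *anything = &answer;
        printf("through the void: %d\n", *(int *)anything);

        return 0;
    }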

C also has the idea of “nul-terminated strings” (yes, with just one “l”). The nul character is represented by “\0” which, in practical terms, is a space of memory the size of a char with the value 0 in it.
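In code it looks like this: the compiler quietly appends the nul, and everything that walks the string keeps going until it finds it:

    #include <stdio.h>

    int main(void) {
        /* In memory this is 'h', 'i', '\0': three chars, not two. */
        char greeting[] = "hi";

        /* Walk the string until the nul terminator shows up. */
        for (char *p = greeting; *p != '\0'; p++)
            putchar(*p);
        putchar('\n');

        /* 3 bytes: the two letters plus the nul. */
        printf("%zu bytes\n", sizeof(greeting));
        return 0;
    }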

Down at the very bits, NULL and nul are almost the same thing: zeros all the way. They differ only in size (a single char for nul, a whole pointer for NULL).

C++ was built on top of C, but it couldn’t keep the ((void *)0) trick: C++ doesn’t implicitly convert a void * into other pointer types, so NULL there is defined as the plain integer 0. It behaves almost the same as the C NULL but, because it’s an int and not a pointer, it can bite you on a CPU where “int”s and pointers have different sizes (usually, pointers are “long int”s, or even more, if your CPU has more than 64 bits).
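A tiny C++ sketch of why the C definition doesn’t survive the trip (just an illustration): the implicit conversion from void * that C happily does is an error in C++:

    #include <cstdio>

    int main() {
        void *v = 0;
        // char *c = v;  // error in C++: no implicit conversion from void *,
        //               // which is why NULL can't be ((void *)0) here.
        char *c = static_cast<char *>(v);  // the cast C++ demands instead
        std::printf("%p\n", static_cast<void *>(c));
        return 0;
    }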

Objective-C is a variant of C that adds support for objects in a different way, and the biggest “user” of Objective-C is Apple. The Apple version of Objective-C provides some basic types, like lists. But, because you can’t leave an empty space in a list (which I think is similar to the way we deal with nul-terminated strings), they created the NSNull object, which is a valid object, but represents the null (which, by the way, is called “nil” in Objective-C). It’s not an invalid memory address, as it points to a real object. NSNull provides just one method, “null”, which returns the shared NSNull instance, not a nil pointer (are you confused already?)

Now, the fun part: most list operations (dictionaries, actually, but the process is almost the same) return nil when you try to access an object that doesn’t exist. But remember that the only way to leave an empty spot in a list is to add an NSNull object. So, to be really sure that something is not there, you need to check both that the result is not nil and that it is not [NSNull null].
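Putting it together, something like this (an illustrative sketch; the dictionary and its keys are made up):

    #import <Foundation/Foundation.h>

    int main(void) {
        @autoreleasepool {
            NSDictionary *d = [NSDictionary dictionaryWithObject:[NSNull null]
                                                          forKey:@"empty"];

            // nil: the key isn't in the dictionary at all.
            id missing = [d objectForKey:@"nowhere"];

            // NSNull: the key is there, but explicitly holds "nothing".
            id present = [d objectForKey:@"empty"];

            if (missing == nil)
                NSLog(@"'nowhere' is not there at all");
            if (present == [NSNull null])
                NSLog(@"'empty' is there, but holds the null object");
        }
        return 0;
    }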

That’s too much stuff for nothing…