Why 3.3.1 is the best thing that happened recently

The IT industry is in turmoil over a change Apple made to their iPod/iPhone/iPad license:

Applications may only use Documented APIs in the manner prescribed by Apple and must not use or call any private APIs. Applications must be originally written in Objective-C, C, C++, or JavaScript as executed by the iPhone OS WebKit engine, and only code written in C, C++, and Objective-C may compile and directly link against the Documented APIs (e.g., Applications that link to Documented APIs through an intermediary translation or compatibility layer or tool are prohibited).

Basically, what they are saying is “you will use our SDK and that’s it!” I’m not going to expand on the point that about 90% of the people complaining about this change did not and never would write an app for the Apple store.

The good thing about all this is that Adobe thought it was a direct attack on their Flash platform (which I don’t quite agree with, because I have my own conspiracy theories, but I can see their point) and decided to bash Apple. Apple (Steve Jobs, actually) decided to write a long response to Adobe. Yes, there are a lot of wrong points in it, and I’ll let you read Thom Holwerda’s article about that.

If there is so much bashing going around, why do I think this whole mess is any good?

Well, first of all, Jobs is right about Flash: I’m tired of closing Firefox ’cause a Flash applet is burning my CPU just to show a small game of two guys trying to beat each other at eating bananas, or because, apparently, the runtime is still running, eating memory and making Firefox slow. Flash is not accelerated in any way on OS X or Linux, even though the technology has been around for years. And Jobs’s claims about Flash will (or, at least, I hope they will) force Adobe to produce a decent Flash runtime very soon. The more Jobs bashes them, the better.

Second, we finally have a good discussion about the open platform of the future: the web. I can’t recall this many discussions about HTML 4.0 or XHTML 1.0 before. And now we have a lot of people discussing the merits and weaknesses of HTML 5. “Can it do that?”, “Can it replace this?” and the like will only improve the draft even further. The “can’t”s are actually the best part of it all: if the W3C keeps an eye on them, who knows what new features HTML 5.1 will have?

As a side note to the HTML 5 discussion, it seems that some companies are already aiming products at HTML 5 features (Google seems to be pushing better features to HTML5-capable browsers, although the look and feel is still the same) and I expect that, in a few months, some sites will display the dreaded “this page requires [browser X] or superior” that we saw in the 90s. But it will be for a good cause: old, bug-ridden browsers will not display things properly and people will be forced to drop them in favor of newer, better browsers. And not only that: the hidden “you need that browser because we used something that only that browser supports” will be replaced by “you need that browser because we used something that only the new, open standard supports”.

Third, still part of the HTML 5 discussion, we have the h264 codec debate (h264 being the codec used to transmit video on the web with HTML 5). Jobs’s position of the “open web” pointing to h264 just brings more and more discussion about the patent-encumbered codec. The more Jobs hammers this point, the more people will point out that h264 is not an open codec and that, sooner or later, some company may screw the whole internet because they got angry with someone and decided to revoke all the licenses.

The whole Adobe vs Apple discussion is awesome for the open web, because both companies are pointing out exactly what’s wrong with the current situation.

Realism != Immersability

It’s been around three months since I last played World of Warcraft. Not because I’m trying to give up my addiction or because I’m pissed about something Blizzard changed; the problem is that I don’t have a proper place to sit down and play for hours like I used to. Also, the internet is not that good here, and latency is a problem with WoW. Because of these problems, I kept thinking about going back to Guild Wars, the first MMO I played.

Guild Wars has a different movement model, which makes it easier to play without a mouse (and, thus, without a proper place to sit). Also, some places (the outdoors, outside “outposts”) get their own instance, so you don’t need to worry about someone coming in and messing with your game and, better yet, since you can enter those areas alone, you don’t need to worry much about latency, since you’re running most of the area all by yourself.

There was another thing drawing me back to Guild Wars, though: the gorgeous scenery. I’m not kidding: there is one place in the first game (there have been three expansions already; I own two of those plus the original game) where I could sit and just keep looking at the screen for hours. I may have taken a screenshot of it a long time ago and used it as a wallpaper, that’s how gorgeous it looked.

This weekend, after fighting for ages trying to run it every way I could think of (VirtualBox, Wine, the free version of CrossOver), I finally managed to make it work thanks to the paid version of CodeWeavers CrossOver (still on trial, but I may buy it). And I spent a good part of my weekend playing the starting areas again, just to remember how to play (not to mention that I may have messed up the skill/talent points on my previous characters, so better to start clean). And, after all that time, one question I used to ask myself while playing WoW never popped up:

Am I that character or a person playing that character?

I know it sounds weird, but I have asked myself that several times: when I’m playing… am I the character? Or not?

Truth is, I never really found a good answer. Yes, I get immersed in the game and its story, but I can’t quite tell whether I’m that character running around killing things and getting gold for it.

Thing is, even if Guild Wars looks better and has a more natural look to everything (i.e., the characters look more human, and the animals based on real ones actually look like the real ones), it doesn’t give the impression of immersability that WoW does, even though the latter has a much more cartoonish look.

On reflection, Guild Wars should provide more immersability than WoW: it looks more natural, the events look more like real life, the locations feel more real. But, in the very end, it doesn’t feel like the game “traps” you into itself. WoW, in all its cartoonish way, with dwarves, elves and blue goats from outer space, is still capable of dragging you out of this plane to somewhere else.

Experiment continues…

Why the iPad matters

or “It’s not the change itself, but the seed of it”

So Apple announced their new product yesterday, the iPad. Some people call it a tablet, some call it a big iPhone/iPod touch, some call it “balloon boy”…

But, in the end, it’s a game changer. Not directly, but it plants the seed to change a lot of stuff.

PDAs
If you had any hope that PDAs would come back, well, forget it. Although most smartphones have PDA features, their small screens aren’t so good for most of the stuff “real” PDAs do. The iPad’s big screen (compared to most smartphones), with its not-really-tiny keyboard (even being virtual), kills most of it.

Kindle
The Kindle seems to be the first target of the iPad, and Jobs even said the iPad wouldn’t exist if it weren’t for the pioneering work from Amazon, and that now they would “stand on their shoulders.” Well, at first look, it doesn’t seem like much of a challenge:

  • The Kindle costs about $230; the entry-level iPad costs $499 (almost twice as much);
  • The Kindle screen offers higher resolution (824×1200 vs 768×1024) and has a better ppi (150 vs 132). And let’s be honest: when you’re reading text, it doesn’t matter if the screen is grayscale or color, it’s black text over a white background.

So why does the iPad affect the Kindle market? First of all, the iPad is not just an eBook reader: it also has a browser and an email client and, although the Kindle also has a browser, it’s fairly limited. So, when you consider that you get a small device that can do more than just read books, it may be worth paying twice as much for it.

At the very heart of the situation, though, is the fact that Apple is selling books. Let’s be honest, the Kindle is nothing more than a vehicle for Amazon to sell books without worrying about the logistics of sending a bunch of paper sheets with ink on them to a person somewhere on the globe. Apple’s iBook store will go head to head with Amazon on that and, after the 1984 fiasco, Amazon’s image is somewhat scratched. And let’s not forget that Apple managed to convince a bunch of corporate luddites that music can be sold without DRM (even after selling it with DRM for a long time — I know, I was there when they switched).

Netbooks
Small form factor, can connect to most WiFi networks… sounds a bit like a netbook, doesn’t it? Well, not at first glance. A netbook like the Dell Mini 10, which comes with a 160GB disk (10x more than the entry-level iPad) and an 11.6″ screen (against a 9.7″ screen), may sound like the undisputed winner, especially when it costs $399 against the iPad’s $499. But think about what people do with netbooks: it’s mostly email, web and text editing. And when you add the latest Windows version, the price jumps to $520. It can still go higher if you replace the bundled Microsoft Works with the latest Microsoft Office.

Apple redesigned their iWork suite to fit the small screen of the iPad. And they are offering each of its three applications (Pages [word processor], Numbers [spreadsheet] and Keynote [presentations]) for $9.90 each. So you can get a small office suite for about $30, which puts the total at around the same price as the Dell Mini (although you’ll have to deal with a virtual keyboard instead of a real one).

And really, I don’t think the hard disk size actually matters that much. Most people who use a netbook for email, web and small edits really don’t go that deep into the 160GB (which is mostly used by the operating system itself).

I’m not saying the iPad is a clear winner, but it has a nice place in the netbook market.

Telephony
Wait, what? Telephony? What the hell!

Well, it’s one of the small gems hidden in the iPad. Together with the launch of the new device, Apple is releasing a new SDK, version 3.2. This version removes the restriction on VoIP applications.

Now think about it: you have a VoIP application that can run on your WiFi (and 3G) tablet and on your 3G phone (since the same OS runs on both the iPad and the iPhone/iPod touch). This is big. For the price of data transfer, you can talk to anyone in the world, anywhere you are. Old telephone companies must shiver at the prospect of landlines being canceled ’cause people won’t need them anymore.

(Edit) MID
MIDs (Mobile Internet Devices) are an area where Nokia pushed a lot. The N900 is the latest in that line of devices, which started with the N770 and is, as far as I know, the most famous (and successful) line of MID devices so far. Again, the iPad goes head to head against them and, due to the screen size, I must say it’s almost a loss for Nokia.

On the other hand, if you remember that with every new series Nokia simply stops any support for the previous operating system (the N770 with Maemo 3 lost support when the N800 was launched, and now the N800 with Maemo 4 is out of support with the N900 and Maemo 5), it basically means Nokia shot itself pretty badly in the foot. If only they cared about their older systems (the first iPhone can STILL get the new OS), they might have had a chance. But it’s too late.

So it’s all good?
No, not at all. The iPad, although (I believe) a game changer in concept, is not that big in the real world yet.

First of all, there’s the lack of multitasking, which is, let’s be honest, a stupid move by Apple. It has the power to do so, but it doesn’t. It doesn’t make any sense. It’s like buying a Ferrari and driving around in second gear. The only hope is that, at some point, Apple releases an OS that is capable of multitasking properly (if not, it will have to be jailbroken).

Second, there’s the centralized model around the iTunes Store. As an old user of it, I thought it was really amazing that I could get music more easily than by pirating it. But it’s not all roses: I was living in Australia and the Australian store, although selling the soundtrack of “Across the Universe”, didn’t have the full version of some albums: most of them are complete (2 discs and all) only in the US store. And, worst of all, there is absolutely NO WAY to buy ANYTHING in Brazil. This is completely stupid. And you can bet more stupidity may come, like not being able to buy some books in their original language due to your region (or worse, no books at all).

Third, no Flash. Oh wait, that’s actually a good thing. ;)

(Edit) Fourth, the lack of ports. For everything you need to connect to the iPad, you’ll need an adapter. A huge mistake here. Imagine if it came with a simple video output: BLAM! Install Keynote and you’d have a nice presentation tool to carry around!

Summary
I really believe the iPad is the start of a new generation of computing devices. I want my PADD, to walk around the Enterprise with things to show the captain. But the centralized model Apple insists on pushing may do more harm than good (well, maybe not to Apple itself).

(Edit) In case you’re asking yourself “so, does he mean I should get one or not?”, the answer is “no”. I’d like to get one myself ’cause I’m a gadget guy (I walk around with a phone and an iPod touch, sometimes I carry my N800 with me, I have a Palm T|X in a box and a GPS thingy somewhere, and I just threw away one of the first iPaq models ’cause it wasn’t working anymore), but I’m pretty sure I’d rather save the money to buy something else. At the same time, since it’s the first iteration of this line of devices, I guess it’s better to let the people with huge piles of money buy it right now and wait for the next generations. Unless, of course, you have huge piles of money or are a gadget guy (with some money to spare).

Why Go feels like a balloon boy

One of my friends likes to use the expression “balloon boy” for everything that gets a lot of attention but turns out to be a lot less interesting in the end.

Go is a new language created by Google that recently went open source and generated a lot of buzz in the interpipes.

As someone who has been working as a programmer for almost 20 years, has worked with almost a dozen languages and, on top of that, has a blog, I think I’m entitled to give my biased opinion about it.

One of the first things that put me off was the video pointing out that the language is faster. Or that the compiler is. Honestly, claiming that you become more productive because your compiler is fast is utterly wrong. If you’re aiming for a new language and you want people to be productive with it, make it easy to write correct code the first time. If you need to keep compiling your code over and over again till it does the right thing, you should probably check whether there isn’t some impairment in the language itself that prevents correct code from being written in the first place.

Which brings us to my second peeve about Go: the syntax, as presented in the tutorial. Syntax, in my opinion, is the biggest feature any programming language has to offer. If the syntax is straightforward and easy to understand, it’s easier to have multiple developers working on the same code; if the language allows multiple “dialects” (ways to write the same code), each developer may be inclined to use a different approach to write code that basically does the same thing, and you end up with such a mess of a codebase that most developers would rather rewrite it than fix a bug or add a feature.

The first thing that caught my eye was the “import” statement, which at some point uses a name before the package path and, in the second example, a block. Why two different ways (well, three if you count that the name is probably optional — in the middle of the statement, no less!) to import other packages with the same command?
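To illustrate, here is a minimal sketch of both forms (the package names are just examples I picked, not anything from the tutorial):

    package main

    // Form 1: a single import, with an optional local name before
    // the package path (the name in the middle of the statement).
    import fm "fmt"

    // Form 2: a parenthesized block grouping several imports.
    import (
        "os"
        "strings"
    )

    func main() {
        fm.Println(strings.ToUpper("hello"), os.Args[0])
    }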

Variable declaration also feels weird. “a variable p of type string” takes longer to read than “a string p” (comparing Go’s var p string = "" with the C way, char *p = "";). And it goes on: if you keep reading the statements in their long form (expanding them into natural English), all the commands start to feel awkward, adding unnecessary cruft to the code, things that could easily be dropped to let people do less typing.
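Here are the two declaration styles side by side (a small sketch; the variable names are arbitrary):

    package main

    import "fmt"

    func main() {
        // Go, long form: "a variable p of type string".
        var p string = ""

        // Go, short form (type inferred); only valid inside functions,
        // which is yet another dialect of the same declaration.
        q := ""

        // For comparison, the C way reads "a char pointer p":
        //     char *p = "";

        fmt.Println(p, q)
    }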

The “object” interface seems derived from JavaScript, which is a bad idea, since JavaScript has absolutely no way to create objects in the proper sense. And, because object attributes and methods can be spread around instead of staying grouped together like in C++ or Python, you can simply add methods from anywhere in the code. OK, it works a bit like duct-taping methods onto existing objects, but it can still make a mess: if you define two objects in one file and people decide to add methods only at the end of the file, you end up with a bunch of methods for different objects spread all over your code, when you could have “forced” them to stay together.
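A sketch of what I mean (with hypothetical types, just to show the mechanics): nothing ties a method to the place where its type is defined, so methods for different objects can end up scattered across a file:

    package main

    import "fmt"

    // Two "objects", declared together at the top of the file.
    type Book struct{ Title string }
    type Author struct{ Name string }

    // ...possibly hundreds of lines later, methods are bolted on
    // from the outside, in any order, far from the type definitions.
    func (b Book) Describe() string   { return "Book: " + b.Title }
    func (a Author) Describe() string { return "Author: " + a.Name }

    func main() {
        fmt.Println(Book{Title: "Dune"}.Describe())
        fmt.Println(Author{Name: "Herbert"}.Describe())
    }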

So far, those were my first impressions of the language and, as you can see, they were not good. Focusing on compile speed instead of ease of writing correct code seems out of place for current IT needs, and the language seems to pick up some of the worst aspects of the languages around it.

GPL and the web

A few years ago (two or three), I saw Richard Stallman at FISL, where he said that things like webmail are bad ’cause you don’t have any control over the software running on the server. In a way, he is right: how do you have any control over your data if you don’t have any control over the software? How can you be sure the server isn’t doing something nasty with your information if you have no way to request the source code?

Requesting the source code is one of your rights when you’re using GPL-licensed software. That way, you can be sure that the application is not sending your information to someone else or looking at things it shouldn’t. But the GPL says that distributed software should have its code available; in a web 2.0 world, nobody is distributing any software: it simply is there. Therefore, even if you run a GPL application and make lots of modifications, because you’re not distributing it, you don’t need to make your changes available to the world.

The thing that was bothering me, though, relates to some web apps/websites I used at some point. They had this pretty cool thing and I kept wondering, “Is that something I know, like WordPress, Drupal, Joomla or whatever?”, but, in the end, I couldn’t find anything that would say what they were using in the backend. And, just now, I was wondering how the GPL would apply to such websites.

Besides the GPL, there is another very useful license: the modified BSD license, or simply “BSD”. The only rule the BSD license imposes (compared to the four freedoms the GPL enforces) is that you can’t remove the copyright from the original authors. You may add your name, but the original copyright must appear somewhere. I wondered, then, if the GPL had a similar requirement. I’m not a lawyer, but I think this clause does:

5. Conveying Modified Source Versions.
[…]
b) The work must carry prominent notices stating that it is released under this License and any conditions added under section 7. This requirement modifies the requirement in section 4 to “keep intact all notices”.

That, to me, sounds exactly like the BSD requirement. So, if you’re using GPL software on your webserver, you must state, somewhere, that the engine behind your powerful site is copyrighted by its original authors.

Now ask yourself this: how many websites out there are using WordPress with a modified theme that completely removed the “Powered by WordPress” notice? Or sites that chose (I’m not sure why) the GPL version of jQuery and didn’t mention it anywhere?

Why the new Star Trek bothers me

For a while, I’ve been ranting about the new “Star Trek” movie, directed by J.J. Abrams and written by Roberto Orci and Alex Kurtzman. This morning I finally realized why it bothers me and why the line “OMG, boobies in Star Trek?” makes me giggle.

First, let’s take a look at the list of main Star Trek characters in the series:

  • The Original Series: James T. Kirk, Spock, Dr. Leonard “Bones” McCoy, Montgomery Scott, Hikaru Sulu, Pavel Chekov, Uhura (and let’s throw in Christopher Pike just for the sake of it.)
  • The Next Generation: Jean-Luc Picard, William Riker, Geordi La Forge, Worf, Beverly Crusher, Wesley Crusher, Deanna Troi, Data.
  • Deep Space Nine: Benjamin Sisko, Kira Nerys, Odo, Julian Bashir, Jadzia Dax, Quark, Miles O’Brien, Jake Sisko, Worf (yes, again), Ezri Dax.
  • Voyager: Kathryn Janeway, Chakotay, Tuvok, B’Elanna Torres, Tom Paris, Harry Kim, The Doctor, Neelix, Kes, Seven of Nine
  • Enterprise: Jonathan Archer, T’Pol, Charles Tucker III, Malcolm Reed, Hoshi Sato, Travis Mayweather, Phlox.

Go on. Go clicky-clicky and try to find the two that don’t fit. I’ll wait.

Did you spot the two?

Ok, the answer is: Wesley Crusher and Jake Sisko (although I made it hard for you to notice why Jake doesn’t belong there). They are the only teenagers in the whole list of series that were main characters (there were some kids in “Voyager”, but they would appear in only one or two episodes). All the others look like they are in their late twenties or early thirties (with a few exceptions that look more like they are getting into their forties). And that also includes the non-human, ageless forms, like Odo, Data and the Doctor, and the ones with longer lives, like the Vulcans. Even the youngest crew of all the series, Voyager’s (they were going into final training before officially entering service when they were transported to the Delta Quadrant), looks like they were in their late twenties.

And that’s why the new Star Trek bothers me. All the actors (with the exception of McCoy’s) look like they are in their early twenties and already at full operational status. Even in the original series, when the Enterprise goes on its official mission to “explore strange new worlds, to seek out new life and new civilizations”, Kirk looks like he’s in his late thirties. And now you have a Kirk that looks like he’s just out of puberty.

Yes, there were boobs in TOS. But they belonged to mature women, not some barely-out-of-puberty, hormone-filled chick.

To me, it looks like the tone of Star Trek changed from “when you finish your studies and do some real-life training, you may become a member of the most important ship of the human race” to “jump into the most important ship of the human race! All you need to do is be able to talk!” A sign of the times, maybe, when you’re expected to finish college already a fully experienced whatever-they-call-you-in-the-field. But, still, Star Trek looks a little bit tainted with an “easy way to get there” view.

But, then again, I’m an old trekkie (although I never remember if the proper term is trekker or trekkie…)

Web 2.0 is not streamable

This week our connection at home is shaped. This means that, instead of the shiny 1Mbps we usually have, we now have to suffer through pages at just 64Kbps. But there is one thing such limited bandwidth made me realize: the next web isn’t streamable.

To reach that conclusion, I didn’t have to go far: just opening Google Reader showed that it’s impossible to live with very limited bandwidth. Right now, I have something like 1000 unread items across one hundred subscriptions, which means Reader has to download a large description file with all that information. Thing is, right now, it doesn’t do anything: it shows the default Google application header, the logo and that’s it. But, knowing how things usually work in this Web 2.0 universe, I know there is something going on:

Interactive sites like Google Reader and GMail use AJAX. AJAX relies on XML, which is structured plain-text data (the same can be said of JSON). XML allows the data to appear in any order inside its structure. As an example, imagine a book information list: inside the “Book” element, you can have a “Title”, which can come at the very beginning or the very end, but the result would be the same. So any application that uses XML needs to first receive the whole thing, then convert it to some internal representation, and only then can the data be used. Google Reader wasn’t “doing nothing”: it was receiving the list of feeds and the initial 100-something feed items which, due to the small bandwidth, was taking very long. And, because it needed the whole thing, nothing was being displayed.

Which is a problem I see with many XML/JSON results: you can’t stream them in a way that lets you start using the information before you have it all. For example, in Mitter, we can’t display tweets before we receive the whole message. If XML and JSON weren’t so loosely defined and we had a way to ensure that after the element “User” we would get an element “Message”, then we could start displaying tweets before we had all of them (not that the format changes all the time but, since we can’t ensure that ordering, we must be ready for the data to appear in a different order — or with some other data between the fields we need).
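To make this concrete, here is a sketch (not Mitter’s actual code; just an illustration in Go, using its standard streaming JSON decoder) of the best a streaming consumer can do: it can hand over each tweet as soon as that tweet’s record is complete, but because the fields inside each record can come in any order, nothing can be displayed until the record’s closing brace arrives:

    package main

    import (
        "encoding/json"
        "fmt"
        "strings"
    )

    // A hypothetical tweet record. JSON does not guarantee the field
    // order inside each object, so we can only act on whole objects.
    type Tweet struct {
        User    string `json:"user"`
        Message string `json:"message"`
    }

    func main() {
        // Pretend this reader is a slow connection delivering a
        // large JSON array one chunk at a time.
        body := strings.NewReader(
            `[{"user":"a","message":"hi"},{"message":"ho","user":"b"}]`)

        dec := json.NewDecoder(body)
        dec.Token() // consume the array's opening "["

        // Each tweet is shown as soon as its object is complete,
        // without waiting for the closing "]" of the whole array.
        for dec.More() {
            var t Tweet
            if err := dec.Decode(&t); err != nil {
                break
            }
            fmt.Printf("%s: %s\n", t.User, t.Message)
        }
    }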

In a way, that’s a complete reversal of roles for AJAX. In the very beginning, AJAX was used to prevent large downloads: if you had a page where it would be useful to display all the options to help the user find data, you’d have to fill the page with that data (imagine, for example, a page with all your Del.icio.us tags, plus all the possible suggestions from all the other users). The use of AJAX meant the site could filter results, so you’d have a smaller page which would make small requests to the webserver, returning small amounts of data. Overall, it meant a faster user experience. Now, we have so much information packed into XML/JSON formats that the user experience is not as responsive as it should be.

The reverse ideas

In the post about Final Fantasy, I realized that most of the series follows the same basic premise. And yesterday, after watching the new season of “Heroes”, I realized that most TV series also follow the same idea. That’s when I came up with the reverse ideas for those things:

Reverse Final Fantasy: the forces of Light and Darkness must be in balance, or the universe will explode. Unfortunately, the Light is taking over, and so the Warriors of Darkness must be summoned to save the planet. To do that, they must pillage villages, destroy families, corrupt kings and such. Honestly, I think it’s cool because you’d end up doing wrong things for the right reason.

Reverse TV series: this occurred to me when I saw “Continued in the next episode” at the end of the first episode of “Heroes.” Almost every TV series starts by showing the personalities of the main characters, then adds some action, adds some cliffhangers, tries to connect every main character in some way and (in the really well-written series) ends by closing all the open plots and showing a happy ending. What I’m thinking of here is a series in which the first episode is the happy ending. Everyone is fine, the universe is saved, the villains are in jail… and it ends with “Continued in the previous episode.” So the whole thing is a lot of retcons, over and over again, trying to explain how character X became the villain, how Y found his/her super-powers, how the city was destroyed…

World of Blizzard

The year is 2010. To reduce production costs, Blizzard has decided to merge all its franchises into one single product. That’s how “World of Blizzard” was born.

In it, you can be a Protoss Zealot Hunter on your quest to save the world from Diablo and his brothers.

One of the most popular races/classes is the Zergling Priest.

When open-source fails

When I started using Linux, around the year 2000, you could use a very simple window manager or you could use GNOME. GNOME, at the time, had this very, very cute window manager called Enlightenment, which was also a royal pain in the ass to use. It required certain settings to be enabled in your X server, or you couldn’t use some features (like key bindings).

The thing about Enlightenment is that it was really easy on the eyes. Its themes were extremely good-looking and there were lots to choose from. I could say that Enlightenment was the Vista of its time.

One of the things that did happen a few years later was that GNOME changed its default window manager from Enlightenment to Sawfish. Sawfish, although not as good-looking, was way more configurable and, apparently, its author was willing to integrate it with GNOME more than Enlightenment’s was. No biggie: GNOME changed window managers, but Enlightenment had its own fan base, so they went their separate ways.

Years later, the Enlightenment team announced the start of release 17, also called E17. Their plans were big: they would use a lot of new libraries, it would be fast, and you would get even more eye candy, with shadows and real transparency and real-time updates on icons and such (almost what you get today using Compiz).

The biggest problem with E17 is that it didn’t survive its own promises. Every step forward in development was followed by two steps back. Features that were added got moved to yet another library, and everything, apparently, had to be refactored again and again. All the libraries were being constantly hacked on and never had any releases.

It was almost a year after the E17 announcement that the XFree team announced some new features they were planning, which would allow any window manager to offer things like real transparency and real drop shadows: almost everything the E team had promised. At this point, anyone would think, “So the E team joined the big dogs and helped them develop something for the community.” Well, wrong. They decided to keep their own plans and not look around.

Even more time later, XFree announced XDamage and XRender, two extensions that paved the way for the current compositor-enabled window managers. Even weirder, there were two projects that managed to do that, one led by RedHat and another by SUSE. Problems? No: they decided to talk and found a way to merge their projects into a single entity, not fragmenting the community and giving everyone a fair chance.

So, since the E17 announcement, we have had a major release of X, now forked into X.org, opening it up to the wider community (XFree was not that open to other people’s suggestions), several new libraries, and almost every single desktop is using these new features and getting some very nice eye candy every day.

And what happened to E17?

Last week I downloaded gOS, a lightweight distribution that uses E17 as its window manager and desktop environment, built from the repository sources, as there is no formal release yet.

What I saw was an alpha release of… something. The key bindings still don’t work properly, the themes are more proof-of-concept than usable (too much animation and very few helpful things), their widget set sometimes decides to ignore the current theme and falls back to the default one and, on top of it all, it feels like the window manager is always trying to get in your way and annoy you in the worst possible way, never to help you.

More than seven years and still no stable release. In that time, every single desktop environment managed to slim down, get more eye candy and become more user-friendly.

I don’t know about you, but I think E failed.