This time I’m going to start with a little story, and the conclusion of this part is probably not what you would have expected, either, but that’s what life is: a constant surprise package waiting to be opened.
I came to the demo party in Denmark, The Party 97, expecting to see cool things. One of the first cool things we saw, after all of the two thousand PCs and other computers were turned on simultaneously, was a massive power outage that took out the shiny Christmas decorations outside, the PCs inside, and the domestic power supply of half of Aars. When the lights came back up and the demos actually started running, the party motto — originally “Batteries not included” — was quickly hacked into “Power not included” by almost every crew.
When I came to Aars I certainly didn’t expect that the winner would be the demo Second Reality, just like at Assembly ’93 in Finland. It was the same demo, down to the music, the speech samples, the 3d part… everything. Except that the disk had to be turned over in the middle. Because this time, it wasn’t running on an 80486 at 50 MHz; it was running on the Commodore 64, with a roughly 1 MHz, 8-bit CPU, a machine that should, by all standards of measurement, be about 200 times slower than the machine the demo was originally written for.
People’s jaws were hanging wide open.
But not all people’s. If you read the reviews, you will see that the “old school” C64 crowd wasn’t even that impressed: all of the effects had been done BETTER on the C64 at some point. It was the PC crowd that peed their pants with excitement, because the achievement blew their minds.
And how could the C64 people, with their tiny lame machines that can’t even multiply in hardware, outdo a PC?
By having had 10 years more practice: the C64 was built in 1982. Read: 10 years of programming practice beat 6 years of hardware evolution.
Does that mean that all that PC business had been superfluous, and the C64 architecture should have been kept forever? No new standards, no improvements, no changes…?
No, of course not. We could not do the things we do with computers now if we had been stuck with a system that doesn’t really allow modularity. If CPUs hadn’t made the steps to 32 and now 64 bits, if memory sizes hadn’t grown over the years, if CPUs didn’t have virtual memory management, modern genetics and Google would have been impossible. Without modern manufacturing techniques, there would be no smartphones, not just because of the package size but also because of the power consumption of the older chips. Besides, by now, all computers have incorporated back into their designs the elements that made the C64 and Amiga technically superior: separate sound and graphics chips or cores that do their own jobs with very little CPU help; shared memory to avoid copying where memory is a constraint; better standardization; and so on.
But the actual problem that this C64 demo showed us in 1997 is by now all-pervasive in the entire software industry: nobody, but really nobody, has more than a few years of practice with the new systems and frameworks, because the technologies become obsolete within less than two years of their creation, and everybody is forced to move on.
Remember the times when such a thing as “craftsmen” existed? People like smiths or, not quite so long ago, car mechanics, who spent their lifetimes learning how to do their jobs well. When knowledge evolved, it took generations: each generation could learn the old things, invent new ones, and pass both on to the next.
Today, we cannot afford to. If we learn only one trade, like one programming language, we will be obsolete within five years, max. If you can learn quickly, you win. If you learn well but slowly… well, you’re screwed. By the time you have learned your one thing, it is already obsolete. Pass knowledge on to your kids? You’re kidding! Even the stuff they learn in school is already outdated. They learn it only so they know the theory, and so they learn how to learn. How to actually do things? That they will have to learn on the job.
This is, in my opinion, why there is so much bad code out there: programs that don’t work properly because their creators never learned to do their job well. Not because they didn’t want to, but because they had to spend that time learning yet another new technology that will be obsolete in a year. This is why everybody and their grandfather nowadays think they can program, because “X is a new technology, so I had as much time to learn it as everybody else out there.” People who had never heard of “Product Y” or anything like it, and who, when they finally did hear of it, didn’t understand it properly, are sold as top master senior-level consultants for “Product Y” by a big consulting firm two years later.
What it really tells us is that there is only so much speed at which progress can take place. There needs to be a balance between invention and expertise. Innovation cannot happen if you don’t have any experience with the thing you are trying to innovate. Otherwise you will end up re-inventing the wheel again, and again, and again, because you didn’t have enough time to go out there and find out that wheels already exist. Or because you are not the first victim of the cycle, and there are so many different wheels to choose from that it is easier to invent your own yet again than to wade through all the existing specifications and comparisons to find out which wheel is optimal for your project, and how to use it.
And so, it turns out, progress can be its own worst enemy.