
Ognen Duzlevski

Senior tinkerer.


I started playing with computers back in the 80s, when I was about 11 years old. I suppose these days that is not a very "big" or important detail - after all, most children today get exposed to a smartphone or a parent's laptop at a much earlier age. What I think is much more important, however, is to understand the era when this happened.

The 1980s were a very interesting and exciting time for computers. Even though most people did not have access to the kind of raw computing power of today (even a smartphone is many, many times more capable than a Commodore 64 was), we had quite a diversity of manufacturers, all with their own original ideas and approaches. Commodore had the C64 and later the Amiga, there was Atari with the ST and later the Falcon models, the ZX Spectrum, Schneider, the Archimedes, IBM with the XT, even Apple was in the running. Each of these computers had different CPUs (Motorola, Intel, MIPS, ARM, Zilog - and this is just on the home computer front!), different amounts of RAM, they ran incompatible operating systems, and they used highly proprietary components, specialized video and audio chips, archival media, and so on. Compared to the world today, where we have settled on two or three OSs that all inter-operate, one hardware environment on the desktop (Intel) and one on mobile (ARM), the field has become commoditized, easier to predict, easier to write software for, but much more difficult to disrupt. At the same time, the field has become boring.

In the 1980s people tried harder to write clever software that was easy on the memory while also paying attention to the instructions, register sizes, interrupt tables, etc. A software engineer (or a programmer, as they were called back then) had to fully understand how the hardware worked and how the OS worked. Every memory location was important, and people took pride in knowing all the details. Demo competitions were one interesting part of that world: talented hackers would compete to produce the most interesting graphics and music together within very limited memory budgets - 4K, 8K, 16K - and some of the stuff that was produced was just amazing.

Many people used higher-level languages like C or Pascal; however, to be competent one needed to know some assembly too. Hand-optimizing loops was everyday business for a lot of us. Yes, compilers have become better than people at many of these optimizations, but that does not mean one should not understand them or be able to hand-optimize oneself. Otherwise the knowledge is lost and the engineer is no longer complete - not to mention the fact that compilers today have become massive!
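To illustrate what I mean by hand-optimizing a loop, here is a minimal sketch in C (the function names and the unroll factor are just for illustration): the second version unrolls a simple summation by four, trading a little code size for fewer loop-condition checks. A modern compiler will usually do this, and more, on its own - which is exactly why it is worth knowing what the transformation is and why it helps.

```c
#include <stddef.h>

/* Straightforward version: one add and one branch check per element. */
long sum_simple(const int *a, size_t n) {
    long s = 0;
    for (size_t i = 0; i < n; i++)
        s += a[i];
    return s;
}

/* Hand-unrolled version: four adds per iteration, one branch check.
   Illustrative only - compilers routinely do this themselves today. */
long sum_unrolled(const int *a, size_t n) {
    long s = 0;
    size_t i = 0;
    for (; i + 4 <= n; i += 4)
        s += a[i] + a[i + 1] + a[i + 2] + a[i + 3];
    for (; i < n; i++)   /* handle the leftover elements */
        s += a[i];
    return s;
}
```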

Even the virus and hacking scene of the 80s and 90s was innovative; it was a pleasure to study the code and techniques of some of these "living things". Today that whole field has reduced itself to taking over remote machines for extortion or building up computing power for yet another brainless brute-force attack launched from a network of "zombie" machines. The "real" research is now blunted and in the hands of either stealthy government teams, sterile university labs or well-funded criminal organizations - you can forget about that kid sitting at home, learning the ins and outs of the computer and the network - the complexity of both has simply become too high to be effectively owned and fully understood by a single person.

I look around today and talk to my fellow software engineers - most of them enjoy (get sold on?) the "perks" of the job, like an Apple computer running a proprietary OS with countless services on it, with many gigabytes of RAM, backed by ultra-fast SSDs - yet the vast majority of my colleagues do not know what exactly runs on their computer, nor are they interested in knowing. Computer science has moved into the realm of the abstract (as most sciences eventually do), but unlike math (which does not run on a real machine), computer science concepts eventually end up on bare metal. Not knowing or understanding the bare metal and all the layers above it - and not even wanting to know - is, in my humble opinion, a recipe for disaster. As an end result, we have people who write software but do not understand the field or know the details. Yes, certain layers and levels can be understood just enough to "skip over" some of the more intricate details (maybe) - but let me ask this question, for example: should an average software engineer understand the security model of their OS in order to be more competent? After all, if someone writes software that deals with files and directories, has network access, and stores some kind of data that comes with expectations of privacy (99.9% of software) - should they not understand in detail how to make that software secure? Can this be done without understanding what the OS offers and how it is offered and implemented?
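To make that question concrete, here is a minimal sketch (assuming a POSIX system; the filename is purely illustrative) of the kind of OS-level detail I have in mind: creating a file that holds private data with owner-only permissions, which requires knowing what open(2) flags and mode bits the OS offers and how the process umask interacts with them.

```c
#include <fcntl.h>     /* open, O_WRONLY, O_CREAT, O_EXCL */
#include <unistd.h>    /* write, close */
#include <stdio.h>     /* perror */
#include <string.h>    /* strlen */

int main(void) {
    /* 0600 asks the OS for owner-only read/write; the process umask may
       clear bits further, and O_EXCL refuses to follow a pre-existing
       file - exactly the kind of detail one has to know. */
    int fd = open("secrets.txt", O_WRONLY | O_CREAT | O_EXCL, 0600);
    if (fd == -1) {
        perror("open");
        return 1;
    }
    const char *data = "private data\n";
    if (write(fd, data, strlen(data)) == -1)
        perror("write");
    close(fd);
    return 0;
}
```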

In addition to the sad reality above, the Dells and Apples of the world have killed off all competition and ruined all innovation. These days people clamor over how thin the next laptop will be or whether Apple will provide a faster interface for moving data to a disk, but in reality this is not innovation, it is iteration. First there was USB, then FireWire, then USB 3, then Thunderbolt. Big deal. Some years back, real innovation used to happen and dreams were tested in reality - Commodore with the Amiga and the specialized chips inside it, Atari with the ST and its own chips, IBM with their own ideas about things, Sun, HP, Digital, SGI, and so on - they all had ideas about how things should be done and applied their best engineering and design expertise to those ideas. As a side benefit to "society", these companies also had the money and the need/desire to assemble the best teams of people, not only to solve real problems but also to do academic research that might never result in any commercial product - they understood that furthering theoretical research eventually ends up furthering the practical side of things. All of that is dead now. I don't even know what HP does anymore, Compaq and DEC are long gone, and IBM is a services company (hey, at least they are still making cool stuff like Watson). The new "kids" like Apple, with their slave factories, closed designs and "pretty" packaging, are the new reality - blame it maybe on a world not interested in substance but in "bling"? Dunno…

Saddest of all, we, the engineers of today, are most often glorified "glue monkeys" - we obey the contracts between the layers and we "glue" code together in some "agile" iterative race with no idea of where the product will end up. But hey, at least EVERYTHING has a REST-ful interface! 'Cause exposing half-baked stuff to other machines is much more important!

Finally, most products never even make it past 50% done. As soon as there is something functional, a pretty website goes up and the race starts. It is infuriating. Sorry, but no thanks - I am leaving while I still have some self-respect left.