Why I abandoned object oriented programming
Whetting your appetite
It started happening a short while after I discovered python. Now, this may seem like a paradox, because python, a great language, is an object oriented language: everything in it is an object. So there! I have whetted your appetite, as promised. Alert python users will probably notice that I borrowed the title of this section from the opening section of the (excellent) python tutorial.
I was 30, and object oriented programming was the hottest thing. I was thrilled about it and thought that it should be used everywhere all the time. Using it I could eliminate all the if statements from the code. I talked about it passionately, and rewrote my code (and others') to be object oriented. I rewrote C code to be object oriented in more than one way, so that no one but me could understand it.
The Design Patterns book was very popular then (probably still is) and I read it cover to cover. I was fascinated by it. My mind was always busy with the question of how to identify the cases that called for a design pattern.
Then I found python.
Fun fun fun
I was very pleased with the ability to write scripts that didn't require any preparation, like a main function or a header file. Not declaring types was a nice and calming feature as well. I found the indentation methodology to be brilliant, and enjoyed simply hitting Return to terminate a block. It felt like a breeze.
In the first few scripts I built a list the C way: creating an empty list, then appending to it in a loop. It was fun already, because I didn't need to care about memory. Daring to go further, I tried replacing this with a list comprehension, and that was the steroid boost that drove me away from old-style programming, towards always seeking new ways, languages and techniques.
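The shift described here can be sketched in a few lines. The example data is made up for illustration; the point is only the contrast between the two styles:

```python
# The C way: create an empty list, then append in a loop.
squares = []
for n in range(10):
    if n % 2 == 0:
        squares.append(n * n)

# The same thing as a list comprehension: one line, no bookkeeping.
squares = [n * n for n in range(10) if n % 2 == 0]

print(squares)  # [0, 4, 16, 36, 64]
```

The comprehension says *what* the list is, rather than *how* to build it, which is exactly the kind of shortcut that makes old-style code feel heavy afterwards.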
Stopping and thinking
A question arose and started disturbing me. I was fascinated by object oriented programming, and python was object oriented, but that whole period of python thrill had nothing to do with object oriented programming. I didn't write a python class even once! Actually, I (unconsciously, I now know) tried to stay away from it, probably frightened by the double underscored constructor. It's amazing how things that now look so simple and trivial seemed complicated then. It only shows what a big leap one's mind makes upon adapting to new constructs, especially in high level languages. But I'm deviating from the subject.
Why was I so pleased with python, I asked myself, and how did it help me build applications so fast? It was many things. Powerful lists manipulation, string functions, regular expressions, not worrying about memory, powerful GUI (Tkinter) and a great exception mechanism, that always told me where my problem was. All of this stuff wrapped up nicely within a clean and simple syntax.
It had nothing to do with object oriented programming. That was the beginning of realizing that the path to good software engineering doesn't go through the object oriented land. There were other things, much more important, that helped promote quick development, easy debugging and creating great applications.
If I now (10 years later) try to point out those things, I would say:
- Garbage collection
- Functional style – functions returning an array instead of filling a passed array
- Lists and strings manipulation (and dictionaries)
- Libraries (math, graphics, net, encryption)
- Dynamic typing (although nowadays Scala proves that static typing doesn't have to stand in the way)
- Exceptions that point to the exact location of failure
- Other functional programming stuff – closures, first class functions
- Lispers – you may now add meta-programming to the list :-)
OOP is not on my list.
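The "functional style" item on the list above can be illustrated with a minimal sketch (the function names are made up for the example):

```python
# Imperative style: the caller allocates the list, the function fills it.
def fill_evens(numbers, out):
    for n in numbers:
        if n % 2 == 0:
            out.append(n)

# Functional style: the function returns a fresh list; nothing is mutated.
def evens(numbers):
    return [n for n in numbers if n % 2 == 0]

result = []
fill_evens([1, 2, 3, 4], result)
assert result == evens([1, 2, 3, 4]) == [2, 4]
```

The second version composes naturally: its result can be passed straight into another call, with no output variable threaded through.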
The other reason
OOP techniques are extremely easy to abuse.
For the past 10 years I have read a lot of code, both good and bad. Bad code wrapped up in classes was the worst. The programmers of some application I worked on thought that private was cool, and had fun hiding things. But the hidden things had to be accessed somehow, so they created many delegations. So calling some_function started with calling object_1->some_function(), which delegated to object_2->some_function(), which sometimes ended up in object_3->some_function().
Ladies and gentlemen, information hiding is not about nested calls, nor about private data. It's about really hiding the information: not accessing it at all from outside the file where it is defined. Writing code this way requires some experience. Simply marking data private is not the way; it only complicates things if you need to access the data from outside later.
Deviating a little from the subject, another common error is to define private data with getter and setter methods. This is equivalent to using public data, only in a more verbose (and less readable) way. Hidden data is data that is never accessed (at least not for reading) from outside.
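A minimal sketch of the getter/setter point, using a hypothetical Point class (the names are invented for the example):

```python
# Verbose: "private" data behind a getter and a setter.
class Point:
    def __init__(self, x):
        self._x = x

    def get_x(self):
        return self._x

    def set_x(self, x):
        self._x = x

# Equivalent: public data, read and written directly.
class PlainPoint:
    def __init__(self, x):
        self.x = x

p, q = Point(1), PlainPoint(1)
p.set_x(2)
q.x = 2
assert p.get_x() == q.x == 2
```

Both classes expose the data equally; the first one just makes every access a method call.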
I remember taking a few of these inner hidden objects and making them global. My colleague got upset with me. He told me that I should know that using global variables is a bad idea. I totally agreed with him, but tried to explain to him that some things are global by their nature. The file system, for example. You can access it from wherever you want in your program, right? You don't hide it behind 4 levels of indirection; you simply ofstream it. The same was true for our hidden objects, which I made global. They actually played the role of libraries. Libraries are global, aren't they? It's not my fault that their functions were wrapped up inside classes. My colleague wasn't convinced, and I gave up arguing.
This is one of the bad uses of classes. Instead of simply writing libraries of functions, functions are wrapped inside classes. Then in order to use them you need to create an instance, which leaves you with a variable. A variable is not supposed to be global, so you must think creatively about how to hide it. But since that variable is in nature a library, you would like to access it from many places in the code. So you end up drowning it inside several delegation levels, so that it's not global anymore but can still be accessed (with effort) from your code, and by this, well, uh, I can't talk about it anymore. Please forgive me.
A nice (?) trick that helps hide the data, yet allows you to access it from everywhere in the application without declaring it extern, is the singleton. You can access its functions via MyClass::createInstance()->myFunction(). Simply great! It does exactly what myFunction() would do if it were a free function. But a singleton is cool. It also plays nicely in the object oriented neighborhood.
I think the singleton was invented to show how creating instances can always return one and only one instance of an object, and to demonstrate some C++ techniques along the way. It doesn't matter. The net effect is that a singleton method does exactly what a free function does (every function has one instance too), only in a less readable way. When you debug code, maybe under a tight schedule, the singleton verbosity certainly doesn't help. In these situations the singleton turns from a nice academic exercise in deep C++ understanding into a reading obstacle.
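The prose above uses C++ syntax, but the comparison can be sketched in Python just as well. The Logger class here is hypothetical, invented only to show the two shapes side by side:

```python
class Logger:
    _instance = None

    @classmethod
    def create_instance(cls):
        # Always hands back the same object.
        if cls._instance is None:
            cls._instance = cls()
        return cls._instance

    def log(self, msg):
        return "log: " + msg

# A free function does exactly the same job, minus the ceremony.
def log(msg):
    return "log: " + msg

assert Logger.create_instance() is Logger.create_instance()
assert Logger.create_instance().log("hi") == log("hi")
```

Every call site pays the Logger.create_instance() toll for behaviour the one-line function already provides.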
Once upon a time there was a class in my application, having many methods. A new feature in our software required a similar class. Naturally I used inheritance. It was very pleasing.
But slowly I started feeling that I was wasting too much time figuring out which methods were overridden and which weren't. Slowly it became really difficult to follow the chain of methods. A method in the parent class called some other method, which didn't explain why the application behaved as it did. It did explain the behaviour if that other method was overridden by the derived class, which was actually the case, but at that very moment I didn't remember it. You see, when you read the code of the parent class, every method is potentially fake, with the actual functionality overridden by the derived class. It's very tiring to debug code like this, especially a while after you wrote it, not to mention when you are not the author of the code.
Eventually I threw away the derived class and rewrote the parent to use if statements instead. That was so good. The code became explicit and simple to understand. I must admit, though, that errors started occurring: the conditions I wrote didn't cover all cases. However, since my error reporting mechanism was so good, it wasn't an issue. I concluded from that experience that inheriting a class with many methods only to override a few of them, although a safe and robust technique, is a bad idea. It makes the code too difficult to comprehend, and makes the reader of the code suffer. As simple as that.
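A toy version of the trade-off, with a hypothetical Report class standing in for the real one from the story:

```python
# With inheritance: reading Report.render() alone is misleading,
# because header() may be overridden somewhere else entirely.
class Report:
    def header(self):
        return "Report"

    def render(self):
        return self.header() + ": data"

class CsvReport(Report):
    def header(self):  # silently replaces the parent's behaviour
        return "CSV"

# With an explicit condition: the behaviour is visible in one place.
class FlatReport:
    def __init__(self, kind):
        self.kind = kind

    def render(self):
        header = "CSV" if self.kind == "csv" else "Report"
        return header + ": data"

assert CsvReport().render() == FlatReport("csv").render() == "CSV: data"
```

With the if statement, a reader of render() sees every possible outcome without hunting through subclasses.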
So, is it bad?
Not necessarily. It's a programming technique, and it can be useful when appropriate. There are two cases where I use classes.
The first is the classic case of having many (not just 2) mechanisms that share a common purpose but differ in functionality (there's a design pattern name for it which I don't recall right now). For example, I wrote a grid that could show numbers in various formats. The user could also type numbers into it, and the grid would parse the input according to its current format. I implemented it simply by defining a common interface that converts a number to the required string format, and vice versa, converts a string back to a number according to the current format. This is classic. For each format you gather all the functionality under a derived class and end up with well organized code.
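A guess at what such a grid might look like, reduced to its skeleton; the class and method names are invented for the sketch:

```python
# A common interface: number -> string and string -> number.
class DecimalFormat:
    def to_string(self, n):
        return str(n)

    def to_number(self, s):
        return int(s)

class HexFormat:
    def to_string(self, n):
        return hex(n)

    def to_number(self, s):
        return int(s, 16)

class Grid:
    def __init__(self, fmt):
        self.fmt = fmt  # the current display format

    def show(self, n):
        return self.fmt.to_string(n)

    def parse(self, text):
        return self.fmt.to_number(text)

grid = Grid(HexFormat())
assert grid.show(255) == "0xff"
assert grid.parse("0xff") == 255
```

The grid never mentions any particular format; adding one means adding one small class, with all of its behaviour gathered in one place.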
The second case is improving syntax. If you think timer.expired() is more natural than expired(timer), then go for it! This isn't object oriented programming anyway; I think it's called object based. Whatever improves readability, I vote for it!
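For concreteness, a hypothetical Timer where the method spelling is the whole point:

```python
import time

class Timer:
    def __init__(self, seconds):
        # Deadline measured on the monotonic clock, immune to wall-clock changes.
        self.deadline = time.monotonic() + seconds

    def expired(self):
        return time.monotonic() >= self.deadline

t = Timer(0.0)
assert t.expired()  # t.expired() reads naturally at the call site
```

There is no inheritance and no hierarchy here, just data with a well-named method on it.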
A 40,000-line program I worked on had really only three justifiable, helpful object oriented mechanisms. All were related to gathering different sets of functionality under one entity, which is what I call the classic use. This might indicate how frequently object oriented programming should be used in a program.
I'm not a psychologist, but I do have the habit of investigating my feelings and behavior and trying to understand them. These inner explorations brought me to conclude that programming is a very psychological matter. Programming is hard. It's not mastering a programming language or an API that is hard. It's the wealth of entities that the programmer has to deal with at the same time while coding. Many things depend on each other. There are numerous connections between pieces of software. That's why design is so difficult.
My programming psychology understanding tells me that we unconsciously seek escape routes that allow us to postpone the annoyance of dealing with the details. So we invest our time in making up coding conventions: here is something that requires design and is not so difficult! We decide how the comment headers of functions should look. Likewise, we take all the time needed to build classes, wrap things in other things, build frameworks, establish relationships. It feels like a very important task, the UML looks good, and it's all easier than diving into the essence of the application: reading the data from the files, parsing, checking errors, testing against contradictory inputs, launching threads and implementing the messaging protocol. Classes are wrappers. And wrappers are easier to deal with.
We use object oriented programming to postpone diving into details. The reason we don't like details is that they make us use our memory. I think there are usually two kinds of mental effort we must make all the time: analytical effort and memory effort. Most of us love the first one. We solve puzzles. We try to be smart. Contrarily, we don't like making memory effort. We don't like memorizing, and we don't like trying to remember things. I would even say we are afraid of it. All the things I listed above, the ones we try to escape when writing an application, are not so bad from the analytical aspect. But in order to implement them we need to split them into many small things, like variables, stages, checks, reports and their interoperability. The problem with these things is that they consume our memory, because we must hold them all in our mind at once. That's why we escape to designing classes. It's fun and makes us feel like we are doing important stuff.
Sooner or later we must face the hard work of implementation. In the presence of the object oriented framework we built, this becomes even harder.
My opinion is that object oriented techniques are very easily abused. They make the code hard to understand, and almost always have a shorter and simpler functional counterpart.
We like it because it allows us to focus on easy things, like code cosmetics, instead of dealing with the really difficult essence of implementation.
Getting addicted to object oriented programming will keep you from getting familiar with the other programming techniques I listed at the beginning of the article, like list processing.
Finally, I must say that I like C++. Not for its support of object oriented programming, but for STL and one precious object based feature: the destructor. STL brings into C part of the convenience of list and string processing, powered by automatic memory handling. This constitutes the great programming technology leap of C++ over C.