
Why isolate “lower” application layers from “higher” layers?

It seems like a good practice, but let's articulate why.

Stack Exchange
This Q&A is part of a weekly series of posts highlighting common questions encountered by technophiles and answered by users at Stack Exchange, a free, community-powered network of 100+ Q&A sites.

Jason Swett asks:

In a typical (well-designed) MVC Web app, the database is not aware of the model code, the model code is not aware of the controller code, and the controller code is not aware of the view code. (I imagine you could even start as far down as the hardware, or perhaps even further, and the pattern might be the same.)

Going the other direction, awareness extends just one layer down: the view can be aware of the controller but not the model; the controller can be aware of the model but not the database; the model can be aware of the database but not the OS. (Anything deeper is probably irrelevant.)

I can intuitively grasp why this is a good idea but I can't articulate it. So why is this unidirectional style of layering a good idea?

See the full, original question here.

Taking human thinking into account

Kilian Foth answers (101 votes):

Layers, modules, indeed architecture itself, are means of making computer programs easier to understand by humans. The numerically optimal method of solving a problem is almost always an unholy tangled mess of non-modular, self-referencing, or even self-modifying code—whether it's heavily optimized assembler code in embedded systems with crippling memory constraints or DNA sequences after millions of years of selection pressure. Such systems have no layers, no discernible direction of information flow, in fact no structure that we can discern at all. To everyone but their author, they seem to work by pure magic.

In software engineering, we want to avoid that. Good architecture is a deliberate decision to sacrifice some efficiency for the sake of making the system understandable by normal people. Understanding one thing at a time is easier than understanding two things that only make sense when used together. That is why modules and layers are a good idea.

But inevitably modules must call functions from each other, and layers must be created on top of each other. So in practice, it's always necessary to construct systems so that some parts require other parts. The preferred compromise is to build them in such a way that one part requires another, but that part doesn't require the first one back. And this is exactly what unidirectional layering gives us: it is possible to understand the database schema without knowing the business rules, and to understand the business rules without knowing about the user interface. It would be nice to have independence in both directions, allowing someone to program a new UI without knowing anything at all about the business rules, but in practice this is virtually never possible. Rules of thumb such as "No cyclical dependencies" or "Dependencies must only reach down one level" simply capture the practically achievable limit of the fundamental idea that one thing at a time is easier to understand than two things.
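To make the idea concrete, here is a minimal Python sketch of one-way layering (the class names are hypothetical, not from the answer): each layer knows only the layer directly beneath it, and nothing below ever references anything above.

```python
# Hypothetical sketch of one-way layering: each layer imports only the
# layer directly below it, and lower layers never reference higher ones.

class UserRepository:
    """Lowest layer: talks to storage, knows nothing about business rules."""
    def __init__(self):
        self._rows = {1: {"name": "ada"}}

    def find(self, user_id):
        return self._rows.get(user_id)


class UserModel:
    """Middle layer: business rules; knows the repository, not the controller."""
    def __init__(self, repo):
        self._repo = repo

    def display_name(self, user_id):
        row = self._repo.find(user_id)
        return row["name"].title() if row else "Unknown"


class UserController:
    """Top layer: knows the model, but is never referenced by it."""
    def __init__(self, model):
        self._model = model

    def show(self, user_id):
        return {"status": 200, "body": self._model.display_name(user_id)}


controller = UserController(UserModel(UserRepository()))
print(controller.show(1))  # {'status': 200, 'body': 'Ada'}
```

Because UserRepository never mentions the model or the controller, it can be read, tested, and replaced on its own, which is exactly the "one thing at a time" property the answer describes.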

Related: "What is MVC, really?"

The replacements

John R. Strohm answers (54 votes):

The fundamental motivation is this: You want to be able to rip an entire layer out and substitute a completely different (rewritten) one, and nobody should (be able to) notice the difference.

The most obvious example is ripping the bottom layer out and substituting a different one. This is what you do when you develop the upper layer(s) against a simulation of the hardware, and then substitute in the real hardware.
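As a rough illustration of developing against a simulation and swapping in real hardware later, here is a hedged Python sketch; the interface, class names, and device path are invented for this example.

```python
# Hypothetical sketch: the upper layer is written against an abstract
# "port" interface, so a simulated implementation can later be swapped
# for real hardware without the upper layer noticing.

from abc import ABC, abstractmethod


class TemperaturePort(ABC):
    """The known, defined interface the upper layer relies on."""

    @abstractmethod
    def read_celsius(self) -> float: ...


class SimulatedSensor(TemperaturePort):
    """Used during development, before the hardware exists."""
    def read_celsius(self) -> float:
        return 21.5  # canned value


class RealSensor(TemperaturePort):
    """Dropped in later; the upper layer is unchanged."""
    def __init__(self, device_path="/dev/ttyUSB0"):  # hypothetical device
        self._device_path = device_path

    def read_celsius(self) -> float:
        raise NotImplementedError("would read from the actual device here")


def alarm_check(sensor: TemperaturePort, limit=30.0) -> bool:
    """Upper layer: depends only on the interface, not on which sensor."""
    return sensor.read_celsius() > limit


print(alarm_check(SimulatedSensor()))  # False
```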

The next example is when you rip a middle layer out and substitute a different middle layer. Consider an application that uses a protocol that runs over RS-232. One day, you have to change the encoding of the protocol completely, because "something else changed". (Example: switching from straight ASCII encoding to Reed-Solomon encoding of ASCII streams, because you were working over a radio link from downtown Los Angeles to Marina Del Rey, and you are now working over a radio link from downtown Los Angeles to a probe orbiting Europa, one of the moons of Jupiter, and that link needs much better forward error correction.)

The only way to make this work is if each layer exports a known, defined interface to the layer above, and expects a known, defined interface to the layer below.

Now, it is not exactly the case that lower layers know NOTHING about upper layers. Rather, what the lower layer knows is that the layer immediately above it will operate precisely in accordance with its defined interface. It can know nothing more, because by definition anything that is not in the defined interface is subject to change WITHOUT NOTICE.

The RS-232 layer doesn't know whether it is running ASCII, Reed-Solomon, Unicode (Arabic code page, Japanese code page, Rigellian Beta code page), or what. It just knows that it is getting a sequence of bytes and it is writing those bytes to a port. Next week, it might be getting a completely different sequence of bytes from something completely different. It doesn't care. It just moves bytes.
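A toy Python sketch of the same point (with a trivial repetition code standing in for real Reed-Solomon encoding, purely for illustration): the transport just moves bytes, and the encoding layered above it can change freely.

```python
# Hypothetical sketch: the transport layer only moves bytes; the encoding
# layered above it (plain ASCII today, something with forward error
# correction tomorrow) can change without the transport knowing.

class SerialTransport:
    """Bottom layer: accepts bytes and 'writes them to the port'."""
    def __init__(self):
        self.wire = bytearray()

    def send(self, payload: bytes) -> None:
        self.wire.extend(payload)  # stand-in for the real port write


def ascii_encode(text: str) -> bytes:
    return text.encode("ascii")


def redundant_encode(text: str) -> bytes:
    # Toy stand-in for a stronger code such as Reed-Solomon: repeat each
    # byte three times so single corruptions could be voted out.
    return bytes(b for ch in text.encode("ascii") for b in (ch, ch, ch))


transport = SerialTransport()
transport.send(ascii_encode("PING"))       # short, clean link
transport.send(redundant_encode("PING"))   # noisy deep-space link
print(bytes(transport.wire))
```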

The first (and best) explication of layered design is Dijkstra's classic paper, "The Structure of the 'THE'-Multiprogramming System". It is required reading in this business.

Minimize breakage

Dunk answers (2 votes):

While ease of understanding and (to some degree) the availability of replaceable components are certainly good reasons for layers, an equally important reason (and probably the reason that layers were invented in the first place) is from the software maintenance viewpoint. The bottom line is that dependencies cause the potential to break things.

For example, suppose A depends on B. Since nothing depends on A, developers are free to change A to their heart's content without having to worry that they could break anything other than A. However, if a developer wants to change B, then any change made to B could potentially break A. This was a frequent problem in the early days of computing (think structured development), where developers would fix a bug in one part of the program and it would create bugs in apparently totally unrelated parts of the program elsewhere. All because of dependencies.

To continue with the example, now suppose A depends on B AND B depends on A. In other words, a circular dependency. Now, anytime a change is made anywhere, it could potentially break the other module. A change in B could still break A, but now a change in A could also break B.
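A small Python sketch of the one-way case described above, with invented module names: nothing imports the billing code, so it can change freely, while the pricing code can be read and tested without knowing billing exists.

```python
# Hypothetical sketch of the dependency directions described above.
# One-way: billing (A) uses pricing (B); pricing never calls back into
# billing, so pricing can be edited without reading any billing code.

# pricing (B): depended upon, references nothing above it
def unit_price(sku: str) -> float:
    return {"apple": 0.40, "pear": 0.55}.get(sku, 0.0)

# billing (A): free to change at will, since nothing depends on it
def invoice_total(items: dict[str, int]) -> float:
    return sum(unit_price(sku) * qty for sku, qty in items.items())

print(invoice_total({"apple": 3, "pear": 2}))  # 2.3

# A circular version would have pricing also calling back into billing
# (say, for customer-specific discounts), at which point neither part
# could be changed or tested without understanding the other.
```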

So, to return to your original question: if you are on a small team working on a small project, then all this is pretty much overkill, because you can freely change modules at your whim. However, if you are on a sizeable project, or if all modules depend on the others, then every time a change is needed it could potentially break other modules. On a large project, knowing all the impacts can be difficult.

It gets worse on a large project with many developers (e.g., some who only work on layer A, some on layer B, and some on layer C). As the project grows, it becomes more likely that each change will have to be reviewed and discussed with members working on the other layers, to make sure your changes don't break, or force rework on, what they are working on. If your changes do force changes on others, then you have to convince them that they should make the change, because they aren't going to want to take on more work just because you have this great new way of doing things in your module. In other words, it's a bureaucratic nightmare.

But if you limit dependencies so that A depends on B and B depends on C, then only the layer C team needs to coordinate its changes with both other teams. The layer B team only needs to coordinate changes with the layer A team, and the layer A team is free to do whatever it wants, because its code doesn't affect layer B or C. So ideally, you'll design your layers so that layer C changes very little, layer B changes somewhat, and layer A does most of the changing.

Find more answers or leave your own at the original post. See more Q&A like this at Programmers, a question and answer site for professional programmers interested in conceptual questions about software development. If you've got your own programming problem that requires a solution, log in to Programmers and ask a question (it's free).
