Just gathering opinions about this architecture. Is this the only future? Are there other architectures? Which ones?
Also wondering whether Node.js has advantages over other technologies when it comes to micro/nano services.
For those who don’t know what microservices are:
a modern interpretation of service-oriented architecture (SOA), used to build distributed software systems.
Similar to SOA in:
microservice architecture: processes that communicate with each other over the network in order to fulfill a goal.
these services use technology-agnostic protocols.
Different from SOA in:
microservices give an answer to the question of how big a service should be and how services should communicate with each other
services should be small and the protocols should be lightweight
The microservices architectural style is the first realisation of SOA to emerge after the introduction of DevOps, and it is becoming the standard for building continuously deployed software.
Pros and cons (Wikipedia)
enhances cohesion and decreases coupling.
reduces the need for a big up-front design and allows for releasing the software early and continuously.
services form information barriers
the architecture introduces additional complexity and new problems to deal with, such as network latency, message formats, load balancing and fault tolerance; ignoring any of these is one of the “fallacies of distributed computing”
testing and deployment are more complicated
inter-service calls over a network have a higher cost in terms of latency and message processing time than in-process calls within a monolithic service process
moving responsibilities between services is more difficult; it may involve communication between different teams, rewriting the functionality in another language or fitting it into a different infrastructure
the complexity of a monolithic application is only shifted into the network, but persists
Development fashion, like clothing, comes and goes in cycles, so the answer to “is this the future” is “Yes, as long as you’re willing to wait for it to become popular again”. There are still a few larger trends that seem pretty consistent over time, and division of large systems into independent services is one of them (helped along by underlying techs like Docker and rkt). The way in which those systems are assembled varies though, ranging from complex orchestration platforms, to detailed service description languages, to simple orthogonal APIs cooperating through mere convention.
NodeJS platforms (and others in the same niche) have tended to focus entirely on the last, and paradoxically enough, systems tend to be more robust for it. I think that has to do with the fact that such platforms are forced to address concerns about interop directly rather than place the burden on some “glue layer” that may have its own set of problems.
That said, it’d be nice if we could stop reinventing the wheel when it comes to queueing, broadcasting, caching, storage, and so on. Sure, better implementations will always come along every year, and everyone’s going to have their preferred APIs, but I keep seeing platforms spring up that don’t seem to benefit from even the most basic history lessons. I suppose it’s just part of the maturation process for any language ecosystem.
Answering the question of what microservices architecture is, I would say the concept of microservices is based on a simple idea: the developer creates separate parts of the application instead of one whole. To understand microservices architecture better, it’s necessary to compare it with the traditional monolith model.
A lot of developers consider monolithic architecture the traditional way to create applications. Monolithic architecture means that every part of the app is connected to every other. So each part of the project has to work as intended to get a smoothly working end product.
Unlike the first option, microservice architecture is an approach that allows creating an application from a combination of small services. Each of these components is created individually and deployed separately. As a result, they run their own processes. The services communicate with each other using lightweight APIs. There is a really informative article about microservices architecture, in case someone is interested: https://www.cleveroad.com/blog/benefits-of-microservices-architecture
My main problem with that approach, as I mentioned, was complexity. Microservices tackle some of the complexity found in monolithic services, but they also introduce new hurdles. It is a good approach for some problems, but it was not the panacea people claimed it to be when microservices were first introduced.
It would be better to know WHEN they are more likely to work correctly.
Warning: Long Post Ahead. Please take breaks if your fingers get tired from scrolling.
There are no silver bullets. Anyone pitching some approach as a panacea is trying to sell you something. But that doesn’t automatically mean what they’re pitching is worthless.
Many of the points in that list above date back to when the debate was over client/server apps and “fat client” vs “thin client”. These weren’t debates over how to deploy and manage your cloud services over the web; these were debates between SunRPC and DCE. There was no common protocol for clients and servers to talk to each other.
Not only did HTTP not exist, in the desktop world of the early 90’s, you couldn’t even count on TCP/IP being supported.
So in that climate, debates raged over how to get apps – distributed on CDs and floppies, mind you – to talk to the backend, such as the CRM database. Do they…
Use a resource that’s shared on the local network and pretend it’s local, letting the operating system sort it out? Sure, worked for millions of MS Access and FoxPro users at the time, even if they had to sit around and wait for the lazy, greasy, fat bastard LAN admin to bust a stale lock so they could save their changes.
Talk to the server instead with an RPC mechanism where the app pretends it’s making local function calls, but they’re actually tunneled over the network? Sure, worked grandly for Sun Microsystems who relied on it when they also invented NFS, something almost every unix implementation (there were lots) picked up support for. Grander and much more baroque attempts at the same idea like ToolTalk and CORBA were much less successful. And there was much rejoicing and sighs of relief, because dayum those protocols were nasty. So things became sane for a while… at least until SOAP came along.
Use a generic protocol with orthogonal operations, one that can be extended in nearly infinite ways, at the cost of implementation complexity: the app can’t just bind arbitrary functions to RPC anymore and has to “speak the protocol” now. Aside from working like a charm for X11, does such a design sound like something you know?
Much more on that later.
My overall point with the history lesson is that back in Ye Olden Dayes, before the Worlde Wyde Webbe didst conquer all It Doth Survey, there were some pretty legitimate reasons to differ on how you connected your machines together. And for quite a while, RPC seemed the way to go. Programmers were much fewer and more expensive, so the easier you could port legacy apps, the better. Networks were getting faster and more reliable, so everyone figured: hey, why not assume that in the future the network will be as reliable as the power grid?
But then… those vile ingrates called USERS: they would whine and wheedle, those ingrates, that their Windows apps, like, say, Remedy (a ticket-tracking system), would freeze up and/or just crash.
The culprit was pretty much always network failure, whether transient gremlins or a reboot-the-router disaster. And that kind of thing happened a lot; a lot of network infrastructure was dodgier than a San Francisco massage parlor back then. All of it came down to the fact that the app was pretending that functionality was local, calling a function like int get_ticket(int ticketid, ticket_t* out), and not knowing that the server at the other end of the call was down, or had kicked out your session after being rebooted, or… whatever. The app wasn’t written with threads or async I/O either; this is Windows for Workgroups we’re talking about here. So it would crash, as Windows apps were wont to do, or it would just block forever waiting for the network. It might give you a timeout popup if it was feeling generous.
So let’s get back to these miserable users: they would also whine that they didn’t get any of those lockups on the mainframe app, where it all ran on The Mainframe, which was Always Up All the Time. Sure, they were limited to a 3270 text terminal, but damn if they didn’t just glide through it in their sleep on finger muscle memory, anticipating to the millisecond how long each screen took to load. And all that industrial-strength reliability would cost your business somewhere around… ever see Goodfellas?
The Moral, making the long story short: Shit happens, and your app needs to be ready to deal with it and not just play like the network never happened.
That is what the “Fallacies of Distributed Computing” are about, and they’re spot-on: making a resource transparently available over the network tempts you into assuming the network will always be available. You need to always check the status of a remote call and handle failure gracefully in your app; you can’t just call-and-forget. That’s the point of many of the bullet points above.
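In code, that “never call-and-forget” discipline boils down to putting a timeout and a bounded retry around every remote call and surfacing the failure to the caller. A minimal sketch, where the helper name withRetry is my own illustration rather than any particular library’s API:

```javascript
// Sketch of the "assume the network can fail" discipline: every remote
// call gets a timeout and a bounded number of retries, and failure is
// surfaced to the caller instead of blocking forever. withRetry is an
// illustrative helper, not a library function.
async function withRetry(fn, { attempts = 3, timeoutMs = 2000 } = {}) {
  let lastError;
  for (let i = 0; i < attempts; i++) {
    let timer;
    try {
      // Race the remote call against a timer so a dead server can't hang us.
      return await Promise.race([
        fn(),
        new Promise((_, reject) => {
          timer = setTimeout(() => reject(new Error('timeout')), timeoutMs);
        }),
      ]);
    } catch (err) {
      lastError = err; // remember why this attempt failed, then retry
    } finally {
      clearTimeout(timer); // never leave a stray timer behind
    }
  }
  throw lastError; // give up loudly so the UI can show a real error
}
```

Unlike the frozen Windows-for-Workgroups app, the worst case here is bounded: after attempts × timeoutMs the caller gets a real error it can show to the user.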
But so many of the points are so dated as to be outright wrong now. Let’s take the first one, about services forming information barriers. Are you kidding me? The service called the World Wide Web (just another /etc/services entry, port 80 to be exact) opened up, let’s say, uh, “several” services. Wikipedia, Youtube, IMDB, Amazon, Netflix, Tindr, 4chan… gotta take the bad with the good, right? They’re not only services in the classic sense; most of them also expose APIs as services for other apps.
Anyway, I’m not going to take it point-for-point, lest this post become a book, like “The Fallacy of Undistributed Computing”. Snap back to now. HTTP won. So however you’re designing services, whether you’re doing things front-end or backend, it’s pretty damn likely they’re speaking HTTP, or at least setting up a WebSocket over HTTP. As for RPC vs Generic Protocol, the analogue here would be SOAP vs REST. Not even close to a photo finish as to who won that one. As for common data interchange formats, JSON now dominates, with XML a distant second. Still not a bad win and place. You should see what CORBA had in mind for you.
Anyway, the current trendy name for microservices is “serverless”.
Postscript: A fun fact about SunRPC (that brittle, tightly-coupled-to-the-server thing) is that it gave rise to NFS. And NFSv4 might just be the ultimate simple and orthogonal protocol, not just for sharing files over the network, but for connecting to anything. Of course, v4 was when they overhauled everything into a single stateful protocol, but even so there were whole companies, like Xerox, that still ran all the things on NFSv3 and maybe earlier. Plan9 would adopt a very similar protocol, 9P, and base the whole OS around it. Shame it wasn’t more successful.
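To make the contrast with 90s RPC concrete, here is a hedged sketch, in modern Node.js (18+, which provides a global fetch), of the kind of client I mean. All the names in it (service(), ApiClient, the ‘browse’ endpoint, the /endpoints discovery route) are illustrative, not any real framework’s API:

```javascript
// Hedged sketch: discover a service's advertised endpoints over the
// network and generate async methods for them. All names illustrative.
class ApiClient {
  constructor(baseUrl, endpoints) {
    this.baseUrl = baseUrl;
    // One method per advertised endpoint: RPC-flavored, except every
    // call is explicitly async and can visibly fail.
    for (const name of endpoints) {
      this[name] = async (params = {}) => {
        const res = await fetch(`${baseUrl}/${name}`, {
          method: 'POST',
          headers: { 'Content-Type': 'application/json' },
          body: JSON.stringify(params),
        });
        if (!res.ok) throw new Error(`${name} failed: ${res.status}`);
        return res.json();
      };
    }
  }
}

async function service(baseUrl) {
  // Ask the service what it can do (a hypothetical discovery endpoint).
  const res = await fetch(`${baseUrl}/endpoints`);
  if (!res.ok) throw new Error(`discovery failed: ${res.status}`);
  return new ApiClient(baseUrl, await res.json());
}

// Usage: note the awaits and the try/catch. Unlike a 90s RPC stub,
// the possibility of an outage is right there in the syntax:
//   try {
//     const catalog = await service('http://catalog');
//     const items = await catalog.browse({ page: 1 });
//   } catch (err) {
//     handleOutage(err); // hypothetical global error handler
//   }
```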
that service() function fetches services over the network, creates an ApiClient object, then generates methods on it using the advertised endpoints of the service, one of which is ‘browse’. Looks a lot like RPC, except for those extra keywords. And boy, do they make a difference, because my global error handler is a hojillion lines long (I’ll admit it needs refactoring). The extra complexity is there for sure, but it’s as certain as Trump using spray-on tanner that it doesn’t pretend outages don’t happen.
I guess things move in cycles, but sometimes, against all expectations, we learn from the past. An upward spiral, if you will.