At work, we have a big internal application that has been under development for close to two years now; I've only recently joined the project and some of the architecture has me slightly perplexed, so I'm hoping someone here can give me some advice before I go and ask the architects these same questions (so that I can have an informed discussion with them).
My apologies if the below is a little long, but I'd like to try to paint a good picture of what the system is before I ask my question.
The way the system is set up is that we have one main web application (ASP.NET MVC, AngularJS) which mostly just aggregates data from various other services. So essentially it is a host for an AngularJS application; there is literally one MVC controller that bootstraps the client side, and every other controller is a WebAPI controller.
Calls from the client side are handled by these controllers, which are always deployed to boxes that do nothing but host the Web Application. We currently have 4 such boxes.
However, these calls are then ultimately routed through to yet another set of WebAPI applications (typically these are per business area, such as security, customer data, product data, etc.). All of these WebAPIs get deployed together to dedicated boxes too; we also have 4 of these boxes.
With a single exception, these WebAPIs are not used by any other parts of our organisation.
Finally, these WebAPIs make yet another set of calls to the "back end" services, which are typically legacy ASMX or WCF services slapped on top of various ERP systems and data stores (over which we have no control).
What has me confused is what possible benefit there is in having this kind of separation between the Web Application and the WebAPIs that serve it. Since nobody else is using them, I don't see any scalability advantage (i.e. there is no point in spinning up another 4 API boxes to handle increased load, since increased load on the API servers must mean there is increased load on the web servers, so there has to be a 1:1 ratio of web server to API server).
I also don't see any benefit at all in having to make an extra HTTP call: Browser => HTTP => WebApp => HTTP => WebAPI => HTTP => Backend services. (The HTTP call between WebApp and WebAPI is my issue.)
So I am currently trying to push to have the current WebAPIs moved from separate solutions to simply being separate projects within the Web Application solution, with plain project references in between, and a single deployment model. So they would essentially just become class libraries.
Deployment-wise, this means we would have 8 "full stack" web boxes, instead of 4 + 4.
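To make the proposal concrete, here is a minimal sketch of the two call paths (written in Python purely for illustration; `ProductService`, `get_product`, and the DTO shapes are hypothetical names, not our actual code):

```python
import json

# Hypothetical mid-tier service, today hosted on a separate WebAPI box.
class ProductService:
    def get_product(self, product_id):
        # ...would call the legacy back-end (ASMX/WCF) here...
        return {"id": product_id, "name": "Widget"}

def simulate_http_round_trip(request_body):
    # Stands in for the real network hop between WebApp and WebAPI boxes.
    request = json.loads(request_body)
    response = ProductService().get_product(request["productId"])
    return json.dumps(response)

# Current setup: every call pays a serialise -> HTTP hop -> deserialise cycle.
def get_product_via_http(product_id):
    body = json.dumps({"productId": product_id})  # serialise request DTO
    wire = simulate_http_round_trip(body)         # extra network hop
    return json.loads(wire)                       # deserialise response DTO

# Proposed setup: the WebAPI becomes a class library referenced by the web
# application, so the call is a plain in-process method call.
def get_product_in_process(product_id):
    return ProductService().get_product(product_id)

print(get_product_via_http(42) == get_product_in_process(42))  # True
```

The result is identical either way; the question is only whether the serialisation and network hop in the middle are buying us anything.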
The benefits I see for the new approach are:
- A boost in performance, because there is one less cycle of serialisation/deserialisation between the web application and the WebAPI servers.
- A lot of code that can be deleted (i.e. that we no longer need to maintain/test), in terms of the DTOs and mappers at the outgoing and incoming boundaries of the Web Application and WebAPI servers respectively.
- Better ability to create meaningful automated integration tests, because I can simply mock the back-end services and avoid the messiness around the mid-tier HTTP hops.
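That last point can be sketched as follows (again Python for illustration, with made-up names like `CustomerService` and `FakeBackend`): once the mid-tier is an in-process dependency, an integration test only has to stub the true external boundary, the legacy back end, instead of standing up HTTP hosts for the mid-tier.

```python
class LegacyBackend:
    """Stands in for the ASMX/WCF services we don't control."""
    def fetch_customer(self, customer_id):
        raise RuntimeError("real ERP call; not reachable from a test run")

class CustomerService:
    """The former WebAPI, now just an in-process class library."""
    def __init__(self, backend):
        self.backend = backend  # back end injected, so tests can swap it

    def customer_summary(self, customer_id):
        customer = self.backend.fetch_customer(customer_id)
        return f"{customer['name']} ({customer['segment']})"

class FakeBackend:
    """Test double: the only thing mocked is the true external boundary."""
    def fetch_customer(self, customer_id):
        return {"name": "Acme Corp", "segment": "enterprise"}

# The test exercises web-app logic and mid-tier together, in one process,
# with no HTTP servers to stand up in between.
service = CustomerService(FakeBackend())
assert service.customer_summary(1) == "Acme Corp (enterprise)"
```

With the current architecture, the equivalent test would also need the mid-tier WebAPI running somewhere reachable over HTTP, which is exactly the messiness I want to avoid.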
So the question is: am I wrong? Have I missed some fundamental "magic" benefit of having separate Web Application and WebAPI boxes?
I've researched some N-Tier architecture material but can't seem to find anything in it that would provide a tangible benefit for our situation (since, as far as I can tell, scalability isn't a problem, and this is an internal application, so security with regard to the WebAPI applications is not a concern).
And also, what would I be losing in terms of benefits if I were to re-organise the system into my proposed setup?
2 Answers
One reason is security: if (haha! when) a hacker gains access to your front-end web server, he gets access to everything it has access to. If you've placed your middle tier on the web server, then he has access to everything it has, i.e. your DB, and the next thing you know, he has just run "select * from users" on your DB and taken it away for offline password cracking.
Another reason is scaling: the web tier, where the pages are constructed and mangled and XML processed and all that, takes a lot more resource than the middle tier, which can be an efficient way of getting data from the DB to the web tier. Not to mention serving all that static data that resides (or is cached) on the web server. Adding more web servers is a simple task once you're past 1. There shouldn't be a 1:1 ratio between web and logic tiers; I've seen 8:1 before now (and a 4:1 ratio of logic tier to DB). It depends what your tiers do, though, and how much caching goes on in them.
Websites don't really care about single-user performance, as they're built to scale; it doesn't matter that there's an extra call slowing things down a little if it means you can serve more users.
Another reason it can be good to have these layers is that it forces more discipline in development: an API is developed (and easily tested, since it is standalone) and then the UI is developed to consume it. I worked at a place that did this, with different teams developing different layers, and it worked well because they had specialists for each tier who could crank out changes really quickly, precisely because they didn't have to worry about the other tiers; e.g. a UI JavaScript dev could add a new section to the site simply by consuming a new web service someone else had developed.