Exercise 8 PR #95
@@ -1,6 +1,6 @@
-# 2. Seperate service for Executors
+# 2. Separate service for Executors and Executor Pool
 
-Date: 2021-10-18
+Date: 2021-11-21
 
 ## Status
@@ -8,14 +8,16 @@ Accepted
 
 ## Context
 
-The users need to be able to add new executors to the executor pool. The functionality of the executor is currently unknown.
+The executor pool keeps a complete list of all executors and knows whether they are available or not; executors can execute tasks that match their type. The executors could therefore be part of the executor pool service, or each executor could be a standalone service, as could the executor pool.
 
 ## Decision
 
-We will use a separate microservice for each executor.
-Different executors can have different execution times and a different load. This means the executors scale differently. Thus, we need a separate service for each executor.
-New executors will be added/removed during runtime. Therefore, we need a high extensibility.
+We will use a separate microservice for each executor and one service for the executor pool.
+Having the executor pool and the executors as separate services increases fault tolerance. If the executor pool goes down, the executors stay online and keep executing their tasks, unaffected by the pool's outage. Likewise, if an executor goes down, it does not impact the other executors or the executor pool.
+Executors of different kinds will also scale differently than the executor pool, and executors of new types might be added at some point, further increasing the need for separate services to guarantee scalability and evolvability.
+Different executors can have different execution times and different loads. This means the executors scale differently.
 
 ## Consequences
 
-Having executors as its own service we can deploy new executors independently and easily add new executors during runtime and guarantee high scalability as well as evolvability.
+Executors will be added and removed quite frequently, so deployment of the system becomes easier and less risk-prone if each executor is a separate service, also separated from the executor pool, which only keeps track of the executors and their status. However, with these separate services the complexity might increase and the testability of the system will decrease.
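The registry behaviour described in the new context (a pool that tracks every executor, its task type, and its availability, with executors added and removed at runtime) can be sketched minimally as follows. The names `Executor`, `ExecutorPool`, and the task types are illustrative assumptions, not the project's actual API:

```python
from dataclasses import dataclass, field

@dataclass
class Executor:
    # Hypothetical executor record: the pool only tracks type and availability.
    executor_id: str
    task_type: str
    available: bool = True

@dataclass
class ExecutorPool:
    # The pool keeps a complete list of all executors and their status.
    executors: dict = field(default_factory=dict)

    def register(self, ex: Executor) -> None:
        # New executors can be added during runtime.
        self.executors[ex.executor_id] = ex

    def remove(self, executor_id: str) -> None:
        self.executors.pop(executor_id, None)

    def find_available(self, task_type: str):
        # Return some executor whose type matches the task, if one is free.
        return next(
            (e for e in self.executors.values()
             if e.task_type == task_type and e.available),
            None,
        )

pool = ExecutorPool()
pool.register(Executor("exec-1", "PRINT"))
pool.register(Executor("exec-2", "SCAN", available=False))
```

In the decided architecture this registry would live in the executor pool service, while each `Executor` would be its own deployable service that merely announces itself here.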
@@ -1,21 +0,0 @@
-# 3. Seperate service for assignment domain
-
-Date: 2021-10-18
-
-## Status
-
-Accepted
-
-## Context
-
-The Assignment Service handles the assignment of a task to a corresponding and available executor. It keeps track of all the connections between tasks and executors.
-
-## Decision
-
-The assignment domain will be its own service.
-The assignment service will be a central point in our application. It will have most of the business logic in it and will communicate with all the different services. Therefore, other services can be kind of “dumb” and only need to focus on their simple tasks.
-The code of the assignment will change more often than the code of the other services, thus having the assignment service split from the other makes it more deployable.
-
-## Consequences
-
-Having this system as its own service we reduce the Fault tolerance because the assignment service can be the single point of failure. We can mitigate this risk by implementing (server) replication and/or having an event driven communication with persisting messages. Therefore, all other services can run independently, and the assignment service can recover from a crash.
@@ -0,0 +1,21 @@
+# 3. Separate service for the Roster
+
+Date: 2021-11-21
+
+## Status
+
+Accepted
+
+## Context
+
+The roster acts as an orchestrator for the system. It communicates directly with the task list, the executors, the executor pool, and the auction house. It handles the assignment of a task to a corresponding and available executor, keeps track of all the connections between tasks and executors, and communicates the status of tasks and executors to other services.
+
+## Decision
+
+The Roster domain will be its own service.
+The Roster service will be a central point in our application. It will have most of the workflow logic in it and will communicate with all the different services. Therefore, other services can focus on their business logic and be largely ignorant of the overall workflow.
+The code of the Roster will change more often than the code of the other services; thus, splitting the Roster service from the others makes it more deployable.
+
+## Consequences
+
+Having this system as its own service will reduce fault tolerance, because the Roster service can become a single point of failure. We can mitigate this risk by implementing (server) replication and/or event-driven communication with persisted messages. Then all other services can run independently, and the Roster service can recover from a crash. Additionally, we need to ensure a high level of interoperability, since the roster has to communicate with all other parts of the system.
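The Roster's orchestration role described above (find a matching, available executor and record the task-executor connection) can be sketched as follows. The class names and the in-memory stub standing in for the executor pool service are assumptions for illustration, not the project's code:

```python
class StubPool:
    # Stand-in for the executor pool service; maps executor id -> [type, free].
    def __init__(self):
        self.executors = {"exec-1": ["PRINT", True]}

    def find_available(self, task_type):
        for eid, (etype, free) in self.executors.items():
            if etype == task_type and free:
                return eid
        return None

class Roster:
    # The roster assigns tasks to matching executors and keeps track of
    # all connections between tasks and executors.
    def __init__(self, pool):
        self.pool = pool
        self.assignments = {}   # task_id -> executor_id

    def assign(self, task_id, task_type):
        eid = self.pool.find_available(task_type)
        if eid is None:
            return None          # no internal executor: a case for the auction house
        self.pool.executors[eid][1] = False   # mark the executor as busy
        self.assignments[task_id] = eid
        return eid

roster = Roster(StubPool())
```

In the real system each call here would be inter-service communication, which is exactly why the Roster concentrates the workflow logic while the other services stay simple.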
@@ -1,6 +1,6 @@
-# 4. Seperate service for executor pool
+# 4. Separate service for the Task List
 
-Date: 2021-10-18
+Date: 2021-11-21
 
 ## Status
@@ -8,14 +8,13 @@ Accepted
 
 ## Context
 
-The Executor pool keeps track of the connected executors and their purpose and status.
+Tasks are created in the task list, and the status of each task (created, assigned, executing, executed) is tracked there as well. The task list mainly communicates with the roster so that tasks can be assigned, and the roster gives the task list feedback about each task's status.
 
 ## Decision
 
-We will have a separate service for the executor pool.
-There are no other domains which share the same or similar functionality.
-The executor pool also scales differently than other services.
+The task list will be its own service.
+The task list needs to scale with the number of active users and the intensity of their activity at any time, while the scaling of other parts of the system can be constrained by other factors.
 
 ## Consequences
 
-Having the executor pool as a separate service will help with the deployability of this service but will make the overall structure more complex and reduces testability.
+Although having the task list as its own service might slightly increase the complexity of the system and decrease its testability, it also makes the system easier to deploy and protects the task data. To ensure that this data is always available and does not get lost, the task list needs to be able to recover all its data (the entire history of all tasks) in case it goes down.
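The task statuses named in the context (created, assigned, executing, executed) suggest a simple state machine inside the task list. The linear transition rule below is an assumption for illustration, not taken from the project's code:

```python
from enum import Enum

class TaskStatus(Enum):
    CREATED = "created"
    ASSIGNED = "assigned"
    EXECUTING = "executing"
    EXECUTED = "executed"

# Assumed lifecycle: created -> assigned -> executing -> executed.
ALLOWED = {
    TaskStatus.CREATED: {TaskStatus.ASSIGNED},
    TaskStatus.ASSIGNED: {TaskStatus.EXECUTING},
    TaskStatus.EXECUTING: {TaskStatus.EXECUTED},
    TaskStatus.EXECUTED: set(),
}

def advance(current: TaskStatus, new: TaskStatus) -> TaskStatus:
    # The task list would apply the roster's status feedback through
    # a check like this, rejecting illegal transitions.
    if new not in ALLOWED[current]:
        raise ValueError(f"illegal transition {current} -> {new}")
    return new
```

Keeping this state machine in one service is part of the argument for a standalone task list: no other service needs to know which transitions are legal.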
@@ -4,7 +4,7 @@ Date: 2021-10-18
 
 ## Status
 
-Accepted
+Superseded by [8. Switch to an event-driven microservices architecture](0008-switch-to-an-event-driven-microservices-architecture.md)
 
 ## Context
 
@@ -0,0 +1,22 @@
+# 7. Separate service for Auction House
+
+Date: 2021-11-21
+
+## Status
+
+Accepted
+
+## Context
+
+The auction house is the service that can connect to other groups’ auction houses. If there is a task whose task type does not match that of our executors, the auction house can start an auction in which other groups bid on doing the task for us. It can also bid on other groups’ auctions.
+
+## Decision
+
+The auction house will be its own service.
+The auction house is the only part of our system with external communication; therefore, it makes sense to run it as its own service, which also guarantees better deployability.
+The auction house does not scale directly with the total number of tasks, but only with the proportion that needs external executors. Moreover, there could be limits on the number of auctions that can be started. Therefore, the auction house scales differently from the other services.
+Having the auction house as its own service also improves the fault tolerance of our system.
+
+## Consequences
+
+Since the auction house will be a standalone service, we have to make sure that if it goes down, it can recover its data in some way (which auctions it has launched, which auctions it has placed bids on or even won, etc.). Even though the testability and latency of our system might worsen with a separate auction house service, it becomes much easier to implement different kinds of communication for internal and external traffic.
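The routing rule implied by this context (tasks whose type matches an internal executor are handled by us; everything else goes to the auction house) can be stated in a few lines. The set of internal task types and the destination names are hypothetical:

```python
# Assumed capabilities of our own executors; in the real system this would
# be looked up in the executor pool, not hard-coded.
INTERNAL_TYPES = {"PRINT", "SCAN"}

def route(task_type: str) -> str:
    # Matching tasks stay internal; the rest are auctioned off externally.
    if task_type in INTERNAL_TYPES:
        return "roster"
    return "auction-house"
```

This split is also why the auction house scales with the proportion of external tasks rather than with the total task volume.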
@@ -0,0 +1,26 @@
+# 8. Switch to an event-driven microservices architecture
+
+Date: 2021-11-21
+
+## Status
+
+Proposed
+
+Supersedes [5. Event driven communication](0005-event-driven-communication.md)
+
+## Context
+
+Our Tapas App is currently implemented as a microservice architecture in which the services communicate synchronously via request-response. Each service encapsulates a different bounded context with different functional and non-functional requirements. Internal communication could also be done using asynchronous, event-driven communication.
+
+## Decision
+
+Pros:
+Scalability: Different services within the Tapas app do not always scale at the same rate. For example, we could have thousands of users adding printing tasks at the same time but only one printer. In this scenario we might want to scale up the task-list service to handle the creation load, while scaling up the printing executor operates on a different time scale (adding a printer takes time). Likewise, many new tasks could come in, most of which can be executed internally; then we want to scale up the task list but might not need to scale up the auction house. Event-driven communication would decrease the coupling of services. Consequently, the scalability of individual services would improve, as they would no longer depend on the scalability of other services. This improves the app's overall scalability, and since scalability is one of the system's top-3 "-ilities", this seems quite important.
+Fault tolerance: Another of the system's top-3 "-ilities" is fault tolerance. We could have highly unstable IoT executors that fail often; this should not disrupt the system's overall operation. The decoupling provided by event-driven, asynchronous communication ensures that when individual services go down, the impact on other services is limited, and once they come back up they can recover the system's state from persisted messages.
+Cons:
+Error handling, workflow control, and event timing:
+These topics outline the drawbacks of an event-driven architecture. They can be mitigated by using an orchestrator (as we currently do with the roster) to orchestrate the assignment of tasks, the auctioning-off of tasks, and error handling when executors fail. More research is needed.
+
+## Consequences
+
+The consequences are still to be determined, but they will relate to the three concepts listed as cons.
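The decoupling-and-recovery argument above can be sketched with a minimal in-process event bus. A real deployment would use a message broker, and delivery here is synchronous for brevity, so this is only an illustration of the idea, not a proposed implementation:

```python
from collections import defaultdict

class EventBus:
    # Publishers append events to a persisted log, then subscribers are
    # notified; a crashed service can rebuild its state via replay().
    def __init__(self):
        self.log = []                        # stand-in for persisted messages
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, payload):
        self.log.append((topic, payload))    # persist first, then deliver
        for handler in self.subscribers[topic]:
            handler(payload)

    def replay(self, topic, handler):
        # Recovery path: re-deliver every persisted event on a topic.
        for t, payload in self.log:
            if t == topic:
                handler(payload)

bus = EventBus()
seen = []
bus.subscribe("task.created", seen.append)
bus.publish("task.created", {"task": "t1"})
bus.publish("task.created", {"task": "t2"})
```

Because the publisher never calls a subscriber directly, a slow or crashed subscriber does not block task creation, which is the decoupling the pros above rely on.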