
Reading notes on "cloud-native-applications-in-java": Digital Transformation

Chapter 12. Digital Transformation

The advent of cloud computing is affecting every aspect of the enterprise landscape. From core infrastructure to customer-facing applications, the enterprise environment is witnessing the impact of transformational forces. Some enterprises are early movers in these transformations, while others are still trying to figure out where to start and what to do. Depending on the maturity of the industry domain, the transformation journey can vary widely. Some domains are at the forefront of adopting technology trends (such as BFSI), while others wait for technologies to mature before adopting them (manufacturing, utilities). In this chapter, we will cover the following:

  • Mapping the application portfolio for digital transformation
  • Breaking an existing monolithic application into a distributed cloud-native application
  • Changes required at the process, people, and technology levels
  • Building your own platform services (control versus delegation)

Application portfolio rationalization


Decisions about digital transformation are usually mapped against the larger application portfolio. External forces such as customer centricity, better customer experience, compliance/regulatory requirements, the advent of cloud computing, and open source lead enterprises to examine their entire application landscape and identify areas that need improvement, enhancement, and rework.

The first step is to identify the opportunities or applications that need to be transformed for cloud deployment. In this step, we typically perform a holistic portfolio analysis across business and technical parameters. These parameters help provide a weighted score for the portfolio. Using the scores, we can map the applications into four quadrants. The quadrants help us determine where to focus and where we will see the most value.

Portfolio analysis – business and technical parameters

Each application is measured and scored against business and technical parameters.

The parameters for technical value are as follows:

  • IT standards compliance
  • Architecture standards compliance
  • Quality of service
  • Maintainability
  • Operational considerations
  • License/support cost
  • Infrastructure cost
  • Project/change cost
  • Application maintenance cost
  • Sourcing (insourcing/outsourcing)

The parameters for business value are as follows:

  • Financial impact
  • Application user impact
  • Customer impact
  • Criticality
  • Business alignment
  • Functional overlap/redundancy
  • Regulatory / compliance risk
  • Service failure risk
  • Product/vendor stability

You can score these parameters on a scale of 1-5 (one being the lowest and five being the highest).

Mapping these parameters helps us identify cost and complexity hotspots and segregate applications by business capability area. These application categories are further analyzed for interdependencies, touch points, integration points, and the underlying infrastructure. Using all of this, we can analyze the benefits and provide recommendations for the transformation roadmap. The next step is based on the business value and technical value; we plot the applications in one of the following quadrants:

[Figure: applications plotted by business value and technical value into four quadrants]

These scores help us provide a cost-benefit analysis at the application and portfolio level. They also help identify where functional overlaps exist (for example, due to merger and acquisition activity), where business and IT alignment is lacking, and where the business priorities lie. This helps determine where the investment opportunities are and which areas are potentially non-core.
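To make the scoring mechanics concrete, here is a minimal sketch of how the weighted scoring and the quadrant mapping (the dispositions are described in the sections that follow) might be implemented. The equal weights, the midpoint threshold of 3, and the class and parameter names are assumptions for illustration, not something prescribed by the book:

```java
import java.util.List;

// Hypothetical sketch: weighted portfolio scoring and quadrant mapping.
// The weights and the threshold of 3 are illustrative assumptions.
public class PortfolioScorer {

    enum Quadrant { RETIRE, RETAIN, CONSOLIDATE, TRANSFORM }

    record Score(String parameter, int value, double weight) { }   // value is on the 1-5 scale

    static double weightedScore(List<Score> scores) {
        double totalWeight = scores.stream().mapToDouble(Score::weight).sum();
        double weighted = scores.stream().mapToDouble(s -> s.value() * s.weight()).sum();
        return weighted / totalWeight;                              // stays on the 1-5 scale
    }

    static Quadrant quadrant(double businessValue, double technicalValue) {
        boolean highBusiness = businessValue >= 3.0;
        boolean highTechnical = technicalValue >= 3.0;
        if (!highBusiness && !highTechnical) return Quadrant.RETIRE;
        if (highBusiness && !highTechnical)  return Quadrant.RETAIN;
        if (!highBusiness)                   return Quadrant.CONSOLIDATE;  // high technical, low business
        return Quadrant.TRANSFORM;                                         // high on both
    }

    public static void main(String[] args) {
        List<Score> business = List.of(
                new Score("Financial impact", 4, 1.0),
                new Score("Customer impact", 5, 1.0));
        List<Score> technical = List.of(
                new Score("Maintainability", 2, 1.0),
                new Score("Infrastructure cost", 3, 1.0));
        System.out.println(quadrant(weightedScore(business), weightedScore(technical)));
    }
}
```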

Using the preceding basis, the applications in each quadrant can be further mapped to a disposition, as shown in the following diagram:

[Figure: mapping the quadrants to dispositions (Retire, Retain, Consolidate, Transform)]

These dispositions give us the rationalization opportunities for the applications across the areas, which we will discuss in the following sections.

Retire

All applications that fall into the low business value and low technical value category can be marked for retirement. These are typically applications that have lost relevance in a changing business context or whose functionality has been implemented in newer applications.

These applications are characterized by low usage and very low business risk. Such applications can also be identified by aggregating tickets against the applications and their usage. Applications with low usage and a low number of tickets are usually candidates for retirement.

Retain

All applications with low technical value but high business value fall into this category. The technical maturity may be low, but they provide significant value to the business. From an IT point of view, these applications are not costly to run. We can keep these applications running, as they still provide significant value to the business.

Consolidate

All applications with high technical value and low business value fall into this category. The high technical value may be due to high technology support costs, a lack of people with the relevant technical skills, a lack of documentation, and so on. The business can articulate the value of the application, but the current spend on these applications may not be justified. This set of applications needs to be migrated and consolidated to upgrade the technology currency.

Transform

These are applications with high technical value and high business value. This means the application has a large user base, multiple releases, a large number of tickets, and high infrastructure and support costs, but it still provides a significant advantage to the business. These applications are where the effort needs to go, as they provide significant differentiation for the organization.

Using the preceding approach, we can identify applications that are suitable for transformation. For our example, we take an existing Java/JEE application that currently runs on-premises and needs to be transformed to a distributed application design model.

Monolithic application to distributed cloud-native application


The advent of the J2EE specification, combined with application servers providing the necessary services, led to the design and development of monolithic applications:

[Figure: a typical monolithic application]

Some of the characteristics of a monolithic application and its ecosystem are:

  • Everything is packaged into a single .ear file. The single .ear file requires a multi-month test cycle, which results in a reduced velocity of change in production. Typically, there is a big push to production once or twice a year.
  • Application build complexity is very high with dependencies across various modules. At times, there is a clash between the versions of the JAR files used by the application.
  • Reuse between applications is primarily achieved by sharing JAR files.
  • Huge bug and feature databases—from a backlog perspective, there are many feature sets/bugs across the various application modules. At times, some of the backlogs might be at loggerheads with each other.
  • User acceptance criteria are usually undefined. There are some smoke tests, but by and large, new features and integrations are mostly visible only in production.
  • Requiring multiple team involvement and significant oversight (business team, architecture team, development team, testing team, operations teams, and so on) for design, development, and operations management. During the release cycles, coordinating between the various teams is a Herculean effort.
  • Technical debt accumulation over a period of time—as new features/functions are added to the application, the original design never undergoes any change/refactor to account for new requirements. This results in a lot of dead and duplicate code accumulating in the application.
  • Outdated runtimes (licenses, complex updates)—the application might be running on an older version of JVM, older application servers and/or database. Upgrade costs are high and usually very complex. Planning an upgrade means foregoing any feature release during that development cycle. The involvement of multiple teams requires complex project management models. The absence of regression test scripts makes this even worse.
  • A technical design-led approach is followed by the team. The architecture and design are frozen upfront, before development starts. As the application grows and new features/functions are added, there is no second look at the application architecture/design.
  • Barely any business components or domains are used. The application design is usually sliced horizontally based on the tiers (presentation tier, business tier, integration tier, and database tier) and on the customer/application flows into specific modules/patterns. For example, applications making use of the MVC pattern will create packages along the lines of model, views, and controllers, with value and common thrown in.
  • Usually, there is a single database schema for the entire application. There is no segregation of the functionality at the database level. The domains are linked to each other with foreign keys, and the database follows third normal form. The application design is usually bottom-up, with the DB schema determining the application database tier design.
  • The average enterprise application will have more than 500k lines of code, with plenty of boilerplate code. As the application grows, there will be plenty of dead and duplicate code in the source code base.
  • Applications are typically supported by heavyweight infrastructure—the scaling of the application is managed by adding more and more hardware. Server clustering is used to scale the application.
  • Thousands of test cases lead to increased time to run the regression test suite. At times, the release will skip the regression test suite to speed up the cycle.
  • The team size is higher than 20 in most of these projects.

We can see that, in the case of a monolithic application, business velocity and the rate of change are very low. This model may have worked 10 to 15 years ago. In today's competitive market, the ability to release features/functions at an incredible pace is paramount. You are competing not only with other large enterprises, but also with many smaller, more agile startups that do not carry the baggage of legacy applications, technologies, and processes.

Factors such as the rise of open source coming out of consumer internet companies and the growth of mobile devices have led to innovation in the application architecture space and to more distributed applications driven by microservices and reactive models. The monolithic application is broken down into a smaller set of applications/services.

Next, we will explore the key architectural concerns associated with distributed applications. We will see how these key concerns map to the overall application technical capabilities, and which capabilities should be bought and which should be built:

[Figure: architectural concerns of a distributed application]

Some of the characteristics of a distributed application and its ecosystem are:

  • Lightweight runtime containers: The advent of microservices correlates to the demise of the heavyweight JEE containers. As the applications morph into microservices with a singular purpose and loose coupling, there is a need to simplify the container managing the component lifecycle. The advent of Netty led to the development of the reactive framework that was just right for the purpose.
  • Transaction management: Another casualty of application simplification was transaction management. A bounded context means the services are not talking to multiple resources and trying to do a two-phase commit transaction. Patterns such as CQRS, Event Store, Multi Version Concurrency Control (MVCC), eventual consistency, and so on, helped simplify things and move the application to a model where locking resources is not needed.
  • Service scaling: Breaking the application allows individual services to be scaled up and out independently. Using the Pareto principle, 80% of the incoming traffic is handled by 20% of the services. The ability to scale these 20% services becomes a significant driver toward the higher availability SLAs.
  • Load balancing: Unlike the monolithic application, where the load balancing was between the application server cluster nodes, in the case of distributed applications, the load balancing is across the service instances (running in Docker-like containers). These service instances are stateless and can typically go up/down very frequently. The ability to discover which instances are active and which are not becomes a key feature of the load balancing.
  • Flexible deployment: One of the key abilities of the distributed architecture is moving from a rigid cluster deployment model to a more flexible deployment model (cattle versus pets), where the deployment instances are deployed as immutable instances. Orchestration engines such as Kubernetes allow the optimum utilization of the underlying resources and take away the pain of managing/deploying hundreds of instances.
  • Configuration: As the service instance becomes immutable, the service configuration is abstracted out of the services and held in a central repository (configuration management server). The service at boot time, or as part of the service initialization, picks up the configuration and starts in the available mode.
  • Service discovery: The use of stateless immutable service instances running over commodity hardware means the services can go up and down at any time. The clients invoking these services should be able to discover the service instances at runtime. This feature, along with load balancing, helps maintain the service availability. Some new products (such as Envoy) have merged service discovery with load balancing.
  • Service versions: As the services start getting consumers, there will be a need to upgrade the service contracts to accommodate new features/changes. In this case, running multiple versions of the service becomes paramount. You will need to worry about moving the existing consumers to a new service version.
  • Monitoring: Unlike the traditional monolithic monitoring that focused on infrastructure and application server monitoring, the distributed architecture requires monitoring at the transaction level as it flows through the various service instances. Application performance management (APM) tools such as AppDynamics, New Relic, and so on are used to monitor the transactions.
  • Event handling / messaging / asynchronous communication: Services do not talk to each other on a point-to-point basis. Services make use of asynchronous communication through events as a means to decouple them from each other. Some of the key messaging tools such as RabbitMQ, Kafka, and so on are used to bring asynchronous communication between the services.
  • Non-blocking I/O: The services themselves make use of non-blocking I/O models to get the maximum performance from the underlying resources. Reactive architecture is embraced by the microservices frameworks (the likes of the Play framework, Dropwizard, Vert.x, Reactor, and so on) used to build the underlying services.
  • Polyglot services: The advent of the distributed application and using APIs as integration allows the service instance to be built with best-of-breed technologies. Since the integration model is JSON over HTTP, the services can be polyglot, allowing the use of the right technologies to build the services. The services can also make use of different data stores based on the type of service requirements.
  • High-performance persistence: With the services owning their own data stores, the read/write services need to handle large volumes of concurrent requests. Patterns such as Command Query Responsibility Segregation (CQRS) allow us to segregate the read/write requests and move the data store to an eventual consistency model.
  • API management: Another key ingredient of the distributed architecture is the ability to abstract out concerns such as service throttling, authentication/authorization, transformation, reverse proxy, and so on and move to an external layer called API management.
  • Health check and recovery: Services implement health checks and recovery so that the load balancer can discover the healthy service instances and remove the unhealthy ones. The services implement a heartbeat mechanism that is used by the service discovery mechanism to track healthy/unhealthy services across the application landscape. A minimal health-check sketch follows this list.
  • Cross-service security: Service-to-service invocation needs to be secured. Data in motion can be protected by a secured communication (HTTPS) or by encrypting the data over the wire. The services can also use public/private keys to match which client services can call the other services.
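To make the health check and recovery concern concrete, here is a minimal sketch of a custom Spring Boot Actuator HealthIndicator that a load balancer or service registry could poll through the /actuator/health endpoint. The class name and the downstream dependency being checked are illustrative assumptions, not code from the book:

```java
import org.springframework.boot.actuate.health.Health;
import org.springframework.boot.actuate.health.HealthIndicator;
import org.springframework.stereotype.Component;

// Hypothetical example: a custom health indicator that a service registry or
// load balancer can poll to keep only healthy instances in rotation.
@Component
public class PaymentGatewayHealthIndicator implements HealthIndicator {

    @Override
    public Health health() {
        boolean reachable = pingPaymentGateway();   // assumed helper, e.g. a cheap connectivity probe
        if (reachable) {
            return Health.up().withDetail("paymentGateway", "reachable").build();
        }
        return Health.down().withDetail("paymentGateway", "unreachable").build();
    }

    private boolean pingPaymentGateway() {
        // Placeholder for a real connectivity check against the downstream system.
        return true;
    }
}
```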

We have seen some of the architectural concerns required to build a distributed application. To cover the scope of an entire application built as a set of microservices, we look at the following key architectural concerns across the various areas:

[Figure: key architectural concerns across the application areas]

For cloud-native applications, it is important to build the application using the SaaS/PaaS offerings provided by the cloud vendors. This model lets you focus on business functionality, increases the pace of innovation, and improves the customer experience. Unless technology is a key differentiator for the organization, running core infrastructure and platform services should be left to the experts. In cases where demand varies enormously, the elastic scale model of the cloud provides the leverage. I do not want to do marketing for the cloud vendors, but unless infrastructure is a significant aspect of your business, you should not be running infrastructure.

The only downside is that you get tied to the services offered by the cloud provider. Organizations are adopting multi-cloud vendor strategies, where they spread their applications across providers and make use of each cloud vendor's key differentiators. For example, GCP provides a rich library of analytics and machine learning capabilities; running your analytics workloads there to decipher meaningful insights and build machine learning (ML) models is one way of using best-of-breed capabilities. Similarly, for consumer-facing applications, AWS provides a rich set of PaaS services that can be used to launch and pivot customer-centric solutions.

Transformation of a monolithic application to a distributed application

In this section, we will take a monolithic application and look at the steps required to transform it into a distributed application.

We assume a typical Java application running on an application server, scaled through a clustering model, and using a typical RDBMS. The application is already in production and needs to be refactored/migrated to a distributed architecture.

We will discuss the multiple parallel tracks that need to work in tandem to refactor/roll out the distributed application. We will first cover the individual tracks and then bring them all together. In your organization, you can choose to have separate teams for each track, or one team managing multiple tracks. The idea is to give you an understanding of the activities involved in the actual transformation of a monolithic application.

Customer journey mapping to domain-driven design

A key driver for initiating digital transformation is to define new customer journeys and build new customer experiences. This customer-centric thinking is what drives enterprises to fund digital transformation initiatives. For our case, we can assume that the enterprise has approved the digital transformation initiative, and we pick it up from there.

From a service decomposition point of view, we need to follow the steps mentioned here:

[Figure: steps from customer journey mapping to service decomposition]
  • Customer experience journey mapping: One of the key drivers for digital transformation is defining new customer journeys. A customer experience journey is a map of the customer's interactions, from the initial contact point through the ongoing process of engagement. This exercise is typically done by specialists and involves customer focus studies, touch points, actors/systems involved, business requirements, and competition analysis, among other things. The customer journey is typically created as an infographic. A customer journey helps identify the gaps as the customer interactions move across devices, channels, or processes. It helps plug those gaps and identify means and ways to enhance the overall customer experience.
  • Deriving the domain models: The customer experience journey maps are mapped for the current and future requirements. These requirements then form the basis for the user stories. For new applications, the requirements can form the basis for functional decomposition of the system. In case of an existing application, the system might already be decomposed into identifiable domains/sub-domains. Once we have the requirements, we can start identifying the various sub-domains within the system. The domain model is documented using ubiquitous language. The whole idea is to use a language that is understood both by business and technology teams. The domains are modeled around entities and their functions. We also consider dependencies which interoperate among the functions. Usually, as a first pass, we end up with a big ball of mud, where all the known entities and functions have been identified. For smaller applications, the domain model might be the right size, but for larger applications, the big ball will need to be broken down further, and that's where the bounded context comes in.
  • Defining the bounded context: The big ball of mud needs to be broken down into smaller chunks for easy adoptability. Each of these smaller chunks or bounded contexts has its own business context that is built around a specific responsibility. Context can also be modeled around how the teams are organized or how the existing application code base is structured. There are no rules to define how the context is defined, but it is very important that everybody understands the boundary conditions. You can create context maps to map out the domain landscape and make sure that the bounded context is clearly defined and mapped. There are various patterns (for example, Shared Kernel, Conformist, Producer/Supplier, and so on) that can be applied to map out the bounded context.
  • Service decomposition: Using the bounded context, we can identify the teams that will work as part of one bounded context. They will focus on the services that need to be produced/consumed to provide functionality as part of the bounded context. The business capabilities are decomposed into individual microservices. The service can be decomposed based on the following principles:
    • Single responsibility: First and foremost is the scope of the service and the capability that will be exposed by the service
    • Independent: Changes in function/feature requirement should be limited to one service, allowing the one team to own and complete the same
    • Loose coupling: The services should be loosely coupled, allowing them to evolve independent of each other
  • Mapping the up/down stream service dependency: As the services are identified in each of the domains, the services can be mapped for dependencies. Core entity services that encapsulate the system of records are the upstream services. Changes from the upstream services are published as events that are subscribed to or consumed by the downstream services. A minimal sketch of this pattern follows the figure below.
[Figure: mapping the upstream and downstream service dependencies]
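As an illustration of an upstream entity service publishing a domain event that downstream services consume, here is a minimal, framework-free sketch. The order domain, the class names, and the in-memory subscriber list (standing in for a broker such as Kafka or RabbitMQ) are assumptions for illustration only:

```java
import java.math.BigDecimal;
import java.time.Instant;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Consumer;

// Hypothetical sketch of an upstream service in an "order" bounded context.
// It owns the Order entity (system of record) and publishes domain events that
// downstream services (billing, shipping, and so on) subscribe to.
public class OrderService {

    // Domain event emitted when an order is placed (upstream -> downstream).
    public record OrderPlaced(String orderId, BigDecimal amount, Instant placedAt) { }

    // Stand-in for a message broker such as Kafka or RabbitMQ.
    private final List<Consumer<OrderPlaced>> subscribers = new CopyOnWriteArrayList<>();

    public void subscribe(Consumer<OrderPlaced> downstreamConsumer) {
        subscribers.add(downstreamConsumer);
    }

    public void placeOrder(String orderId, BigDecimal amount) {
        // 1. Persist the order in the store owned by this bounded context (omitted here).
        // 2. Publish the event so downstream contexts can react asynchronously.
        OrderPlaced event = new OrderPlaced(orderId, amount, Instant.now());
        subscribers.forEach(subscriber -> subscriber.accept(event));
    }

    public static void main(String[] args) {
        OrderService orders = new OrderService();
        orders.subscribe(e -> System.out.println("billing context received " + e));
        orders.placeOrder("ORD-1", new BigDecimal("99.95"));
    }
}
```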

Defining the architecture runway

Business applications need to stand on the shoulders of a platform. The platform can be built or bought, depending on the needs of the business and the applications. The organization needs to define an intentional architecture model and define the guardrails to make sure teams build services within the given technical constraints. The platform team owns this overall architecture, picks the architectural and technical components, and helps build any common concerns required to run the application services successfully.

[Figure: steps for defining the architecture runway]
  • Platform architecture: One of the key ingredients of a successful distributed architecture is the underlying platform. One can choose to build the platform by using off-the-shelf, open source / commercial software (Red Hat OpenStack, Cloud Foundry, and so on) or can choose a strategic cloud provider (such as AWS, Azure) to start building the platform. The elastic nature of the underlying infrastructure (compute, network, and storage) provides the fundamental building blocks for the platform.
  • Tech selection, validation, and integration: To build the platform services, you might want to evaluate multiple sets of technologies to determine what works the best in your ecosystem. The tech stack evaluation is typically a multiple-step process where the requirements are mapped to the available technologies/products, and a detailed series of steps to validate is undertaken, resulting in a matrix with regards to the integration of the technologies.
  • Design decisions: The result of the technology evaluations is mapped to the underlying requirements, resulting in a matrix. This matrix is used to determine the best fit and help take a design decision. This step works in close conjunction with the previous step.
  • Environment setup: Once the key design decisions are in place, we need to start with the environment setup. Depending upon whether the choice is on-premises or the cloud, there will be variation in the setup and the related steps. You can start with the setup of the development, test, pre-production, and production environment. The environments are built in the order of complexity and go through multiple iterations (to move from manual to script/automated).
  • DevOps/Maven archetypes: Next, we start working on the continuous integration (CI) / continuous deployment (CD) part of the application build and deployment. For applications being developed in the Agile model, the CI/CD model helps do multiple releases in a day, and bring higher velocity to the entire process. We can also develop accelerators to aid the CI/CD process. For example, Maven archetypes that come with requisite bindings for creating the deployable artifact.
  • Platform services build: Next comes the set of platform services that need to be built/provided to the users of the platform. The services span application development (for example, queuing, workflows, API Gateways, email services, and so on), database (for example, NoSQL, RDBMS, Cache, and so on), DevOps tooling (for example, CI/CD tools, service registry, code repos, and so on), security (such as directory services, key management services, certificate management services, hardware security module (HSM), and so on), and data analytics (such as Cognitive Services, Data Pipelines, Data lake, and so on). You can buy these services from multiple vendors (such as Tiles, offered as part of Pivotal Cloud Foundry (PCF), or the Iron.io platform), subscribe to services provided by cloud vendors, or create your own platform services on top of products. A sketch of how an application might consume such platform-provided services follows this list.
  • Non-functional requirements (NFR) concerns: Once the key platform services are in place, and the first set of applications start getting onboarded to the platform, we need to start worrying about how to handle the NFR concerns of the applications. How will the application scale based on the incoming load, how to detect failures, how to maintain minimum threshold of the application, and so on. Again, you may want to integrate existing products to your platform that provide/support these NFR concerns.
  • Production concerns: Last of all, we need to start worrying about the production concerns such as service management, monitoring, security, and so on. We will need to build services and the requisite portals from an operations point of view to monitor, detect, and take appropriate actions in case of deviations/defined rules. The services are usually built with the organization's standards in mind. The services mature as more and more use cases are identified. The idea is to automate all possible operations to make sure the platform is ticking all the time, without any human intervention.
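As a small example of how an application might consume platform-provided backing services without hard-wiring them, the following sketch reads connection details from environment variables injected by the platform (a twelve-factor style approach). The variable names are assumptions, not part of the book's text:

```java
import java.util.Optional;

// Hypothetical twelve-factor-style configuration lookup: the platform (PaaS,
// Kubernetes, and so on) injects the backing-service coordinates as environment
// variables, so the same immutable artifact can run in every environment.
public class PlatformConfig {

    // The variable names below are illustrative assumptions.
    public static String databaseUrl() {
        return require("DATABASE_URL");
    }

    public static String messageBrokerUrl() {
        return require("BROKER_URL");
    }

    private static String require(String name) {
        return Optional.ofNullable(System.getenv(name))
                .orElseThrow(() -> new IllegalStateException(
                        "Missing required environment variable: " + name));
    }

    public static void main(String[] args) {
        System.out.println("Connecting to " + databaseUrl());
    }
}
```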

Developer build

Another key aspect of digital transformation is the focus on your existing teams that manage/maintain the existing applications. The teams need to be upgraded in terms of skills and technology to be able to refactor/build/deploy the existing application as a distributed application. We will cover the steps required to reskill the team to handle the distributed application story.

[Figure: steps in the developer build track]
  • Developer reskill/training: First and foremost is teaching developers new skills for the new application architecture techniques and design patterns. This means classroom training, online technology training, vendor product sessions/training, and so on. Another way to raise the skill of a team is to hire people with relevant skills and have them spearhead the overall development with support from existing developer teams. At times, you might want to have two teams—one that changes the business and a second that runs the business. In this case, the first business team brings the new skills to the team. The other business team manages and operates the existing application during the transformation period.
  • Dev machine upgrade and setup: The new technology stack requires upgrades of the developer machines. If the machines are running on 4 GB RAM, we might want to upgrade them to minimum of 8 GB RAM, better still 16 GB RAM. The newer stack requires virtual machines, Docker engine, IDEs, and other software for development and unit testing. Slower machines increase the time to build/test the code. Without adequate horse power, the developer is simply not productive enough.
  • Hands on lab / proof of concept: Once the machines are upgraded and developer training is done, the developer can start doing hands on lab and/or proof of concepts with the new technology stack to familiarize themselves with new development techniques. The developer can be given small projects or be involved as part of stack evaluation to enable them to become familiar with the technology stack. The work done by the developer team should be evaluated by an SME in the area to point out what they are doing wrong and the correct way of doing it. Having an external consultant (either SME or vendor consultant team) helps bridge this gap.
  • Code branching and configuration: Once the developer team is ready to start working on the distributed application, the next step is to branch off the code from the monolithic application. You may want to branch off the configuration data also. Remember, even with branching, the existing application maintenance continues on the main code trunk. The branch version is used to refactor the code. We will see more details in the next section.
  • Develop/build microservices: Once the code is branched and refactored, the developer should start packaging them as microservices. The team can also start creating new microservices that map to new requirements of the application. The code on the branch is regularly synced with the trunk to ensure changes made to the trunk are available in the branch code. Movement to specific PaaS services provided by the cloud vendor is also part of this phase. If you want to make use of services such as queuing or notification, or any of the other services, then this is the phase where you make the relevant changes.
  • CI/CD process of microservices: Developers will start creating pipelines for continuous integration and deployment of the microservices. Service dependencies are mapped out and considered. Various code analysis checks are run as part of the CI process to ensure production readiness of the code. Additional service governance processes can be built into the various stages of the pipeline.
  • Functional/integration test: Last but not least, developers will write functional and integration test suites to verify the correctness of the services. These test suites are integrated as part of the CI pipeline. As and when new code is deployed, these tests are run as part of the regression to ensure functional correctness. A minimal sketch of such a test follows this list.
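To make the last step concrete, here is a minimal sketch of an integration test that the CI pipeline could run against a deployed service. The base URL, the /actuator/health endpoint, and the use of JUnit 5 with the JDK HTTP client are assumptions for illustration:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertTrue;

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import org.junit.jupiter.api.Test;

// Hypothetical integration test run as part of the CI pipeline after a deployment.
// The system property would normally point at the test environment provisioned by the pipeline.
class OrderServiceIT {

    private static final String BASE_URL =
            System.getProperty("service.base.url", "http://localhost:8080");

    @Test
    void healthEndpointReportsUp() throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(
                URI.create(BASE_URL + "/actuator/health")).GET().build();

        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());

        assertEquals(200, response.statusCode());
        assertTrue(response.body().contains("UP"));
    }
}
```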

Breaking the monolithic application

One of the key steps of digital transformation is the actual refactoring of the monolithic application. In this case, we assume a Java-based application that needs to be refactored/decomposed into a distributed application:

[Figure: steps for breaking the monolithic application]
  • Initial state: Before we begin, we take the initial state of the monolithic application. In this state, the application is composed of a deployment unit (such as a WAR file), which is internally composed of multiple JAR files. The code is laid out in a logical manner, with some semblance of logical structuring across presentation, business, and data tiers. Each of the layers is further bifurcated by the modules or sub-packages modeled based on modules. If not, there is some distinction based on the class names to identify the modules. The configuration is stored as a set of external properties files. Code coverage is decent (more than 60%) and there is potential to write more test cases.
  • Code refactoring: The next step is to carve pieces of the code from the monolithic application that potentially go together. For example, classes across the module can be packaged as a separate Java project. Common files or utility classes can be packaged as separate JAR(s). As you refactor the code from a single code project, you will create multiple, interdependent Java projects. Package the JARs as part of the larger WAR or EAR file only. Remember, we are working on the master trunk of the code base. Changes are integrated and synchronized back on the branch code. Besides the code, you will also need to refactor the application configuration. As you refactor the code, the configuration needs to be mapped to the respective Java projects. The configuration might be specific to the project/module, be shared across modules, or global, which is used across the application.
  • Build process update: As you work on the code refactoring part, creating smaller independent Java projects, you will need to update your project build process. The Java projects need to be built in the order that they are dependent on each other. As you carve out the projects, the build process keeps going through iterations. The build process is updated in conjunction with the code refactoring steps. As the code gets refactored, the updated WAR/EAR needs to be deployed to production. This ensures that the code refactoring works, and other metrics—code coverage, unit test, regression test, and so on are factored in. This makes sure that the work you are doing gets incorporated on a daily basis to production.
  • Java version update: Multiple times, we have seen that the JVM version being used on the project might not be current. Some of the newer reactive frameworks usually work with Java 1.7 upwards. This means the base JVM version needs to be upgraded. This might require application code to be refactored for features that got deprecated. Some pieces of the code might need to be upgraded for newer features. The refactored code needs to go into production along with the upgraded JVM version.
  • Introducing circuit breaker / reactive patterns: The next step in the code refactoring is to upgrade the code for resiliency patterns. You can bring in patterns such as a circuit breaker by implementing a Java library such as Hystrix. You can also improve the code across the modules by implementing patterns such as decoupling the modules by implementing async messaging, bringing in reactive frameworks (such as Spring Boot, Vert.x, Dropwizard, and so on), and improving concurrency (such as Akka, RxJava, and so on). All the changes are to the production code and integrated with branch code.
  • Feature flag implementation: At times, you might be integrating code coming from the branch. In this case, you may not want some piece of code going live. You can introduce feature flags in the code, controlled through configuration. So you can take code into production which might be dead until the feature is ready to go live. A minimal sketch of a configuration-driven feature flag follows this list.
  • Ongoing functional updates: The application will be undergoing regular functional changes/updates. The changes are made to the code and synchronized back to the branched code on regular basis.
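The following is a minimal sketch of the kind of configuration-driven feature flag described above. The properties file name, the flag key, and the wrapped code paths are illustrative assumptions:

```java
import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

// Hypothetical feature-flag check driven by external configuration, so code that
// is not yet ready to go live can be shipped to production in a disabled state.
public class FeatureFlags {

    private final Properties properties = new Properties();

    public FeatureFlags(String resourceName) {
        try (InputStream in = FeatureFlags.class.getResourceAsStream(resourceName)) {
            if (in != null) {
                properties.load(in);
            }
        } catch (IOException e) {
            throw new IllegalStateException("Unable to load feature flags", e);
        }
    }

    public boolean isEnabled(String featureName) {
        return Boolean.parseBoolean(properties.getProperty(featureName, "false"));
    }

    public static void main(String[] args) {
        FeatureFlags flags = new FeatureFlags("/features.properties"); // assumed file name
        if (flags.isEnabled("new.checkout.flow")) {
            // New, refactored code path (dark-launched until the flag is flipped).
            System.out.println("Using the new checkout flow");
        } else {
            // Existing code path remains the default.
            System.out.println("Using the existing checkout flow");
        }
    }
}
```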

Bringing it all together

We have seen how the four tracks work in their individual capacities on the application. Now we bring all four tracks together in a collaborative fashion. As the monolithic application is transformed, the other tracks build the underlying platform used to carve out the bounded contexts and the associated microservices:

[Figure: the four tracks coming together]

We can see how the two tracks, change the business and run the business, overlap and provide the perfect balance for migrating from a monolithic model to a distributed application model.

This is akin to changing the tires on a moving car.

Building your own platform services (control versus delegation)


Another key decision for enterprises is how you choose your platform:

  • Should I be building my own platform?
  • Should I subscribe to an existing platform and develop my application on top of it?

This decision boils down to one factor: how do you view technology, as an enabler (delegation) or as a differentiator (control)?

In essence, all companies are technology companies. The question is whether controlling the technology gives you an additional edge over the competition or helps build a moat that can deter new players from entering. Let's take a couple of examples to see how this plays out:

  • If you are planning to compete with a company such as Amazon in the retail space, you need to have deep pockets. The low margin business of Amazon retail is bankrolled by the profitable business from AWS. So, unless you have a sugar daddy or alternate revenue models, competing with Amazon is not going to be easy. But assuming you have deep pockets, can you start modeling your retail platform on top of AWS or any of the cloud providers? Yes! You can start with any of the public cloud platforms and, once you have predictable demand, you can move into a private cloud model. This model saves you the upfront capex.
  • Let's take an example of a manufacturing domain that sells physical products. They can potentially augment their product with internet of things (IoT) devices that provide a regular stream of data about the performance and usage of the product. The company collects this data and provides analytics services (such as predictive maintenance) as digital services around these products. Now, you can model and build the analytics model on any of the cloud providers. The choice of the platform can be determined by the choice of cognitive or data churning capabilities. You can choose the cognitive services from the platform or even create your own. The underlying platform capabilities are delegated to the cloud provider. You focus on building the right model to predict.

There is no right or wrong model. You can initially start with delegation (going with a public cloud provider) and later move to a control model (private cloud), where you have full control over the features/functions of the application. Moving to a cloud provider model is easy, without a large upfront investment or lock-in. The idea is to identify where the differentiation lies!

Summary


This brings us to the end of digital transformation. We saw how we need to evaluate our application portfolio for transformation opportunities. We saw the reasons why monolithic applications become a hindrance to achieving our business goals.

Once the transformation opportunity is identified, we can take the existing monolithic application and move to a distributed application model. We saw the various steps that need to be taken across the people, process, and technology levels.

This also brings to a close the overall journey of building cloud-native applications in Java. We saw the various tools/technologies for building new-age microservices-based applications, how to build them, how to take these applications to production, how to monitor them, and how to adapt these applications for cloud providers such as AWS and Azure. We also saw some of the best practices for building API-based platforms, and how to transform an existing monolithic application into a microservices-based distributed application.