The goal of a Service Oriented Architecture (SOA) is to achieve reusable assets in an agile manner that is aligned with the business needs of the organization.
The SOA Reference Architecture provides guidelines for making architectural and implementation decisions. To that end, it serves as a blueprint for creating or evaluating a solution architecture for different groups within the organization. In addition, it provides insight and patterns for integrating the different elements of an SOA, as illustrated by the different layers of an SOA.
An SOA Reference Architecture is designed to answer some of the key questions and issues encountered while developing a solution such as:
To answer these questions this document provides the following:
This document presents the layers, building blocks, architectural and design decisions, patterns, options and the separation of concerns needed to design or evaluate an SOA architecture.
The architecture consists of multiple components, which are divided into the following categories:
Design-Time Activities are the technical activities, guidelines, and deliverables that allow for everyday development within the SOA and Integration Architecture. These include:
Services must be modeled and designed to support an extensible plug-and-play model by being:
To enable the above qualities, each Service (except for Data and Utility Services) needs to have:
To ensure that each type of Shared Service is designed and implemented in a uniform manner, a SOA Governance team should create SOA Best Practices documents for: Service Design, Web Service Design, and XML Message Design in Designer's Guides (i.e., working agreements). Topics would include:
Many organizations have several autonomous packaged and custom-developed systems that have evolved independently of one another. Often, each system has developed its own implementation methodology, culture, processes, business rules, and vocabulary. Many companies face the problem of using business data to enable applications to communicate in a distributed systems environment.
Organizations design Canonical Data Models (CDM) to help analyze the message exchange within the organization and with their trading partners. Data is embedded in the basic architecture of any organization. To develop the CDM, an organization must inventory business concepts and map the vocabulary into fundamental business concepts. The CDM provides a framework for integrating the disparate terms for each line of business. The CDM is a catalog of neutral terms defined in an XML Schema, including:
A CDM defines an organization's data in motion - the XML messages exchanged between:
A Canonical Data Model provides the following benefits:
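To make the term-mapping idea behind a CDM concrete, the following sketch (a real CDM would be defined in XML Schema; the line-of-business field names and canonical element names here are purely illustrative) translates two disparate LOB vocabularies into the same canonical XML message:

```python
import xml.etree.ElementTree as ET

# Hypothetical mappings from line-of-business vocabularies to canonical terms.
CANONICAL_MAP = {
    "billing": {"cust_no": "CustomerId", "acct": "AccountNumber"},
    "claims":  {"client_id": "CustomerId", "account_num": "AccountNumber"},
}

def to_canonical(lob: str, record: dict) -> ET.Element:
    """Translate an LOB-specific record into a canonical XML message."""
    msg = ET.Element("Customer")
    for field, value in record.items():
        canonical_name = CANONICAL_MAP[lob][field]
        ET.SubElement(msg, canonical_name).text = str(value)
    return msg

billing_msg = to_canonical("billing", {"cust_no": "C-100", "acct": "A-200"})
claims_msg = to_canonical("claims", {"client_id": "C-100", "account_num": "A-200"})
# Both LOBs now produce identical canonical messages.
print(ET.tostring(billing_msg) == ET.tostring(claims_msg))  # -> True
```

Both sides of the exchange can then be developed and changed independently, as long as each maps its own vocabulary to the neutral canonical terms.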
Policies provide service-level meta-data to Web Services consumers. Specifying these policies is an important step in Web Service design, and this is accomplished by attaching policy expressions to the WSDL. Specifically, these policies include:
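As an illustration of building a policy expression to attach, the sketch below constructs a wsp:Policy element programmatically. The namespace URIs follow the W3C WS-Policy recommendation and the OASIS WS-Security utility schema, but the RequireTransportSecurity assertion itself is hypothetical:

```python
import xml.etree.ElementTree as ET

# Namespace for the W3C WS-Policy recommendation and the OASIS wsu:Id attribute.
WSP = "http://www.w3.org/ns/ws-policy"
WSU = ("http://docs.oasis-open.org/wss/2004/01/"
       "oasis-200401-wss-wssecurity-utility-1.0.xsd")

def build_policy(policy_id: str) -> ET.Element:
    """Build a wsp:Policy with one alternative containing one assertion."""
    policy = ET.Element(f"{{{WSP}}}Policy", {f"{{{WSU}}}Id": policy_id})
    all_alt = ET.SubElement(policy, f"{{{WSP}}}All")
    # Hypothetical assertion requiring transport-level security.
    ET.SubElement(all_alt, "RequireTransportSecurity")
    return policy

policy = build_policy("SecureMessaging")
xml = ET.tostring(policy, encoding="unicode")
print("RequireTransportSecurity" in xml)  # -> True
```

In practice the resulting wsp:Policy would be embedded in (or referenced from) the WSDL so that consumers can discover the service's requirements before invoking it.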
A Business Analyst first develops a Business Process Diagram (using BPMN - Business Process Modeling Notation) to model a business process at a business/requirements level. Then, designers and developers create a runtime model in either BPEL (Business Process Execution Language) or JBI (Java Business Integration) to:
Designers use MEPs (Message-Exchange Patterns) to model the runtime characteristics of a business process. Typical MEPs include:
These patterns are gaining in popularity and many of the newer ESB and BPMS products support executable MEP models. Please see the following for further information:
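The two most common MEPs — one-way (in-only) and request-response (in-out) — can be sketched with in-process queues standing in for a message broker; the message names are illustrative:

```python
import queue
import threading

inbox = queue.Queue()

def provider():
    """Service provider: replies only when the message carries a reply queue."""
    while True:
        msg, reply_to = inbox.get()
        if msg is None:                      # shutdown signal
            break
        if reply_to is not None:             # in-out: send a correlated reply
            reply_to.put(f"ack:{msg}")

t = threading.Thread(target=provider)
t.start()

inbox.put(("audit-event", None))             # in-only MEP: no reply expected

reply_q = queue.Queue()
inbox.put(("get-balance", reply_q))          # in-out MEP: wait for the reply
reply = reply_q.get(timeout=5)
print(reply)                                 # -> ack:get-balance

inbox.put((None, None))                      # stop the provider
t.join()
```

The in-only case suits notifications and audit events; the in-out case models the synchronous request-response interactions typical of query services.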
Most Shared Services will be implemented as Web Services, and it is important to take a Test-Driven Development (TDD) approach to Web Services development because:
Service Testing includes:
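A minimal test-first sketch, assuming a hypothetical quote_premium operation; in a real TDD cycle the tests are written before the implementation, and would invoke the deployed Web Service (SOAP or REST) rather than call the function in-process:

```python
import unittest

# Hypothetical service operation under test.
def quote_premium(age: int, coverage: int) -> float:
    if age < 0 or coverage <= 0:
        raise ValueError("invalid input")
    return coverage * (0.01 + 0.001 * age)

class QuotePremiumTest(unittest.TestCase):
    def test_typical_quote(self):
        # Encodes the expected contract behavior before implementation.
        self.assertAlmostEqual(quote_premium(30, 10000), 400.0)

    def test_boundary_rejects_bad_input(self):
        # Boundary conditions are part of the service contract too.
        with self.assertRaises(ValueError):
            quote_premium(-1, 10000)

if __name__ == "__main__":
    unittest.main(argv=["service-tests"], exit=False)
```

Once in place, the same suite doubles as the regression suite that protects existing consumers when the service changes.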
Service Identification is one of the key steps in designing a Service-Oriented solution because it defines and identifies high-level Business Services by using the following approaches:
A Middle-out approach is recommended because it takes the best parts of the other approaches:
Service implementations are software like any other software module or application. As such, they go through a similar lifecycle. The service lifecycle is depicted in the following diagram.
The lifecycle starts with Service Identification. Services are part of more general business processes. As such, new services are usually identified by the design of a new business process. This is not the only way that new services are identified however. Sometimes services are identified as part of portfolio management. This involves analyzing the requirements for a service across the breadth of the enterprise. The level of analysis required for this type of identification is difficult to gauge and therefore CIBER does not recommend using portfolio management for service discovery at this time.
Once a service has been identified, it follows an iterative development process. Iterative processes use feedback from subsequent phases to make corrections in previous phases based on lessons learned or issues that crop up. Services are part of a greater distributed system, however, and any changes made can have an impact on other development efforts. Therefore it is important to be mindful of the effect a change can have, even during development of a service.
The Service Specification and Design phase produces, at a minimum, the specification for the service interface. This interface specification includes the semantics and data that the service supports, and represents the contract between the service and its consumers. Diligence should be applied to the design of this interface, because interface changes have the greatest impact on subsequent phases: they can affect clients developed in parallel, the implementation of the service itself, and the test plans created to test the service. This does not mean that a service interface must never be changed once it is designed. Designs are not perfect, and in today's IT environment it is not always possible to take the time required to produce an ideal interface definition. Therefore it is prudent to put processes in place that anticipate that service interfaces may change. When implementing a service, assume the interface may change and, to the extent possible, minimize the impact of that change. This also applies to implementers of consumers of the service. In some cases, consumers may want to wait until the service has been through several rounds of testing before starting their implementation. This builds experience with using the service and thereby (hopefully) minimizes the possibility of change to the interface.
Once the service has been designed it proceeds to the Service Implementation phase. In this phase the service will be developed based upon architectural standards developed by the university OIT group. Any issues encountered trying to implement the design of the service should be fed back to the designer in order to refine the design if necessary.
Finally, services need to be tested before being deployed into production use. Testing of services involves four primary areas of focus:
Security testing is essential to assess the risk of a service with regard to vulnerability, data privacy and data integrity. Tests need to be developed to test boundary conditions which can assess the robustness of the service handling inputs outside the range of anticipated values. Tests should also be created that ensure the service performs as expected based on the roles as defined within the system.
This area of testing focuses on ensuring the service performs its function according to the requirements of the business process it supports. Automated test suites should be developed to perform regression testing, so that functionality can be quickly verified as life cycle changes occur.
This area of testing focuses on performance characteristics of the service including measurement of time to perform the service and load testing of the service. The output from this type of testing forms the basis of understanding how to configure and deploy the service in a production environment.
This area of testing ensures the service adheres to its service specification. Early identification of interoperability issues is key to integration of the service through exposure to university partners and clients. This type of testing is especially important when the service interacts with multiple data sources and/or systems.
One of the more important types of testing related to usage of services is regression testing. The more applications that depend on a given service, the more impact a change to that service can have on the environment. As such, when changes are made to services, regression testing must be undertaken to ensure that the service not only supports new or updated functions but all other functions upon which the service relies.
Once the service is ready it is deployed into production use. At this point the service enters the maintenance cycle as opposed to the development cycle. Services in production sometimes require change. The change may be due to a defect in the system or a request to add more functionality to the service. It is important to understand how this change should be handled based on its nature.
Defects that are the result of a fault in the underlying implementation of the service and do not change the behavior of the service may be handled by a Defect Remediation process. In this case it is desirable to apply a fix to the existing service as quickly as possible, as the defect may be affecting multiple university processes and disrupting the university's ability to perform its function.
Changes that modify the behavior of an existing service or its interface are best handled by identifying a new service or a separate version of the service. In this scenario, versions are really new services that are separate from the original. Taking this route minimizes the impact on consumers using the original service, but can cause a proliferation of services within the enterprise. Such proliferation may not always be desirable, so some effort should be put into defining conditions under which an existing service in production may be modified (for instance, only one business consumer is affected and the change is well understood). These conditions should be documented so that everyone understands them and changes are handled in a consistent manner.
Service Withdrawal is the final step of the lifecycle. Eventually the number of services will balloon (especially when changing production services leads to the creation of new services) and some services will fall out of use. Removing a service can be problematic, as it may support multiple business processes owned by different colleges and departments. A procedure needs to be adopted to define an orderly withdrawal of services from the system. Such a procedure may start with deprecating the service (with a note explaining why and a suggestion for a replacement service). Next, services can be monitored for use and their consumers identified. Finally, if the service is still being used, the consumers should be contacted to discuss a solution. This should result in a schedule for the consumers to switch over to another service so that the service may be withdrawn.
Consumers of a deprecated service may not have incentive to change. Making changes requires effort and possibly some risk to the consumer. This needs to be recognized when seeking collaboration from consumers to switch over to another service. In this case it will be necessary to be creative in coming up with a common understanding of the benefit of switching to a newer service so the deprecated one may be withdrawn.
Most applications and systems use some form of logging that stores messages to a persistent medium (DBMS, file, etc.). Logging provides the following benefits:
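A minimal sketch of service logging using Python's standard logging module; the service name, message fields, and file name are illustrative:

```python
import logging

# Each service writes structured records to a shared log file so operations
# staff can trace requests across services and diagnose failures.
logging.basicConfig(
    filename="service.log",
    level=logging.INFO,
    format="%(asctime)s %(name)s %(levelname)s %(message)s",
    force=True,   # replace any handlers configured elsewhere
)

log = logging.getLogger("CustomerService")
log.info("getCustomer start id=%s", "C-100")
log.warning("getCustomer slow response elapsed_ms=%d", 2300)
```

Consistent record formats (timestamp, service name, level, message) make it possible to correlate entries from many services when investigating a production incident.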
Identity Management is the ability to identify a requestor's (person or system) true identity and relationships between people and organizations (groups). The current policies and level of enforcement are expected to continue into Managed Services by using Microsoft's Active Directory as the source for identity management.
Confidentiality assures that data in transport is never visible to, accessible by, or viewed by anyone other than authorized recipients.
There currently is no Smart Energy or Smart Grid requirement to secure messages for Confidentiality.
Authentication is also done within Microsoft's Active Directory and adds authorization policies to the verified identity.
Authorization is currently done at the application level. Within Managed Services (stage MS3) authorization will be performed to determine whether the requestor (person, application, service) is authorized to access the requested service, data and even the Managed Data Repository.
Integrity requires that there are no unauthorized modifications of the content of a message during transport, or even as it is read by the sender.
Non-repudiation assures the sending service that the receiving service has received the intended message and cannot deny having done so. Current Smart Energy and Smart Grid projected needs do not require message Integrity, but Non-repudiation is required for sensitive (time-, confidentiality-, event-, or priority-critical) messages.
Service Consumers are the end consumer/user of the services provided by an enterprise. The consumer has the flexibility to process and display useful and relevant information provided by Services. Service Consumers access Services through a consistent interface (or contract) exposed by that Service. Service Consumers can be:
Users can access enterprise services through a variety of mechanisms, including Portals, web sites, or PDAs. A web site provides a web-based interface to enable users to perform daily, job-related tasks. Web sites leverage the services provided by the organization as a part of its SOA implementation, and do not contain any business logic themselves. The site seamlessly integrates with back-end services (using SOAP/WSDL, REST/JSON) and business processes. The site can be secured by global security policies, and can also include role-based authentication that limits access to only relevant information, making it easier to manage.
A Portal is a web site that enables users to access highly personalized information and services. It can increase the productivity and effectiveness of employees within an organization through a consolidated view of available services and information. Typically the site is highly interactive, allowing the user to run a wide variety of tools and functions such as, global search queries, and custom dashboard and advanced business intelligence tools. Portals provide a unified entry point to the organization and provide common look and feel all across applications. Portals form the front end for business processes and custom applications created as composite applications. The site can also be utilized to "mash-up" other applications or services from 3rd party sites. Wikis, Blogs, RSS feeds, and content can all be made available within the site.
A PDA enables mobile users to access enterprise services. Like sites and portals, a PDA has no business logic of its own, but it allows the end user to interact with back-end services by seamlessly accessing the Web Services exposed by an enterprise. However, a PDA's interface and functionality is much more limited than that of a web site because of display and memory constraints.
An organization normally collaborates with external business partners such as suppliers and customers to achieve its business objectives. An external partner's B2B application (e.g., a web site or portal) will invoke a Web Service exposed by an enterprise, which in turn executes business functionality on behalf of the client.
A Business Process codifies and streamlines the rules, practices, and business activities in an enterprise. Business analysts create Business Process Diagrams using the industry-standard Business Process Modeling Notation (BPMN) to document a set of coordinated tasks and activities that lead to an organizational goal. You can think of a business process as a graphical representation of a Use Case (RUP) or User Epic (Agile / Scrum) in that it shows normal and alternate flows along with exceptions encountered during processing. In SOA, a business process coordinates the business services (see section 3.2.1) developed by an enterprise. SOA architects and developers derive business services from the tasks and decision points in a business process diagram.
Business logic is used to form business flows as parallel tasks or sequential tasks based on business rules, policies, and other business requirements.
For information on tools and products, please see the Workflow / Orchestration section (3.4.5).
Presentation Services define a common set of services to manage interaction with users or trading partners (to the extent this second interaction is needed). Presentation services are provided by: web servers, portal servers, and application servers that provide the capability to quickly create the front end of business processes and composite applications to respond to changes in user needs through channels, portals, rich clients, and other mechanisms. Presentation services integrate with other foundational services, such as security (e.g., single sign-on).
BI (Business Intelligence) / Reporting provides a high-level view of KPIs (Key Performance Indicators) to business stakeholders to enable them to make decisions and manage the business. Examples of KPIs include:
A BI product uses the information stored in a Data Warehouse to present it to the user. A BI tool (such as Business Objects) uses Key Performance Indicators (KPIs - e.g., sales conversion rate, in-force policies, market penetration) to report on data, identify trends, perform data analysis, etc., to enable business users to make decisions and operate the business as efficiently as possible while advancing a business strategy.
BI increases business agility and shortens timeframes for decision-making. It gives companies the ability to identify and anticipate opportunities represented by seemingly unrelated events. It is a key enabler of strategic and tactical decision making.
A Business Process Management System (or Suite) usually includes the following:
Commercial BPMS products include (but are not limited to):
A Business Rules Engine is a software system that executes one or more business rules in a runtime production environment. Such rules might come from legal regulations, company policies, or other sources.
Business rules change more frequently than other application code. Rules engines are pluggable software components that execute business rules that are externalized from application code. This allows business users to modify the rules without developer intervention.
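A minimal sketch of rules externalized from application code; the rule names and thresholds are illustrative stand-ins for rules held in a repository that business users could edit without a code release:

```python
# Rules live as data, not as application code. Swapping or editing an entry
# in RULES changes behavior without touching apply_rules() itself.
RULES = [
    {"name": "senior_discount",
     "if": lambda o: o["age"] >= 65,
     "then": lambda o: o.update(discount=0.15)},
    {"name": "large_order",
     "if": lambda o: o["total"] > 1000,
     "then": lambda o: o.update(needs_approval=True)},
]

def apply_rules(order: dict) -> dict:
    """Evaluate every rule's condition and fire its action when it matches."""
    for rule in RULES:
        if rule["if"](order):
            rule["then"](order)
    return order

order = apply_rules({"age": 70, "total": 1500})
print(order["discount"], order["needs_approval"])   # -> 0.15 True
```

Commercial rules engines add authoring tools, rule versioning, and efficient matching algorithms (e.g., Rete), but the separation of rule data from execution logic is the same.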
A Business Activity can be either a business process that is orchestrated by Business Process Management (BPM) software or an ESB, or a business process that is a series of activities spanning multiple systems and applications. BAM is an enterprise solution primarily intended to provide a real-time summary of business activities to operations managers and upper management.
One of the most visible features of BAM solutions is the presentation of information on dashboards that contain key performance indicators (KPIs) used to provide assurance and visibility of activity and performance. This information is used by Technical and Business Operations to provide visibility, measurement, and assurance of key business activities. It is also exploited by Event Correlation to detect and warn of impending problems.
Although BAM systems usually use a computer dashboard display to present data, BAM is distinct from the dashboards used by Business Intelligence (BI) in so far as events are processed in real-time and pushed to the dashboard in BAM systems, whereas BI dashboards refresh at predetermined intervals by polling or querying databases. Depending on the refresh interval selected, BAM and BI dashboards can be similar or vary widely.
Commercial products include (but are not limited to):
Open Source products include (but are not limited to):
Please note that the Open Source BAM products mentioned above are still in their infancy. For now, one of the Open Source BI products (see section 3.4.12) such as Pentaho or JasperSoft may be a better option until the Open Source BAM market matures.

2.6.3.1 Points to Ponder (for Evaluation)
When evaluating business activity monitoring product alternatives, focus on the features that distribute alerts and display business indicators, the analytic engine that generates the information, and the mechanisms that collect business events. For further details, please see https://01ec032.netsolhost.com/lit/BAM-selection_requirements.pdf.
SOA Governance is about setting and implementing policies and procedures to control the service lifecycle (see the Service Lifecycle Management section).
A Service Registry/Repository manages service descriptions and artifacts throughout the complete lifecycle of a Service, and is used to find, publish, manage and (enable consumers to) subscribe to services. The Service Registry/Repository decouples service consumers and service providers.
The Service Repository/Registry is a component that stores and manages metadata for a service throughout its lifecycle. Stakeholders can add and maintain service models and associated information. The metadata enables consumers to search for services to satisfy a particular need, and obtain models and associated information concerning those services. The Service Registry is a runtime catalog of services intended to be used by consumer services. The service provider publishes its capabilities to the registry. In order to use these capabilities, service consumers discover service capabilities and bind to them. Most Service Registries use UDDI (Universal Description Discovery and Integration - an XML-based standard) to specify the binding between a service name and its location.
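The publish/discover/bind decoupling can be sketched with a toy in-memory registry; a real registry would use UDDI or a vendor repository, and the service names and endpoints here are illustrative:

```python
class ServiceRegistry:
    """Toy registry: consumers look up by capability name, never by
    hard-coded endpoint, so providers can relocate without breaking them."""

    def __init__(self):
        self._entries = {}

    def publish(self, name: str, endpoint: str, metadata: dict):
        """Provider registers its capability and where it can be reached."""
        self._entries[name] = {"endpoint": endpoint, "metadata": metadata}

    def discover(self, name: str) -> str:
        """Consumer resolves a capability name to an endpoint at runtime."""
        return self._entries[name]["endpoint"]

registry = ServiceRegistry()
registry.publish("CustomerLookup", "https://esb.example.org/customer",
                 {"version": "1.2", "owner": "CRM"})
print(registry.discover("CustomerLookup"))
```

If the provider later moves to a new host, only the registry entry changes; every consumer that resolves the name at runtime picks up the new endpoint automatically.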
This function manages and secures the services, applications, and resources. It enforces runtime governance.
Provides visibility into the architecture, monitoring business processes and slow or failing services. Often, this component is integrated into a network management system that notifies operators in case of outages.
A Service Manager is an integral part of runtime SOA Governance. It is responsible for service monitoring and management as well as SLA monitoring. A Service Manager is a monitoring aid that reports on all the services deployed in the enterprise. A system manager can use it to manage deployed Services and gather information:
Open Source products include (but are not limited to):
Please note that the Open Source Service Manager products mentioned above are still in their infancy, and that a commercial product will probably be the best choice until there are more mature Open Source options.
Governance of the models used by the services, processes, rules, information, and security functions.
Services governance includes both business and technical oversight. This function, though not a technical function, is a critical organizational function that must integrate with business process and architecture governance, business services governance, data governance, security governance and technical services governance, general IT and enterprise level governance.
Configuration management is the services function, similar to software configuration management, that assures the interoperability of software components and resources.
Tracking and monitoring of the deployment of services, metadata, new versions of MS architecture components and the supporting data sets. Typically this function is integrated with the Deployment & Installation Tools and/or the data centers application monitoring tools.
Tracking and logging of the event occurrences and resultant processing including exceptions and failure conditions. Data should enable the analysis and planning of event logic, event processing and new events.
The exceptions function may include exception detection or reception of exception messages, the determination of the exception dispositions, execution of the dispositions, and logging.
Tracking and logging of the transactions and events occurring across the services life cycle.
Runtime tracking of the services' policies and contracts, and runtime compliance.
Services version tracking, compatibility matching, and deployment data provide assurance that the deployed versions are correct and, where multiple versions are required, that they can co-exist and are correctly invoked and executed.
An Enterprise Service Bus (ESB) is a key enabler for business agility. It enables faster accommodation of existing systems and reuse of existing functionality. An Enterprise Service Bus is a broker that supports synchronous and asynchronous service invocation. It also enables message transformation and event notification between applications. It helps consumers find providers and handles the details of communication, protocol mediation and message routing between them. All the services in an organization (usually Web Services) are made available over an ESB.
Many ESBs will also provide a Service Repository / Registry and a BPM / Workflow Engine.
Includes Repository, Registry, Mapping and Transformation functions for services and protocols
Message interpretation and validation of message format. This function is the first look at incoming messages to determine what other functions will be called within the ESB.
Within the ESB this function transforms data formats, data models (schema), and protocols. This function is required to convert between dissimilar services.
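A sketch of a transformation step that converts a provider's data model into the model a consumer expects; the field names, currency handling, and structure are illustrative:

```python
# An ESB transformation: rename fields, combine values, and convert units
# so that a provider's native format matches the consumer's expected schema.
def transform_policy(provider_msg: dict) -> dict:
    return {
        "PolicyNumber": provider_msg["pol_no"],
        "HolderName": f'{provider_msg["first"]} {provider_msg["last"]}',
        "PremiumUSD": round(provider_msg["premium_cents"] / 100, 2),
    }

out = transform_policy({"pol_no": "P-9", "first": "Ada", "last": "Day",
                        "premium_cents": 12999})
print(out)
```

In production ESBs this mapping is usually expressed declaratively (XSLT or a graphical mapper) rather than in code, but the function performed is the same.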
Message routing is a function typically performed by the Enterprise Service Bus. It includes the functions of Invocation (asynchronous or synchronous), Message Queuing, Reliable Messaging, and Content-Based Routing.
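Content-based routing can be sketched as an ordered list of predicate/destination pairs evaluated against each message; the routes, queue names, and message fields are illustrative:

```python
# The bus inspects message content and selects a destination queue.
# Rules are evaluated in order; the first match wins.
ROUTES = [
    (lambda m: m.get("priority") == "high", "fast-queue"),
    (lambda m: m.get("type") == "claim",    "claims-queue"),
]
DEFAULT_ROUTE = "default-queue"

def route(message: dict) -> str:
    for predicate, destination in ROUTES:
        if predicate(message):
            return destination
    return DEFAULT_ROUTE

print(route({"type": "claim", "priority": "high"}))  # -> fast-queue
print(route({"type": "claim"}))                      # -> claims-queue
print(route({"type": "order"}))                      # -> default-queue
```

Real ESBs typically express these predicates as XPath expressions over the message payload, but the first-match dispatch logic is the same.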
Orchestration is a special type of workflow that is generally applied to the construction of business processes from composites of smaller (e.g., Common, Data, Utility) Services that do not require human interaction. Orchestration often includes a controller that manages the messaging between each part of the workflow.
A standards-based mechanism that defines how web services work together, including business logic, sequencing, exception handling, and process decomposition, as well as service and process reuse. Orchestrations are long-running, multi-step transactions, almost always controlled by one business party, and are loosely coupled and asynchronous in nature.
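A sketch of an orchestration controller coordinating smaller services into one multi-step process, with a compensation step on failure; the service functions below are hypothetical stand-ins for real service invocations (in practice this logic would be expressed in BPEL or a BPM engine):

```python
# Stand-ins for calls to smaller (Common/Data/Utility) services.
def reserve_inventory(order):  return {**order, "reserved": True}
def charge_payment(order):     return {**order, "charged": True}
def release_inventory(order):  return {**order, "reserved": False}

def place_order(order: dict) -> dict:
    """Controller: sequences the service calls and compensates on failure."""
    order = reserve_inventory(order)
    try:
        order = charge_payment(order)
    except Exception:
        order = release_inventory(order)   # undo the reservation
        raise
    return order

result = place_order({"id": "O-1"})
print(result["reserved"], result["charged"])   # -> True True
```

The compensation pattern matters because orchestrations are long-running and cannot rely on a single ACID transaction spanning all participating services.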
Shared Services are those Services developed by a company for use throughout an organization:
Shared Services are usually deployed as Web Services to make it easier to access them and integrate with them. Please see the Web Services section (3.4.9) for available products.
Business Services are at an LOB (Line Of Business) or Business Unit level. These are services at a higher level of granularity that correspond to the organizational realities of the enterprise. Examples of Business Services include:
Business Services provide the following benefits:
Interface Web Services are used to connect to Consumer applications. These services may be wrapper services around an RPC, a CICS gateway service, or a commercial off-the-shelf (COTS) interface service (like SAP or Oracle applications). These interface services can also be custom built.
Data CRUD services for both providers and consumers. These are typically atomic services, though for complex queries they may be low-level composite services.
Data Services support the management and provision of data between different systems or application components. These services are particularly important in the environment due to the many underlying systems with separate data approaches and structures. Data Services provide a consistent approach to enterprise-wide data access by exposing Create, Read, Update, and Delete (CRUD) capabilities for entities in a particular business domain (which are usually stored in a database). Examples include Customer, Policy, Order, Claim, etc.
There are two options for designing and deploying Data Services:
There are two Open Source products that enable designers to build Data Services in a declarative manner (i.e., without coding) by enabling developers to match database tables and columns to XML elements and attributes, including:
Developers can also write their own code to build Data Services. Please see the Web Services section (3.4.9).
RESTful Web Service - This is a more modern way to build Data Services, and this lighter approach is well-suited to support web applications that need to use simple CRUD operations on data sources. With REST:
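Under these REST conventions, a minimal CRUD sketch can map HTTP verbs onto operations against a data store; here an in-memory dictionary stands in for a DBMS, and the resource paths and fields are illustrative:

```python
import json

CUSTOMERS = {}   # in-memory stand-in for the backing database

def handle(method: str, path: str, body: str = "") -> tuple:
    """Dispatch an HTTP verb + resource path to the matching CRUD operation.
    Returns (status_code, response_body)."""
    cid = path.rsplit("/", 1)[-1]
    if method == "POST":                    # Create
        CUSTOMERS[cid] = json.loads(body)
        return 201, ""
    if method == "GET":                     # Read
        if cid in CUSTOMERS:
            return 200, json.dumps(CUSTOMERS[cid])
        return 404, ""
    if method == "PUT":                     # Update
        CUSTOMERS[cid] = json.loads(body)
        return 200, ""
    if method == "DELETE":                  # Delete
        CUSTOMERS.pop(cid, None)
        return 204, ""
    return 405, ""

print(handle("POST", "/customers/C-1", '{"name": "Ada"}')[0])  # -> 201
print(handle("GET", "/customers/C-1")[0])                      # -> 200
print(handle("DELETE", "/customers/C-1")[0])                   # -> 204
```

A production Data Service would sit behind a real HTTP server and persist to a DBMS, but the verb-to-CRUD mapping is the essence of the RESTful approach.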
An Integration Service (usually a Web Service) acts as a proxy to an external service (see section 3.3 for details). Even though many 3rd party services provide a Web Service API, an Integration Service is needed for each one to hide the API details from the rest of the enterprise, and expose a Web Service that best fits the needs of the enterprise. Examples of an Integration Service include wrappers for:
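The wrapper idea can be sketched as a thin proxy that translates a hypothetical vendor API's vocabulary into the enterprise's canonical terms; both classes and all field names below are illustrative:

```python
class VendorCreditClient:
    """Hypothetical third-party API with its own field names and codes."""
    def run_check(self, ssn: str) -> dict:
        return {"scr": 720, "cd": "OK"}

class CreditCheckService:
    """Enterprise-facing Integration Service wrapping the vendor API.
    Consumers see only canonical field names; the vendor's details are
    hidden and can change without affecting the rest of the enterprise."""

    def __init__(self, client=None):
        self._client = client or VendorCreditClient()

    def check_credit(self, customer_ssn: str) -> dict:
        raw = self._client.run_check(customer_ssn)
        return {"CreditScore": raw["scr"], "Approved": raw["cd"] == "OK"}

print(CreditCheckService().check_credit("000-00-0000"))
```

Injecting the client through the constructor also makes the wrapper easy to test with a stub in place of the real vendor connection.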
These are the atomic services which access network, file, printing and shared resources.
Utility Services are fine-grained, and provide reusable wrappers to system-level functionality to the higher level Shared Services. Examples of Utility Services include Email, Logging, etc.
Access to external resources, both public and private, requires secure services that provide access across the corporate firewall while ensuring the integrity and security of that access. These needs are met by External Services.
For example, these services would be required to expose Duke services to Infinisys at the Convergis-hosted site, and should be used to interface with Infinisys services that are exposed by the hosted site. Another example is the DCI interfaces to Nuance.
An organization does not exist in a silo. A company usually has to collaborate with its business partners or with other applications inside the enterprise in order to get specific jobs done (e.g., credit authorization, shipping packages, etc.). This involves communicating with applications or services provided by other companies. A company will usually develop an Integration Service to insulate the rest of the enterprise from the details of communicating with third-party services - please see the section on Integration Services for further details. There are two types of Service Providers:
Data Transport Services are used exclusively where large amounts of data or very frequent data transfers are required, and where transporting this data through messages would overwhelm the ESB or other MCI components with very high message content data volumes and/or message volumes.
These services use an ETL tool (such as Informatica or an XML appliance) for the physical data movement, and services for exception detection, resolution, logging, and notification.
The physical implementation baseline assumes use of Informatica (possibly an XML appliance) for physical data movement, Cybermation for scheduling ETL, and Managed Services for exception handling and disposition. However, an XML appliance may be used for some, if not all, of the physical data transport.
The services within the Master Data Management System maintain a Master Data Repository that contains instance master data from data domains such as Party (Prospect, Student, and Faculty), Course, Transcript, Program, and Facility.
The primary functions are Interface Services, Lifecycle Management, Hierarchy & Relationship Management, MDM Event Management, Data Quality Management, Authoring, and Base Services for master data.
The canonical data model is the definition of a standard view of a particular subject, plus the mapping back to each application's view of that same subject. A canonical data model minimizes dependencies between integrated applications that use different data formats. Managed Services use a CDM, and the same model is expected to support the IMA in delivering enterprise-wide business intelligence (BI), though the two models may not be identical. All messages and event notifications published by applications should be translated to the canonical data model, thus decoupling data providers from consumers.
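The decoupling works because each publisher translates its native format into the canonical form at the boundary, so consumers only ever see one shape. A toy sketch, with all field names invented for illustration:

```python
# Two applications describe the same Customer subject in different
# vocabularies; each translates to the canonical model before publishing,
# so consumers never see application-specific formats.

def from_billing_app(msg):
    # Billing system's native format -> canonical
    return {"customer_id": msg["cust_no"], "full_name": msg["cust_nm"]}

def from_crm_app(msg):
    # CRM system's native format -> canonical
    return {"customer_id": msg["id"],
            "full_name": f'{msg["first"]} {msg["last"]}'}

a = from_billing_app({"cust_no": "42", "cust_nm": "Ada Lovelace"})
b = from_crm_app({"id": "42", "first": "Ada", "last": "Lovelace"})
print(a == b)   # True - both map to the same canonical message
```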
Relational databases that are data stores for the Managed Data Services Architecture components. This component does not refer to the transactional system/application databases or data stores used by the data hubs, catalog layer, or data marts.
A Data Hub is a collection of data for a specific business subject area, and is the authoritative source of data for both application integration and reporting and analytics. However, Data Hubs cannot be the system of record, only a system of reference.
A Data Warehouse (DW) is a single, centralized repository that holds an enterprise's data to support reporting and analytics, and to make it easier to find data that is important to the business. A data warehouse environment consists of the following:
A Data Warehouse greatly simplifies reporting and analysis. It is a key enabler of BI / Reporting and facilitates KPI monitoring. A Data Warehouse is usually in a separate data store from an enterprise's Operational Data Store(s) to avoid degrading the performance of production databases.
This function physically moves the messages from the sender to the recipients based on the message routing logic determined by the Routing function within the MCI.
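In other words, the Routing function decides who receives a message, and this function performs the delivery. A toy sketch of that separation (the transport callable and inbox names are invented):

```python
# Toy sketch of the message-movement function: the Routing function
# (not shown) is assumed to have already produced the recipient list;
# this function only performs the physical delivery.

def move_messages(message, recipients, deliver):
    """Deliver one message to each routed recipient via the transport."""
    for recipient in recipients:
        deliver(recipient, message)

inboxes = {"billing": [], "crm": []}
move_messages({"event": "OrderCreated", "order_id": 7},
              ["billing", "crm"],
              lambda r, m: inboxes[r].append(m))
print(inboxes["crm"][0]["event"])   # OrderCreated
```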
The portal server provides Web-based portal access to Web applications, COTS portals (such as MS SharePoint), and analytics and reporting sites.
Relational database management system server that provides the data store CRUD functionality.
Life cycle management includes the functions of CRUD, archival, and retrieval. The services data that this function manages includes all data found in the other Managed Services layers. Typically this function starts with the most mission-critical, versioning-critical, and release-critical data around services, then expands as the Managed Services capabilities mature.
Load balancing can occur at many different levels within the Managed Services architecture, from the initial Web server, to MCI components, to the application server, to data load management, and to network traffic load management. An example of the latter is isolating the meter Data Transport Services on their own LAN segment to minimize the impact of their volume and frequency on the performance of other services.
This function physically moves the data from the provider to the consumer.
The physical movement can be done by a service or by some other method, such as an ETL tool like Informatica or an XML appliance.
The Process Server function provides process-model and script interpretation to execute the business logic and to invoke the business rules described for a business process and/or system-to-system process.
Interprets and delivers Web pages over HTTP.
The application server provides the functionality required to execute the business logic of services, along with physical thread and server-process management.
The following diagram shows how the SOA-related architecture documents relate to each other and how they drive toward a SOA Solution.
The architecture documents and outcomes are as follows:
While this may look like a lot of steps, please note that:
To make this process actionable and tangible, the development organization must determine how to define the SOA architecture and how to implement it by using:
The main goals here are shared vision along with consistent design and implementation across the SOA solution.
Rufus SOA reference architecture. (2017, Jun 26). Retrieved November 21, 2024, from https://studydriver.com/rufus-soa-reference-architecture/