Tuesday, 26 July 2011

Towards Next Generation Process Execution

A considerable part of my professional time is spent advising people on discovering, redesigning, and --- where possible and feasible --- automating their business processes. Often, trivial, cumbersome, and predictable processes can be formalised and executed using a process engine (also known as a BPMS, or Business Process Management Suite). Process execution is usually combined with an enterprise service bus (ESB) or middleware layer, which supplies data sources and exposes business transactions to the process layer in an open, reusable, and interoperable fashion.
BPMS is by no means a new concept. Most contemporary BPMS platforms began as workflow engines and CASE (Computer-Aided Software Engineering) tools, which subsequently found valuable use in the rising Java EE and enterprise application integration (EAI) markets of the 1990s and onwards. This, combined with an increasing interest in information management and enterprise integration, spawned today’s plethora of repository-based modelling tools, sophisticated middleware technology, and process execution platforms.
Looking into the crystal ball of process automation, what automation technologies can enterprises expect to see in the years to come? Here, the average IT analyst would probably reach for three all-too-often-adopted, shrink-wrapped concepts:
  1. Cloud computing
  2. X-as-a-service
  3. Agile
In order to actually come up with something original or different, I have deliberately omitted these three terms from this blog entry. That is not to say that these trends aren’t influential or important, but they have already been covered elsewhere in thousands of blog posts, whitepapers, and academic papers. In the following sections I will present my stance on several aspects of next generation process automation.

Closed-Loop Roundtrip Engineering
Several toolchains support the so-called ‘roundtrip’ between repository-based enterprise modelling tools and implementation-level process development tools. Too often this is a one-way exercise in which business processes are modelled by the business analyst, approved by the process owner, and then exported for execution by the solution architect. However, once the process model hits the implementation floor, governance, roundtripping, and traceability are cut off. The process model is now materialised as source code rather than a visual model.
An improved, closed-loop roundtrip approach would close this gap by making the process integration point work both ways. This calls for an improved bridging strategy between the two worlds, so that they ultimately merge into one. That is not to say that the designed process model must equal the executable process model --- the enterprise repository should still provide different, role-based views onto the same processes. The point I am arguing is that full traceability from model to execution demands two-way synchronisation and unified, single-interface version control of all artefacts.
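
As a minimal sketch of what artefact-level traceability could look like, consider an executable artefact that permanently carries a versioned reference back to the repository model it was generated from. All names here (DeployedProcessArtefact, repositoryModelUri) are hypothetical illustrations, not any particular vendor’s API:

```java
import java.net.URI;
import java.time.Instant;

/**
 * Hypothetical sketch: an executable process artefact that keeps a
 * permanent, versioned link back to the repository model it was
 * generated from, so that drift can be detected in both directions.
 */
public final class DeployedProcessArtefact {

    private final String processKey;        // e.g. "billing-process"
    private final String executableVersion; // version of the deployed artefact
    private final URI repositoryModelUri;   // link back into the modelling repository
    private final String modelVersion;      // version of the source model
    private final Instant deployedAt;

    public DeployedProcessArtefact(String processKey, String executableVersion,
                                   URI repositoryModelUri, String modelVersion) {
        this.processKey = processKey;
        this.executableVersion = executableVersion;
        this.repositoryModelUri = repositoryModelUri;
        this.modelVersion = modelVersion;
        this.deployedAt = Instant.now();
    }

    /** True if the executable still matches the model version it claims to implement. */
    public boolean isInSyncWith(String currentModelVersion) {
        return modelVersion.equals(currentModelVersion);
    }
}
```

With such a link recorded at deployment time, the repository can flag executables whose source model has since moved on, and vice versa.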

Light-weight Executable Process Models
Modelling standards such as BPMN 2.0 (Business Process Model and Notation) claim to provide a single, uniform language for modelling manual, semi-automated, and fully automated business processes. However, as several process practitioners have already emphasised, the notation is still far too rich and complex for non-technical professionals and businesspeople to fully comprehend. It is as if the notation, indulging in its own ambitions and adoption, has stretched itself too far and now struggles to articulate every possible aspect of a process.
SOA practitioners struggled with the same complexity problems in the early 2000s. Everywhere, new and half-baked service standards emerged, and some “standards” even offered duplicate functionality. SOAP, which was meant to be a simple protocol for exchanging messages, had morphed into a wilderness of WS-* standards, policy documents, and pseudo-recommendations. In reaction, REST (Representational State Transfer) was adopted as a viable, lightweight, and easy-to-implement alternative to the WS-* conglomerate. REST’s elegance was its simplicity, very similar to how the simplicity of the TCP and IP network protocols defeated complex, proprietary network protocols such as DECnet and Tymnet. Useful, open standards are simple and easy to understand and communicate. WS-* was by no means a lightweight stack, just as BPMN 2.0 is too rich to be truly elegant.
What BPMS needs is a process modelling notation that is just as elegant as REST and TCP. The simpler the notation, the easier it is for business analysts to pick it up and understand a particular model. Fewer moving parts and modelling exceptions also imply that the designed process is easier to execute. Consider the source code necessary for parsing a WSDL schema with its surrounding WS-Security artefacts, compared to the lines of code necessary to retrieve and parse a JSON data structure across a TLS-encrypted wire. For execution, a light-weight process model format with different role-based process architecture views is necessary to accommodate easy-to-communicate and easy-to-execute process models.
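
To make that comparison concrete, here is roughly what the lightweight side looks like: a minimal sketch in modern Java (11+), using the built-in HTTP client and the third-party Jackson library for JSON parsing. The endpoint URL is a placeholder:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

public class JsonOverTls {
    public static void main(String[] args) throws Exception {
        // HTTPS gives us the TLS-encrypted wire for free --- no policy
        // documents or WS-Security headers required.
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(
                URI.create("https://example.com/api/orders/42")) // placeholder endpoint
                .GET()
                .build();

        String body = client.send(request, HttpResponse.BodyHandlers.ofString()).body();

        // Parsing the JSON payload is a one-liner with Jackson.
        JsonNode order = new ObjectMapper().readTree(body);
        System.out.println("Order status: " + order.path("status").asText());
    }
}
```

The equivalent WS-* client would involve generated stubs, a WSDL contract, and a security policy document before the first message even crosses the wire.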

Process Variations
My third idea is the notion of modelling and executing process variants. Several enterprise modelling tools (such as ARIS) support the idea of variant artefacts, which allows a configuration item to be traced back to its reference artefact. This is particularly useful when mapping a process model or architectural layer against a set of reference architectures, which in turn allows for quick discovery and gap analysis of compliance requirements.
However, for some reason this idea has not yet made its way to process execution land. The majority of BPMS platforms treat process models as isolated, transactional entities. References are made through related events or by drilling into sub-process models. Process layering and variation are completely unknown concepts in the world of execution, despite being well established in enterprise modelling. Many enterprises struggle with the need to select and execute a particular process variant depending on a set of pre-conditions, whilst still being able to reflect that the instance belongs to a particular group of variants. An executable billing process might vary slightly depending on the type of client currently being billed, but the process is still the billing process. Integrating process variation into BPMS theory adds depth and context to executable process models, as opposed to pure, isolated workflows.
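
A minimal sketch of what variant-aware execution could look like: each variant keeps a reference to the process it varies, plus a pre-condition that decides whether it applies to a given instance. All names here (ProcessVariant, selectVariant) are invented for illustration, not an existing BPMS API:

```java
import java.util.List;
import java.util.Optional;
import java.util.function.Predicate;

public class VariantSelectionSketch {

    /** A variant of a reference process, guarded by a pre-condition on the instance context. */
    record ProcessVariant<C>(String referenceProcess, String variantId, Predicate<C> preCondition) {}

    /** Picks the first applicable variant; the instance still belongs to the reference process. */
    static <C> Optional<ProcessVariant<C>> selectVariant(
            List<ProcessVariant<C>> variants, String referenceProcess, C context) {
        return variants.stream()
                .filter(v -> v.referenceProcess().equals(referenceProcess))
                .filter(v -> v.preCondition().test(context))
                .findFirst();
    }

    public static void main(String[] args) {
        // The billing process varies by client type, but every instance is still "billing".
        var variants = List.of(
                new ProcessVariant<String>("billing", "billing-enterprise", "ENTERPRISE"::equals),
                new ProcessVariant<String>("billing", "billing-consumer", "CONSUMER"::equals));

        selectVariant(variants, "billing", "ENTERPRISE")
                .ifPresent(v -> System.out.println("Executing variant: " + v.variantId()));
    }
}
```

Because every variant records its reference process, reporting and monitoring can still aggregate all instances under the one billing process --- exactly the grouping property described above.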

Process Regulation and Self-Reference 
The research field of control theory and cybernetics has long explored the properties of self-organising systems, which respond in a meaningful way to outside stimuli. Examples of cybernetic systems range from simple thermostats to complex jet fighter engines, which monitor and regulate their current state depending on the environment (such as temperature or altitude). Similarly, researchers in business process management (BPM) and process engineering have explored the idea of self-regulating processes: business processes that monitor, adjust, and control their own state, activity, and performance based on the general condition of the overall enterprise. In manufacturing this would be a process that adjusts its production throughput automatically based on recent market trends received from the business intelligence system. Sales processes adjust their current inventory data based on market forecasts triggered by an external supplier. Car manufacturing robots make just-in-time adjustments to assembly line activity after observing a major slump in the stock market five minutes earlier.

The modern enterprise is event-driven, interconnected, and immediately responsive. However, in order for business processes to exploit this opportunity they need to become self-regulating or “self-aware.” Executable processes must be able to adjust their own complex states based on listeners and triggers fed by external events. This demands sophisticated complex event processing and a meta-process environment that allows for easy and dynamic reconfiguration of process model layout, design, and performance based on external data. The change in state should not be limited to a pre-configured set of process patterns. Process models and metadata should automatically infer new possible process designs and subsequently select the most plausible design based on previous design choices, feedback, and execution data.
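
As a toy illustration of the regulation loop (with an invented event feed and thresholds), here is a process that listens for external market events and adjusts its own throughput target within configured bounds:

```java
import java.util.function.Consumer;

public class SelfRegulatingProcessSketch {

    /** A hypothetical external signal, e.g. from a business intelligence feed. */
    record MarketEvent(String indicator, double changePercent) {}

    /** A process that regulates its own throughput target in response to events. */
    static class ManufacturingProcess implements Consumer<MarketEvent> {
        private double throughputTarget = 100.0; // units per hour

        @Override
        public void accept(MarketEvent event) {
            // Closed-loop control: nudge the target in proportion to the
            // signal, clamped to safe operating bounds.
            double adjusted = throughputTarget * (1.0 + event.changePercent() / 100.0);
            throughputTarget = Math.max(10.0, Math.min(200.0, adjusted));
            System.out.printf("%s %+.1f%% -> new target %.1f units/hour%n",
                    event.indicator(), event.changePercent(), throughputTarget);
        }
    }

    public static void main(String[] args) {
        ManufacturingProcess process = new ManufacturingProcess();
        process.accept(new MarketEvent("DEMAND_FORECAST", -12.0)); // slump: scale down
        process.accept(new MarketEvent("DEMAND_FORECAST", 5.0));   // recovery: scale up
    }
}
```

A real implementation would of course sit behind a complex event processing engine rather than a direct method call, but the feedback loop --- observe, adjust, clamp --- is the essence of the self-regulation argument.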

To Be Continued
These considerations are only a small part of the ideas I have been collecting on next generation process automation, which could very well evolve into a general research programme on the future of BPMS. It is my opinion that enterprise integration tools and middleware platforms have reached a solid, mature state. BPMS theory and practice, however, are still in a state of flux: shiny new tools emerge every day, but the fact is that we have very little experience with designing, deploying, and maintaining complex, large-scale process applications. Granted, the general principles of software engineering and IS development theory still apply: effectively, most process applications are enterprise systems in the large, with a vast number of moving parts and integration points. However, in order to respond successfully to increasingly rapid changes in markets and requirements, we need faster, simpler, context-aware, and interconnected BPMS platforms driven by self-regulation and complex event processing. In my upcoming blog posts I will write more on this topic.