Wednesday, 21 December 2011

Common Pitfalls of Solution Architecture Governance

Framework-ism: Form and structure over content and good practice (note: good, not best practice). "Best practice" is all too often a cover for theoretically ideal but practically non-functional solutions.
Boil-the-ocean-ism: All-too-generic frameworks intended for large-scale program transformation lack local focus and technical guidance.
Architecture astronaut-ism: Abstraction upon abstraction upon abstraction until everything is so conceptual that end-users have dissolved into TOGAF deliverables.

My mantra for solution architecture governance is:

“In IT governance land, it is OK (read: critical!) to be IT- and solution-centric. After all, it is called solution architecture, not conceptual framework theory.”

Sunday, 11 December 2011

IBM's Open Source Enterprise Generation Language

I recently discovered that IBM have released their Enterprise Generation Language (EGL) toolkit as open source in a donation to the Eclipse project. That is definitely a nice move by IBM, who are already among the core contributors to both the Eclipse development platform and the Linux kernel.

So what is EGL? Wikipedia describes EGL as follows:

"EGL is a high level, modern business oriented programming language, designed by IBM to be platform independent. EGL is similar in syntax to other common languages so it can be learned by application developers with similar previous programming background. EGL application development abstractions shield programmers from the technical interfaces of systems and middleware allowing them to focus on building business functionality. EGL applications and services are written, tested and debugged at the EGL source level, and once they are satisfactorily functionally tested they can be compiled into COBOL, Java, or JavaScript code."


Prior to IBM's release I had never heard of EGL. It seems to have been hidden among their myriad Rational legacy tools, including the VisualAge enterprise tools for code generation and model-driven architecture (MDA). Apparently, it is quite an abstract language, which compiles into a set of different target platforms including Java, COBOL, and JavaScript, depending on the solution architecture. Technical middleware and deployment are "hidden" from the developer so that he/she can concentrate on getting the business logic right as opposed to defining the technical configuration.


Now, what does this remind me of? Sun's (and now Oracle's!) Java EE and the Enterprise Java Beans (EJB) platform. Java EE (previously J2EE) was the call for an open platform specification, which sought to solve deployment and configuration for developers and free up time for modelling requirements and implementing business logic. Transactions, persistence, and scalability were "built in" to the platform. Unfortunately, the first versions of the EJB specifications were so horribly complex (remember EJB 2.x?) that developers spent an increasing amount of time on understanding persistence mechanisms, transaction handling, and writing XML files for relatively trivial tasks. Only with version 3.0 did EJB become "human" and demonstrate its full power as a relatively simple, fully mature application platform. That is not to say that EJB was not mature before -- it was just horribly complex. The higher the degree of embedded complexity, the higher the chance of unintended errors. I know this because I have developed EJB v2 and v3 applications myself. I honestly hope that EGL does not suffer from the same legacy problems as EJB and that it can be integrated with more "lightweight" approaches to application configuration such as convention over configuration, Don't Repeat Yourself (DRY), and fluent interfaces.
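For contrast, here is a minimal sketch of what an EJB 3.0 session bean looks like: a single annotated POJO replaces the home/remote interfaces and the XML deployment descriptor that EJB 2.x demanded for the same task (AccountService and Account are hypothetical names; Account is assumed to be a plain JPA entity):

    import javax.ejb.Stateless;
    import javax.persistence.EntityManager;
    import javax.persistence.PersistenceContext;

    // EJB 3.0: one annotated POJO -- no home/remote interfaces, no ejb-jar.xml.
    @Stateless
    public class AccountService {

        // Container-managed persistence context, injected by the server.
        @PersistenceContext
        private EntityManager em;

        // Executes inside a container-managed transaction by default.
        public void credit(long accountId, double amount) {
            // Account is a hypothetical JPA @Entity with a balance property.
            Account account = em.find(Account.class, accountId);
            account.setBalance(account.getBalance() + amount);
        }
    }

Under EJB 2.x, the equivalent bean spanned at least three Java types plus an ejb-jar.xml deployment descriptor.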


EGL is a refreshing toolkit for the open source enterprise community. Even though I now spend my working life as an architect (and no longer as a software engineer), I still follow the enterprise software landscape closely. I have just downloaded the EGL Eclipse package from the web site and will follow up with some comments here in my blog.

Wednesday, 7 December 2011

Clever Spam

A couple of days ago some random spammer sent me the following e-mail:

[Image: a spoofed Twitter notification e-mail]
It actually looks exactly like the e-mail notifications I receive daily from Twitter -- that is, except for the link content. Apparently they were not clever enough to mask the underlying link with a random URL shortener. :) This e-mail was actually caught by Gmail's spam filter. I must say that I am impressed by the speed and intelligence of their spam detection system. It is very rare that it mistakenly catches ham (legitimate mail).

The Modesty of Writing

Blogging, e-books, and social networks have all increased the speed at which we communicate—or, from a Luhmannian perspective, the speed at which we contribute our utterances to communication. For DIY publishers, the Internet is an infinitely rich channel for publishing their own content and making it available to readers and consumers on exactly the same terms as the previously professional-only channels. And that is awesome: the Internet has democratised the way we publish and make available our thoughts and ideas. Recently, I have become involved in a book on cybernetics and enterprise architecture. The book was initially intended to be released through a DIY publisher and, furthermore, to be freely available as an e-book. However, due to increasing interest from traditional publishers, the book will now be published through a traditional publication channel. To me, both opportunities are equally exciting.
However, the rapid speed of immediate "publication" comes at the cost of modesty, patience, and maturity. Some people use blogs to churn out trivial variations on the same topic over and over, flooding the public sphere with their own opinions in a synthetic, insubstantial manner. Proof-reading is completely unheard of; proper referencing of prior art and information sources is considered almost arcane. Blogging is supposed to be a quick, responsive medium. However, when people post entire book chapters or even book manuscripts through the same channel and under the same preconditions, the form and shape of blogging have certainly moved in the wrong direction. The fundamental problem is that people with blogs tend to lack the modesty of traditional writers, academics, and publishers. For blogging "pracademics", this is furthermore caused by a lack of patience with peer-reviewed publication. Preparing a good paper can take months before it is accepted and published. For the average blogger with lots of intentions, it is, of course, a lot easier to churn out one blog post after another with incoherent fragments of argumentation and structure. If post-modernism had a place in the history of literary shapes and forms, blogging would certainly be its most significant incarnation.
Writers, researchers, and bloggers alike must return to the tradition of times when page space was scarce and publication a controlled, rigorous process requiring discipline and modesty. Only through modesty have the most purposeful, unique utterances, be they peer-reviewed publications or news items, been created.

Thursday, 15 September 2011

New Publication in Journal of Enterprise Architecture

My research paper on systems thinking, sense-making, and enterprise architecture planning in government has been published in the Journal of Enterprise Architecture, a quarterly, peer-reviewed journal published by the Association of Open Group Enterprise Architects (AOGEA). The paper includes a case study of the architecture-enabled transformation of a government agency merger in NSW, covering geospatial location awareness, service-oriented architecture, and enterprise integration. The paper is titled: Processes of Sense-Making and Systems Thinking in Government Enterprise Architecture Planning.

My abstract is as follows:


The purpose of this article is to investigate the systemic properties of Enterprise Architecture Planning (EAP) in the Australian government sector. Based on a case study of the Land and Property Management Authority of New South Wales, the article examines and outlines the crucial necessity for including systems thinking, systems learning, and organizational sense-making in Enterprise Architecture (EA) theory and planning. The main argument is based on qualitative research into the limitations of capturing and modeling organizations using EA methodologies and modeling approaches.
The EA discipline, including its tools and methodologies, relies on the metaphor of engineering the enterprise and building stable taxonomies of knowledge and process. The practical reality that e-government programs are facing is technical, sociological, and messy. However, EA tends to operate within an engineering metaphor that assumes stability, predictability, and control. Here, the author highlights the necessity of an alternative, less positivist approach to EA planning in order to understand and articulate the tacit knowledge dimensions and messy, wicked problems of organizational life.
Soft systems thinking, socio-technical theory, and sense-making are introduced as theoretical and practical frames to overcome these limitations and produce a better, more viable and realistic model of planning in government enterprises. These concepts are finally amalgamated into a general, integrative model of EA planning.

You can download the paper from here (AOGEA membership is required).

Tuesday, 26 July 2011

Towards Next Generation Process Execution

A considerable part of my professional time is spent advising people on discovering, redesigning, and --- if possible and feasible --- automating their business processes. Often, trivial, cumbersome, and predictable processes can be formalised and executed using a process engine (also known as a BPMS, Business Process Management Suite). Process execution is usually combined with an enterprise service bus (ESB) or middleware layer, which supplies data sources and exposes business transactions to the process layer in an open, reusable, and interoperable fashion.
BPMS is by no means a new concept. Most contemporary BPMS platforms began as workflow engines and CASE (Computer-Aided Software Engineering) tools, which subsequently found valuable use in the rising Java EE and enterprise application integration (EAI) markets of the 1990s and onwards. This, combined with an increasing interest in information management and enterprise integration, spawned today's plethora of repository-based modelling tools, sophisticated middleware technology, and process execution platforms.
Looking into the crystal ball of process automation, what automation technologies can enterprises expect to see in the years to come? Here, any average IT analyst would probably come up with three all-too-often-adopted, shrink-wrapped concepts:
  1. Cloud computing
  2. X-as-a-service
  3. Agile
In order to actually come up with something original or different, I have deliberately omitted these three terms from this blog entry. That is not to say that these trends aren't influential or important, but they have already been covered elsewhere in thousands of blog posts, whitepapers, and academic papers. In the following sections I will present my stance towards aspects of next generation process automation.

Closed-Loop Roundtrip Engineering
Several toolchains support the so-called ‘roundtrip’ between repository-based enterprise modelling tools and implementation level process development tools. Too often this is a one-way exercise in which business processes are modelled by the business analyst, approved by the process owner and then exported to execution by the solution architect. However, once the process model hits the implementation floor, governance, roundtrip, and traceability are cut off. The process model is now materialised as source code rather than a visual model.
An improved, closed-loop roundtrip approach would bridge this gap by making the process integration point work both ways. This calls for an improved bridging strategy between the two worlds so they ultimately merge into one. That is not to say that the designed process model must equal the executable process model --- the enterprise repository should still provide different, role-based views onto the same processes. The point I am arguing is that full traceability from model to execution demands two-way synchronisation and unified, single-interface version control of all artefacts.
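To sketch what that could mean in tooling terms (the interfaces below are hypothetical, not any vendor's API), a closed-loop audit would treat version drift in either direction as a governance event:

    // Hypothetical sketch: detect version drift between the designed and the
    // deployed process model, in either direction.
    public class RoundTripAudit {

        interface ModelRepository {
            String latestVersion(String processKey);   // design-time model
        }

        interface ProcessEngine {
            String deployedVersion(String processKey); // executable model
        }

        public boolean inSync(ModelRepository repository, ProcessEngine engine,
                              String processKey) {
            String designed = repository.latestVersion(processKey);
            String deployed = engine.deployedVersion(processKey);
            // A closed loop demands equality both ways: changes made on the
            // implementation floor must surface in the repository, too.
            return designed.equals(deployed);
        }
    }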

Light-weight Executable Process Models
Modelling standards such as BPMN 2.0 (Business Process Model and Notation) claim to provide a single, uniform language for modelling manual, semi-automated, and fully automated business processes. However, as several process practitioners have already emphasised, the notation is still far too rich and complex for non-technical professionals and businesspeople to fully comprehend. It is as if the notation, indulging in its own ambitions and adoption, has stretched its scope too far and suddenly struggles to articulate every possible aspect of a process.
SOA practitioners struggled with the same complexity problems in the early 2000s. Everywhere, new and half-baked service standards emerged, and some "standards" even offered duplicate functionality. SOAP, which was meant to be a simple protocol for exchanging messages, had morphed into a wilderness of WS-* standards, policy documents, and pseudo recommendations. As a counter-reaction, REST (Representational State Transfer) was adopted as a viable, lightweight, and easy-to-implement alternative to the WS-* conglomerate. REST's elegance was its simplicity, very similar to how the simplicity of the TCP and IP network protocols defeated complex, proprietary network protocols such as DECnet and Tymnet. Useful, open standards are simple and easy to understand and communicate. WS-* was by no means a lightweight stack, just as BPMN 2.0 is too rich to be truly elegant.
What BPMS needs is a process modelling notation that is just as elegant as REST and TCP. The simpler the notation, the easier it is for business analysts to pick it up and understand a particular model. Fewer moving parts and modelling exceptions also imply that the designed process is easier to execute. Consider the source code necessary for parsing a WSDL schema with surrounding WS-Security artefacts compared to the lines of code necessary to retrieve and parse a JSON data structure across a TLS-encrypted wire. For execution, a light-weight process model format with different role-based process architecture views is necessary to accommodate easy-to-communicate and easy-to-execute process models.
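For illustration, the REST/JSON side of that comparison is roughly this much Java (a sketch assuming the Jackson library and a hypothetical endpoint URL); the WSDL/WS-Security equivalent would involve generated stubs, policy configuration, and considerably more code:

    import java.io.InputStream;
    import java.net.URL;
    import javax.net.ssl.HttpsURLConnection;
    import com.fasterxml.jackson.databind.JsonNode;
    import com.fasterxml.jackson.databind.ObjectMapper;

    public class JsonOverTls {
        public static void main(String[] args) throws Exception {
            // Hypothetical endpoint; HTTPS provides the TLS-encrypted wire.
            URL url = new URL("https://api.example.com/invoices/42");
            HttpsURLConnection conn = (HttpsURLConnection) url.openConnection();
            conn.setRequestProperty("Accept", "application/json");

            // Retrieve and parse the JSON document in a handful of lines.
            try (InputStream in = conn.getInputStream()) {
                JsonNode invoice = new ObjectMapper().readTree(in);
                System.out.println(invoice.get("status").asText());
            }
        }
    }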

Process Variations
My third idea is the notion of modelling and execution of process variants. Several enterprise modelling tools (such as ARIS) support the idea of variant artefacts, which allows for a configuration item to be traced back to its reference artefact. This is particularly useful when mapping a process model or architectural layer against a set of reference architectures, which in turn allows for quick discovery and gap analysis of compliance requirements.
However, for some reason this idea has not yet made its way to process execution land. The majority of BPMS platforms treat process models as isolated, transactional entities. References are made through related events or by drilling into sub-process models. Process layering and variation are completely unknown concepts in the world of execution, despite their inherent adoption in enterprise modelling. Many enterprises struggle with the need to select and execute a particular process variant depending on a set of pre-conditions, whilst still being able to reflect that the instance belongs to a particular group of variants. An executable billing process might vary slightly depending on the type of the client currently being billed, but the process is still the billing process. Integrating process variations in BPMS theory adds depth and context to the executable process models, as opposed to pure, isolated workflows.
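A sketch of what variant-aware execution could look like (names are hypothetical and not tied to any particular BPMS): the engine resolves a concrete variant from pre-conditions at instantiation time, while every instance retains its link to the shared reference process:

    // Hypothetical sketch: resolve a billing process variant from the client
    // type, while retaining the link to the shared reference process.
    public class VariantResolver {

        enum ClientType { RETAIL, CORPORATE, GOVERNMENT }

        static final String REFERENCE_PROCESS = "billing";

        // Maps pre-conditions to a concrete, executable variant. Each variant
        // still "is" the billing process for reporting and gap analysis.
        public String resolve(ClientType clientType) {
            switch (clientType) {
                case CORPORATE:  return REFERENCE_PROCESS + ".corporate";
                case GOVERNMENT: return REFERENCE_PROCESS + ".government";
                default:         return REFERENCE_PROCESS + ".retail";
            }
        }
    }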

Process Regulation and Self-Reference 
The research field of control theory and cybernetics has long explored the properties of self-organising systems, which respond in a meaningful way to outside stimuli. Examples of cybernetic systems range from simple thermostats to complex jet fighter engines, which monitor and regulate their current state depending on the environment (such as temperature or altitude). Similarly, researchers in business process management (BPM) and process engineering have explored the idea of self-regulating processes: business processes that monitor, adjust, and control their own state, activity, and performance based on the general condition of the overall enterprise. In manufacturing this would be a process which adjusts its production throughput automatically based on recent market trends received from the business intelligence system. Sales processes adjust their current inventory data based on market forecasts triggered by an external supplier. Car manufacturing robots make just-in-time adjustments to assembly line activity after observing a major slump in the stock market five minutes ago. The modern enterprise is event-driven, interconnected, and immediately responsive. However, in order for business processes to exploit this opportunity they need to become self-regulating or "self-aware." Executable processes must be able to adjust their own complex states based on listeners and triggers from external events. This demands sophisticated complex event processing and a meta-process environment, which allows for easy and dynamic reconfiguration of process model layout, design, and performance based on external data. The change in state should not be limited to a pre-configured set of process patterns. Process models and metadata should automatically infer new possible process designs and subsequently select the most plausible design based on previous design choices, feedback, and execution data.
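Purely as an illustration (the event and regulator types are hypothetical), a self-regulating manufacturing process might subscribe to events from the CEP layer and adjust its own throughput parameter at runtime:

    // Hypothetical sketch: a process instance regulates its own throughput
    // based on events pushed from a complex event processing layer.
    public class ThroughputRegulator {

        interface MarketEvent {
            double demandIndex(); // e.g. derived from BI market forecasts
        }

        private double unitsPerHour = 100.0;

        // Invoked by the CEP layer whenever a relevant event pattern fires.
        public void onEvent(MarketEvent event) {
            // Simple proportional adjustment; a true self-regulating process
            // would also learn from feedback and past execution data.
            unitsPerHour = Math.max(10.0, unitsPerHour * event.demandIndex());
        }

        public double currentThroughput() {
            return unitsPerHour;
        }
    }

The hard part, of course, is the final step described above: inferring new process designs rather than merely tuning a parameter.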

To Be Continued
These considerations are only a small part of the ideas I have been collecting for next generation process automation, which could very well evolve into a general research programme on the future of BPMS. It is my opinion that we have reached a solid state of enterprise integration tools and middleware platforms. However, BPMS theory and practice are still in a state of flux: shiny new tools emerge every day, but the fact is that we have very little experience with designing, deploying, and maintaining complex, large-scale process applications. Granted, the general principles of software engineering and IS development theory still apply: effectively, most process applications are enterprise systems in the large, with a vast number of moving parts and integration points. However, in order to successfully respond to increasingly rapid market and requirements change, we need faster, simpler, context-aware, and interconnected BPMS platforms driven by self-regulation and complex event processing. In my upcoming blog posts I will write more on this topic.




Announcing Upcoming Book: Systems Thinking in Enterprise Architecture

Dr. John Gøtze and I have announced the forthcoming publication of a new book titled: Systems Thinking in Enterprise Architecture. The book, which targets the intersection of practitioners and academics, explores the important, notional relationship between Enterprise Architecture (EA), systems thinking, and cybernetics. A wide array of authors have been invited to contribute to the book, resulting in a total of 20 chapters on the topic.

Systems Thinking in Enterprise Architecture is still the working title of the book and is expected to change once all chapters have been reconciled. The book will be published in the Systems Thinking and Systems Engineering Series by College Publications, thus marking the successor to the remarkable volume 1: A Journey Through the Systems Landscape by Harold "Bud" Lawson, who is also one of the contributors to our book.

To read more about the book, please refer to the ITU Enterprise web site for deadlines, the list of contributors, and publication guidelines. If you are interested in contributing, please do not hesitate to send us a draft manuscript!

Saturday, 9 April 2011

Rhizome: On Dilemmas in Enterprise Architecture Planning


In the field of IS and management we often put forward a certain conception of the organisation, the social. In contemporary business consulting and management academia, the organisation is often conceptualised as a hierarchical open system with a certain body of knowledge supplying the management system with rational decision making. Other, alternative academic approaches are influenced by literature studies and Gadamer's hermeneutics (Gadamer, 1975), promoting the need for understanding and context, the particular, rather than the universal and manageable. Emerging from these two spectra, each fighting for its own conception of subjectivity and objectivity, Anglo-Saxon and continental philosophy each defining the criteria for truth and meaning, one uncovers systems theory and cybernetics, which propose a model for generalising structures and properties between different phenomena in the world: Bertalanffy (Bertalanffy, 1969) suggests a unified cybernetic model for living, mechanical, and social systems, whereas von Foerster (Von Foerster, 2003) and Luhmann (Luhmann, 1995) suggest a second order model based on constructed observation and interpretation. In organisation studies, first and second order systems theory each postulate their own conception or construction of social reality: Parsons defines the social as actions or events referring to each other within a structural organising of social functions, whereas Luhmann flips the tin can with a functional organisation of social structures based on communication, reproducing and sustaining itself through Maturana and Varela's concept of autopoiesis (Maturana & Varela, 1980).


Amidst IS management's (and thus EA's) attempt to establish a common, trans-disciplinary foundation for research, there appears to be an ontological schism over what the social is and what organisations really are. Is it a collective intelligence or logic of rational decision making? Is it a reactive, intersubjective collective attempting to make sense of the world in hindsight through history and culture? Or is it a system, or a construction of a system, that organises, structures, or communicates through constant adaptation and recursive reproduction, only by reference to its own recursion and reproductivity? The latter approach dissolves the former two boundaries by creating a boundary of distinction even more important than the understanding subject itself at the edge of every possible system. It is the distinction between system and environment that generates or fabricates meaning and truth, but it comes at the cost of reducing our very own processes of cognition and sensemaking to a set of vibrating antennas or satellites mounted on the fragile surface of every human system.

An ontology of the social is thus far from complete. Enterprise Architecture (EA) seeks to address this by building layers of abstraction and control, thereby assuming that static systems models of socio-technical relations yield manageability and transparency. Accountability is achieved by linking formal role descriptions to process models and system landscapes, often positioned in a well-defined hierarchy and stored in a database repository for later reference and reuse. In order to reuse 'best practices' and assure a certain level of maturity in framework and methodology, enterprises often implement their architecture practice against existing reference frameworks and enterprise meta-models. Frameworks such as FEAF even include a CMMI-like maturity model for EA, which assesses the success of an architecture program by measures such as completeness and integration. The US Office of Management and Budget (OMB) has furthermore published a set of measures of architectural completeness for evaluating US Federal agencies. The highest achievement, level 5, is the architectural utopia in which the organisation practicing EA corrects its own business failures by architectural inspection. Architecture is here synonymous with optimising an organisation.

Given the above reflections on what the social really is, is it philosophically reasonable to suggest that a stable, decomposable, hierarchical model, which is what most enterprise meta-models really are, is capable of building a comprehensive model of the social? Is it really meaningful to stretch virtually any organisation, be it government or private, along a five-level diagram and measure it by how well-described its architectural elements are? And what happens when the Federal agency hits the ceiling after level 5? Those are clever and important questions that information and organisation science ought to ask. Unfortunately, that is seldom the case. Maturity models, in the classic form of a five-step ladder, are an inherent part of virtually any contemporary management/IS theory: process maturity, architecture maturity, service maturity, integration maturity. The five-step Capability Maturity Model (Paulk, 1995) has its roots in systems engineering carried out by engineers building space shuttles for NASA. As universal as it may be, the problems, issues, and solutions faced by modern organisations are far more muddy, messy, and ill-defined than those originally faced by defence contractors and DoD bureaucrats. Such fast-paced, deep problems are also characterised as wicked problems (Rittel & Webber, 1973):
  1. Wicked problems have no definitive formulation. One can infer that the problem exists, but will never be able to fully document the problem.
  2. The solution to a wicked problem is “good or bad” not “true or false”.
  3. Every possible solution is a one-shot operation, as every solution attempt leaves a trace which cannot be undone.
  4. Each wicked problem is unique and may eventually be the symptom of another, underlying wicked problem.
Through my previous research, I have suggested a systems theoretical approach towards understanding and explaining EA. Systems theory is helpful for describing the messy complexity of social and communicative structures. Second order systems theory adds a rich, dynamic theory for understanding communication inside and outside the organisation by describing the exchange of utterances between human actors in search of meaning (Jensen, 2010). I believe, however, that these two key conceptions of enterprise planning and governance can furthermore be extended into a general theory of EA by including Deleuze's theory of the rhizome.


Deleuze (Deleuze & Guattari, 1988) describes the rhizome structure (Deleuze & Guattari, 1976) as a meaningful alternative to uncovering complex structures, be they social or biological. Western society, Deleuze explains, has built its historicity and philosophy on the basis of binary structures: true-false, yes-no, top-bottom, maturity-immaturity. Contemporary EA frameworks are, in fact, highly binary: layers separated by clear boundaries, processes with a start and end, structured organisation charts and capability maps with a top and bottom. The rhizome is a viable alternative since it assumes an inherent complexity of what it is intended to describe. The rhizome is constantly transforming and morphing itself, making it virtually impossible to map out its structure completely at any point in time. This is exactly how wicked problems occur. Wicked, messy problems could, in fact, be described as rhizomatic structures. The rhizome structure applies well to the socio-technical nature of organisations as well, as the dissipative relationships between humans, technology, and organisation structures form a complex, dynamic, and transforming entity with no clear, formal, or necessarily logical order. This rhizomatic relationship is probably best explained in the field of technology adoption and diffusion in private enterprises where traditional positivist approaches to management and innovation struggle to explain how and why technology trends emerge and behave. This reflection on Deleuze leads to the following important claim:

Organisations are complex, dissipative structures constantly transforming complex human knowledge and social relationships. A rhizomatic systems model satisfies such a conception of organisational reality. Hence, Enterprise Architecture, in its search for whole-of-enterprise views, should adopt rhizomatic theory for uncovering and understanding the true messiness of organisations as socio-political habitats.

Understanding Enterprise Architecture as a rhizomatic systems practice, however, must come at the cost of killing certain darlings. The first darling is the idea of organisations as stable structures operating on explicit, verifiable knowledge, which in turn can be divided into clear architectural layers and segments. The second darling is the conception of a universal maturity model explaining the natural progression towards "EA nirvana". There is no such thing.
  1. Layers, segments, and hierarchical models depart from a Westernised, binary view of the world. Layering suggests decomposability and abstraction of organisational complexity. A rhizome does not have such properties. The messy, social facets of organisational life cannot be decomposed or functionally abstracted. The social does not have a single function and thus cannot be functional. Wicked problems, as they emerge from social interactions and organisational problems, are rhizomatic and cannot be explained fully through rationalist models.
  2. Maturity models are inherently binary. They suggest a natural progression towards the optimal stage 5 somewhat similar to a tree as it stretches its branches towards the rising sun. The rhizome is the exact opposite of a tree structure as its roots and shoots grow and form in any direction, shrouding and shifting its original structure. Social structures, apart from the general statistical patterns uncovered by social psychology, do not follow universal laws of transformation or branching---and hence it is impossible and meaningless to suggest a generic, universalistic maturity model of social behaviour in EA adoption and planning. There is no such nirvana of Enterprise Architecture---and if there ever were, it would be constantly shifting and transforming depending on the current managerial climate, problems of planning, and struggle for control inside the organisation. Exactly this relationship of management, planning, and control is rhizomatic as well.

For Enterprise Architecture to fully absorb these sacrifices, it must adopt a view of the enterprise as a non-linear, interconnected multiplicity, whose structures can only be meaningfully traced and described in hindsight. Traces always remain interpretations. Enterprise modelling involves tracing organisation structures, but as these structures are traced and interpreted, they suddenly shift and transform into a different multiplicity. Enterprise Architecture is thus a semiotic practice of tracing and interpreting organisations as complex signs. The output, the long-term plans, roadmaps, and meta-models, are merely simplified pictures of these dissipative signs. Only by accepting these aspects of enterprise reality can Enterprise Architecture truly characterise the challenges and solutions in strategic planning and enterprise management.

References:
Bertalanffy, L. v. (1969). General System Theory: Foundations, Development, Applications. New York: G. Braziller.
Deleuze, G., & Guattari, F. (1976). Rhizome: Introduction. Paris: Éditions de Minuit.
Deleuze, G., & Guattari, F. (1988). A Thousand Plateaus: Capitalism and Schizophrenia. London: Athlone Press.
Gadamer, H.-G. (1975). Truth and Method. London: Sheed & Ward.
Jensen, A. O. (2010). Government Enterprise Architecture Adoption: A Systemic-Discursive Critique and Reconceptualisation. Copenhagen Business School.
Luhmann, N. (1995). Social Systems. Stanford, Calif.: Stanford University Press.
Maturana, H. R., & Varela, F. J. (1980). Autopoiesis and Cognition: The Realization of the Living. Dordrecht, Holland; Boston: D. Reidel Pub. Co.
Paulk, M. C. (1995). The Capability Maturity Model: Guidelines for Improving the Software Process. Reading, Mass.: Addison-Wesley Pub. Co.
Rittel, H., & Webber, M. (1973). Dilemmas in a General Theory of Planning. Policy Sciences 4.
Von Foerster, H. (2003). Understanding Understanding: Essays on Cybernetics and Cognition. New York: Springer.




Thursday, 7 April 2011

On the Brink of Structure and Chaos: Framework Prescription and Real-World Flexibility

An often-recurring discussion with my clients concerns the practical applicability, boundaries, and presumptions of enterprise architecture (EA) frameworks. Following a recent debate on LinkedIn, some EA practitioners suggest that dynamic systems models, such as Stafford Beer's Viable System Model (VSM) (Beer, 1994), should actually replace existing prescriptive framework practices. In my own thesis (Jensen, 2010), I furthermore discuss the options of integrating cybernetic and second order systemic models in enterprise architecture planning and management on the basis of a thorough analysis of Australian government EA and SOA practice.
However, in this debate, it is important not to frame systems thinking as the silver bullet, which can magically and immediately fix all problems of management practice, be it process management or government architecture planning. Systemic thinking in itself may, actually, be overly prescriptive through reductionist assertions of how organisations function, e.g. through the claim: “organisations are systems” as opposed to “organisations behave as systems in particular situations.” Assuming that the world consists of systems as an ontological pre-given only paves the way for yet another prescriptive theory.
I usually tell my clients that there are two key observations around healthy framework and methodology practice. This basically boils down to admitting the following two claims:
  1. Best practice frameworks and reference models provide a great starting point and rational structure for beginning an architecture endeavour. Obviously, you don’t need to reinvent the wheel, and the frameworks provide a common language and conceptual platform for sharing ideas and implementing projects. Any good framework provides the tools, templates, and management structure for relatively quickly deploying the skeleton of an architecture practice. Prescription gives guidance, direction, and shared understanding of EA purpose and planning.
  2. However, prescription may also heavily limit the creativity and flexibility of rapid transformation projects. EA is often put in place in order to rapidly change the organisation for the better --- or what Hjort-Madsen (2009) calls architecting for better outcomes. If the method is too inflexible or does not allow people to make shortcuts and rapid improvements for the better, that is a sign of too-heavy prescription. Architecture then turns into a handicap as opposed to a viable management reporting function; it has transformed into a heavy-weight documentation exercise producing massive stacks of documents that nobody will ever return to again.


Good framework practice comes from finding the right balance between structure and flexibility. The framework must guide, direct, and support the architects at work so they can share and reuse where possible. On the other hand, the framework should promote the appropriate degree of managerial buffer in order to easily overcome problems and churn out creative solutions, as opposed to locking down framework users under layers of managerial review and approval bureaucracy for the sake of (often unnecessary) traceability. As my thesis research indicates, an Australian state government agency chose to temporarily ignore its EA modelling and documentation framework whilst transitioning from current to future state. Technological, political, and managerial changes occurred so often that the existing reference framework was simply too prescriptive. After reaching a point of reasonable stability in terms of business requirements and change in demand, it was then feasible to return to the structural stability and rigour of the framework.
My key message is therefore: be careful when implementing your architecture practice in your organisation. Your people need structure and guidance but also flexibility and buffer for overcoming changes rapidly in an elegant fashion.

Beer, S. (1994). Brain of the Firm (Classic Beer Series). Wiley.
Hjort-Madsen, K. (2009). Architecting Government: Understanding Enterprise Architecture Adoption in the Public Sector. PhD thesis.
Jensen, A. O. (2010). Government Enterprise Architecture Adoption: A Systemic-Discursive Critique and Reconceptualisation. Copenhagen Business School.

Friday, 18 February 2011

Quality Criteria for BPMS Programs

Recently, I have been initiating a Business Process Management System (BPMS) implementation for a major client in the financial sector. This made me wonder: how can I come up with a set of BPMS-specific, practical quality criteria for successfully implementing a flexible, repeatable, and fast BPMS platform? First, however, note that some people are not aware of the important distinction between BPM (Business Process Management) as a management philosophy and BPMS as an enterprise IT program:
  • BPM is defined as the overall management philosophy, which frames the enterprise as consisting of a set of end-to-end business processes. These processes, in combination, deliver value to the organisation's clients. Improving the enterprise means architecting, improving, and managing these processes for better outcomes. 
  • BPMS is a collection of software tools and systems engineering practices based on process management thinking. BPMS realises the modelling, automation, execution, and monitoring of business processes on a common, single-instance runtime platform, often through the use of open, well-defined service contracts (such as web services). 
Some vendors equate workflow tools with BPMS, which is not necessarily accurate. An enterprise BPMS often involves implementing a workflow; the key notion, however, is that modelling and executing business transactions as processes yields predictability and transparency.

My work has inspired me to pursue writing a pracademic paper on common quality criteria and heuristics for enterprise BPMS implementations, particularly concerning roles, values, expected outcomes, actual outcomes, pitfalls, and general performance improvements. What worked and why? What didn't work out as expected? What business criteria should the CIO and solution architect focus on when planning or building the business case for BPMS? How can one ensure a higher-quality BPMS?
Granted, several IT governance frameworks already provide generic measures and KPIs. What I am interested in are the process- and IS-specific criteria and measures relating directly to BPMS programs.
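As one example of the kind of measure I have in mind, end-to-end cycle time per process instance is a BPMS-specific quantity that can be trended against a target from the business case (a sketch with hypothetical names, not tied to any particular platform):

    import java.util.HashMap;
    import java.util.Map;

    // Hypothetical sketch: capture cycle time per process instance so it can
    // be trended against a target defined in the BPMS business case.
    public class CycleTimeKpi {

        private final Map<String, Long> started = new HashMap<String, Long>();

        public void onInstanceStarted(String instanceId) {
            started.put(instanceId, System.currentTimeMillis());
        }

        // Returns the cycle time in milliseconds for a completed instance.
        public long onInstanceCompleted(String instanceId) {
            return System.currentTimeMillis() - started.remove(instanceId);
        }
    }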

Thursday, 27 January 2011

Architectural Control as a Managerial Paradox

In recent years, IS academia has argued that EA is increasingly becoming a strategic management tool or a high-level business function for long term planning and execution. It has been a healthy evolution for EA to question its IT heritage and adopt a broader perspective on how commercial enterprises navigate and gravitate in their respective marketplaces. This evolution has particularly been articulated by Turner, Gøtze, and Bernus (2010) in their paper Architecting the Firm: Coherency and Consistency in Managing the Enterprise, which argues for the key role of architecture in executive management. Architected organisations are said to achieve better, more consistent results since the strategy is aligned and architected against operations, processes, and the technology portfolio.

As I have previously argued in my writings on strategic management, the assumed reality of strategic long term planning has to a large extent ignored the socio-political side of human and organisational behaviour. Planning is volatile and subject to rapid change in turbulent environments. As Rittel & Webber argue in their 1973 article on government policy and planning, assuming that any problem can be solved by rational planning will ultimately lead to confusion and failure. In their view, assumed rationality leads to unexpected ambiguity. Here, it is crucial to ask: if EA really is such a strategic discipline driven by the need for informed decision making, how can the first and always at-the-top-of-the-framework-pyramid component called strategy rely on such a naïve view of how human planning works in practice? This discussion is by no means new; already in the late 1990s, Mintzberg (2002) argued the need for an organic, emergent view of strategy by elaborating on Simon's (1997) concept of bounded, contextual rationality.

Let us for a short moment forget about strategy as an applied, deliberate package of human reason and rather think of strategy as inherently equivocal. Management concepts, as they often arrive straight out of the business scholar's first-year textbook on business strategy, are, in fact, paradoxical and ambiguous despite their striving for precision and forecasting. This is explained in the following:

  1. The first paradox concerns the relationship between assumption of control vs. human and environmental ambiguity. The more one attempts to control and superimpose predictability onto reality, the more imprecise and irregular reality, in fact, becomes. This classic paradox of the manager as an assumed homo oeconomicus is discussed in depth by Kallinikos (2004).
  2. The second paradox is three-sided: the short, very generic nature of corporate vision and mission statements vs. the corporate search for control and manageability vs. the complexity of the business ecology surrounding the enterprise. The first facet is short and simple, whereas the second strives for precision, detail, and consistency, both of which in turn neglect the ecological complexity and institutional pressure (the third facet) of the organisation's environment.
  3. The third, most noticeable paradox is the fact that the implicit equivoque of high-level mission/vision statements fosters organisational resilience. The more loosely or ill-defined the strategy, the better the official policy document will fit the actions and immediate strategies (what Weick (2001) denotes just-in-time strategies) deployed by employees to fulfil or achieve certain goals and expectations (Astley & Zammuto, 1992). The more ambiguous the official strategy or policy articulation, the freer the individual employee's hands to appropriately navigate the socio-political problems of the business environment. Despite the intended precision of a strategy document, the more possible interpretations of a strategic policy or plan, the more organisational resilience and responsiveness (Weick, 2001).


However, it is too simplistic to assume that corporate ambiguity per se triggers organisational resilience and flexibility. In that case, any old plan would do. The paradoxical nature of precision and ambiguity is better understood as a second order systems theoretical concept (Luhmann, 1995; Luhmann, 2000), in which any system, be it social or biological/ecological, applies certain reductionisms in order to reduce the complexity of the outside environment. In Luhmann's theory of symbolically generalised media within social systems, phenomena such as scientific truth, politics, sex, and power are applied by different institutional systems in order to reduce the complexity and ambiguity of modern society. Similarly, organisations as social systems deliberately deploy equivocal mission statements and simplistic strategies in order to interpret and cope with a hyper-complex, fast-paced business environment. Despite the claim of predictability, strategic long term plans are thus put in place in order to reduce societal complexity and constraints to static architectural maps and prescriptive policies. The reduced conception of reality is by no means successful or comprehensive enough to account for all important details simultaneously, but it makes reality manageable until the actions taken and plans made are reasonable enough (see reasonableness as a criterion for strategic success in Weick (2001)).

As Teubner (2000) writes, strategic planning fosters productive misunderstandings: business strategies have to be misunderstood (compared to what was originally intended by e.g. senior management) and reinterpreted within the particular reality and bounded rationality of the individual. The final, synthesised (mis-)understanding of strategy, mission, and vision serves to build and sustain resilience and organisational responsiveness---but only with reference to the organisation itself. As Luhmann's sociology tells us, the end result would never achieve the same effects in the outside reality. This also explains why replicating or adopting existing patterns of strategy will most likely lead to a bad result without adapting, contextualising, and productively misunderstanding the presumed strategic rationality. Good strategies are misunderstood, self-referential, and contextual whilst fostering resilience and adaptability.

It is my conception that EA must adopt such a view of strategic thinking, as a self-referential, ambiguity-producing human practice, in order to successfully navigate the needs and requirements of tomorrow's adaptive and flexible virtual enterprises.

References
Astley, W. G. & Zammuto, R. F. (1992), 'Organization Science, Managers, and Language Games', Organization Science 3(4), 443-460.
Kallinikos, J. (2004), 'Deconstructing Information Packages: Organizational and Behavioural Implications of ERP Systems', Information Technology & People 17(1), 8-30.
Luhmann, N. (1995), Social Systems, Writing Science, Stanford University Press, Stanford, California.
Luhmann, N. (2000), The Reality of the Mass Media, Stanford University Press, Stanford, California.
Mintzberg, H., Ahlstrand, B. & Lampel, J. (2002), Strategy Safari, 2nd edn, FT Prentice Hall.
Rittel, H. & Webber, M. (1973), 'Dilemmas in a General Theory of Planning', Policy Sciences 4.
Simon, H. A. (1997), Models of Bounded Rationality, Massachusetts Institute of Technology, MA.
Teubner, G. (2000), 'Contracting worlds: Invoking discourse rights in private governance regimes', Social and Legal Studies 9, 399-417.
Turner, P., Gøtze, J. & Bernus, P. (2010), 'Architecting the Firm: Coherency and Consistency in Managing the Enterprise', in Bernus et al. (eds), Enterprise Architecture, Integration and Interoperability: IFIP TC 5 International Conference, EAI2N 2010, Held as Part of WCC 2010, Brisbane, Australia, September 20-23, 2010, Proceedings.
Weick, K. E. (2001), Making Sense of the Organization, Blackwell Publishing.

Sunday, 23 January 2011

Bank Branches need Process Improvement

I am generally a happy client of my personal bank here in Australia. Their online services are good, they have excellent phone service, always reply to my enquiries with a smile, and service fees are kept at a minimum. However, two weeks ago I had an experience I thought belonged to the pre-Gorbachev era. My wife and I went to the bank in order to consolidate our bank accounts and personal finances. We also wanted to migrate a single credit card into both our names as shared card holders. With many couples consolidating their banking business every day, this really shouldn't be a complex process; even so, it turned out to be a dull three-hour experience. Forget about going to the bank in your lunch break to finally get those accounts sorted out: you will need at least an afternoon to get things right.
Let's investigate the processes involved in this relatively simple client request:
  1. Verify client identity
  2. Open x new client bank accounts
  3. Transfer funds from existing bank accounts to new bank accounts
  4. Close y old client bank accounts
  5. Update client and card holder details for MasterCard credit card
  6. Verify details and finalise client request
For a bank (and the branch officer), these processes should be relatively trivial and simple. Following Porter and Harmon (Harmon, 2007), these are core bank service processes. I had expected 30-45 minutes including a couple of signatures, a few mouse clicks, and maybe an internal phone call to a service desk. How could that result in three hours? These were the main process flaws:
  1. The poor customer service officer spent most of her time looking for the correct paper forms and subsequently printing these out for further processing. The bank has a comprehensive Intranet site providing access to all client request forms through an enormous tree folder structure, and she even tilted the screen so I could help look for the right place to find each form. All relevant forms for an account closure and account opening were located in completely different folders, completely detached from the actual client task. Furthermore, all forms were given long, cryptic names, which made the whole operation even more painful (who would have guessed that HD30 is the code for an account transfer request?).
  2. Lack of task and process orientation: the user interface on the service officer's screen was data-driven and not process-focused. For each step in the processes, she had to navigate through a hierarchy of different Intranet pages to find the right forms and data entry screens. Some of these screens (or web-based wizards) were task oriented, but only at the most local level. One client request thus consisted of invoking six different wizards dispersed over three different application systems, each with its own user interface, constraints, and terminology. I would recommend looking into consolidating end-to-end business processes inside the same process engine operating on the same account data.
  3. Basic process tasks were delayed and not properly formalised. It is crucial for a bank to verify the identity of its clients to prevent fraud, also for people walking in from the street. Verifying the client's identity should thus be a basic condition for proceeding with any further steps in the above processes. But only after 30 minutes spent on printing out five or six different request forms and allocating all of our bank accounts did the service officer actually ask for our driver's licenses in order to confirm that I was actually me. Now, what if I had forgotten my driver's license and had to come by another day? Checking identity first could have saved both her and me 60 wasted minutes, although I fortunately had remembered my license. It is probably the most basic conditional task (or, in BPMN terms: decision gateway) for all basic client processes. All requests demand confirming the correct identity. Move all conditional, security- and identity-related decision points to the earliest possible point in your banking process (see the sketch after this list). It might sound like common sense, but with multiple entry points for invoking a client request (online, in person, phone call), it is crucial to integrate all authentication in a uniform fashion across all business processes.
  4. No visual graphic of process path and progress---neither for the service officer nor the client. A visual progress overview generally makes people more patient and clarifies the tasks at hand (imagine having a non-technical EPC or BPMN model of all of the above processes for the customer service officer to follow).
  5. No process Key Performance Indicators or client feedback integrated into the workflow. The friendly, but frustrated customer service officer could have ended the (albeit very lengthy) transfer process by asking me: how would you rate this transaction? Is there anything we can do better? Process improvement demands learning and systems feedback being fed back into the next process cycles. Only from incremental improvement cycles can significant process improvements be won (Seddon & Caulkin, 2007). This demands a balance between quantitative (e.g. processing time, number of mouse clicks, processing/systems exceptions) and qualitative process indicators (client satisfaction, that friendly smile on the face, welcoming clients in the branch, offering them a glass of water whilst the banking transaction is cooking) (Zokaei et al., 2010).
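To make point 3 concrete, the sketch below shows the identity check as the very first decision gateway of any client-facing process, shared across all entry channels (all types and names are hypothetical; this is an illustration, not the bank's actual system):

    // Hypothetical sketch: identity verification as the first gateway of
    // every client request, shared across all entry channels.
    public class ClientRequestProcess {

        interface IdentityService {
            boolean verify(String clientId, String presentedDocument);
        }

        public void start(String clientId, String presentedDocument,
                          IdentityService identityService) {
            // Decision gateway first: fail fast before any forms are
            // printed or any accounts are touched.
            if (!identityService.verify(clientId, presentedDocument)) {
                throw new IllegalStateException("Identity not verified");
            }
            openNewAccounts(clientId);
            transferFunds(clientId);
            closeOldAccounts(clientId);
            updateCardHolderDetails(clientId);
        }

        private void openNewAccounts(String clientId)         { /* ... */ }
        private void transferFunds(String clientId)           { /* ... */ }
        private void closeOldAccounts(String clientId)        { /* ... */ }
        private void updateCardHolderDetails(String clientId) { /* ... */ }
    }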
These recommendations and conclusions are by no means revolutionary or unique. The important difference is that the ideas emerged from a---at least in the blueprints---simple visit to my local bank; a visit that in the end brought a lot of frustration and, ultimately, improvement ideas to the table. With the billions of dollars spent each year on IT portfolio management and business analysis in the financial sector, and a GFC finally returning to the historical wastelands of our capitalist society, one would assume that significant improvements---and (systems) learning---had been discovered. Paradoxically enough, the budgets are apparently still spent on building cluttered, unintuitive user interfaces and manually integrating disparate data sources at the cost of redundancy, whilst ignoring the fundamental structure of systems learning and feedback.

References
Harmon, P. (2007), Business Process Change, 2nd edn, Morgan Kaufmann Publishers.
Seddon, J. & Caulkin, S. (2007), 'Systems thinking, lean production and action learning', Action Learning: Research and Practice 4(1), 9-24.
Zokaei, K., Elias, S., O'Donovan, B., Samuel, D., Evans, B. & Goodfellow, J. (2010), Lean and Systems Thinking in the Public Sector in Wales (Report for the Wales Audit Office), Technical report, Lean Enterprise Research Centre, Cardiff Business School, Cardiff, Wales, UK.