Friday, 9 April 2010

On Enterprise Taxonomy Completeness

Yet again it is time for an update on my thesis progress. After reviewing my initial introductory chapter with my supervisor (John Gotze), I decided to revamp it completely and move forward with a more tangible problem: the failure of strategic IT and process redesign programmes in government. Enterprise Architecture claims to hold the cure for these failures, but is it really that simple?


To get deeper into that discussion, I started by reviewing Roger Sessions' Enterprise Architecture framework comparison on MSDN (A Comparison of the Top Four Enterprise-Architecture Methodologies), in which he compares some of the most prominent contemporary approaches to architecture: Zachman, TOGAF, FEAF, and Gartner. Sessions, albeit very practical in his approach to evaluation, raises a few very interesting problems in his analysis:

  • First, a basic lack of analytical coherence and empirical evidence: his measurements or rankings of each framework stem entirely from personal use, and no empirical foundation is included: "Keep in mind that these ratings are subjective. I'm sure most people would disagree with at least one of my ratings." Still, a finite scale of methodology rankings emerges out of the blue, with no coherent explanation of how the measures were chosen, nor any discussion of why other measures were left out. In my opinion, it is interesting how much time is spent talking about frameworks rather than conducting an empirically well-founded research effort to find out how effective they actually are. Honestly, such a research effort might actually support better-informed decisions than an arbitrary soup-du-jour selection of framework criteria.
  • Second, Sessions introduces an interesting parameter: taxonomy completeness, which "refers to how well you can use the methodology to classify the various architectural artefacts. This is almost the entire focus of Zachman. None of the other methodologies focuses as much on this area." It is certainly an ambitious contention to provide a finite taxonomy for the enterprise, but how is such completeness achieved? Complete in terms of what? Complete compared to what? Is it because the methodology, framework, or modeling language is capable of expressing any possible state or snapshot of an enterprise at any point in time? Organisations are ambiguous, interpretative systems, not engineered machines that we can model in their entirety with a rigorous charting tool. Imagine expressing the many facets of corporate life, the politics of process standardisation, and the controversy of introducing cross-departmental cost centers in UML 2.0 or ArchiMate: if an ontology or taxonomy is to be complete, it would have to comprehend exactly that snapshot of the enterprise, but the difficulty (or absurdity?) of fulfilling that task hopefully requires no further elaboration -- the toy sketch after this list makes the point concrete.
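To make the objection tangible, here is a deliberately naive sketch in Python. The perspective/aspect grid below is a simplified, hypothetical stand-in for a Zachman-style classification scheme -- it is not the actual Zachman schema, and the artifact names are invented for illustration. The point is only that any fixed grid claiming completeness must either reject or distort artefacts that were never engineered to fit it:

```python
# Toy illustration of the "taxonomy completeness" problem.
# The grid is a simplified, hypothetical Zachman-style
# classification -- NOT the actual Zachman schema.

PERSPECTIVES = {"planner", "owner", "designer", "builder"}
ASPECTS = {"data", "function", "network", "people", "time", "motivation"}

def classify(artifact: dict) -> tuple[str, str]:
    """Place an artifact into exactly one (perspective, aspect) cell."""
    perspective = artifact.get("perspective")
    aspect = artifact.get("aspect")
    if perspective in PERSPECTIVES and aspect in ASPECTS:
        return (perspective, aspect)
    # A taxonomy that claims completeness has no cell for
    # whatever falls through to here.
    raise ValueError(f"no cell for artifact: {artifact['name']!r}")

# An engineered artifact classifies cleanly...
print(classify({"name": "logical data model",
                "perspective": "designer", "aspect": "data"}))

# ...but much of corporate life resists the grid entirely.
try:
    classify({"name": "politics of process standardisation"})
except ValueError as err:
    print(err)
```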
By no means do I want to commit the fallacy of ontological or radical constructivism, but there seem to be some serious problems with how informed our choices of and trust in frameworks and tools really are. Granted, there are many benefits to choosing a structured approach to enterprise architecture and engineering, but all too often we ignite a heated discussion of the specific framework itself rather than examining the realistic benefits of applying a structured approach and systems thinking in an organisation. Applicability and practical benefits should be our point of departure, not how well a framework (documented in a very thick manual) maps to another framework, when it is obvious that both methods provide an excellent structured body of knowledge and best practices for applying enterprise engineering. Again, I want to refer to the GERAM methodology developed by the IFIP-IFAC Task Force on Architectures for Enterprise Integration -- it provides a unified platform for capturing the essentials of a reference architecture and presents them in a pragmatic way, without claiming an ontological monopoly such as taxonomy completeness.

Of course, having a reference architecture, a corporate dictionary, and a common taxonomy is very useful -- especially as a communication tool. But having a dictionary of important models is still very far from having a complete depiction of the enterprise -- what Sessions claims to be an important measure for methodology selection. Similarly, GERAM discusses the importance of having a common taxonomy -- Generic Enterprise Modeling Concepts (GEMCs) -- but the IFIP-IFAC Task Force never claims the ability to capture everything in one model, but instead discusses the applicability of ontologies at different levels of ambitiousness and rigor. 
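As a purely hypothetical illustration of what "different levels of ambitiousness and rigor" might look like in practice (GERAM's actual GEMC specification is of course far richer than this), consider the same concept captured three ways: as an informal glossary entry, as a structured record, and with one machine-checkable constraint:

```python
from dataclasses import dataclass

# Level 1: an informal glossary -- cheap, ambiguous, human-readable.
GLOSSARY = {
    "business process": "a set of activities that produces value for a customer",
}

# Level 2: a structured record -- typed fields, but still no formal semantics.
@dataclass
class BusinessProcess:
    name: str
    owner: str
    activities: list[str]

# Level 3: a machine-checkable constraint -- one small step toward a
# formal ontology (a real axiomatisation would go much further).
def is_well_formed(p: BusinessProcess) -> bool:
    """Every process must have an owner and at least one activity."""
    return bool(p.owner) and len(p.activities) >= 1

invoicing = BusinessProcess("invoicing", "finance dept.", ["issue", "reconcile"])
assert is_well_formed(invoicing)
```

The point is not the code itself, but that each step up buys precision at the cost of expressiveness: the glossary can say anything, while the constraint can only check what someone thought to formalise.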

Stay tuned in the coming weeks, as I will be exploring and discussing the relationship between enterprise engineering, systems science, and first-order cybernetics.

2 comments:

  1. “A wit has said that one might divide mankind into officers, serving maids and chimney sweeps. To my mind this remark is not only witty but profound, and it would require a great speculative talent to devise a better classification. When a classification does not ideally exhaust its object, a haphazard classification is altogether preferable, because it sets imagination in motion.” [Kierkegaard]

  2. I agree that usefulness is a more interesting criterion than completeness, but I think the investigation of usefulness would probably have to be a hermeneutic exercise rather than an empirical one.

    You talk about "an empirically well-founded research effort to find out how efficient they [the frameworks] actually are". But surely this would entail regarding the frameworks as a kind of instrument for achieving some set of system outcomes. But as I outlined in my paper on Reasoning about systems and their properties, there are some serious methodological problems in evaluating such instruments properly. In this particular case, I can't see how you could compare the efficiency of different frameworks without having some framework-neutral framework to describe the outcomes. The CRIS conferences (IFIP WG 8.1) struggled with this issue in relation to design methodologies, and came up with some practical compromises, but I have seen no references to this work in the EA literature.

    For me, the important methodological question about EA taxonomies comes from Lakatos' notion of the research programme. Can the development of EA frameworks and taxonomies be regarded as a progressive research programme ("marked by the discovery of stunning novel facts, development of new experimental techniques, more precise predictions, etc.") or as a degenerating research programme ("marked by growth of a protective belt that does not lead to novel facts")?

    If (as I strongly suspect) the latter, then Zachman and TOGAF are closer to the mediaeval scholastics (from ibn Arabi to John "Dunce" Scotus) than to modern science.
