Thursday, August 7, 2008

Separating the data and the form of representation (models)

(based on recent discussion with clients looking at how to operationalise an EA model in an enterprise context)
For a solution to work in an enterprise context it is important to be able to separate the enterprise data from the models that reference the data. This is for two reasons – the nature of models and the nature of organisations.

Models - Models are very useful, particularly for design and planning (where visual representations are often very powerful). The way something is modelled depends almost entirely on the purpose of the model, i.e. what you expect to learn from it. Models consist of data-elements related in fairly complex ways (objects, properties, relationships between objects, calculated values etc.). As models are always designed to achieve a purpose – no one model can achieve all purposes – data-elements will usually exist in many models (each model with a different purpose).

Organisations - In an enterprise many people will be interested in things from many different perspectives (economic, operations, organisation, product, market, contracts, etc.) and at different times (now, next year, in the future). Usually individuals, while having a broad interest in many aspects of an enterprise, only have a detailed interest in, and definitive knowledge of, the specific subset of the enterprise they are focused on. So while many people may be interested in the services an enterprise provides, some will be interested in these at a purely business level and others in the technical aspects; some will be interested in the costs associated with a service, some with how the service is delivered (procedures, policies, information), some with who/what performs the service, and some with what agreements are associated with the service. So while many people may be interested in the service – few will have definitive knowledge of all its aspects.

Often the data-elements may first be considered, and captured in a structured and semantically explicit way, e.g. when something is being conceived of, planned or changed. While the model suits this initial purpose (design, planning), the data-elements will be reused later in different ways, i.e. in other models, reports etc. Eventually, when things are operationalised, the data-elements will be updated by different groups with different interests.

Therefore the model form (which relates to its purpose, and the modeller's specific design or implementation) will militate against most people (other than the author or designer of the model) interacting with the underlying data-elements, e.g. updating data associated with a service object type. It is not that most people are unable to understand the model if it is explained to them – it is that they don't want to understand a model created for a purpose different from their own (i.e. they don't want to invest the time and energy learning about data and relationships that are, from their perspective, extraneous).

So for most people, interested only in a small subset of the data contained in most complex models, the model will be too complex to grasp, or contain too much extraneous data (i.e. it just seems like a complex monolithic assemblage of data, not necessarily well suited to harvesting for reuse). This is especially the case if the people are not modellers by nature and/or don't use the modelling tool/technology employed.

Therefore it is important that people have other mechanisms for updating the data-elements that will be represented in a model – ways that suit their interests and their level of understanding.
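This separation of shared data-elements from purpose-specific models can be sketched in code. The sketch below is purely illustrative (all class and field names are hypothetical, not from any particular tool): data-elements live in one shared store, each model is just a purpose-specific view that selects from that store, and owners update the data directly without touching any model.

```python
# A minimal sketch (hypothetical names) of separating shared data-elements
# from the models that reference them. Each model selects and presents a
# subset of the elements for its own purpose; updates go to the shared store.

class DataElement:
    def __init__(self, element_id, element_type, **properties):
        self.id = element_id
        self.type = element_type          # e.g. "Service", "Application"
        self.properties = properties      # owned and updated by domain experts

class Repository:
    """Single shared store of enterprise data-elements."""
    def __init__(self):
        self._elements = {}

    def add(self, element):
        self._elements[element.id] = element

    def update(self, element_id, **changes):
        # Owners update data through simple, purpose-suited mechanisms,
        # without needing to understand any particular model.
        self._elements[element_id].properties.update(changes)

    def select(self, predicate):
        return [e for e in self._elements.values() if predicate(e)]

class Model:
    """A purpose-specific view: references shared elements, owns only its form."""
    def __init__(self, purpose, repository, predicate):
        self.purpose = purpose
        self._repo = repository
        self._predicate = predicate

    def elements(self):
        # Always reflects the current shared data - no copy to go stale.
        return self._repo.select(self._predicate)

repo = Repository()
repo.add(DataElement("svc-1", "Service", name="Billing", cost=120_000))
cost_model = Model("cost analysis", repo, lambda e: "cost" in e.properties)
repo.update("svc-1", cost=95_000)   # the service owner updates the data...
print(cost_model.elements()[0].properties["cost"])  # ...every model sees it
```

The design point is that the model holds no copy of the data, only a reference (here, a selection predicate), so the same element can appear in many models without duplication.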

Thursday, July 24, 2008

Focus on EA is inversely proportional to the need for EA

Focus on EA seems inversely proportional to the need for EA.

The potential value that enterprise modelling and knowledge management (EMKM) can bring to decision making depends on the size of the enterprise, the complexity of the enterprise and the rate of change.

If the enterprise is very small, e.g. a couple of people, the knowledge that exists is held in a few heads (often one or two). If the enterprise is very simple the knowledge can be held in a few heads. However for most large enterprises the knowledge actually resides with many people.

If the rate of change is very slow one has the time to try and slowly bring together the knowledge from the many people, e.g. to produce a five-year plan (or perhaps an annual plan). Communist states' five-year plans demonstrated that this method of planning doesn't respond well to rapid change.

The actual value that EMKM brings to decision making is a function of the scope and range of use. The function is probably a product (rather than a sum). The range of use can be considered the percentage of the people in the enterprise with knowledge who contribute. The scope of use can be considered the breadth of the topics or areas of use.
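The difference between a product and a sum matters here: with a product, if either factor is near zero the overall value collapses, whereas a sum would still credit the strong factor. A tiny illustration (the numbers are invented for illustration only):

```python
# Illustrating why a product function behaves differently from a sum:
# if either factor is near zero the overall value collapses, whereas a
# sum would still credit the strong factor. (Numbers are illustrative.)

def value_product(range_of_use, scope_of_use):
    return range_of_use * scope_of_use

def value_sum(range_of_use, scope_of_use):
    return range_of_use + scope_of_use

# Broad scope of topics, but almost nobody contributing knowledge:
print(value_product(0.05, 0.9))  # ~0.045 - little real value
print(value_sum(0.05, 0.9))      # ~0.95  - misleadingly high
```

This is why an EMKM effort with wide topical scope but few contributors (or many contributors on a trivially narrow scope) yields little: both factors must be non-trivial.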

The problem that occurs in large complex organisations is that they develop bureaucracies and silos.

EMKM for decision making is only required if there is to be change. If the status quo is to remain few decisions are required.

An instance of a bureaucracy is an organism and, like any organism, it seeks self-preservation. Change could jeopardize the particular instance of a bureaucracy, e.g. make it unfit for purpose. So change is in general to be avoided by a bureaucracy – unless the bureaucracy changes itself first to suit the future state. Unfortunately this is usually too slow for real-world change. As knowledge is power when it comes to decision making, the bureaucracy wants to have that power, i.e. retain the knowledge. So a bureaucracy doesn't want knowledge to exist independent of the bureaucracy. Fortunately for the bureaucracy, large organisations tend to be hierarchically structured with increasingly specialized functions – which effectively form silos of knowledge. These silos can easily be kept isolated if documents are used to retain as much knowledge as possible (e.g. PowerPoint and Word documents).

Now if the rate of change is slow – people have the time to think, to consider things, including how to better manage the knowledge they need to make decisions. If on the other hand the rate of change is high – people have an imperative to act – and they don't have the time to think about how to make better decisions. They need to decide what to do immediately.

So we can see that the organisations that most need EMKM – large, complex, with high rates of change – are precisely those that will resist better approaches to EMKM.

Where there is little need for EA because things are fairly static, there is a reasonable focus on it (because key people have time to think). Where there is a great need for EA because a lot of change is being envisaged (growth or shrinkage, investment or divestment, risks), there is no time for it because people just want to act.

Wednesday, June 18, 2008

EA as a profession

(prompted by discussions of ethics)

Many people claim to be IT professionals. Most Enterprise Architects I have met see themselves as IT professionals. It always makes me wonder what they mean by the word professional.

The word comes from "profession" in the sense of a public declaration, such as an oath -- so the word had an ethical sense from the start. In most of the traditional professions this was intended as a guarantee that the professional (priest, lawyer, doctor, etc.) would act according to a code of ethics and provide advice consistent with that code, rather than whatever best suited their self-interest, or misrepresenting the truth to suit a paymaster.

It seems that people now use the word in several different ways, for example:
  • People, such as athletes and sex workers, who get paid for what is normally a pastime, where no particular ethical sense is implied. In the former case gamesmanship replaces sportsmanship (winning is more important than honest effort); in the latter play-acting replaces genuine passion.
  • Profession to a code of ethics, such as medicine, law, or architecture, where there remains an ethical sense of someone who professes to the code.
  • Being a member of a professional body: this implies a level of training (e.g. academic), a set of recognised approaches or methods, and a professed set of standards that are adhered to. This is the sense more commonly used in trades.
The problem in IT is that people are never very clear on what they mean by the word professional.

Many EAs would recognise a common body of practice, and many when questioned know what they should be doing and how they should be advising people – though they don't bother offering this advice (because they know it won't be well received). They are often asked to do things they know don't make sense (suiting a paymaster's goals, e.g. political, personal, immediate, rather than doing what makes sense for the enterprise or the long term), and they do what they know to be poor practice (wrong methods, wrong tools, wrong skill sets, wrong focus, etc.). This is a challenge that needs to be addressed.

See: Justifying EA

Wednesday, May 7, 2008

Architects need professional tools of trade.

Why is it that people are quite happy to engage architects, strategists, etc. who will typically cost them in excess of $500k over three years, get them doing "architecture", and arm them with substandard tooling?

Any cursory analysis would show that even a small improvement in productivity and efficiency (say 5%) would produce a substantially better return on this $500k investment.

No one would suggest an accountant should be expected to do without an accounting system (server-based solution) and a spreadsheet (numeric modelling and design tool). No one would surely suggest a project manager should do without a project planning tool. Why then do people arm architects and strategists with tools manifestly unsuited to the needs of the enterprise, and then marvel at their failure?

When one suggests that a suitable class of tools be used, one is often tarred with the brush of promoting a particular product. Yet if an accounting professional suggested a suitable class of tools (e.g. a spreadsheet and an accounting system are required for this work) the statement would be taken at face value.

Of course what does not help is "professionals" in this area asserting that EA can be effective when the right class of tooling is not used (despite decades of evidence to the contrary).

See also:

EA knowledge bases need to be active

(prompted by people using old approaches with new tools)

The idea of publishing data in a static format is counter to the efficacy of EA.

What one wants in EA is to allow the ownership of the data to remain where it currently resides (i.e. with whatever business function or role currently owns it) – and to be able to see and analyse aggregations of the data (i.e. the data that represents the enterprise).

People should be able to update the data as a natural by-product of their day-to-day work (function, role etc.). As almost everyone in an organisation is responsible for (i.e. owns) some data, this means that eventually almost everyone should be able to update some aspect of the data, i.e. the function of maintaining the data is progressively distributed and decentralised (obviously this requires access management, data integrity policies etc.).

This is true of most other enterprise solutions (i.e. all users can update some things, while many other things they just reference).
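The ownership-plus-access-management idea above can be sketched as follows. This is a hypothetical illustration (all names invented): everyone may read any record, but each record may only be updated by the role that owns it.

```python
# A minimal sketch (hypothetical names) of distributed, ownership-aware
# updates: anyone can read the shared data, but each record can only be
# updated by the role that owns it - the access management the text notes.

class ActiveKnowledgeBase:
    def __init__(self):
        self._records = {}   # record id -> dict of properties
        self._owners = {}    # record id -> owning role

    def register(self, record_id, owner_role, **properties):
        self._records[record_id] = dict(properties)
        self._owners[record_id] = owner_role

    def update(self, record_id, actor_role, **changes):
        # Data integrity policy: only the owning role may change a record.
        if self._owners[record_id] != actor_role:
            raise PermissionError(f"{actor_role} does not own {record_id}")
        self._records[record_id].update(changes)

    def read(self, record_id):
        return dict(self._records[record_id])   # everyone may reference

kb = ActiveKnowledgeBase()
kb.register("proc-onboarding", owner_role="hr-manager", steps=12)
kb.update("proc-onboarding", actor_role="hr-manager", steps=10)  # owner: ok
print(kb.read("proc-onboarding")["steps"])  # 10
```

A real implementation would of course need finer-grained roles, audit trails and integrity policies; the sketch only shows the shape of decentralised ownership.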

Actually, typically people already do update some aspect of this data (i.e. update of the EA is already distributed and decentralised) – usually in a plethora of disconnected and unstructured documents that suit each individual but provide nothing for the enterprise (or the others in the enterprise). Much of it would be in documents such as: business cases (applications, systems, business processes/products and services, business goals and strategies etc.), operational and procedural documents (holding roles, business processes, rules, products and services etc.), systems diagrams (applications, technology infrastructures), definitions of bodies of work e.g. project charters, statements of work, technical design documents, SLAs, BCP and DR documents etc.

The idea of publishing implicitly involves an unnecessary additional function, i.e. someone/something has to "publish" the data, whereas in fact the data should simply live (active data). The old model is like writing the Encyclopaedia Britannica and "publishing" it. This just doesn't work for EA: it involves rework, it is too slow, and it is counter to the actual process of operating the enterprise – whereas with active data the owner can quickly update or correct it.

Having said this, in the initial stages of establishing an EA (perhaps much of the first year) it is reasonable that some people with a particular interest in a set of data central to many things, e.g. business functions, application services etc., may establish a base of data, and some ways of organising the data, to make it easier for everyone else to collaborate.

Using an active EA as if it were a static EA is like using a nail gun as if it were a hammer (and then wondering why it takes so long to drive the nails, and noting that a nail gun costs more than a hammer).

Tuesday, April 29, 2008

What we learned about EA implementations

(based on many people asking why I recommend the implementation approach I do)

The fact is that most EA implementations fail, i.e. they fail to produce sustainable value and a useful asset – one that can be used to answer questions (e.g. impact analysis), maintain knowledge, support business operations and business change, etc. Over almost two decades I have seen dozens fail (often undertaken by some of the brightest, most able, and most diligent people I have met).

I have seen EA initiatives in many types of organisations (forestry, health, insurance, government, finance, health and safety, eCommerce, retail, education, telecommunications, etc) and using many different generic frameworks and methods (e.g. Zachman, TOGAF, etc.) - and the quality of the result does not seem to be associated with either the type of organisation or which of these generic approaches is used.

It is true that in many cases inadequate tooling is the root cause, and EA "consultants" and "experts" are quite happy to continue using office documents and essentially manual approaches, because for service companies it is far more profitable to do the fishing than to sell a rod and teach the person to fish for themselves.

Where poor tooling is not the cause, what I have usually seen is work on an EA implementation commence:
  • led by people who only understand about 10-15% of what the technology being used can do (and have a partial understanding of its intended use), or led by people who understand the tooling reasonably well but who have very limited domain expertise (i.e. in enterprise architecture, ICT strategy, business/IT transformation etc.).
  • without a clear understanding of what is sought (requirements)
  • without a clear understanding of how the solution will be implemented (all the components, how design will be undertaken, how operational procedures will be defined etc.)
  • without a well defined (tried and tested) implementation plan
  • without an understanding of the organisation's change impedance issues (the organisational behaviours that impede change) associated with improving touchpoint processes and roles.
Unsurprisingly these implementations seldom proceed well, and usually erroneous conclusions are drawn from what is delivered (by those who "don't get it" or sometimes "don't want to get it").

If one doesn't start with a clear view of the outcomes being sought, it is unlikely the benefits will be achieved (i.e. if you don't understand the requirements, it is unlikely the solution you deliver will achieve them).

Therefore what is required upfront is a focus on what outcomes should be sought, i.e.
  • who are the stakeholders and/or beneficiaries?
  • what scenarios of use do they have?
  • what business questions do they need answered?
  • and at roughly what stage of an implementation?
Some EA approaches aim to achieve this, but they are so generic (i.e. they don't assume that suitable tooling will be used) that they can't be precise enough about the nature of the requirements captured, or about how the design will proceed.

This is why we have developed a detailed implementation approach. It defines all the major areas of work, the key roles, logical milestones etc.; it is specific to the implementation technology we use and can be instantiated quickly for a specific project.

People often think that there must be a single detailed approach to developing an EA that will suit every organisation. There is a common meta-approach – but the detailed approaches differ based on the organisation and the goal. Different organisations are focused on making different types of changes, e.g. new products or new geographies, mergers, risk reductions (including regulatory compliance, DR, BCP), cost reduction etc., so their focus is naturally different. I have seen EA implementations with many different orientations, e.g.
  • Business and/or technology strategy (usually aiming at getting an explicit consensus as to what changes should be initiated and why).
  • Business process redesign (adjusting the way the business operates, as a precursor to looking at technology changes)
  • Application architecture (usually focusing on rationalisation, as a result of merger or unmanaged proliferation)
  • Service architecture (usually focusing on developing approaches for service governance)
  • Integration architecture (often now related to SOA initiatives, but in the past oriented around ESB, EAI initiatives).
  • Security architecture (unfortunately usually as an afterthought).
  • Meta data and data architecture (unfortunately usually disconnected from the processes the data exists to support)
  • Rules architecture (usually oriented at understanding how strategies and policies are implemented in processes, procedures and systems)
  • Technology platform architecture (usually focused on getting better utilisation from servers, and/or planning for building extra server capacity)
  • Storage architecture (usually focused on preparing for the addition of extra storage capacity).
  • Standards management (usually oriented at governance)
  • Programme management (usually oriented at understanding how a set of capital initiatives, or business change initiatives relate to how the business seeks to operate in future).
  • Service level management (usually oriented at SLA templates, contracts and delivery exception management)
  • New product or channel definition (usually products that involve systems at the heart of their delivery)
  • Disaster recovery planning (usually leveraging off existing information about processes, applications, platforms and teams)
  • Business continuity planning (as for DR planning)
  • Requirements management (in the context of how the business seeks to operate)
  • Package implementation management (usually focusing on understanding how the business operates, and how this relates to the package in question – so either the package can be changed or the business can change how it does things (or both)).
  • Compliance management (understanding how regulatory or legislative compliance is being achieved)
  • IT Skills management (understanding what skills are required for what systems, usually to allow rationalisation or as a risk management exercise).
If one looks around an IT organisation one will see information about all of the above being created, collected, used and disposed of (whether that is the intent or not). It is usually presented in a diverse set of office documents (business plans, business cases, project charters/statements of work, technology documents, software design documents and models, infrastructure design documents and models, contracts, requirements documents, business continuity planning documents). This suits each person producing the specific artefact admirably, and keeps lots of staff and consultants employed. To change this situation takes time, and needs to be done incrementally and with purpose.

Often when you ask an organisation what they want to focus on they will say: applications, infrastructure (servers, storage, integration) etc., but be unclear about how they will make intelligent decisions about these things without understanding the business/technology strategy, products/services, and the business processes, information, rules, organisations etc. that these solutions are designed to support. Or you will get organisations that say they want to focus on all of the above – but struggle to work out what the initial focus should be, so that the stakeholders with the greatest immediate need are provided answers to their burning questions, while knowledge is gathered and organised in such a way that it is reusable in downstream phases to answer questions that are currently of secondary or tertiary importance (or not yet thought of).

The art of a successful implementation is prioritising - based on a complete understanding of what the tooling can do, a good understanding of EA best practice and clear understanding of what the organisations current goals and challenges are.

What we sought in a tool for business/technology transformation and EA

(based on many people asking what the requirements are for a tool)

People often ask me what led us to work with the tool sets we use for enterprise architecture and business transformation (EA/BT). The following is a short background.

Obviously the overall objective is to improve an organisation's effectiveness (reduce costs, improve income/outputs, ensure regulatory compliance, build asset value etc.). This requires managed change. I had focused on how to improve the quality of transitions from RHE's inception (the best people, methods and tooling). We had tried using a wide range of OTS tools (e.g. CASE, ontology, document management and of course office suites) and building tools to allow us to manage this information. After many years of trying to determine how EA/BT work could be done (in a way that produces sustainable value for clients), and a long-term focus on methods (including frameworks), we determined the tooling was inadequate (and the tool limitations fundamentally affected the method).

In 2002 we reassessed what tools were available that could be used to support:
  • business strategies: drivers, plans, markets, products/services, locations, organisations, resources, change/transitions, projects/initiatives;
  • business operations: communications, services, processes, rules, information, organisation etc;
  • technology architecture: enterprise, solutions, components/systems, etc., supporting any styles, topology patterns, interaction models etc., integrating detailed design models (e.g. UML, ER, BPML/N, SOA) and integrating data from other sources (e.g. CMDBs);
  • business and technology alignment: relationships and dependencies between the business and technology;
  • requirements (business/systems): at various levels of detail supporting an understanding of acceptance/contracts/terms and design/delivery;
  • change over time: time based views of all of the above (as-is, to-be, transitional);
  • change initiatives: programmes, projects (e.g. costs, risks, resources, timeframes) and business cases (e.g. benefits, fit to current environment etc.);
I wanted to ensure that knowledge in all areas (usually oriented at different audiences, for different purposes, but ripe with opportunities for reuse) could be inter-related, i.e. so that information doesn't just reside in isolated silos (e.g. a plethora of office documents, or isolated models). I also wanted a focus on models that record the essential semantics explicitly (rather than just how pretty the pictures are, or how elegant the wording is, as often happens in free-form documents and diagrams). This knowledge hub, with various information interchange mechanisms, would:
  • manage the semantics: model any object, and relate it to any other object; for a predefined library of objects we would have a predefined set of properties, methods and relationships to other objects; it would be easy to change the modelling language (objects, relationships, properties, methods, appearance, nesting etc.); and we would be able to structure the knowledge to support any framework (taxonomy, e.g. Zachman, FEAF, TOGAF, eTOM, etc.) so these could be tailored to specific customer/project needs.
  • communicate: create visualisations based on any sets of objects, with the appearance, level of detail, layout, type of information displayed, etc. changed to suit the audience/purpose; allow all authorised people to see the subset of the information they are interested in and annotate/revision-mark that information; allow publishing in any format, e.g. office documents, images etc., and via customisable web interfaces (e.g. to support mashups etc.)
  • integrate: with other sources so we could exchange data with modelling tools; databases/applications; systems tools (e.g. CMDBs); portfolio management tools; programme management tools; dedicated modellers (e.g. project plan, UML, BPM, Data/ER, SW architecture analysis tools); etc.
  • easy to use and automatable: allow users to very quickly and efficiently create models with very little knowledge of the tool (and very little training), and support workflows associated with maintenance and use of the knowledge.
  • open and extensible: standards compliant, and implementation-technology and technical-architecture agnostic; extensible so it can be customised to exactly what we or a client believes is needed.
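The "manage the semantics" requirement above can be sketched in a few lines. This is a hypothetical illustration (all names invented, not any vendor's API): the modelling language itself (object types and their allowed properties) is just data, so it can be tailored to a framework or a client, while any object can be related to any other.

```python
# A minimal sketch (hypothetical names) of a flexible metamodel: the
# modelling language (types, allowed properties) is itself data, and any
# object can be related to any other via arbitrary relationship kinds.

class Metamodel:
    def __init__(self):
        self.object_types = {}       # type name -> allowed property names

    def define_type(self, name, properties):
        # Changing the modelling language is just editing this data.
        self.object_types[name] = set(properties)

class KnowledgeHub:
    def __init__(self, metamodel):
        self.metamodel = metamodel
        self.objects = {}            # id -> (type name, properties)
        self.relationships = []      # (source id, kind, target id)

    def add_object(self, obj_id, type_name, **props):
        allowed = self.metamodel.object_types[type_name]
        unknown = set(props) - allowed
        if unknown:
            raise ValueError(f"{type_name} has no properties {unknown}")
        self.objects[obj_id] = (type_name, props)

    def relate(self, source_id, kind, target_id):
        self.relationships.append((source_id, kind, target_id))

    def related(self, obj_id):
        return [(k, t) for s, k, t in self.relationships if s == obj_id]

mm = Metamodel()
mm.define_type("BusinessProcess", {"name", "owner"})
mm.define_type("Application", {"name", "vendor"})

hub = KnowledgeHub(mm)
hub.add_object("bp-1", "BusinessProcess", name="Claims handling")
hub.add_object("app-1", "Application", name="ClaimsSys")
hub.relate("bp-1", "supported-by", "app-1")
print(hub.related("bp-1"))  # [('supported-by', 'app-1')]
```

A real tool would add properties on relationships, methods, appearance and nesting rules, but the core idea is the same: the taxonomy is configuration, not code.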
Eventually (in 2003) we found tooling that provided a client-side visual modelling capability and a simple server-side repository. During the initial implementations it became apparent that, to be successful, a combination of visual modelling and forms (web forms/dashboards) would be required to make implementations effective for large organisations (and to overcome organisational impedance). It was also recognised that what the solutions represented was essentially a data warehouse/BI solution for the CIO (and IT) – and that typically this was the one area of the enterprise that was not served well by IT. So in addition to the above what was needed was:
  • powerful analysis and reporting: to be able to produce reports (using standard reporting tools) in whatever form (visual, textual, dashboards) and type (Word, PDF, web forms etc.) is required, and to access these reports from portals and client tools, so the knowledge is visible to all stakeholders in a way that suits their specific needs and roles at each point of interaction.
  • tools to ensure the quality and currency of information: to automate the collection of data (where there is another source of record, e.g. ERP, CMDB, etc.), to maintain policies on data (that allow data quality to be monitored/assessed), to easily allow ad-hoc updates by many users (who can do so as part of their day-to-day job), and to proactively survey/poll users (when it is believed the data is out of date).
  • secure, scalable and accessible: to support a large number of ad-hoc users (many of whom will only interact with the system for a small percentage of the time, but all of whom have an interest in, or are guardians of, bits of the jigsaw puzzle).
  • tailored to specific purposes: solutions (based on tailored metamodels, user interfaces, system interfaces) to support specific jobs e.g. enterprise strategy or architecture; business cases and investment plans; technical architecture e.g. solution/integration/service/data/storage; or managing requirements, metadata, design, acceptance, programmes of work, security, business continuity
  • sophisticated management of multiple states: in reality large complex enterprises have many initiatives operating, each of which is expected to change the current state (at different points in time), and the tooling must automate the way changes ripple through those states.
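The multiple-states requirement above can be sketched as a baseline state plus dated change-sets. This is a hypothetical illustration (names invented): any as-is, to-be or transitional state is derived by applying, in order, the changes effective up to a given date, so a planned change automatically "ripples" through every later state.

```python
# A minimal sketch (hypothetical names) of multiple states: a baseline
# current state plus a sequence of dated change-sets; any future state is
# derived by applying, in order, the changes effective up to that date -
# so a change ripples through every later state automatically.

from datetime import date

class StateManager:
    def __init__(self, baseline):
        self.baseline = dict(baseline)           # as-is state
        self.change_sets = []                    # (effective date, changes)

    def plan(self, effective_date, changes):
        self.change_sets.append((effective_date, dict(changes)))
        self.change_sets.sort(key=lambda c: c[0])

    def state_at(self, as_of):
        # Derive the state at any date by replaying effective change-sets.
        state = dict(self.baseline)
        for effective_date, changes in self.change_sets:
            if effective_date <= as_of:
                state.update(changes)
        return state

sm = StateManager({"crm": "legacy", "billing": "v1"})
sm.plan(date(2009, 1, 1), {"crm": "new-crm"})        # initiative A
sm.plan(date(2009, 6, 1), {"billing": "v2"})         # initiative B
print(sm.state_at(date(2009, 3, 1)))  # {'crm': 'new-crm', 'billing': 'v1'}
print(sm.state_at(date(2010, 1, 1)))  # {'crm': 'new-crm', 'billing': 'v2'}
```

Deriving states rather than storing them means re-dating or cancelling one initiative updates every downstream transitional and to-be view for free.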
So in summary our current requirements could be summarised as:
  • suited to business (and technology) orientation - so ideally one could start with the business side e.g. able to manage business drivers and requirements and solutions.
  • flexible, extensible and customisable (language of modelling, user interfaces, data interfaces, datamarts, analysis and reporting) - so modelling can be done in terms/language suited to the business
  • powerful visual analysis and representations (for large amounts of data, power users, and design)
  • supporting a broad and diverse community of users (with suitable roles and security controls for those different groups of users)
  • automated data collection
  • communicating in various ways: diagrams, charting & analytics, reporting etc.
  • multiple states and transitions between states
Having resolved the tooling issues – we then focused back on methods – because it was clear that many initiatives in this area failed because people did not have a well-developed implementation methodology. See: EA implementation.