When taking on a Business Process Modeling exercise, it's good to outline the purpose up-front. Some of the key perspectives that you want to consider are:
- Business Process Re-Engineering (BPR): You want to examine the Business Process to make improvements - for example, increase automation, reduce duplication, streamline and parallelize.
- Execution: The intent is to take the Business Process and execute it on a technology platform, such as IBM WebSphere Process Server. Often this is part and parcel of increasing automation, or streamlining the technology.
- Instrumentation: The intent in this case is to design around measuring and monitoring a process. You might want certain customer orders or interactions resolved within a timeframe, you may want certain items escalated after a critical time period has elapsed, or you might want to record data points that can be accumulated and mined for patterns after the fact (a small sketch of this kind of check follows the list).
- User Interaction: The aim here is to model the user interactions with a process. The central focus in these models is naturally the users, their team structures, skills and locations. There are a number of reasons for doing this: skills and role realignment, building and refining escalation structures, ensuring privacy and clearance compliance, optimising team structures, optimising (and perhaps consolidating) locations, and undertaking outsourcing or offshoring.
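To make the instrumentation perspective a little more concrete, here is a minimal Python sketch of the kind of elapsed-time check it implies - flagging items that have been open longer than a threshold so they can be escalated. The field names and the four-hour SLA are purely illustrative assumptions, not anything from a particular platform.

```python
# Minimal sketch: escalate items that have been open past an assumed SLA.
# The data shape and the four-hour threshold are illustrative only.
from datetime import datetime, timedelta

ESCALATION_THRESHOLD = timedelta(hours=4)  # assumed SLA for illustration

def items_to_escalate(open_items, now=None):
    """Return the items that have been open longer than the threshold."""
    now = now or datetime.now()
    return [item for item in open_items
            if now - item["opened_at"] > ESCALATION_THRESHOLD]

# Example: an order opened five hours ago gets flagged; one opened an hour ago does not.
orders = [{"id": "ORD-1", "opened_at": datetime.now() - timedelta(hours=5)},
          {"id": "ORD-2", "opened_at": datetime.now() - timedelta(hours=1)}]
print([o["id"] for o in items_to_escalate(orders)])  # ['ORD-1']
```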
If you draw a business process from each of these perspectives, most of the time the resulting models will look completely different from one another. This is a potential pitfall for process models. For anything of any complexity it's nigh on impossible to get a model that incorporates all of these concerns adequately. Conversely, if your problem is simple, then these approaches are probably overkill.
An example might help position this better. If your aim is BPR, you might model each human task as a separate step in sequence, even though these are effectively done by a single person all at once. The reason you put them in sequence is that you have data that tells you how long each individual piece takes, and you also know that they are usually done together. When you simulate, you get an accurate picture of how long the macro pieces take, and where the critical paths exist in the process. You also get some measures of complexity - a major one being the number of unique paths through the process.
(As an anecdotal aside, I heard of a major utility that mapped their provisioning process, and the number of unique paths exceeded the total number of customers.)
So, someone processing an order might check the customer's credit, validate their address, check the shipping costs, check stock levels, enter the order and then submit it to be fulfilled by the warehouse. However, it's unlikely that anyone will do those tasks in that exact order. There might be dozens of reasons for this, a common one simply being the order of papers in a pile.
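To make the simulation idea concrete, here is a minimal Python sketch of a BPR-style model of those tasks. The durations and the single approve/reject branch on the credit check are assumptions made purely for illustration; the point is that once the tasks are laid out as discrete steps, a duration estimate and the number of unique paths fall out mechanically.

```python
# A minimal sketch (durations and branching are assumed for illustration) of
# what a BPR-style model gives you once each human task becomes a discrete
# step: a duration estimate for the longest path, and a count of unique paths.
from functools import lru_cache

# Assumed task durations in minutes.
DURATION = {
    "check_credit": 5, "validate_address": 3, "check_shipping": 4,
    "check_stock": 2, "enter_order": 6, "submit_to_warehouse": 1, "reject": 1,
}

# Successors of each task; the credit check branches (approve / reject) purely
# to show how decision points multiply the number of unique paths.
NEXT = {
    "check_credit": ["validate_address", "reject"],
    "validate_address": ["check_shipping"],
    "check_shipping": ["check_stock"],
    "check_stock": ["enter_order"],
    "enter_order": ["submit_to_warehouse"],
    "submit_to_warehouse": [],
    "reject": [],
}

@lru_cache(maxsize=None)
def longest_path_minutes(task):
    """Duration of the longest path from this task to an end point."""
    tails = [longest_path_minutes(n) for n in NEXT[task]]
    return DURATION[task] + (max(tails) if tails else 0)

@lru_cache(maxsize=None)
def unique_paths(task):
    """Number of unique paths from this task to an end point."""
    return sum(unique_paths(n) for n in NEXT[task]) or 1

print(longest_path_minutes("check_credit"))  # 21 minutes end to end
print(unique_paths("check_credit"))          # 2 unique paths in this toy model
```

Even in this toy model, each additional decision point multiplies the number of unique paths, which is how a provisioning process can end up with more paths than it has customers.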
This is a trivial case, but it could be significantly more complex with something like processing a mortgage application, or a business loan, where there can be dozens (or indeed hundreds) of fragments of information.
The intent in this kind of process modelling is usually to uncover overlaps and efficiency opportunities -- in processing a new customer order, you might be validating a customer's address numerous times. This could ideally be reduced to once; or twice if you have a Quality Assurance stage.
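As a rough illustration of how that kind of overlap gets spotted, the sketch below counts how often each activity appears across the stages of a hypothetical order process; anything with a count above one is a candidate for consolidation. The stage and activity names are invented for the example.

```python
# Minimal sketch (illustrative activity names, not real process data): flag
# activities that recur across an end-to-end process, which is where the
# overlap and efficiency opportunities tend to hide.
from collections import Counter

# Activities recorded against each stage of a hypothetical new-order process.
stages = {
    "intake":     ["validate_address", "check_credit"],
    "fulfilment": ["check_stock", "validate_address", "enter_order"],
    "dispatch":   ["validate_address", "submit_to_warehouse"],
}

counts = Counter(activity for acts in stages.values() for activity in acts)
duplicates = {a: n for a, n in counts.items() if n > 1}
print(duplicates)  # {'validate_address': 3} -> candidate to do once (or twice with QA)
```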
The problem is this doesn't necessarily represent the process in a way that is suitable for other objectives - such as execution, user experience or instrumentation. I've seen this happen before: the process gets defined and then forces a user to do a sequence of tasks in a strict order, when the reality is the user is sitting with a pile of paper in front of them and probably wants to work through it in whatever order is convenient. The worst case scenario is that this macro task gets formalised as numerous minor tasks that must be checked in and out of work queues, or ends up as a horrendous sequential "wizard style" User Interface.
Since the original intent of the exercise was to re-engineer the process to make it better, this is a counter-intuitive outcome. However, without going into that detail and making those assumptions, you couldn't have assembled and simulated the process.
In a similar vein, this process implies that you can instrument the "validate address" step, whereas the reality is that this step may well be embedded in a person shuffling through some paperwork. It's not possible to get data around this individual step; not in any practical terms anyway. Going even further, all of this might be completely irrelevant from an instrumentation perspective -- the key KPI might be customer satisfaction, which is likely measured in a completely different way.
This is not to say that Business Process Modeling doesn't have significant value. Part of the issue is the hubris that surrounds Business Process Management (BPM) software, which really pushes this as a "new paradigm". The idea is that you sketch out a Business Process and then the software is capable of (magically) executing the process. However, this is really impractical. Tooling can help significantly, but it's a means to an end. Mapping and understanding your process is the intrinsic value; software enhances or amplifies that.
An approach led by Business Process Modeling can be a significant advantage in the delivery of software projects. It's an excellent means of driving out requirements and outcomes. Just be clear about the purpose up-front, and don't get fixated on auto-magic tooling. Even if you sketch your process on paper, and then code it from scratch, you'll be getting many of the core advantages. Add tools and technology on top to maximize the advantage, not define it.
Jon