Friday, August 27, 2010
The Global Justice XML Data Model
THE PROBLEM
The Global Justice XML Data Model (GJXDM) is an XML interchange format used by law enforcement and other justice agencies in the United States.
It's a comprehensive standard - it contains over 400 complex types and around 150 simple types, with a total of around 2,000 associations (properties). Almost half of these focus on the Activity area (such as an Arrest) and on Personal Details.
Actual adoption of the model varies with context. Some of this is the expected variance between implementations, but much of it is the context of the application: GJXDM is implemented across a diverse range of institutions at different levels of government, each with different concerns and underlying objectives. GJXDM has also been adopted in geographies outside the United States. Naturally these implementations have requirements that go beyond the available concepts, and they also find large sections of the model inapplicable to their region.
The challenge is how to adhere to the entrenched standard whilst accommodating the variations each implementation needs, and keeping in sync with updates to the standard over time.
HOW GRAFT HELPS
We believe Graft offers a lot to users of GJXDM. The first benefit is that it allows users to visually navigate GJXDM as a domain model.
A key feature of Graft is the ability to extend other data models - we allow this extension in two modes: Active or Passive.
All of the elements of a passively extended model appear "greyed out" or "ghosted" in the modeling tool. You can then opt to selectively bring each of these elements into your model, even potentially over multiple releases of your implementation. To get an idea, you can extend the GJXDM Model yourself and experiment with drawing forward the parts of the model that are of relevance to you. Your modifications remain private until you explicitly choose to make your extended model public.
This allows you to cherry-pick the elements you need as they are implemented. Instead of handing out a schema with thousands of elements, you can produce a schema (by visiting the export tab of your model) that shows the elements and relationships actually being used, whilst still remaining consistent with the source schema.
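To make this concrete, here's a minimal sketch of the idea in Python - not Graft's actual export logic, and with invented element names and a toy model representation - showing how a trimmed schema only includes the elements that have been drawn forward:

# A minimal sketch of "export only what's in use". The model representation
# and the element flags are invented for illustration; this is not Graft's
# actual export mechanism.

# Each entry: element name -> (XSD type, whether our implementation uses it)
model = {
    "PersonGivenName":      ("xs:string", True),
    "PersonSurName":        ("xs:string", True),
    "ArrestDate":           ("xs:date",   True),
    "LocationPostalCodeID": ("xs:string", False),  # still "ghosted", not drawn forward
}

def export_schema(model):
    """Emit an XSD fragment containing only the elements actually in use."""
    lines = ['<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">']
    for name, (xsd_type, in_use) in model.items():
        if in_use:
            lines.append(f'  <xs:element name="{name}" type="{xsd_type}"/>')
    lines.append('</xs:schema>')
    return "\n".join(lines)

print(export_schema(model))  # three elements instead of thousands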
A useful side-effect of this is performance. Large schemas can introduce a parsing bottleneck; reducing a schema to only the elements actually in use can make a big difference.
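To illustrate the point (with synthetic schemas, not GJXDM itself, and assuming lxml is installed), schema compile time grows noticeably with schema size:

# A small synthetic experiment: compile a large flat schema and a trimmed
# one, and compare timings. The schemas are artificial stand-ins for GJXDM.
import time
from lxml import etree

def make_xsd(n_elements):
    """Build a flat XSD with n_elements local string elements."""
    fields = "\n".join(
        f'<xs:element name="Field{i}" type="xs:string"/>'
        for i in range(n_elements)
    )
    return (
        '<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">'
        '<xs:element name="Record"><xs:complexType><xs:sequence>'
        f'{fields}'
        '</xs:sequence></xs:complexType></xs:element>'
        '</xs:schema>'
    )

for size in (50, 5000):
    start = time.perf_counter()
    etree.XMLSchema(etree.XML(make_xsd(size).encode()))
    print(f"{size:5d} elements: {time.perf_counter() - start:.4f}s to compile")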
GRAFT KEEPS CUSTOMIZATIONS IN SYNC
Another key benefit of Graft is the control over extensions and modifications to the model. Whether you actively or passively extend the GJXDM source, you are not left stranded on a standalone branch of the model. Your model is kept as a delta from the source model, allowing you to easily take up future changes to the standard.
As an example of extensions and modifications being kept in sync with changes to the source GJXDM model, you might rename an element like LocationPostalCodeID - particularly if you're implementing the model in a very specific geography. This name change is stored as a delta: you can then update the underlying source GJXDM model and the rename will still apply, whilst you still receive all of the updates to the underlying model.
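As an illustrative sketch of how a rename-as-delta can survive an upgrade of the source model (this is a toy data structure, not Graft's actual storage format):

# Illustrative only: a rename stored as a delta is re-applied on top of
# whatever version of the source model you upgrade to.

def apply_deltas(source_model, deltas):
    model = dict(source_model)  # start from the (possibly upgraded) source
    for op, old, new in deltas:
        if op == "rename" and old in model:
            model[new] = model.pop(old)
    return model

deltas = [("rename", "LocationPostalCodeID", "LocationPostcodeID")]

v1 = {"LocationPostalCodeID": "xs:string"}
v2 = {"LocationPostalCodeID": "xs:string", "LocationCountryCode": "xs:string"}

print(apply_deltas(v1, deltas))  # {'LocationPostcodeID': 'xs:string'}
print(apply_deltas(v2, deltas))  # rename still applies, new element arrives too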
Changes that consumers of GJXDM make can even be re-incorporated into the original model by its administrators. A future update may transparently include the updates from derived models.
This is a very powerful outcome. Many modeling tools encourage you to take a model, customize it, and effectively create an island. Graft encourages a different approach - instead of grabbing a model and morphing it, take an existing model, use only the pieces you need, focus on the changes you need for this release alone, and keep taking advantage of the source model over time.
YOU CAN LEVERAGE MULTIPLE MODELS
Graft doesn't limit you to extending just one model. You can extend and integrate any number of models. It's unlikely that GJXDM is the only model you need to use. There may be other local and international standards, as well as in-house and bespoke representations for areas that GJXDM doesn't cover, or doesn't cover appropriately for your needs. Graft lets you bring all of these models together, integrating and leveraging elements wherever required.
The Specify Tool in Graft lets you specify any number of Active and Passive Extensions; you can even modify these "on-the-fly" - for example, by replacing an existing Active Extension with a later release, or even with a different implementation.
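As a sketch of the shape of this (a hypothetical configuration - the names, versions and structure are invented, not the Specify Tool's real syntax):

# Hypothetical sketch of juggling multiple extensions; everything here is
# invented for illustration, including the version numbers.
extensions = [
    {"model": "GJXDM",            "version": "3.0.3", "mode": "passive"},
    {"model": "LocalCourtsModel", "version": "1.2",   "mode": "active"},
]

def replace_extension(extensions, model, **updates):
    """Swap an extension's attributes, e.g. to move to a later release."""
    return [
        {**ext, **updates} if ext["model"] == model else ext
        for ext in extensions
    ]

# Replace the GJXDM extension with a later release "on-the-fly"
extensions = replace_extension(extensions, "GJXDM", version="3.0.4")
print(extensions)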
We'll be working a lot more in and around Industry Standards such as the Global Justice XML Data Model. If you're working in this space, we'd love to hear your thoughts, comments and feedback.
Thanks,
jon@jodoro.com
Sunday, August 22, 2010
Business Process Modeling - What's Your Purpose?
When taking on a Business Process Modeling exercise, it's good to outline the purpose up-front. Some of the key perspectives that you want to consider are:
- Business Process Re-Engineering (BPR)
You want to examine the Business Process to make improvements - for example, increase automation, reduce duplication, streamline and parallelize.
- Execution
The intent is to take the Business Process and execute it on a technology platform, such as IBM WebSphere Process Server. Often this is part and parcel of increasing automation, or streamlining the technology.
- Instrumentation
The intent in this case is to design around measuring and monitoring a process. You might want certain customer orders or interactions resolved within a timeframe, certain items escalated after a critical time period has elapsed, or data points recorded that can be accumulated and mined for patterns after the fact. (A minimal sketch follows this list.)
- User Interaction
The aim here is to model the user interactions with a process. The central focus in these models is naturally the users: their team structures, skills and locations. There are a number of reasons for doing this: skills and role realignment, building and refining escalation structures, ensuring privacy and clearance compliance, optimising team structures, optimising and perhaps consolidating locations, or undertaking outsourcing or offshoring.
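As a minimal sketch of the instrumentation perspective mentioned above (the field names and the 48-hour threshold are invented; a real system would hang this off a BPM engine's event stream):

# Minimal sketch: timestamp each work item's current step, then flag items
# that have breached an escalation threshold. All values are illustrative.
from datetime import datetime, timedelta

ESCALATION_THRESHOLD = timedelta(hours=48)

orders = [
    {"id": "A-100", "step": "validate-address", "entered": datetime(2010, 8, 20, 9, 0)},
    {"id": "A-101", "step": "credit-check",     "entered": datetime(2010, 8, 22, 14, 30)},
]

def needs_escalation(item, now):
    """True if the item has sat in its current step past the threshold."""
    return now - item["entered"] > ESCALATION_THRESHOLD

now = datetime(2010, 8, 23, 9, 0)
for order in orders:
    if needs_escalation(order, now):
        print(f"escalate {order['id']} (stuck in {order['step']})")  # A-100 only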
If you draw a business process from one of these perspectives, most of the time it will look completely different from the others. This presents a potential pitfall for process models: for anything of any complexity it's nigh on impossible to get a model that incorporates all of these concerns adequately. Conversely, if your problem is simple, then these approaches are probably overkill.
An example might help position this better -- If your aim is BPR, you might put each human task in sequence, even though these are effectively done by a single person all at once. The reason you put them in sequence is that you have data that tells you how long each individual piece takes, and you also know that they are usually done together. When you simulate, you get an accurate picture of how long the macro pieces take, and where critical paths exist in the process. You also get some measures of complexity - a major one being the number of unique paths through the process.
(As an anecdotal aside - I heard of a major utility that mapped their provisioning process - and the number of unique paths exceeded the total number of customers)
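As a toy version of the kind of analysis described above (the process graph and the durations are made up for illustration), you can both count unique paths and find the slowest path through a branching process:

# Toy analysis: count unique paths through a branching process, and find
# the duration of the slowest path. Graph and durations are invented.
from functools import lru_cache

# Directed acyclic process graph: step -> possible next steps
process = {
    "check-credit":     ["validate-address"],
    "validate-address": ["check-shipping", "check-stock"],
    "check-shipping":   ["enter-order"],
    "check-stock":      ["enter-order"],
    "enter-order":      [],  # end of process
}
minutes = {"check-credit": 5, "validate-address": 3,
           "check-shipping": 2, "check-stock": 4, "enter-order": 6}

@lru_cache(maxsize=None)
def unique_paths(step):
    """Number of distinct paths from this step to the end of the process."""
    nxt = process[step]
    return 1 if not nxt else sum(unique_paths(n) for n in nxt)

@lru_cache(maxsize=None)
def slowest_minutes(step):
    """Duration of the slowest path from this step to the end."""
    rest = max((slowest_minutes(n) for n in process[step]), default=0)
    return minutes[step] + rest

print("unique paths:", unique_paths("check-credit"))                # 2
print("slowest path:", slowest_minutes("check-credit"), "minutes")  # 18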
So, someone processing an order might check the customer's credit, validate their address, check the shipping costs, check stock levels, enter the order and then submit it to be fulfilled by the warehouse. However, it's unlikely that anyone will do those tasks in that exact order. There might be dozens of reasons for this, a common one simply being the order of papers in a pile.
This is a trivial case, but it could be significantly more complex with something like processing a mortgage application, or a business loan, where there can be dozens (or indeed hundreds) of fragments of information.
The intent in this kind of process modelling is usually to uncover overlaps and efficiency opportunities -- in processing a new customer order, you might be validating a customer's address numerous times. This could ideally be reduced to once; or twice if you have a Quality Assurance stage.
The problem is this doesn't necessarily represent the process in a way suitable for other objectives - such as execution, user experience or instrumentation. I've seen this happen before: the process gets defined and then forces a user to do a sequence of tasks in a strict order - when the reality is the user is sitting with a pile of paper in front of them and probably wants to work in whatever order is convenient. The worst case is that this macro task gets formalised as numerous minor tasks that must be checked in and out of work queues, or ends up as a horrendous sequential "wizard style" User Interface.
Since the original intent of the exercise was to re-engineer the process to make it better, this is a counterproductive outcome. However, without going into that detail and making those assumptions, you couldn't have assembled and simulated the process.
In a similar vein, this process implies that you can instrument the "validate address" step, whereas the reality is that this step may well be embedded in a person shuffling through some paperwork. It's not possible to get data around this individual step; not in any practical terms anyway. Going even further, all of this might be completely irrelevant from an instrumentation perspective - the key KPI might be customer satisfaction, which is likely measured in a completely different way.
This is not to say that Business Process Modeling doesn't have significant value. Part of the issue is the hubris that surrounds Business Process Management (BPM) software, which really pushes this as a "new paradigm". The idea is that you sketch out a Business Process and then the software is capable of (magically) executing the process. However, this is really impractical. Tooling can help significantly, but it's a means to an end. Mapping and understanding your process is the intrinsic value; software enhances or amplifies that.
An approach led by Business Process Modeling can be a significant advantage to the delivery of software projects. It's an excellent means of driving out requirements and outcomes. Just be clear about the purpose up-front, and don't get fixated on auto-magic tooling. Even if you sketch your process on paper, and then code it from scratch, you'll be getting many of the core advantages. Add tools and technology on top to maximize the advantage, not define it.
Jon