This blog is the second in the series “Top 10 Steps to Ensure a Successful PLM Implementation.” In last week’s blog we identified that first you must:
1. Recognize that you already have PLM
2. Define your Vision for PLM
3. Drive Intentional Continuity and Alignment
4. Gather your Core Team
5. Be Outcomes Focused
With those steps in mind, let’s continue where we left off.
6. Create a Strategic Roadmap for PLM
Improvements take time, effort, and resources. It is rarely practical to pursue even a few of them at the same time. Do not let ongoing improvement in your company be throttled by an ambition for massive efforts or all-or-nothing approaches. Look at the key outcomes that have been identified and start laying them out on a roadmap, taking into account priority (which, by the way, should be influenced by #2 and #3), benefits, risks, practicalities, and dependencies. This should let you choose certain outcomes (or even elements of outcomes) to place earlier on the roadmap for a shorter time to value and start generating momentum.
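To make that prioritization exercise concrete, here is a minimal sketch in Python. The outcomes, weights, and scores are entirely hypothetical and invented for illustration; the point is the shape of the exercise, not the numbers.

```python
# Hypothetical sketch: scoring candidate outcomes to rough out a roadmap.
from dataclasses import dataclass, field

@dataclass
class Outcome:
    name: str
    priority: int   # 1-5, driven by the vision and alignment (#2 and #3)
    benefit: int    # 1-5, expected business value
    risk: int       # 1-5, higher means riskier
    effort: int     # 1-5, higher means more work to deliver
    depends_on: list = field(default_factory=list)

def roadmap_score(o: Outcome) -> int:
    """Favor high-priority, high-benefit outcomes with low risk and effort."""
    return (2 * o.priority + o.benefit) - (o.risk + o.effort)

candidates = [
    Outcome("Single source of released drawings", 5, 4, 2, 2),
    Outcome("Automated change order routing", 3, 4, 4, 4,
            depends_on=["Single source of released drawings"]),
    Outcome("Supplier portal integration", 2, 3, 3, 5),
]

# Rank by score; dependencies are listed alongside each item so they can be
# respected when items are actually phased onto the roadmap.
for o in sorted(candidates, key=roadmap_score, reverse=True):
    print(f"{o.name}: score {roadmap_score(o)}, depends on {o.depends_on or 'nothing'}")
```

The scoring itself matters far less than the conversation it forces about why each item sits where it does on the roadmap.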
A side note on this… I like to go big or go home. I am a believer in aiming high and persevering to hit a big, long-term goal, but I have seen the sad reality of support for big goals shifting or disappearing and completely undermining any progress at all. Depending on your corporate inertia, you may have to start very small to build any momentum at all, or risk some other mass knocking everything over. In the end, that risk may always be there, but do what you can to enable progress and build momentum while keeping the big vision intact.
7. Don’t let Quick Wins undermine Real Improvements
This is the counter-balance, or reality check, for #6. In addition to seeing "too big" result in nothing at all, I have seen the quick-win mentality, coupled with short and limited budget cycles, result in many "successful" projects; however, all of those projects add up to a lot of expense with little or no meaningful improvement over time. This can even result in negative progress through the proliferation of point solutions that not only may never work together to solve bigger problems, but also add infrastructural inertia and a tangle of system complexities that is hard to unravel. To top it all off, sometimes these point solutions go as far as duplicating, in whole or in part, other solutions for the same or very similar problems. Depending on the way things are prioritized and funded, this can be a real challenge.
So back to #5 and #6… Real outcomes on the roadmap should help with achieving meaningful improvements. Connected outcomes as part of a good business architecture blueprint should go even further: the total impact of achieving a collection of outcomes (even over time) is greater than the sum of its parts.
8. Be Disciplined and Intentional with the Solution Design and Implementation - Remember Dogs and Tails, Horses and Carts, Forests and Trees
When it comes to the actual solution design and implementation, discipline with respect to #3, #4, and #5 is critically important; however, it is not easy. The first place the breakdown starts is in shortcutting the analysis of needs - the outcomes. If these are not identified, or if they are ignored, then the design tends to fall back to mimicking old processes and viewing solution elements from within the trees. Tremendous time and energy are spent solving minor or unimportant things, while the big picture is neglected.
It is also possible to spend a tremendous amount of time in analysis yet completely miss the important outcomes, which reinforces why it is important to be outcomes focused (#5): the as-is/to-be approach is laden with tails, carts, and trees. Following #6 will help identify the important outcomes, but discipline is required to ensure outcomes are neither forgotten nor ignored.
The next breakdown is the lack of a design guide that captures the outcomes-based intent (remember the need for continuity). A configuration spec serves little purpose when there is no traceability as to why certain configuration settings were chosen. For example, without a clear design guide that captures outcomes-based intent, users may view anything that differs from the way they do things today as a "gap" when it comes to testing and verification. This in turn results in a reactionary push for little customizations, prolonged discussions about late-found showstoppers, rushed decision making, and terrible scope management.
It is far better to spend the time to capture design intent referencing important outcomes (also aligned to the roadmap) than to bypass this and implement a reactionary, ill-defined solution. This all sounds like common sense; however, if the discipline is not applied from the beginning, the combination of old habits and inexperience tends to enable bad practices justified by the illusory path of least resistance.
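As a rough illustration of what traceability back to intent can look like, here is a minimal sketch. The setting names, values, and outcomes are invented for this example and are not drawn from any particular PLM system.

```python
# Hypothetical sketch of a design guide entry that ties each configuration
# choice back to the outcome it serves.
design_guide = [
    {
        "setting": "change_order.require_impact_analysis",
        "value": True,
        "outcome": "Reduce late engineering changes reaching production",
        "rationale": "Impact analysis up front avoids downstream rework",
    },
    {
        "setting": "part.lifecycle_states",
        "value": ["Concept", "Design", "Released", "Obsolete"],
        "outcome": None,   # no traceability; a candidate for a "why?" conversation
        "rationale": None,
    },
]

# Flag configuration choices that cannot be traced back to an outcome.
untraced = [entry["setting"] for entry in design_guide if not entry["outcome"]]
if untraced:
    print("Settings with no stated outcome:", ", ".join(untraced))
```

Whether the design guide lives in a document, a spreadsheet, or a tool matters far less than the fact that every configuration choice can answer the question "which outcome does this serve?"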
9. Carefully Evaluate Automations/Customizations
The initial deployment of a new PLM system can be greatly hindered by over-automation. Each automation must go through verification cycles. The break/fix activities can add up quickly, put schedules at risk, and consume resources; further, the potential is always there for corner cases that will not be found before go-live.
Well-intentioned little automations may collectively result in much greater system complexity.
If automations and customizations are limited or avoided in initial deployments, then there are fewer risks and a shorter time to value (and/or lower cost) in deploying your solution.
These are largely technical reasons to avoid automations or customizations, with very little attention paid to identifying truly needed automations or customizations. This topic alone could be its own series of blogs or a book. In fact, another recent blog, "The PLM State: Beware of Humpty Dumpty Syndrome," discusses types of customizations, why they are used, and some approaches to creating them. So I'll try to be brief with just a few more statements and challenges about the methodology behind defining and establishing automations, articulated as some specific do's and don'ts.
Do Not automate what cannot be done manually first. If someone cannot walk through the steps of what is to be automated, including identifying and following the related decision-making logic, then the likelihood of automating it correctly is not very high.
Do challenge each request for automation and set the bar high for inclusion in an initial release, but do not dismiss the legitimate need or utility of some automations.
Do make the outcomes, not the user interface, the focus.
Do consider delaying automations and implementing them in a post-go-live phase. The additional advantage of this strategy is that your business users, having had experience with the system, will be in a much better position to identify the potential areas for automation with the greatest impact. They will also be better able to satisfy the first rule above.
Further, analysis of the records in the system can provide valuable input about where there may be data issues, along with statistics on the frequency of such issues and the activities where they arise. If an activity in the system is tedious but infrequent, it may not be worth the investment to automate - the need may even be reconsidered altogether.
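As a sketch of that kind of analysis, here is one possible shape for it. The record layout and issue checks are invented for illustration; a real version would work from your own PLM data extracts and your own definition of a data issue.

```python
# Hypothetical sketch: mining existing records for data issues to decide where
# automation would actually pay off.
from collections import Counter

records = [
    {"activity": "create_part", "unit_of_measure": "", "description": "Bracket"},
    {"activity": "create_part", "unit_of_measure": "EA", "description": ""},
    {"activity": "release_change", "unit_of_measure": "EA", "description": "Rev B"},
]

issues = Counter()
totals = Counter()
for rec in records:
    totals[rec["activity"]] += 1
    if not rec["unit_of_measure"] or not rec["description"]:
        issues[rec["activity"]] += 1

for activity, total in totals.items():
    rate = issues[activity] / total
    print(f"{activity}: {issues[activity]}/{total} records with issues ({rate:.0%})")

# A tedious activity with a low issue rate and low frequency may not justify
# the cost of automating it.
```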
Do Not think that all business rules need system-driven enforcement. Sometimes it is far better to have a written policy and rely on people to implement it effectively than to try to have the system enforce all of the rules, especially in an early implementation.
There is common evidence of system control and policing that has been attempted and has failed: the many required fields that have "NA" or "other" as an acceptable value, or a field that enables a complete override of the system rules.
Do Not let "It's too hard for users to make the right selections" dictate the need for automation. If it is so hard for the users who own that aspect of the information to make a selection, how is an IT analyst going to define system rules that account for all of the business nuances that might occur - and how long will those rules last? The reason people are involved in these activities is the value they add through their skills, experience, and ability to think and make decisions. An objective should be to allow users to add their value as efficiently and effectively as practical - to let them focus on their job. Sometimes their job is to know how to make the right selections, and sometimes it is not their job at all.

If it is hard for a user to make the right selection, ask why… ask what outcomes are enabled by the selection at hand. It might be hard because the system is requiring the selection of the wrong person or at the wrong time. It is not uncommon for downstream, system-driven field requirements to be pushed upstream onto users who have little or no knowledge of, or need for, the information, so that they, and the outcomes needed, are encumbered with irrelevant requirements.
Instead of focusing on system-enforced rules that often ignore the timing aspects of information relevance, the focus should be on getting the right information at the right time. The system should enable the outcomes that are needed, when they are needed.
Do challenge complexity. If something is so complex that it must be automated to get it right, maybe it should not be so complex. Perhaps the business needs have been wrapped around the axle of an old process. Take a step back and, again, look at the outcomes that are needed. Ask tough questions.
Do use automation to help with data integrity and data consistency where appropriate and effective. Automations can help drive productivity and consistency into tedious, mundane, and especially repetitive tasks. They can allow resources to focus their efforts in the system on the areas where they provide the most value.
To this last point, we have even productized some key automations where the outcomes needed are consistent and the business rules that govern the outcomes have a well-defined construct that is easily configured.
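As a generic illustration of the kind of narrow, well-bounded automation described above - not a depiction of any product - here is a minimal sketch that normalizes a single unit-of-measure field so the repetitive cleanup does not fall on users. The field name and alias mapping are assumptions made purely for the example.

```python
# Hypothetical sketch of a small data-consistency automation: map common
# unit-of-measure aliases to canonical codes and flag anything unrecognized
# for human review rather than guessing.
UOM_ALIASES = {"ea": "EA", "each": "EA", "pc": "EA", "kg": "KG", "kilogram": "KG"}

def normalize_uom(raw: str) -> str:
    """Return a canonical unit-of-measure code, or the original value
    (left for human review) when no rule applies."""
    key = raw.strip().lower()
    return UOM_ALIASES.get(key, raw)

dirty_values = ["ea", "Each", " KG ", "dozen"]
for value in dirty_values:
    canonical = normalize_uom(value)
    marker = "" if canonical in UOM_ALIASES.values() else "  <- needs review"
    print(f"{value!r:10} -> {canonical!r}{marker}")
```

Note that the rule set is deliberately small and explicit: when the rules do not clearly apply, the automation steps aside and lets a person decide.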
10. Build a Solid, Relevant, and Rational Plan; then execute to the plan and keep the plan up to date
Being outcomes focused applies to planning as well. The most important part of the plan is what needs to be done. The when part of the plan is the most fragile and, frankly, the part that will change most often. Accept that there may be changes, but be proactive and identify where the risks are in the plan.
Recognizing that there will be changes is the first step in dealing with them. If the plan is solid in terms of what needs to be done, the when is a derivative of the content of the work and the dependencies in the plan. Plan failures occur for two big reasons: 1) the plan was not relevant to the work that was needed, or 2) the plan was not kept up to date.
In reality, the first is most often the case. Incidentally, if the plan is not relevant, it will either not be kept up to date or, even worse, it will be kept up to date but the updates will have nothing to do with actual progress against the actual work. Be willing to correct the plan and make it relevant.
Please note: a solid and relevant plan does not mean an Nth-level-of-detail plan. It simply means that the elements of work and the dependencies have been captured in a way, and at a level, that makes the plan both realistic and manageable.
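To make the idea that "the when is a derivative of the what" concrete, here is a minimal sketch. The work items, durations, and dependencies are purely illustrative assumptions, and this is in no way a scheduling engine; it only shows that finish dates fall out of the plan content rather than being picked first.

```python
# Hypothetical sketch: earliest finish dates derived from work content and
# dependencies (durations in working days, values invented for illustration).
tasks = {
    "define outcomes":   {"days": 10, "after": []},
    "design guide":      {"days": 15, "after": ["define outcomes"]},
    "configure system":  {"days": 20, "after": ["design guide"]},
    "user verification": {"days": 10, "after": ["configure system"]},
}

finish = {}

def earliest_finish(name: str) -> int:
    """Earliest finish day for a task, given its dependencies and duration."""
    if name not in finish:
        task = tasks[name]
        start = max((earliest_finish(dep) for dep in task["after"]), default=0)
        finish[name] = start + task["days"]
    return finish[name]

for name in tasks:
    print(f"{name}: done by day {earliest_finish(name)}")
```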
Proper and effective planning takes time and effort. Managing the execution of, and updates to, the plan takes time and effort too.
When implementing PLM, make sure that project management is included - good project management. Unfortunately, simply assigning a PMP-certified PM does not mean that the principles will be put into practice. Rather than spin this off into a top 10 list of tips for effective planning and project management, I'll stop here with a final piece of advice: do not commit to dates that have no connection to a rational plan.