The next phase in my quest for health and fitness involves making sure I can actually finish a triathlon. This means my body needs to be conditioned for the activities I am planning. There is again a parallel in Product Lifecycle Management (PLM): if the physical environment for PLM is not set up properly, every outcome you expect from the system will suffer. Specifically, this article covers what I consider the minimum system-level requirements for PLM, including hardware, backups, and networks, along with the impact the cloud and SaaS have on PLM systems.
One of my resources for my triathlon training is a book titled The Triathlete’s Training Bible. In one of the chapters the author states, “training should be purposeful and precise to meet your unique needs.” He goes on to challenge the reader to ask, “What is the purpose of this workout?” IT resources and decision makers inside organizations need to do the same thing. The first question should be: do we have the resources and staff to support PLM internally? This generally means that your internal staff is comfortable with server-class hardware and networks. The requirements for PLM have become fairly straightforward and are no more demanding than those of any other enterprise-class application, including CRM or ERP, but as companies continue to downsize IT or outsource to external companies, meeting them can become an issue. Obviously, the size of the company and the number of users will dictate hardware requirements to a certain extent, and most larger companies already have sufficient infrastructure to handle PLM because of the other systems they support. This can become a problem in itself. While PLM shares system requirements with ERP, that does not mean the requirements are identical. ERP systems are usually more involved and offer more opportunities for tweaking and performance tuning. PLM is usually more self-contained and functions better in its own environment. If you are a CIO or Director of IT, do not automatically assume PLM is exactly like ERP, and be prepared to make allowances for some of its nuances. These include integrating only through the API (no direct table access), setting up independent systems for the application server, and possibly avoiding clustering. Backup and disaster recovery mechanisms may also need to be separate depending on what you are using. PLM systems typically cannot be easily copied to a new environment because of how the software is installed.
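To make the API-only point concrete, here is a minimal sketch of reading an item record through a PLM’s published REST API rather than querying its database tables directly. The base URL, endpoint, and field names are hypothetical placeholders, not any particular vendor’s API.

```python
import requests

PLM_BASE_URL = "https://plm.example.com/api/v1"   # hypothetical endpoint
API_TOKEN = "REPLACE_WITH_SERVICE_ACCOUNT_TOKEN"  # assumed token-based auth

def get_item(item_number: str) -> dict:
    """Fetch an item record through the PLM's published API."""
    response = requests.get(
        f"{PLM_BASE_URL}/items/{item_number}",
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    item = get_item("100-0001")  # illustrative item number
    print(item.get("revision"), item.get("lifecycle_state"))
```

The point of staying on the API is that the vendor’s business logic and permission checks remain in the loop, which is exactly what direct table access bypasses.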
Smaller companies, and even some larger ones, sometimes elect to use hosting companies to run private clouds, using both physical servers and virtual environments to house their enterprise systems, including PLM. Some hosting companies specialize in enterprise systems, but again, remember that PLM is somewhat unique. Installation into these environments requires a level of access that hosting companies sometimes balk at, and despite what they may claim, they are generally not equipped to install PLM software themselves. This can usually be negotiated, but be prepared to support your vendors through the process and allow for delays. Another consideration with external hosting is performance. PLM, unlike ERP, usually involves physical files, which can be large, especially if engineering data is part of your PLM footprint. Most PLM systems allow local file servers to be deployed at each physical site to address this. Make sure this option is available before you adopt any PLM system, whether on-premise or SaaS, because the lack of local file serving will create adoption issues and hurt productivity. We have had good success running PLM tools on Amazon’s cloud, and a large number of our clients use third-party companies to host some or all of their servers. If you choose this option, your external bandwidth will need to be fairly robust and closely monitored. Even if you run on-premise, network bandwidth is critical given the amount of information flowing to and from the system. If you have multiple facilities, your WAN will carry more traffic, and this must be considered since the cost of upgrading will not be trivial.
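As a sketch of what that monitoring might look like, the script below measures round-trip latency against a lightweight endpoint and effective throughput by timing a download of known size. Both URLs are assumptions; substitute whatever health check and test file your hosting provider or PLM vendor actually exposes.

```python
import time
import requests

PLM_HEALTH_URL = "https://plm.example.com/health"        # hypothetical health check
TEST_FILE_URL = "https://plm.example.com/files/1MB.bin"  # hypothetical known-size file

def check_link() -> None:
    # Round-trip time on a lightweight endpoint approximates latency.
    start = time.perf_counter()
    requests.get(PLM_HEALTH_URL, timeout=10)
    latency_ms = (time.perf_counter() - start) * 1000

    # Timing a download of known size approximates available throughput.
    start = time.perf_counter()
    payload = requests.get(TEST_FILE_URL, timeout=60).content
    seconds = time.perf_counter() - start
    mbps = (len(payload) * 8 / 1_000_000) / seconds

    print(f"latency: {latency_ms:.0f} ms, throughput: {mbps:.1f} Mbps")

if __name__ == "__main__":
    check_link()
```

Run something like this on a schedule from each facility and you will spot a degrading link before your users do.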
From a physical perspective, SaaS and private cloud are pretty much the same. Most SaaS-based PLM tools (Arena and Autodesk360, for example) house all their clients in a single environment. This means that when the vendor makes changes to the system, all customers are affected. This can be a positive or a negative depending on how well they do their jobs. You just need to understand going in that you will not be in full control of your environment or data. This compromise is usually offset by shorter ramp-up times and lower cost. Private clouds offer the best of both worlds in that you can use virtual machines or colocated physical servers to house a specific instance of your PLM system. This means you control the configuration and the data. You will not be impacted by other customers’ traffic on the PLM system, and if you run into performance issues you can always upgrade the processors and memory. We have seen very few differences between virtual machines and physical servers from a performance perspective, and VMs are definitely more cost-effective. You can run a number of virtual machines on one server, assuming your user load is appropriate, and we see no downside to leveraging this technology for any enterprise-class application.
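If you want to compare candidate environments yourself, whether VM versus physical or hosted versus on-premise, one simple approach is to point the same timing script at each environment and compare the numbers. The paths and host names below are placeholders; substitute the lookups and searches your users actually run most.

```python
import time
import requests

# Placeholder paths; substitute the operations your users run most.
CHECKS = {
    "item lookup": "/items/100-0001",
    "where-used":  "/items/100-0001/where-used",
    "search":      "/search?q=resistor",
}

def benchmark(base_url: str, runs: int = 5) -> None:
    """Time representative PLM calls against one candidate environment."""
    print(base_url)
    for name, path in CHECKS.items():
        timings = []
        for _ in range(runs):
            start = time.perf_counter()
            requests.get(base_url + path, timeout=30)
            timings.append(time.perf_counter() - start)
        print(f"  {name:12s} avg {sum(timings) / runs:.2f}s  worst {max(timings):.2f}s")

if __name__ == "__main__":
    # Point the same script at each environment you are evaluating.
    benchmark("https://plm-vm.example.com/api/v1")  # hypothetical VM instance
    benchmark("https://plm-hw.example.com/api/v1")  # hypothetical physical server
```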
PLM is a critical-path application and should be treated as such when you provision resources for it. This means you should plan on housing multiple instances of the system for change control and redundancy. At a minimum, you should have a test system in place to validate any modifications you make to your PLM environment, including security or workflow changes. Having a test system also makes upgrades easier to pull off while keeping production systems online. Most companies will have a test and a development system for PLM in addition to production. This provides fail-safes in case data is lost or systems go down. With tools like VMware, setting up mirrored sites for redundancy is very simple and should be leveraged as a disaster recovery strategy. Most PLM systems also allow data dumps to be extracted to provide further backup capability. If you use this approach, test the dumps periodically to make sure they are viable.
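Testing a dump can be as simple as restoring it into a scratch database and running a sanity query. The sketch below assumes a PostgreSQL-backed PLM instance and a custom-format pg_dump file; the table name and paths are illustrative, so adjust everything for whatever your vendor actually produces.

```python
import subprocess

DUMP_FILE = "/backups/plm_nightly.dump"  # assumed custom-format pg_dump output
SCRATCH_DB = "plm_restore_test"

def verify_dump() -> None:
    """Restore last night's dump into a scratch database and sanity-check it."""
    subprocess.run(["dropdb", "--if-exists", SCRATCH_DB], check=True)
    subprocess.run(["createdb", SCRATCH_DB], check=True)
    subprocess.run(
        ["pg_restore", "--no-owner", "--dbname", SCRATCH_DB, DUMP_FILE],
        check=True,
    )
    # "items" is a stand-in table name; query a table your PLM schema actually has.
    result = subprocess.run(
        ["psql", "-d", SCRATCH_DB, "-t", "-c", "SELECT count(*) FROM items;"],
        check=True, capture_output=True, text=True,
    )
    count = int(result.stdout.strip())
    assert count > 0, "restored database is empty -- dump may not be viable"
    print(f"dump verified: {count} item rows restored")

if __name__ == "__main__":
    verify_dump()
```

A dump that has never been restored is a hope, not a backup.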
There are many scripts and tests that can be run to evaluate the performance of your physical environment (the timing sketch above is one simple example), and they should be run quarterly to make sure you are operating optimally. Having a plan up front to meet your organization’s requirements is a key component of PLM fitness. Many companies make the mistake of lumping PLM in with other enterprise systems, and this can create issues. Others fail to anticipate the additional load PLM can put on a network. Overall, with proper planning, PLM is not that difficult to deploy, but like anything else it requires forethought. With all the options available today, you should do your homework and come up with an infrastructure that allows for optimal performance and stability while providing redundancy and flexibility. Get physical with your PLM and you will ensure a stable environment to support your business objectives. Hopefully my plan for the triathlon will serve me equally well.