It is very common for a new software selection to be determined by a scoring system. I am sure most of you have seen this. The selection team identifies the various considerations and assigns them a weight. Then, usually through demonstrations, the team assigns a ranking to each of the considerations. It might look something like this:
[RFI scoring matrix]
This was actually an RFI scoring matrix we used to whittle down vendors. An actual PACS scoring matrix should be much more complex. But, you get the idea. Yes, we did choose Amicas and we are very happy with the decision.
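For those who want to see the mechanics, here is a minimal sketch of the arithmetic behind a matrix like this. The criteria, weights, and vendor rankings below are invented for illustration – they are not our actual RFI data.

```python
# Minimal sketch of a weighted scoring matrix.
# Criteria, weights, and vendors are hypothetical, not the actual RFI data.
criteria = {
    # criterion: weight (weights sum to 1.0)
    "Image display performance": 0.30,
    "Integration with existing RIS": 0.25,
    "Vendor support and viability": 0.20,
    "Workflow configurability": 0.15,
    "Total cost of ownership": 0.10,
}

# Each vendor gets a 1-5 ranking per criterion, usually from demos.
rankings = {
    "Vendor A": {"Image display performance": 4, "Integration with existing RIS": 3,
                 "Vendor support and viability": 5, "Workflow configurability": 3,
                 "Total cost of ownership": 4},
    "Vendor B": {"Image display performance": 5, "Integration with existing RIS": 4,
                 "Vendor support and viability": 3, "Workflow configurability": 4,
                 "Total cost of ownership": 2},
}

for vendor, scores in rankings.items():
    weighted = sum(criteria[c] * scores[c] for c in criteria)
    print(f"{vendor}: {weighted:.2f} out of 5")
```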
The best thing about this approach is that it makes the selection more objective. Approving bodies like executive teams and boards love that.
But, I often find that this kind of scoring system can fail a selection team. I frequently see the final scores end up within a couple of percentage points of each other. Because the weighting process is based on gut feel, these matrices really present subjective results in an objective-looking way.
In my mind, the ideal selection chooses the least expensive option that meets all of the critical capabilities. That is, I think price should be balanced against the feature/function scores.
It is common in ERP selections to see vendors chosen based upon bells and whistles that will never be implemented. Remember, we use less than half of the features of the software we buy. Be sure not to make your purchase decision based upon those unused features – especially if you are paying good money for them.
Will, this is great. Would you mind explaining how you came up with the weights?
Thanks!
Spot on, as usual. End users – particularly clinical staff – will never take the time to understand all of the different features and bells and whistles. I’d be surprised if we used anywhere close to half of the features in the products we buy. The smartest institutions I’ve seen are creating innovative uses for systems that never would’ve crossed the vendor’s mind.
Why not just note critical functions as there, planned to be there, or not there (e.g., 1, 0, -1)? With weights, people can play games, the “objectivity” quickly becomes subjectivity, and get ready for a CEO, COO, CFO, or Board Member end-around. Other non-critical functions can be noted in a separate table for those who feel the passion behind their requirement and want to see it tracked. Anything on the critical list MUST be on a project plan with resources and costs to implement it… even if it takes 10 years. That is my solution. It usually shuts down the noise from those who pontificate but who don’t want to put skin in the game for the implementation.
Items on the non-critical list form the basis for future Strategic and Annual Plan implementations. Maintaining this stuff in a database can also form the basis for documenting duplicative features and functions in multiple systems. Down the road, I see this as a repository to review if one plans to work with their vendors to write services for duplicative functions or for other integrations (biomedical equipment, alerts to phones).
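For the sake of illustration, here is a rough sketch of the tally I have in mind; the critical functions and vendor marks are hypothetical:

```python
# Sketch of the 1/0/-1 critical-function tally described above.
# 1 = function is there, 0 = planned to be there, -1 = not there.
# The critical functions listed are hypothetical placeholders.
critical = ["CPOE", "Results reporting", "HL7 interfaces", "Downtime procedures"]

vendors = {
    "Vendor A": [1, 1, 0, -1],
    "Vendor B": [1, 0, 1, 1],
}

for vendor, marks in vendors.items():
    tally = sum(marks)
    # Per the scheme above, anything not there today (0 or -1) must land on a
    # funded project plan before it counts.
    gaps = [f for f, m in zip(critical, marks) if m < 1]
    print(f"{vendor}: tally = {tally}, needs project plan for: {gaps or 'nothing'}")
```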
Anonymous:
I agree. Weights can easily introduce political games into the picture if not computed objectively. It is a critical point, too, because weights heavily influence the outcome of the analysis. You could use market research techniques to arrive at the weights properly, though.
What market research techniques? I am unfamiliar with this application of the tool.
Thanks for all the great comments. I love hearing additional views.
anonymous:
The technique I am referring to is called conjoint analysis. It is used to predict and simulate consumer (user) preferences (example: pharma companies use it to determine how to package drugs).
In this case, you can interpret the software package as a “product” and the criteria as “attributes”. You will then be able to identify various “levels” of each attribute; for example, for “Alignment with Ministry preferred platforms”, the levels would be the actual details of what that alignment means to the Ministry (i.e. integrates with a, b, c, d, etc).
Based on this, you’d then create questionnaires to determine the users’ preferences in the form of “I prefer A over B”, where A and B are levels of each attribute. You’d collect the responses from the users and run a regression on that data, which will give you utilities (a number, a “score” if you will) for each level.
Having this data will enable you to package various levels of various attributes back into a “product” and judge precisely how acceptable this “product” is going to be to your users (respondents) vs. other products (i.e., other combinations). Naturally, in this case, you’ll create the “products” to match what the various vendors are actually offering.
In addition, you will also be able to find out which attributes/levels are deemed most important.
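To make that concrete, here is a compressed sketch of the workflow with invented attributes, levels, and ratings, using a plain least-squares fit in place of a full conjoint package:

```python
import numpy as np

# Toy conjoint sketch: estimate part-worth utilities from stated preferences.
# The attributes, levels, and ratings are invented for illustration only.
# Two attributes, two levels each; one level per attribute is the baseline,
# so the design matrix is [intercept, integrates_b (vs. a), thick_client (vs. web)].
X = np.array([
    [1, 0, 0],   # integrates with a, web client
    [1, 0, 1],   # integrates with a, thick client
    [1, 1, 0],   # integrates with b, web client
    [1, 1, 1],   # integrates with b, thick client
])

# Hypothetical respondent ratings of each profile (say, 1-10 preference).
y = np.array([8.0, 6.0, 7.0, 3.0])

# Ordinary least squares recovers a utility for each non-baseline level.
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"baseline utility: {coef[0]:.2f}")
print(f"integrates_b vs. integrates_a: {coef[1]:+.2f}")
print(f"thick_client vs. web_client:  {coef[2]:+.2f}")

# A candidate "product" is then scored by summing the utilities of its levels,
# which is how you would compare the combinations the vendors actually offer.
```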
Weights are important. The approach of 1/0/-1 assumes that all criteria are equally important (similar to the fatal assumption made in the RVU procedure-costing method – that all procedures use resources equally and linearly; no wonder most hospitals have no idea what their _true_ costs are); it’s just not the case.
The question, similar to costing, is: what is the true weight?
Igor
Will:
Thanks for the blog! I only wish I found it sooner.
I am sure you don’t remember, but we met back in the summer of 2000.
Does the name “CyberView” ring a bell?
i
Will, your point about weighted attributes with rankings leading selection teams to the wrong conclusion is right on. Weights and rankings assigned to the attributes of a product for selection purposes usually have no sound basis and are purely subjective in nature. They give a scientific veneer to a decision actually based on emotion or desire. Often the team members cannot even rationally explain why a particular attribute received the ranking or weight it did. End users on the selection team often have preferences, rooted in their comfort with particular skill sets or technologies, that bias their weightings and rankings and result in an inappropriate selection.
Cost, of course, is the primary driver after meeting all essential business requirements. The key is to define accurately which attributes meet the needs of the business today and to forecast what those needs might be in the future.
As you know, the cost associated with product selection is more than just the license and hardware costs. Often implementation, training, and testing costs can make what looks like a low-cost solution turn into a high-cost solution.
There is a product category in IT infrastructure with a cost range from 3K to 100K, with the products at each end of the spectrum performing a minimal set of requirements equally well. However, the cost of a later move from the low-cost solution to the higher-cost solution to meet advanced business needs would exceed the cost of the higher-end solution.
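A back-of-the-envelope sketch of that trap, with invented figures:

```python
# Hypothetical TCO comparison illustrating the point above: the "cheap" option
# plus a later migration can exceed buying the higher-end product up front.
# All figures are invented for illustration.
def tco(license_cost, implementation, training, testing, migration=0):
    return license_cost + implementation + training + testing + migration

low_end_now  = tco(3_000, 5_000, 2_000, 1_000)
high_end_now = tco(100_000, 20_000, 8_000, 4_000)
# Later move from low end to high end: pay the high-end TCO anyway,
# plus the cost of the migration itself.
low_then_high = low_end_now + tco(100_000, 20_000, 8_000, 4_000, migration=15_000)

print(f"Low end only:       {low_end_now:,}")
print(f"High end up front:  {high_end_now:,}")
print(f"Low end, then move: {low_then_high:,}")
```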
Thanks for the opportunity to post on your blog.
To add to the comment above, I believe it is really the following that drives the selection:
– Satisfaction of critical requirements, in alignment with architectural tenets and cost/value (for example: codification of select data elements for decision support, order entry in one application only)
The cost of many modules is just not worth it these days – not only the pure cost of acquisition, but one must also consider the cost of building the content, building the interfaces, conducting the process design, training, testing, deployment, and maintenance of the app, content, and interfaces. Oftentimes I find healthcare organizations don’t have the energy to fully implement and fully utilize another module once the base application is installed, and then you’ve ended up purchasing something that is never implemented or fully utilized.
There is a set of interesting articles on application erosion on Technologyevaluation.com. Check them out – this is another factor to consider. Another succinct link on the topic is here:
http://www.refresher.com/!oterosion.html