Partner Qualification and Performance Reviews
In the previous two blogs (Part 1 and Part 2), we introduced the concept of a Partner Relationship Lifecycle and showed how different roles within our organization lead or are involved in different phases of the lifecycle. In this post, we will discuss some best practices for qualifying a partner and conducting periodic performance reviews.
The most robust way to qualify potential partners is to use a formal framework that breaks the qualification down into specific areas and attaches a score, along with the reasoning behind it, to each axis. As an example, suppose that your project is looking for a software development partner who can work well with your fast-track system project. You are searching for a company that uses Agile or Scrum methodology to match your fast-paced team and flexible market requirements. The first thing your Procurement Specialist should do is meet with the project team to determine the most important characteristics they are looking for in a partner. For this case, such characteristics might be:
- Programming Competency: Demonstrated ability to write and manage code in the language(s) you specify
- Agility: Experience and track record running Scrum development processes
- Test and Debug Competency: How bug-free is their code and how fast do they get there?
- Documentation: Completeness of self-documented modules
- Cost: Your total cost of ownership of the modules developed, including development, support, and other costs
- Business Stability: Will the partner be there for your company in the long term?
Your project may call for additional or quite different parameters. We recommend, however, that you choose a minimum of 5 and a maximum of 12. This range gives you a granularity fine enough to de-emotionalize the discussion, yet simple enough to take in the overall picture at a glance. A good way to visualize the data is to use a “radar chart” or “spider chart” as shown in Figure 4.
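Before handing the data to a charting tool, it helps to see what a radar chart actually plots: each axis gets an angle around the circle and each score a radius, with the first point repeated to close the polygon. The sketch below shows this data preparation only; the axis names and scores are hypothetical, and any plotting library (for example, matplotlib's polar axes) can draw the resulting polygon.

```python
import math

def radar_points(scores):
    """Map {axis: score} to (angle_radians, radius) pairs, closing the loop."""
    n = len(scores)
    angles = [2 * math.pi * i / n for i in range(n)]
    points = list(zip(angles, scores.values()))
    return points + points[:1]  # repeat the first vertex to close the polygon

# Hypothetical scores for one candidate partner on the six axes above
axes = {"Programming": 4, "Agility": 5, "Test/Debug": 3,
        "Documentation": 3, "Cost": 2, "Stability": 4}
pts = radar_points(axes)
print(len(pts))  # 6 axes plus the closing vertex
```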
The scoring for each axis is on a 0 to 5 scale, where:
0 = Supplier has no concept
1 = Supplier tries, but is not competent (or other negative attribute)
2 = Supplier has below-average competence
3 = Supplier has above-average competence
4 = Supplier is excellent at the competence
5 = Supplier is a recognized world leader in this area
Furthermore, we adopt the rule that if any important parameter scores below 2, the supplier or partner is unacceptable and disqualified. Conversely, if a contending supplier scores above 3.5 on every axis, they may be considered a preferred supplier.
In this sample chart showing the results for two potential partners, you can see that company A is very competent and agile but appears to be extremely costly (and thus scores low on “Cost”), whereas company B is much more reasonable cost-wise but has only mediocre programming capabilities. These kinds of analyses allow the group to make well-informed decisions.
After a partner has been chosen, the same metric system can be used for periodic performance evaluations and reviews. Each axis should be assigned to one of the people managing the partner for scoring. For example, the Programming Competence and Agility axes might be assigned to the Tech Lead (TL) or Tech Specialty Lead (TSL) managing the day-to-day partner work, the Test and Debug and Documentation axes might be owned by a different TSL or perhaps a Quality manager, and the Cost and Business Stability axes may belong to the Procurement Specialist (PRO).
The PRO should call this team together periodically (quarterly is typical) to do the scoring and document the reasons for each score. One fun and quick way to do the evaluation is called “Scoring Poker” (similar to “Planning Poker”): give each person on the evaluation team playing cards with the numbers 1, 2, 3, 4, and 5. Pick an axis to be evaluated, ask each team member to think about what the partner’s score should be, and then have everyone throw a card down on the table face up simultaneously. If there are wide discrepancies, ask the people with the highest and lowest numbers to explain their thinking, and then play another round. Within two or three rounds, the cards should all be within 1 point of each other, and you can simply take the average. The PRO (or her designate) should document the reasoning for each score.
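The convergence test in a Scoring Poker round can be stated precisely: if all cards lie within 1 point of each other, take their average as the agreed score; otherwise discuss the extremes and play again. A minimal sketch of one round, assuming our own illustrative names (the post does not prescribe any implementation):

```python
def poker_round(cards):
    """Return the agreed score if the cards have converged, else None.

    cards: the 1-5 values each evaluator threw down this round.
    Convergence means all cards are within 1 point of each other;
    the agreed score is then simply their average.
    """
    if max(cards) - min(cards) <= 1:
        return sum(cards) / len(cards)
    return None  # wide discrepancy: hear out the extremes, play another round

print(poker_round([4, 3, 4, 4]))  # converged -> 3.75
print(poker_round([1, 5, 3, 4]))  # spread too wide -> None
```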
The PRO, TL, and EPM should then meet with their counterparts in the partner organization to go over the scores. We have found that this kind of meeting is usually very fruitful and well-received by suppliers and partners. It sets expectations very clearly and lets them know where they stand and what they need to do to improve, without relying on the emotions or “feelings” of individuals. Using the same set of criteria consistently over time allows the supplier to demonstrate improvement and avoids introducing new or undefined expectations.
In this blog and the previous two parts (Part 1 and Part 2), we took the concept of a Partner Relationship Lifecycle and showed some tools and best practices for leading the relationship through each phase, including tools for qualification and periodic review of the results. These methods are best practices for any kind of partner relationship, but what about projects using Agile or Scrum methodologies? Are these methods sufficient? To answer those questions, the next blog (Part 4) describes the principles of the Agile Manifesto and how those principles may be used to work with partners on software and many kinds of hardware product development.