Crossing The Cloud Services Chasm - Delivering Successful Cloud Services in a Rapidly Shifting Market

If you’ve been around the business world long enough, especially as an entrepreneur, you’ve come across your fair share of books on taking products to market and disrupting industries. Being ‘Lean’ is all the rage these days, especially in Tech, and you’re scum if you can’t scrum. The area I love to focus on, however, is the services side of the equation. Geoffrey Moore’s seminal work, “Crossing the Chasm,” describes the customer adoption segments for a given product and how to sell to each of them. Along the way, there are a few gaps that a product company must cross, the largest of which is called ‘the chasm.’ It is the most difficult hurdle to overcome, and a keen strategy is necessary to get across it. For a high-tech product (I like to just say ‘tech’), the services component is often what bridges that chasm. In this post, I’ll give a quick intro to the concept of crossing the chasm and then detail how it applies to a new services model, specifically, ‘cloud professional services.’

Defining the chasm and market segments

[Figure: the Technology Adoption Life Cycle curve, showing the chasm between the early adopters and the early majority.]
As you can see from the graphic, the chasm comes immediately after the early adopters jump on board with your product. Once you cross the chasm, the transitions between the remaining market segments become easier. The idea is to build a solid group of reference customers within each segment, starting with the visionaries, and then use those references to win customers in the next segment.

Getting the early adopters on board is usually easy, as these people are natural visionaries and become champions for your product within their organizations. They question the status quo and eschew the calcified thinking that accompanies “this is the way it’s always been done.” They often see better ways to enable their core business while avoiding the trap of “I want to build/do everything.” They are seasoned technology leaders, and that is what you need when you are bringing a disruptive product to market, especially when it comes to fighting internal political battles within large organizations. You cannot succeed without these people, so be sure to seek them out and help them in any reasonable way possible.

Once you have established a solid pool of reference customers in the early adopter segment, it is time to storm the early majority sector. To do this, you must now cross the chasm. In another life, I was taught that “features tell and benefits sell.” I have found this to be true in just about every aspect of life, from pitching yourself to a potential employer or client to being pitched on a lifestyle product. How will you make my life easier? That is the question I always ask of a salesperson, and that is the question I expect to address with each customer. If I can solve a pain point for you and make your life easier, I have accomplished what I set out to do.

There are many potential benefits to your product, but in the eyes of the adopters, certain benefits sell better than others. For the early adopters, for example, the primary purpose of their foray into new disruptive products is to gain a considerable competitive advantage. This can be quantified as a reduced time to market for a software product, or as the development of an entirely new product line (for the early adopter customer) that was not previously possible without your product in their arsenal. The early adopter understands that such a radical change agent comes with a price in terms of stability and bugs. It is the cost of doing business when you want to remain ahead of your competition no matter what. This level of passion will closely match yours in the early days, and often you will build synergistic relationships, even personal friendships, that can last a lifetime. That is no exaggeration. The outcome of these services engagements will greatly impact your ability to win over the next market segment, so pay attention to specific ‘wins’ like efficiency gains, cost reductions, and an evolution of your product over time that demonstrates stability and reduced risk. This is key when talking to...

The early majority are pragmatists, as depicted in the diagram above. They often have well-established lines of business with commensurate processes. They are seeking to enhance their operations, not radically alter them. They want a faster horse as opposed to a horseless carriage. This is not to say that they do not have long-term plans for an automobile, but their primary purpose for your product is a short-term gain in efficiency, with longer-term gains possible. This is a win-win inside most large organizations, so key your benefits discussion around this point. This market segment also expects a stable product that has minimal issues. They want to be able to buy support for the product and ensure that management has “one neck to choke” should problems arise. They are naturally risk-averse, as any serious interruption to the status quo can mean millions of dollars in lost revenue. They do understand that all software has bugs, but they expect those bugs to be few and minor before they will accept your product in a production scenario. This is why you must establish a stability track record within the early adopter pool as soon as possible. Your reference customers must be able to speak to the stability and safety of using your product. Cultivate this as much as possible in your early adopter pool, and your job of winning over the pragmatists becomes much easier.

The good side of this segment is that once they are won over, they become loyal customers who push your product all over their enterprise. They will attempt to standardize around your product, and they will fight the political battles needed for mass adoption. They will also speak highly of your product to their colleagues and professional acquaintances, who are often dispersed throughout their industry. They will be the catalyst that pushes your product into the market-leader category, and they will use the success of the projects (based on your product) to increase their visibility within the organization. They will often move into higher positions of leadership as a result of these successes, and they will always remember that your product got them there. This can become important in the future, when you have launched the 'next great startup' (TM).

Moving through the remainder of the market segments is much easier at this point. Your product should have evolved to a state of extreme stability, and market competition (in your product space) should either normalize pricing or reduce it outright. Your product has probably evolved in such a way that aspects of the original offering are now considered ‘commodity,’ and new features add more value in areas such as ease of installation, operation, and performance metrics. Your product is more turn-key than not, and your licensing model reflects amazing value in a fully bundled offering with great pricing. More than likely, your product does one thing extremely well and eschews the gimmicks and feature bloat of competitors who are flailing to grab market share. You are the established market leader in your product segment, and your product is seen as a whole solution to a problem as opposed to a partial one. Last but not least, this market segment prefers to purchase software and services via well-established relationships with their VARs (Value Added Resellers), so your channel partner program must be strong by this point. Never do anything to upset the channel. They will always have your back if you have theirs.

What this means in the Cloud world...


In the cloud realm, the early adopters have already made their play. Both public and private cloud providers have harvested the low-hanging fruit in terms of customers. These early adopters primarily consist of startups and small, agile companies. Even some small, agile business units within large enterprises have darted down the ‘shadow IT’ path and made their mark. The industry is now turning its attention to the large enterprise sector, which comes with many unique but exciting challenges. If you reference the diagram above, these enterprises sit almost entirely in the mainstream market segments. They have deeply entrenched workloads that currently run on bare metal or virtualized infrastructure (most likely VMware). How you proceed as a cloud services organization is key here, as there are lessons that have been learned the hard way up to this point.

Do not ‘lift and shift.'

No matter how big a potential customer engagement might be in terms of revenue, if their primary purpose is to “get rid of VMware,” red flags should go up immediately. Most of these customers have not done due diligence in understanding the ‘cloud way’ of doing things, specifically refactoring or re-developing applications. They are looking for an easy migration method for legacy applications, which often entails copying VMs from VMware to a cloud. We call this ‘lift and shift,’ and it is a guaranteed recipe for failure. If you see this line of thinking in a potential engagement, try to educate the customer on properly designed cloud applications. If they still insist on doing the ‘lift and shift’ thing, even if “only as an interim solution,” politely decline the opportunity. I have never seen a single one of these engagements succeed, and they are a massive drain on resources for both you and your customer. You have to know when to turn down an opportunity in the cloud space, and this is one of those times. If, however, the customer is open to the larger discussion of application refactoring, distribution, and resiliency, then...

Do refactor or re-design.

The primary reason the ‘lift and shift’ method fails is that in a cloud, the basic tenet of application design is ‘everything will fail - now design an application that can survive.’ The only way to achieve this is via a properly designed, fully distributed application. Legacy monolithic apps will not survive the outages in a cloud, as they were designed to run on robust infrastructure that ‘did not fail.’ Cloud-native apps distribute components and deal with application persistence in a different way. They can have one or more instances of a component fail without complete failure of the application stack. They use load balancing liberally and allow an application to scale out and in dynamically based on workload. These applications are often designed to be cloud-native from the get-go, but many applications (especially web-based applications) can be refactored into cloud application patterns with a reasonable amount of work. You should always steer customers down the proper cloud application development path, especially early in the discussions. This puts all the information on the table and prevents misunderstandings or oversights in the design process. Great developers will see refactoring as a unique challenge they want to solve, so partner early with these people and use them as champions for ‘doing things the right way.’
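
To make this concrete, here is a minimal sketch in Python of the stateless pattern described above. The choice of Flask and a Redis-backed shared store, along with every endpoint and variable name, is my own illustrative assumption rather than anything a particular cloud platform prescribes; the point is only that the instance itself holds no state, so a load balancer can add or remove copies of it at will.

```python
# Illustrative sketch: a stateless web/app component whose state lives in a
# shared store, so any single instance can fail or be scaled away without
# taking down the application. All names here are hypothetical.
import os

import redis
from flask import Flask, jsonify, request

app = Flask(__name__)

# Connection details come from the environment, not from the instance itself,
# so instances launched by an autoscaler need no local configuration.
store = redis.StrictRedis(
    host=os.environ.get("SESSION_STORE_HOST", "localhost"),
    port=int(os.environ.get("SESSION_STORE_PORT", "6379")),
    decode_responses=True,
)


@app.route("/healthz")
def health():
    # A load balancer polls this endpoint and routes traffic away from
    # instances that stop answering.
    return "ok", 200


@app.route("/cart/<user_id>", methods=["GET", "POST"])
def cart(user_id):
    key = f"cart:{user_id}"
    if request.method == "POST":
        # State is written to the shared store, never to local memory or disk,
        # so the next request can be served by any other instance.
        store.rpush(key, request.get_json(force=True)["item"])
    return jsonify(items=store.lrange(key, 0, -1))


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```

Run several copies of this behind a load balancer and kill one; the survivors keep serving because nothing of value lived on the instance that died.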

Not every application is a fit for cloud.

As much as a cloud evangelist like me dreams of the day when the cloud can handle all workloads, that is simply not the case today. Performance, application design, compliance requirements, SLAs, and various other factors can prevent a workload from becoming a cloud-native workload today. This is not to say it will always be the case, but you must be realistic, especially in the large enterprise segment. You must fully understand the customer’s use cases for the cloud, understand their workloads, and be prepared to offer a hybrid solution should the need arise. For example, many large enterprises began the march toward web-enabling (or portalizing) the majority of their apps several years ago. In the process, many took the time to decouple the web, application, and database tiers of these applications. This is a huge step toward refactoring for cloud, and distributing the application components is the next step toward a bona fide cloud application pattern. The web tier can often already be distributed behind load balancers, and in some cases even the app tier can be distributed as well. The database tier is frequently the one tier that is not cloud-ready and cannot readily be ported, especially if it is on Oracle RAC or Exadata. As a cloud services organization, you must understand that this is not only OK, it is the natural progression of large enterprise applications in this space. It is absolutely acceptable to put the web and app tiers in the cloud while the database tier remains on big iron. Developers will need time to assess their database options in the cloud. They may elect to use a distributed database like Cassandra, or they may choose to wait until database as a service (DBaaS) is a reliable option, as RDS is in AWS. Always look for areas where a compromise in the application pattern does not violate the basic tenets of cloud and allows a customer to leverage existing investments until they are able to bring a laggard component fully into the cloud. Speaking of leveraging existing investments...
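
Here is a small sketch of that hybrid compromise, with every hostname and variable name purely hypothetical: the web and app tiers run in the cloud, but the database endpoint is just configuration, so it can point at an on-premises Oracle RAC listener today and at a DBaaS endpoint later without touching application code.

```python
# Illustrative sketch: the app tier reads its database endpoint from the
# environment, so "web and app tiers in the cloud, database on big iron"
# is a deployment decision rather than something baked into the code.
import os
from dataclasses import dataclass


@dataclass
class DatabaseEndpoint:
    host: str
    port: int
    service: str


def database_endpoint() -> DatabaseEndpoint:
    # Defaults describe a hypothetical on-premises listener; migrating to a
    # managed database service later only changes these environment variables.
    return DatabaseEndpoint(
        host=os.environ.get("APP_DB_HOST", "rac-scan.corp.example.com"),
        port=int(os.environ.get("APP_DB_PORT", "1521")),
        service=os.environ.get("APP_DB_SERVICE", "ORDERS"),
    )


if __name__ == "__main__":
    ep = database_endpoint()
    print(f"App tier will connect to {ep.host}:{ep.port}/{ep.service}")
```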

Enterprise storage is not a villain.

Sure, we would love all of our customers to adopt the newest cloud storage products, because we know that deep down, these products are excellent and scale out well. Ceph and Riak are awesome, but there is nothing wrong with a customer using a NetApp to back their image store while they wait for the technologies to mature or for competitors in the space to shake out. This is what large enterprises do. Will there be a penalty in terms of performance and scale? Perhaps. For large enterprises, however, stability and reliability are key, and NetApp delivers in those areas. There are also operational considerations, such as finding and staffing the talent to run a large Ceph or Riak farm. Because these technologies are in their infancy, the talent pool is not as large as it is for traditional big-iron storage, and it may take time for large enterprise customers to trust the technology and then staff for the transition. Again, look for areas of compromise that do not violate the basic tenets of cloud design.

Large enterprise does not necessarily mean service provider.

This is a huge sticking point for most people in the cloud space. I often hear the argument that storage system A will not scale as far out as cloud storage product X. This is likely true, but what scale does the enterprise actually need? Often, their required scale is well within the limits of a traditional storage vendor. If they are a service provider, that is a different discussion. Do they need the blazing performance of an all-SSD array, or can a traditional tiered SAN meet their needs? Do they understand how to use an object store? Do they have a use case for it? Do they need overlay networks or an SDN, or is the upper bound of their use case within the limits of a traditional network design? I could go on and on, but the point I am trying to make is that just as instances should be right-sized for the application components they contain, a cloud should be right-sized for the customer’s intended use cases. The use cases should always dictate the design and architecture of the underlying cloud platform, not the other way around. The ‘best’ way to fail in a cloud services engagement is to design something that is rigid, expensive, and not supportive of the customer’s intended use cases.
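
As a back-of-the-envelope illustration of right-sizing (every number below is hypothetical), the design conversation can start as simply as comparing the customer’s stated workload against the ceilings of a conventional design before reaching for the scale-out option:

```python
# Hypothetical right-sizing check: does the customer's stated workload actually
# exceed what a traditional tiered SAN can deliver? All figures are made up for
# illustration; real limits come from the vendor and from use-case discovery.
def fits_traditional_san(peak_iops: int, usable_tb: int) -> bool:
    SAN_MAX_IOPS = 200_000     # assumed ceiling for this illustration
    SAN_MAX_USABLE_TB = 500    # assumed ceiling for this illustration
    return peak_iops <= SAN_MAX_IOPS and usable_tb <= SAN_MAX_USABLE_TB


if __name__ == "__main__":
    # A hypothetical enterprise requirement gathered during use-case discovery.
    if fits_traditional_san(peak_iops=60_000, usable_tb=120):
        print("A traditional tiered SAN covers this use case today.")
    else:
        print("The use case exceeds traditional limits; evaluate scale-out storage.")
```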

Do not be dogmatic.

Lastly, always remember that you are not here with a solution in search of a problem. There is not only one right way to solve a problem. You will inevitably encounter challenges that you never expected, so have an open mind. You are here to provide a solution to a given set of problems as defined by your customer. If you approach every cloud services engagement from that perspective, you will ultimately succeed and become a trusted adviser to your customer for a long time to come. 
