
Beware the Ides of March

It has been almost a year since the last post on this blog. As a matter of fact, the last post was written on the Ides of March, 2015. Suffice it to say, I have been beyond busy in the world of Enterprise Cloud Computing. Perhaps that is a topic for another blog post, but this one is to discuss the rumored upcoming Apple event next month. March is normally a very exciting month for me as it is Easter Jeep Safari time in Moab, UT, but this March, there will be a second reason for excitement, and it almost seems as though the timing is more than a coincidence.

Traditionally, Apple holds its large press event (usually an iPhone event) in early September, and the new iPhones go on sale in late September. Further in the past, there was a separate event for iPads, usually in October (I believe). It was no surprise that, as that product line matured and normalized, Apple would eventually combine it with the iPhone launch event, even if iPad releases were a little further out on the horizon than the iPhone's. Keep in mind, though, that there are many other product lines at Apple, not least of which is Apple Watch. In my humble opinion, there are too many to cover at a single yearly event, and even if they could all fit, it makes no sense operationally to release every product at the same time of year. Apple really needs to establish a solid cadence of public events that is predictable and spaced out enough to allow the supply chain to ebb and flow more evenly.

So, if we know the main yearly event is in September (mainly because iPhone is the largest line of business at Apple today), March makes perfect sense for a second yearly event as it is exactly six months away. I am venturing out on a limb here to posit that Apple may be making this a pattern as opposed to a one-time event. We'll know next March, I suppose. What interests me most about this March event is the rumored new smaller iPhone that is going to launch.

Bear in mind that all of the following is speculation at this point, but it is interesting to me nonetheless. The iPhone 5se, as it will be called, will be a smaller iPhone with a 4-inch screen. It seems that this phone is designed to meet the demands of a market that prefers smaller phones. The current iPhone 6s and 6s+ have 4.7- and 5.5-inch screens, respectively. While I do know a very small number of people who prefer small phones, my own experience has been that the larger phones are much better. I moved straight to a 6+ from my 5s and skipped the 6 as I have rather large hands, and the small screen on the 4 and 5 lines was a common complaint of mine. Initially, the difference was a major adjustment, but within a week or two, I couldn't imagine going back to the smaller phone. Today, I wish I had an even bigger screen, perhaps 6 inches, but I worry that it would enter the realm of ridiculousness, especially the few times I actually raise it to my ear to take a call like a normal phone.

Aside from the size difference, the phone is rumored to be based on the 5s chassis, with upgraded internals that will help run the newest iOS versions and modern apps. It should also get the same front and rear cameras as the current iPhone 6, additional sensors, an NFC chip for Apple Pay, upgraded A and M chips from the 6 line (not sure which yet, but A9/M9 are likely due to economies of scale in manufacturing), upgraded LTE/Wi-Fi/Bluetooth antennas and chips, and the same colors as the current 6/6+ line. This all sounds great for someone who really wants that 4-inch screen, but it does leave some unanswered questions.

My main question here is: what is Apple's angle with this phone? I do understand that some people want smaller phones, but is it really that many? Is there a large enough market to justify making another phone to address that market? Are there enough "hold-outs" refusing to upgrade from old iPhone 4s and 5s that Apple sees a strategic opportunity to cater to them with this smaller phone? Or is this possibly a play into a lower profit margin area of the overall mobile device market? I'm fairly certain Apple would not price this so low that it would compete with the plethora of junk Android devices littering that end of the market, but maybe they can price it low enough that the Apple name and quality will draw more people up from that segment into a premium segment of the market.

The other interesting angle here could be Apple changing its product mix and market approach. Growth for the iPhone has been phenomenal, but everyone, including Apple, knows this growth cannot be sustained forever. They need to expand into other market segments. Historically, when a new iPhone model is released, Apple takes the current one, lowers the price, and offers it as the economical option. The new line gets new hardware, including new chips, and the old models keep their current hardware. At the scale Apple manufactures today, it may be becoming a problem to maintain two separate supply chains for older and newer models, not to mention two supply chains within those for the larger and smaller phones. It may make sense at this point to increase the product mix to include three phones (small, medium and large) which share many or most of the same internal components, thus simplifying the supply chain and shrinking the manufacturing delta between the models.

I suppose we will find out on March 15 when we see the real specs on the phone and can compare all of the components with the others in the current 6s and 6s+. We will also see if Apple discontinues the sale of only the older 5s, or also the 6 and 6+. I'm also interested to see if there will be any other announcements, perhaps a new Apple Watch?

Sling TV and HBO Now are the first cracks in the dam of the cable and satellite hegemony.

As a long-time streamer of media via Netflix, I have become accustomed to watching content on my schedule. As most of you know, I travel a lot for business, so watching my favorite shows when the content providers want me to is a non-starter. I’ve also begrudgingly held a cable subscription for one show I like to watch on Sunday evenings. I have known for a while now that it is just a matter of time before over-the-top programming becomes the norm, and that a la carte programming is the future. Naturally, the major cable and satellite companies have been fighting to prevent this, but their hegemony is losing power at an escalating pace.

I was quite intrigued and elated when I saw that Sling TV launched an over-the-top service in February of this year. The original package of channels was nothing to write home about, but it included some major names like ESPN and CNN. Rumors were also circulating about AMC coming to Sling TV as well. For me, this meant that I could cancel my cable subscription as AMC is literally the only channel I care about on regular cable. (FYI, AMC is now officially in the lineup for Sling TV) At the Apple event this past Monday, the CEO of HBO announced a new over-the-top offering, called HBO Now, that does not require a cable or satellite subscription like HBO Go. The launch and ultimate success of these two services are merely the first cracks in the dam of the hegemony which is cable and satellite TV. 

Overall, this is great news, but there is still one more wall that needs to be knocked down in order for this new world of services to thrive...

Let’s put aside the fact that cable companies have been given near monopolies in most cities, billions of taxpayer dollars to build out networks for ‘the common good of the people,’ and have hidden behind vague interpretations of FCC rules to avoid being regulated as common carriers. The biggest scam the cable industry has wrought on the public is that of the infamous ‘bundle.’ With these bundles, the cable companies tell us that we can save money over individually priced services, most often cable content, internet and phone service. The problem isn’t in the bundle itself, but rather in the dismantling of said bundle. You see, now that Netflix, Hulu, Sling TV, HBO Now and other over-the-top offerings are becoming the de facto way to watch content, the cable bundle doesn’t make sense any longer.

Most people use mobile phones for all of their voice communications (especially younger people), so residential phone service is redundant at best. With older and newer streaming options, many people are ditching cable and satellite altogether. These forward-thinking individuals are called ‘cord cutters’ as they are cutting the cable cords that have bound us for nearly three decades. This means that for many people, myself included, only the internet connection matters. If a typical cable bundle is $99 for cable/internet/phone, it would make sense that each should cost about $33. Perhaps it costs more to offer cable service than internet or phone, so that ratio can fluctuate, but this is where the insidiousness of the cable companies really surfaces. If I want to drop cable and phone service, my internet service ‘miraculously’ jumps from about $33 a month to $79 a month. There is no increase in speed or class of service. So does that mean that cable and phone service together are only $20 a month in the bundle? Of course not! This is simply the cable companies’ way of making internet service, by itself, seem unattractive from a fiscal perspective. They know the streaming services are aiming to dismantle the monopolies granted to them by municipalities over the years, and by holding the dual position of offering paid content and the very lines the streaming services must transmit across, they can manipulate prices to keep their own paid offerings in front.
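The bundle arithmetic above can be sketched out in a few lines. The dollar figures are the illustrative round numbers from the example, not any provider's actual rates:

```python
# Back-of-the-envelope math on the cable 'bundle' (illustrative round
# numbers from the example above, not any provider's actual rates).
BUNDLE_PRICE = 99.0          # cable + internet + phone, bundled
STANDALONE_INTERNET = 79.0   # internet alone, after dropping the rest

# If the three services were priced evenly, each would run about a third:
even_split = BUNDLE_PRICE / 3
print(f"Implied per-service price: ${even_split:.2f}")

# But the standalone price implies cable + phone together add only this much:
implied_cable_and_phone = BUNDLE_PRICE - STANDALONE_INTERNET
print(f"Implied cable + phone inside the bundle: ${implied_cable_and_phone:.2f}")
```

The gap between the $33 even split and the implied $20 for two whole services is exactly the pricing lever that makes standalone internet look like a bad deal.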

This is why the recent move by the FCC to regulate these broadband providers under Title II is so critical. First and foremost, it will mandate a level playing field between the streaming services and the cable companies’ own over-priced content offerings. It will also prevent cable companies from introducing barriers to entry for other new content providers. In addition, it allows competing broadband offerings to enter the market without the legislative and legal red tape the cable companies have used to maintain their monopolies. Services like Google Fiber are already having a huge impact on the broadband market in the US. In the few cities where it has been deployed, the cable providers somehow found a way to offer gigabit broadband at competitive (to Google Fiber) prices after years of saying that it was not possible. Even in cities where Google Fiber has yet to arrive, cable companies are doubling internet speeds ‘for free’ and touting new gigabit offerings where it was ‘impossible’ before according to these same cable companies. Do you see a pattern here? Competition is forcing these cable companies to compete in a free market as opposed to enjoying the protection of a monopoly.

Eventually, the price of broadband internet will plummet to a more realistic $20-$50 a month for 100-megabit to 1-gigabit service. This, coupled with the increasing amount of content available via streaming, will finally break the monopolies that cable companies have enjoyed for so long. It is so apropos that both of these issues (Title II regulation and over-the-top content offerings) came to a head at just about the same time. It was a beautifully executed one-two punch to the head of the cable industry, and only good can come of it. I can’t wait to see how this benefits consumers over the next few years. Indeed, the initial cracks in the dam are showing, only this time, the inevitable flood will wash out the filth that has fleeced consumers for decades.

Crossing The Cloud Services Chasm - Delivering Successful Cloud Services in a Rapidly Shifting Market

If you’ve been around the business world long enough, especially as an entrepreneur, you’ve come across your fair share of books on the subjects of taking products to market and disrupting industries. Being ‘Lean’ is all the rage these days, especially in Tech, and you’re scum if you can’t scrum. The area I love to focus on, however, is the services side of the equation. Geoffrey Moore’s seminal work, “Crossing the Chasm” talks about what the various customer adopter segments are for a given product and how to sell to all of them. In the process, there are a few gaps that a product company must cross, the largest being called ‘the chasm.’ This is the most difficult hurdle to overcome, and a keen strategy is necessary to cross this chasm. In terms of a high-tech product (I like to just use ‘tech’), the services component is often what bridges that chasm. In this post, I’ll give a quick intro to the concept of crossing the chasm and then detail how this applies to a new services model, specifically, ‘cloud professional services.'

Defining the chasm and market segments

[Diagram: the technology adoption life cycle, showing innovators, early adopters, the chasm, the early majority, the late majority, and laggards]

As you can see from the graphic, the chasm comes immediately after the early adopters jump on board with your product. Once you cross the chasm, the transitions between the market segments become easier. The idea here is that you want to build a solid group of reference customers within each group, starting with the visionaries, then use that group to grab customers in the next market segment. 

Getting the early adopters on board is usually easy, as these people are natural visionaries and become champions for your product within their organization. They question the status quo and eschew the calcified thinking that accompanies “this is the way it's always been done.” They often see better ways to enable their core business, while avoiding the trap of “I want to build/do everything.” They are seasoned technology leaders, and this is what you need when you are bringing a disruptive product to market, especially when it comes to fighting internal political battles within large organizations. You cannot succeed without these people, so be sure to seek them out and help them in any reasonable way possible.

Once you have established a solid pool of reference customers in the early adopter segment, it is time to storm the early majority sector. To do this, you must now cross the chasm. In another life, I was taught that “features tell and benefits sell.” I have found this to be the truth in just about every aspect of life, from pitching yourself to a potential employer or client, to being pitched on a lifestyle product. How will you make my life easier? That is the question I always ask of a salesperson, and that is what I expect to address with each customer. If I can solve a pain point for you and make your life easier, I have accomplished what I set out to do.

There are many potential benefits to your product, but in the eyes of the adopters, certain benefits sell over others. For example, with the early adopters, the primary purpose of their foray into new disruptive products is to gain a considerable competitive advantage. This can be quantified as a reduced time to market for a software product, or the development of an entirely new (early adopter customer) product line that was not previously possible without your product in their arsenal. The early adopter will understand that such a radical change agent comes with a price in terms of stability and bugs. It is the cost of doing business when you want to remain ahead of your competition no matter what. This level of passion will closely match yours in the early days, and often you will build synergistic relationships, even personal friendships, that can last a lifetime. That is no exaggeration. The outcome of these services engagements will greatly impact your ability to win over the next market segment, so pay attention to specific ‘wins’ like efficiency gains, cost reductions, and the evolution of your product over time toward stability and reduced risk. This is key when talking to...

The early majority are pragmatists as depicted in the diagram above. They often have well established lines of business with commensurate processes. They are seeking to enhance their operations, not radically alter them. They want a faster horse as opposed to a horseless carriage. This is not to say that they do not have long term plans for an automobile, but their primary purpose for your product is a short-term gain in efficiency with longer-term gains possible. This is a win-win inside most large organizations, so key your benefit discussion around this point. This market segment also expects a stable product that has minimal issues. They want to be able to buy support for the product and ensure that management has “one neck to choke” should problems arise. They are naturally risk-averse, as any serious interruption to the status quo can mean millions of dollars in lost revenue. They do understand that all software has bugs, but they expect them to be few and minor before they will accept your product in a production scenario. This is why you must be able to establish a stability track record within the early adopter pool as soon as possible. Your reference customers must be able to speak to the stability and safety in using your product. Try to cultivate this as much as possible in your early adopter pool, and your job becomes easier in winning over the pragmatists.

The good side of this segment is that once they are won over, they become loyal customers who push your product all over their enterprise. They will attempt to standardize around your product, and they will fight the political battles as needed for mass adoption of your product. They will also speak highly of your product to their colleagues and professional acquaintances, who are often dispersed throughout their industry. They will be the catalyst that pushes your product into the market leader category, and they will use the success of the projects (based on your product) to increase their visibility within the organization. They will often move into higher positions of leadership within their organization as a result of these successes, and they will always remember that your product got them there. This can become important in the future, when you have launched the 'next great startup' (TM).

Moving through the remainder of the market segments is much easier at this point. Your product should have evolved to a state of extreme stability by now. In addition, market competition (in your product space) should either normalize pricing or reduce it outright. Your product has probably evolved in such a way that aspects of the original offering are now considered ‘commodity,’ and new features add more value in areas such as ease of installation, operation and performance metrics. Your product is more ‘turn-key’ than not, and your licensing model reflects amazing value in a fully-bundled offering with great pricing. More than likely, your product does one thing extremely well and eschews the gimmicks or feature bloat of competitors who are flailing to grab market share. You are the established market leader in your product segment, and your product is seen as a whole solution to a problem as opposed to a partial solution. Last but not least, this market segment prefers to purchase software and services via well-established relationships with their VARs (Value Added Resellers), so your channel partner program must be strong by this point. Never do anything to upset the channel. They will always have your back if you have theirs.

What this means in the Cloud world...


In the cloud realm, the early adopters have already made their play. Both public and private cloud providers have harvested the low-hanging fruit in terms of customers. These early adopters primarily consist of startups and small, agile companies. Even some small, agile business units within large enterprises have darted down the ‘shadow IT’ path and made their mark. The industry is now turning its attention to the large enterprise sector. This comes with many unique but exciting challenges. If you reference the diagram above, these enterprises are almost entirely in the mainstream market segments. They have deeply entrenched workloads that are currently on bare-metal or virtualized infrastructure (most likely VMware). How you proceed as a cloud services organization is key here, as there are lessons that have been learned the hard way up to this point.

Do not ‘lift and shift.'

No matter how big a potential customer engagement might be in terms of revenue, if their primary purpose is to “get rid of VMware,” red flags should go up immediately. Most of these customers have not done due diligence in understanding the ‘cloud way’ of doing things, specifically refactoring or re-developing applications. They are looking for an easy migration method for legacy applications that often entails copying VMs from VMware to a cloud. We call this ‘lift and shift,’ and it is a guaranteed recipe for failure. If you see this line of thinking in a potential engagement, try to educate the customer on properly designed cloud applications. If they still insist on doing the ‘lift and shift’ thing, even if “only as an interim solution,” politely decline the opportunity. I have never seen a single one of these engagements succeed, and they are a massive drain on resources for both you and your customer. You have to know when to turn down an opportunity in the cloud space, and this is one of those times. If, however, the customer is open to the larger discussion of application refactoring, distribution and resiliency, then...

Do refactor or re-design.

The primary reason that the ‘lift and shift’ method fails is that in a cloud, the basic tenet of application design is that ‘everything will fail - now design an application that can survive.’ The only way to achieve this is via a properly designed, fully distributed application. Legacy monolithic apps will not survive the outages in a cloud, as they were designed to run on robust infrastructure that ‘did not fail.’ Cloud-native apps distribute components and deal with application persistence in a different way. They can have one or more instances of a component fail without complete failure of the application stack. They use load balancing liberally and allow an application to scale out and in dynamically based on workload. These applications are often designed to be cloud-native from the get-go, but many applications (especially web-based applications) can be refactored into cloud application patterns with a reasonable amount of work. You should always steer customers toward the proper cloud application development path, especially early in the discussions. This puts all the information on the table and prevents misunderstandings or oversights in the design process. Great developers will see refactoring as a unique challenge that they want to solve, so partner early with these people and use them as champions for ‘doing things the right way.’
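As a minimal sketch of this tenet (the hostnames and failure rate below are hypothetical), a cloud-native client treats app-server replicas as disposable: when a call to one instance fails, it simply tries the next, so no single dead host takes down the application:

```python
import random

# Minimal sketch of the 'everything will fail' tenet: the client knows
# several app-server replicas (hypothetical hostnames) and falls over to
# the next one when a call errors out.
REPLICAS = ["app-1.internal", "app-2.internal", "app-3.internal"]

def flaky_call(host, request):
    # Stand-in for a real network call; simulates a host that is down
    # roughly 30% of the time.
    if random.random() < 0.3:
        raise ConnectionError(f"{host} is down")
    return f"{host} handled {request}"

def resilient_call(request, replicas=REPLICAS, call=flaky_call):
    # Shuffle so load spreads evenly, then try replicas until one answers.
    last_error = None
    for host in random.sample(replicas, len(replicas)):
        try:
            return call(host, request)
        except ConnectionError as err:
            last_error = err  # in real life: log it, emit a metric, move on
    raise RuntimeError("all replicas failed") from last_error
```

The same idea scales up: put a load balancer in front of each tier and let an autoscaler add or remove instances, and the application survives the routine instance failures a cloud assumes.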

Not every application is a fit for cloud.

As much as a cloud evangelist like myself dreams of the day when the cloud can handle all workloads, that is simply not the case today. Performance, application design, compliance requirements, SLAs and various other things can prevent a workload from becoming a cloud-native workload today. This is not to say that this will always be the case, but you must be realistic, especially in the large enterprise segment. You must fully understand the customer use cases for the cloud, understand their workloads and be prepared to offer a hybrid solution should the need arise. For example, many large enterprises began the march toward web-enabling (or portalizing) the majority of their apps several years ago. In this process, many took the time to decouple the web, application and database tiers of these applications. This is a huge step toward refactoring for cloud. Distributing the application components is the next step toward a bona fide cloud application pattern. It is often the case that the web tier can already be distributed behind load balancers. In some cases, even the app tier can be distributed. The database tier often becomes the one tier that is not cloud-ready and cannot readily be ported, especially if it is on Oracle RAC or Exadata. As a cloud services organization, you must understand that this is not only OK, but it is the natural progression of large enterprise applications in this space. It is absolutely acceptable to put the web and app tiers in the cloud, while your database tier remains on big iron. Developers will need time to assess their database options in the cloud. They may elect to use a distributed database like Cassandra, or they may choose to wait until database as a service (DBaaS) is a reliable option like RDS is in AWS.
Always look for areas where a compromise in the application pattern does not violate basic tenets of cloud and allows a customer to leverage existing investments until they are able to bring a laggard component fully into the cloud. Speaking of leveraging existing investments...
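A rough sketch of that hybrid placement, with hypothetical tier names and instance counts: the stateless web and app tiers scale out in the cloud, while the database tier stays on premises until a cloud option is proven out:

```python
# Hedged sketch of the hybrid placement described above (all names,
# counts and scaling bounds are hypothetical). The web and app tiers
# scale out in the cloud; the database tier stays on existing
# on-premises 'big iron' for now.
TOPOLOGY = {
    "web": {
        "location": "cloud",
        "instances": 4,              # stateless, behind a load balancer
        "scaling": "auto (2-10)",
    },
    "app": {
        "location": "cloud",
        "instances": 3,              # also stateless, scaled independently
        "scaling": "auto (2-8)",
    },
    "db": {
        "location": "on-premises",   # e.g. Oracle RAC, reached over a VPN
        "instances": 1,
        "scaling": "vertical only",
    },
}

def cloud_ready_tiers(topology):
    """Return the tiers that already run (and scale out) in the cloud."""
    return [tier for tier, spec in topology.items()
            if spec["location"] == "cloud"]
```

Here `cloud_ready_tiers(TOPOLOGY)` yields the web and app tiers, while the database tier remains the laggard component to bring over later.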

Enterprise storage is not a villain.

Sure, we would love all of our customers to adopt the newest cloud storage products, because we know that deep down, these products are excellent and scale out well. Ceph and Riak are awesome, but there is nothing wrong with a customer using a NetApp to back their image store while they wait for technologies to mature or for competitors in the space to shake out. This is what large enterprises do. Will there be a penalty in terms of performance and scale? Perhaps. For large enterprises, however, stability and reliability are key, and NetApp delivers in those areas. Also, there are operational considerations, such as finding and staffing the talent to run a large Ceph or Riak farm. As these technologies are in their infancy, the talent pool is not as large as it is for traditional big iron storage. It may take time for large enterprise customers to trust and then staff for the transition. Again, look for areas of compromise that do not violate the basic tenets of cloud design.

Large enterprise does not necessarily mean service provider.

This is a huge sticking point for most people in the cloud space. I often hear the argument that storage system A will not scale as far out as cloud storage product X. This is likely true, but what is the true scale that the enterprise needs? Often, their level of scale is well within the limits of a traditional storage vendor. If they are a service provider, that is a different discussion. Do they need the absolutely blazing performance of an all-SSD array, or can a traditional tiered SAN meet their needs? Do they understand how to use an object store? Do they have a use case for it? Do they need to use overlay networks or an SDN, or is the upper bound of their use case within the limits of a traditional network design? I could go on and on here, but the point I am trying to make is that just like instances should be right-sized for the application components they contain, a cloud should be right-sized for the customer’s intended use cases. Use case should always dictate the design and architecture of the underlying cloud platform, not the other way around. The ‘best' way to fail in a cloud services engagement is to design something that is rigid, expensive and not supportive of the customer’s intended use cases.

Do not be dogmatic.

Lastly, always remember that you are not here with a solution in search of a problem. There is not only one right way to solve a problem. You will inevitably encounter challenges that you never expected, so have an open mind. You are here to provide a solution to a given set of problems as defined by your customer. If you approach every cloud services engagement from that perspective, you will ultimately succeed and become a trusted adviser to your customer for a long time to come.