Wednesday, February 6, 2013

Why Cloud Apps are Perfect for Small Biz

If the thought of new software for running your field service business brings on pain, remorse, indigestion, or any other unpleasant malady, this blog post is for you.  Software that is well designed and delivered to you as an Internet application (or cloud based application) should not bring on any of these symptoms of distress.  Using a software application to help run your business should not require any real information technology expertise.  If you have PCs, Macs, laptops, tablets and/or smartphones connected to the Internet, you have everything you need.  Let me explain how cloud applications provide an enormous benefit for small service businesses.

Read more at my ServiceTrade blog.

Wednesday, January 9, 2013

Introducing ServiceTrade

ServiceTrade is a new software company dedicated to helping facility maintenance and repair companies deliver better service, grow faster, and be more profitable. Our aim is to provide a terrific mobile and cloud based application that improves field technician productivity, enhances customer collaboration, and streamlines the administrative processes associated with running a service business. The application will be easy to use and easy to buy - no software or servers to manage, no upfront fees, no sales pressure. Try it first and buy it only if it works for you.

We believe that maintenance businesses are a terrific market to serve because they are so underserved by current software offerings. Intuit’s QuickBooks is the only widely deployed software product in this space, and service management software is generally only used by the very largest firms. According to analyst firm Gartner, current annual spending on service management is in the neighborhood of $330M. For a business segment that represents approximately $900B in annual commerce according to the US Bureau of Labor Statistics, this amount of spending is a terrible indictment of the current slate of software targeted at this market. The current offerings are hard to try, hard to buy, and hard to use - all while hiding behind a sales person who insists it isn’t so. ServiceTrade is different.

We will post our pricing on the website. Free trials are not only free, they are available without getting a sales person’s permission. If you need help, we will help you get started so that you can experience our value. If the application helps you and your technicians, you can buy it and use it without any long term commitments. That is how great technology is adopted and consumed. We have enough confidence in our application that we don’t need to hide it behind a sales person. You can judge the quality of what we are doing all by yourself.

The application will not be perfect, however, and it will always be better next week when compared to this week. There is a never ending list of cool things that should be available to help technicians deliver more work, report on the job from the field, and engage the customer in meaningful collaboration. We have only scratched the surface with our initial product offering which is currently in beta testing. The initial release of the application in March will have capability geared toward fire protection companies, and functionality for more trades such as HVAC/mechanical, electrical, and plumbing will follow shortly.

We believe ServiceTrade can do something great by serving maintenance and repair businesses with a world class app that is easy to use and easy to buy. If you are interested, please follow us on Twitter or LinkedIn to keep up with our progress.

Wednesday, June 6, 2012

PlotWatt? Plot What? Plot Everything!

To improve, you must first measure. And measuring is getting easier with cloud computing.

The problem with a lot of software solutions is that the medicine is often more toxic than the disease - implementing and maintaining the system is so ugly that the benefit never emerges.  The best applications are those that provide value without much investment in setup and inputs.  Witness the SaaS revolution and the market value it has created.  Even the best SaaS applications require quite a bit of “out of band” data entry before handing back benefits - but the friction for technology set-up is minimal.

With the cost of hard goods rising every day, and the cost of computing, data collection, and analysis getting lower every day, more and more applications are going to appear that use the power of the ever cheaper computer to lower the start-up barriers to software value.  I think PlotWatt is a great example of one of those new applications.

I met with Luke Fishback, the CEO of PlotWatt, the other day, and we share a common mind on many topics regarding how to provide value with cloud computing.  The PlotWatt application uses the nearly free computing cycles and data storage at Amazon AWS to analyze time series data reported by smart power meters.  By watching the data and asking a few questions of the user, the application “learns” the profiles of the appliances that are consuming power.  With further watching and analysis, the user begins receiving intelligence on which appliances are consuming more power than they should.  The application essentially provides a Pareto analysis of opportunity for saving money on your power bills.
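The Pareto idea itself is easy to sketch.  Here is a hypothetical illustration - all appliance names, consumption figures, and the utility rate are invented, and real load disaggregation is far more involved than a lookup table:

```python
# Sketch of a Pareto analysis of savings opportunity: given estimated
# monthly kWh per appliance (as a disaggregation engine might produce)
# and an expected baseline, rank appliances by excess cost.

RATE_PER_KWH = 0.12  # assumed utility rate, $/kWh

# (appliance, estimated kWh/month, expected baseline kWh/month)
usage = [
    ("refrigerator", 180, 120),
    ("hvac",         950, 900),
    ("water heater", 420, 300),
    ("lighting",     210, 200),
]

# Excess consumption above baseline, converted to dollars
excess = [(name, max(0, actual - expected) * RATE_PER_KWH)
          for name, actual, expected in usage]
excess.sort(key=lambda item: item[1], reverse=True)

total = sum(cost for _, cost in excess)
running = 0.0
for name, cost in excess:
    running += cost
    print(f"{name:13s} ${cost:6.2f}  {100 * running / total:5.1f}% cumulative")
```

The sorted, cumulative view is the whole point: fix the top one or two items on the list and most of the waste is gone.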

So what is special about PlotWatt?  Well, historically the customer would have to wire up every appliance with sensors (or purchase expensive “smart” appliances and building control systems) to measure current and report the information to the application.  Instead, PlotWatt is using computing power and lots of data storage and analysis to eliminate the start up costs and maintenance associated with all of those sensors and proprietary building control systems.  The customer can save capital on appliance deployment (i.e. deploy dumb appliances) and still get the benefit of smart power consumption.  Cloud computing once again replaces capital with an extraordinarily affordable, variable expense approach.

Now, for the really interesting part.  Imagine correlating all of that power consumption time series data with equipment service record time series data.  Yep, you guessed right.  The application can tell you when to call the service guy.  Or it can call the service guy for you.  It can tell you if the service guy did the job right (i.e. better performance/lower power after service).  Or not.  Imagine moving beyond power to gas and water.  The plumber shows up to fix the leaky toilet before you knew it was leaking (and washing your profits down the drain).
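A toy version of the “did the service guy do the job right” check might simply compare average power draw before and after the visit.  Everything here - dates, readings, and the 5% threshold - is invented for illustration:

```python
# Compare a unit's average power draw in the readings before and after
# a service visit; a meaningful drop suggests the repair worked.

from datetime import date

service_date = date(2012, 5, 1)

# (reading date, kW) samples from a hypothetical smart meter
readings = [
    (date(2012, 4, 24), 4.1), (date(2012, 4, 27), 4.3),
    (date(2012, 4, 30), 4.2), (date(2012, 5, 3), 3.1),
    (date(2012, 5, 6), 3.0),  (date(2012, 5, 9), 3.2),
]

before = [kw for d, kw in readings if d < service_date]
after = [kw for d, kw in readings if d >= service_date]

avg_before = sum(before) / len(before)
avg_after = sum(after) / len(after)

# Call the job "done right" if consumption dropped by at least 5%
done_right = avg_after <= 0.95 * avg_before
print(f"before={avg_before:.2f} kW, after={avg_after:.2f} kW, fixed={done_right}")
```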

If you can see information and data without extraordinary costs (and aggravation) for collecting it, the potential for improvement in facility maintenance costs is amazing.  Stay tuned for lots more info on how the cost of plotting everything is going to go to zero.

Monday, April 30, 2012

The Big Picture - Service Trade Jobs

I just read a very intriguing article on the decline of manufacturing jobs in the US and the current calls by some in Washington to enact special incentives to slow the flow of these jobs to lower cost labor locales.  The author, Gary Becker, professor of economics at the University of Chicago and a Nobel Laureate, references the decline of agricultural workers in the US beginning in the early twentieth century as a mirror, and rightly points out that agricultural subsidies did nothing to stem the decline of workers.  Steve Jobs also pointedly told President Obama at a dinner meeting “Those (manufacturing)  jobs are not coming back.”  This line of thought, along with a poke from Ashley Halligan, a facilities management software analyst, on careers in skilled service trades and facility maintenance, got me thinking about why this field of service trade work is so compelling as a source of good wage jobs. Service trade jobs are not at risk for offshoring (although technology driven productivity might shrink the quantity, as in agriculture), and they are becoming more interesting each day as the information and analysis content of the work increases with the increasing ubiquity of the Internet, mobile devices, and “smart” equipment.

Service trade work - the installation, maintenance, repair, and upfit of facility equipment (lighting, HVAC, refrigeration, cooking equipment, electrical, plumbing, fire safety systems, elevators, etc) - is certainly blue collar work.  Technicians are often in a truck, on a roof, or in a basement or utility room wielding multi-meters, wire cutters, pipe wrenches, duct tape, pressure gauges, screwdrivers and sometimes (when all else fails) a hammer.  The diversity of the environment and the satisfaction that comes from solving problems is actually what makes these jobs gratifying.  And the information and analysis content of these jobs is about to go WAY up - which makes them even more interesting.  And these jobs are not subject to offshoring because you cannot source a refrigeration chiller repair in Chattanooga to a technician in China.

Why are these jobs going to become more interesting?  Because the information and analysis content is about to increase dramatically.  The cost of information collection, storage, transmission, and processing is getting lower every day.  Camera megapixels are cheaper.  Megabytes of storage are cheaper.  Gigahertz of CPU throughput are cheaper.  Megabits of transmission capacity are cheaper (on most networks, anyway).  Smartphones are cheaper.  And the price of steel, gasoline, and labor gets higher every day.  If facility maintenance is to get cheaper, the savings must come from the analysis of information.  Translation - do smarter maintenance through information technology.

How are we going to do smarter maintenance?  First, all technicians will (ultimately) have smartphones at their disposal - a remarkable breakthrough.  Why is this critical?  Smartphones provide a platform to communicate rich information (voice memos, photos, video) about what is happening on the roof, in the basement and the utility room to anyone else that can benefit from that information in supporting maintenance decisions.  The tech can show what she knows without being held accountable to a low bandwidth written report on a coffee stained piece of paper that may or may not make it back to the office in the course of a week’s work.  Communicating just became fun.

Second, and the other side of the smartphone coin, is that unstructured data (notes, photos, voice memos, videos) is going to surge to the forefront as the productivity driver for service trade work.  Historically, practically all maintenance decision support information (if it existed at all) was highly structured data - text or numerical fields in a relational database.  Structured data is necessary, but far from sufficient.  It is the language of accounting and control - not the language of collaboration and problem solving.  Collaboration and problem solving happen with stories.  As humans, we are wired to learn through stories.  The structured data must support the story, but the story is the breakthrough.  Techs will soon benefit from this capability to communicate and consume stories as part of their work cycle.  Show the customer a video of a piece of equipment that is exhibiting a pending failure mode with the tech narrating the reason for the imminent breakdown along with the steps for remediation, and you will get an informed customer ready to strike a check instead of a skeptic asking for two or more repair bids.

Third, all of these critical facility systems, from food processing to elevators to HVAC equipment to grease traps, are about to become connected - with the ability to transmit key information for maintenance decision support.  What is the temperature in the cooler relative to the amperage of the compressor motor relative to the ambient temperature?  What is the static pressure drop across the filters in the air handler?  What is the fuel consumption of the broiler relative to the temperature relative to the historic norm?  What is the amount of grease in the grease trap relative to the water level and how much sludge is at the bottom?  This information is going to break free of proprietary building control systems and be readily available for analysis by anyone due to the miracle of Internet protocols and low cost sensors.  What the service trade industry needs to decide is who is going to take the lead in configuring, collecting, and analyzing the information?  The last outcome we should seek is that some offshore locale becomes the harbor for the information and its analysis and US service trade labor is relegated to turning the wrench.  I doubt this will happen because the US leads the world in information innovation.
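Once that data breaks free of proprietary control systems, the decision support can start as something very simple.  A hypothetical sketch - sensor names, acceptable ranges, and readings are all invented:

```python
# Compare a few connected-sensor readings against historical norms and
# decide whether the situation warrants a technician dispatch.

norms = {
    "cooler_temp_f": (34, 40),      # acceptable range, degrees F
    "filter_dp_inwc": (0.1, 0.5),   # static pressure drop across filters
    "compressor_amps": (8, 14),
}

readings = {"cooler_temp_f": 43, "filter_dp_inwc": 0.7, "compressor_amps": 12}

# Collect every reading that falls outside its historical norm
out_of_range = [name for name, value in readings.items()
                if not (norms[name][0] <= value <= norms[name][1])]

if out_of_range:
    print("dispatch technician; out of range:", ", ".join(out_of_range))
```

Real analysis would look at the readings relative to each other and to ambient conditions, as the questions above suggest, but even threshold rules like this beat waiting for the equipment to fail.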

As labor costs rise, it is inevitable that the labor content of any good or service will be displaced either by lower cost labor or technological advances.  It doesn’t matter if it is agriculture, manufacturing, or service trade work.  In the case of service trade work, however, the labor content cannot be delivered from afar - the wrench must turn where the pipe is located.  The goal should be to deliver smarter labor (and less of it) based upon innovations in information that support better decisions.  Smart labor in a connected world sounds like a recipe for good jobs to me.

Thursday, March 8, 2012

What's a Picture Worth?

Apparently a whole bunch, if DunnWell's experience is an indicator.

DunnWell got started as a Kitchen Exhaust Cleaning company with a single differentiating factor – digital photos validating work done in the middle of the night inside of a duct system that is difficult to inspect. Now DunnWell is a very large and growing company specializing in a full range of Fire Protection services and serving over 12,000 locations nationwide. And digital photos on ServiceNET remain a key differentiating factor for the company.

We have nearly 3 million photos on ServiceNET for validated work or defects requiring repairs. These work records serve as comfort to a customer that a job was done right (or DunnWell, ha, ha) or that a pending repair is actually necessary or critical. Quotes that have photos attached documenting a problem are approved by the customer at more than twice the rate of those that do not include photo validation.

What would it be worth to your business to double your quote approval rate? How many more repair jobs could you deliver to your technicians? Answer those questions, and you will know what a picture is worth. Bet it is more than 1,000 words.
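For the curious, here is a quick back-of-the-envelope on those questions, with entirely invented numbers:

```python
# Doubling the approval rate on quoted repairs doubles the repair
# revenue those quotes produce; everything else is held constant.

quotes_per_month = 200
avg_repair_value = 850.0  # $ per approved repair, hypothetical

for approval_rate in (0.20, 0.40):  # without vs. with photo validation
    revenue = quotes_per_month * approval_rate * avg_repair_value
    print(f"approval {approval_rate:.0%}: ${revenue:,.0f}/month in repairs")
```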

Tuesday, February 7, 2012

What is Service Trade Technology?

Since I have claimed time and again that DunnWell is a service trade technology company, I suppose it is time to define the category of service trade technology. It is also important to outline why I, or anyone else for that matter, should care about service trade technology. Why is it a relevant area of endeavor?

Let’s begin with the relevance part. Service trade work concerns the maintenance, repair, and renovation/up-fit of equipment and facilities outside the context of new construction. It is a huge part of the economy. By my estimate (it is nearly impossible to find statistics) and with the aid of the US Department of Labor, Bureau of Labor Statistics Occupational Outlook Handbook, the service trade market in the US represents over $120 billion in annual commerce with over 1,800,000 skilled workers active in these areas. That’s a lot of commerce and a lot of jobs. The productivity of this group matters. One of the best routes to productivity, in my opinion, is Service Trade Technology.

So what is Service Trade Technology? The best way that I can think of to define it is to lay out a short statement followed by a longer series of examples to provide context and clarity to the statement. Sort of like the notion of a law (the short statement, in theory) followed by a set of judicial case precedents that further interpret and refine the law. So, here is the short statement:

Service trade technology is information technology that improves the productivity and quality of service trade work through better communication, collaboration, planning, and decision support for both the customer and the vendor.

Recognizing the simplicity and broadness of this statement, let’s work on refining it with some context. These are information technologies that would be within the definition:

GPS or Location Based Job/Work Records – Knowing when the technician arrives (or planning the arrival) helps maintain integrity in the billing transaction.

Appointment Confirmation and Status Systems – As above, being able to plan for arrival minimizes dead end runs for the tech. And the customer can be prepared, which also reduces wasteful cycles.

Digital Photography Applied to Validation of Deficiencies or Repairs – Establishing trust through a visual record eliminates the expense of third party validation (second opinions, quality inspectors, etc). Note to vendors – it also drives up quote approvals.

Equipment Monitoring to Optimize Technician Dispatch – Knowing the customer is wasting money on poorly performing equipment (think energy consumption) eases the pain of unplanned technician expense for an ad hoc maintenance dispatch. Or consider technology such as that provided by SepSensor for optimizing grease trap maintenance schedules.

Fleet or Tech Tracking to Manage Dispatch – Minimizing unbillable (or even billable) drive time makes the system more productive overall. Lower overhead for the vendor and more value for the customer.

Equipment/Asset Management to Aid in Repair Decision Support – A complete record of service for a piece of equipment across vendors and trades often lowers the expense of tech diagnostics in the repair cycle.
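Several of these capabilities reduce to small amounts of data plus simple math.  As a hypothetical sketch of the GPS job record idea above, an arrival clock-in might be accepted only if it happens near the job site - the coordinates, tolerance, and record fields here are all invented:

```python
# A GPS-stamped clock-in: billing integrity becomes a simple distance
# check between the recorded position and the job site.

import math

def distance_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance via the haversine formula."""
    r = 3959  # Earth radius, miles
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

job_site = (35.7796, -78.6382)  # hypothetical customer location
clock_in = {"tech": "T-104", "time": "2012-02-07T09:02:00",
            "lat": 35.7801, "lon": -78.6390}

# Accept the clock-in only if it happened within a quarter mile of the site
on_site = distance_miles(clock_in["lat"], clock_in["lon"], *job_site) < 0.25
print("clock-in accepted:", on_site)
```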

Likewise, these are information technologies that would be outside the definition:

Email – While email is an important capability, it is too broad and too ad hoc to consider outside of the context of a specific application that generates emails to support functions like those above (i.e. notice of a pending job with a confirmation button in the email).

SmartPhones – Like email, outside of the context of an application that specifically drives tech productivity (GPS job clock, or dispatch/route planning for jobs), the smart phone cannot alone be considered as service trade technology.

MultiMeters or Similar Trade Specific Equipment – While certain equipment specific monitoring capabilities could certainly be considered service trade technology (consider the SepSensor example above for grease trap monitoring), trade specific tools used by the technician in the context of equipment repair will not generally be considered as service trade technology.

Historically, many of the applications that have been delivered in this space have focused heavily on helping the facility manager optimize the “soft costs” associated with receiving service trade work. Soft costs are primarily:

Quality Risk – what does it cost if the maintenance or repair is poorly executed?

Administrative Cost – the human capital costs of aggravation associated with inefficient processes for billing, job planning, and service logistics.

Service Management Cost – the human capital costs at the customer for providing expert oversight to assure quality in the workmanship delivered by the service trade vendor.

While I believe that these soft costs are good areas for improvement, overall I think they represent a small piece of the total costs. In contrast to these, hard costs represent what is paid by the customer to the service trade vendor, and these expenses generally break down as:

Labor – the billable time for the technician(s).

Parts – the billable amount for parts consumed in the job.

Overhead – the burden rate for the vendor associated with idle tech cycles, fuel, depreciation of equipment, support staff, facility rent, etc.

Profit – the rate of return above all other costs absorbed by the vendor.

Aiming technology at these hard costs is much more than just working the vendor over with an annual RFP that focuses on labor rates. See my post on fighting inflation. Inflation will ultimately win and quality will lose.

Instead, service trade technology offers the hope of a win-win outcome on hard costs. Since the primary component of these hard costs is labor, a focus on optimizing technician productivity is critical. It is doubly critical because the biggest component of vendor overhead is likely idle technician cycles that are not applied to billable jobs.
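A back-of-the-envelope illustration of why those idle cycles matter so much - rates and utilization figures are invented - is that the fully loaded cost of a billable hour climbs quickly as utilization falls:

```python
# Effective cost per billable technician hour at different utilization
# levels: the same paid week gets spread over fewer billable hours.

hourly_cost = 35.0  # wage plus burden per paid technician hour, $
paid_hours = 40     # hours paid per week

for utilization in (0.85, 0.65, 0.50):
    billable = paid_hours * utilization
    cost_per_billable_hour = hourly_cost * paid_hours / billable
    print(f"utilization {utilization:.0%}: "
          f"${cost_per_billable_hour:.2f} per billable hour")
```

Technology that converts even a few idle hours per week into billable ones lowers the vendor's cost structure for every customer at once.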

The vendor that embraces technology for optimizing technician productivity and quality across its customer base while passing along part of the fruit of that improvement to each customer will capture the biggest customer base for the longest period of time. In my opinion, that is the name of the game in service trade work – cultivate a long standing customer base and harvest it thoughtfully and forever instead of pillaging (or worse, being inept) and getting fired from the account. It’s too expensive to cultivate customers to be thrown out for over harvesting or for lack of thoughtfulness relative to a competitor who works harder to help the customer optimize their expense budget.

So, what do you think about service trade technology? Have I struck a chord? Or am I missing the boat – or, to keep the agrarian metaphor, the tractor?

Monday, January 16, 2012

Fighting Inflation is a Loser's Bet

While the performance of long term Treasury notes in 2011 seems to indicate not even a whiff of inflation, I will argue that fighting inflation for long term and sustainable savings in service trade delivery is a loser. The prevailing market rate for skilled labor ($/hr), materials ($/lb), and related delivery inputs like fuel ($/gal) will certainly rise. Meanwhile the costs of collecting, transmitting, processing, and analyzing information that might help you lower your consumption quantity for these items is continually in decline. Information is the best bet for lowering your long term service trade costs.

Read more on my blog post at DunnWell.

Wednesday, September 28, 2011

A Whale(y) of a Software Problem

Two days ago I read this article in ComputerWorld about Whaley Food Service suing Epicor for a botched ERP software implementation. The budget for the project was about $200K, and the actual cost for a failed implementation exceeded $1M. WOW!

What was Whaley thinking? Personally, I believe there are VERY FEW companies that should consider an "on-site," IT-managed application solution these days. I think this belief is especially pertinent for smaller, service trade oriented companies.

Whaley is in the service trade space that DunnWell also serves. Whaley services food equipment for commercial kitchens and DunnWell services the fire suppression systems that "cover" the hot side food equipment. DunnWell uses NetSuite for our accounting and (minimal) inventory management needs. We use ServiceNET, an application we developed, for managing service delivery (the core of our business). NetSuite is a SaaS application, and we pretty much use a vanilla implementation with minimal customizations. ServiceNET is very much optimized for our service delivery business, and we control our destiny through our development investment.

Epicor offered Whaley an on-site, IT-managed solution that the article claims required many customizations to meet Whaley's requirements. This is a recipe for disaster - and the cooks delivered the disaster in fine fashion. To be fair, the project started in 2006, and the SaaS model had not yet completely vanquished on-site software implementations to the IT junk heap. But today's lesson for service trade companies is important - do not accept large scale implementation and management risks for software solutions. The days of expensive, high risk software projects are over.

If you are in the service trade business, you should certainly be looking for software solutions to remain competitive in a world where delivering information to your customers, your workers, and your trade partners is probably as important as the service delivery workcraft itself. Both must be very good if you want to compete. But the system you select should just work, you should not manage it, and it should be online and integrated with a social-mobile world. If you haven't found it yet, stay tuned to this channel. We will help you get there.

Tuesday, September 27, 2011

I'm Back

Anyone paying attention no doubt noticed that I have not been an active blogger for the past 18 months or so. I have been working on a new venture outside the infrastructure technology space. However, now is the time to begin gradually unveiling my most recent "change the world" opportunity. And yes, it does involve technology and cloud computing in a very significant way, albeit for a different audience.

I joined DunnWell last April because I believe the opportunity to transform the service trade space via technology is huge - and the time is now. Service trades are all about the people in trucks who show up at your facility - restaurant, office building, house, depot, whatever - and maintain or fix the stuff that makes your world go around - HVAC, refrigeration, cooking equipment, elevators, fire protection systems, whatever. Regarding DunnWell, I spent some quality cycles with the CTO, Brian Smithwick, and he passed all of my criteria for a technology partner in crime - smart, cheap, Linux fan, redneck. Anytime I deviate from these criteria it doesn't work out well for me. I feel good about this one.

With my new opportunity, I am going to pivot this blog slightly away from an emphasis on cloud infrastructure and towards a cloud architecture for service trade transformation. As I predicted way back in 2006, the infrastructure thing has come to pass and Amazon in particular has led the charge. We are big users of Amazon infrastructure, and we could not be happier. I am also pleased that many of the brilliant, former rPath engineers and business leaders now help lead the charge for Amazon Web Services. Bravo Brian, Cristian, Matt, and Nathan.

Back to DunnWell and why this blog is relevant. The service trade industry has an interesting dichotomy. While service professionals have been early adopters of gadgets like GPS, smartphones, and tablets, the software that runs their business is a joke in most cases. Well, that is going to change. And when it changes, the industry will skip over several generations of computing architectures including centralized computing, client-server, and even multi-tenant SaaS. These guys will be social-mobile from day one because it is what they already get and expect with Amazon, Facebook, iPhone, and Android. Their consumer gadget Internet experiences will inform their business software expectations. It won't happen overnight, but the approach and the results will be fascinating. Stay tuned 'cause it's about to get good.

Friday, January 15, 2010

VMware Spits Into the Wind - Buys Zimbra

What's next? Tugging on Superman's Cape? Pulling the mask off the ol' Lone Ranger? In my opinion, the future of email and collaboration belongs to Google, with Microsoft playing very strong defense shifting folks directly to Azure (which doesn't include Exchange today, but I will bet a nickel it is in the works). If the acquisition of Zimbra is an attempt by VMware to arm the service providers with a similar capability, I sincerely hope VMware is not expecting to make any money along the way.

Don't get me wrong, as I have long been a huge fan of VMware and Zimbra. The former President and CTO of Zimbra, Scott Dietzen, is a friend and sits on the board of rPath, the company that I founded. Zimbra was an rPath customer. rPath was a Zimbra customer. But I think the notion of folks running email and basic collaboration functions at small scale (i.e. any scale that doesn't match Google and Microsoft hosted solutions in the future) is a lost cause – or at least a VERY low margin cause. And I don't think VMware should get into that business (i.e. the hosted solutions business) as it is a completely different beast than selling infrastructure software licenses (ask Microsoft, they haven't got it right but they are willing to spend BILLIONS and still be wrong).

It just makes no sense for anyone in the future to take on the burden of running an enterprise collaboration service (i.e. Exchange, Zimbra) with the software model. I even believe that RIM is going to have problems sustaining the Blackberry business as I witness the integration of Gmail with my iPhone. As I survey this market, I find more and more companies quietly moving their email/calendar/messaging services to Google. If VMware thinks this move is about Microsoft, I think they are wrong. I think it is about Google, and I would not be standing in line to pull the cape or remove the mask after spitting into the wind. VMware is no south Alabama boy named Slim.

Thursday, November 19, 2009

No More Cloud Servers - Think Racks and Containers

I just read a very nice post on the profile for a cloud server by Ernest de Leon, the Silicon Whisperer. Here is the opening paragraph:

"With the massive push toward cloud computing in the enterprise, there are some considerations that hardware vendors will have to come to terms with in the long run. Unlike the old infrastructure model with hardware bearing the brunt of fault tolerance, the new infrastructure model places all fault tolerance concerns within the software layer itself. I won’t say that this is a new concept as Google has been doing exactly this for a very long time (in IT time at least.) This accomplishes many things, but two particular benefits are that load balancing can now be more intelligent overall, and hardware can be reduced to the absolute most commodity parts available to cut cost."

I'm on board in a big way with this message until Ernest starts talking about the steps that are taken at failure:

"When there is a failure of a component or an entire cloud server, the Cloud Software Layer can notify system administrators. Replacement is as simple as unplugging the bad server and plugging in a new one. The server will auto provision itself into the resource pool and it’s ready to go. Management and maintenance are simplified greatly."

And I think to myself that there is no way we can operate at cloud scale if we continue to think about racking and plugging servers. If we really want to lower the cost of operational management, which is a big part of the appeal of cloud, we have to start thinking about the innovations that should happen throughout the supply chain.

Commodity parts are great, but I want commodity assembly, shipping, and handling costs as well. The innovations in cloud hardware will be packaging and supply chain innovations. I want to buy a rack of pre-networked systems with a simple interface for hooking up power and network and good mobility associated with the rack itself (i.e. roll it into place, lock it down, roll it out at end of life). Maybe I even want to buy a container with similar properties. And when a system fails, it is powered down remotely and no one even thinks about trying to find it in the rack to replace it. It is dead in the rack/container until the container is unplugged and removed from the datacenter and sent back to the supplier for refurb and salvage.

Let's use "cloud" as the excuse to get crazy for efficiency around datacenter operations. I agree with Ernest that the craziness for efficiency with netbooks has led to a great outcome, but let's think crazy at real operating scale. No more hands on servers. No more endless cardboard, tape, staples, and styrofoam packaging. No more lugging a server under each arm through the datacenter and tripping and dropping the servers and falling into a rack and disconnecting half the systems from the network. My cloud server is a rack or a container that eliminates all this junk.

Monday, November 9, 2009

Virtualization is not Cloud

After spending the early part of last week at the Cloud Computing Expo, which is now co-located with the Virtualization Conference and Expo, I feel compelled to proclaim that virtualization is not cloud. Nor does virtualization qualify for the moniker of IaaS. If virtualization were cloud/IaaS, there would not be so much industry hubbub surrounding Amazon's EC2 offering. Nor would Amazon be able to grow the EC2 service so quickly, because the market would be full of competitors offering the same thing. Cloud/IaaS goes beyond virtualization by providing extra services for dynamically allocating infrastructure resources to match the peaks and valleys of application demand.

Virtualization is certainly a valuable first step in the move to cloud/IaaS, but it only provides a static re-configuration of workloads to consume fewer host resources. After going P2V, you have basically re-mapped your static physical configuration onto a virtualization layer – isolating applications inside VMs for stability while stacking multiple VMs on a single machine for higher utilization. Instead of physical machines idling away on the network, you now have virtual machines idling away, but on fewer hosts.
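The consolidation arithmetic behind that picture is simple. Here is a toy sketch (all numbers invented) of why P2V shrinks the hardware footprint without making anything about the workloads dynamic:

```python
import math

def hosts_after_p2v(physical_servers, avg_utilization, target_utilization):
    """Hosts needed after packing the same static workloads into VMs.

    Assumes CPU-bound workloads that pack perfectly across hosts -- a
    simplification, but it shows the one-time nature of the savings.
    """
    total_demand = physical_servers * avg_utilization
    return math.ceil(total_demand / target_utilization)

# 100 physical boxes idling at 10% fit onto 15 hosts run at 70%:
# fewer machines, but the same workloads still idling away on them.
print(hosts_after_p2v(100, 0.10, 0.70))  # -> 15
```

The gain is real but static: nothing in this calculation responds to demand after the migration is done, which is exactly the gap the requirements below are meant to close.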

To transform virtual infrastructure to IaaS, you need, at a minimum, the following:

Self Service API – if an application owner needs to call a system owner to gain access to infrastructure resources, you do not have a cloud. Upon presentation of qualified, valid credentials, the infrastructure should give up the resources to the user.

Resource Metering and Accountability – if the infrastructure cannot measure the resources consumed by a set of application services belonging to a particular user, and if it cannot hold that user accountable for some form of payment (whether currency exchange or internal charge-back), you do not have a cloud. When users are charged based upon the value consumed, they will behave in a manner that more closely aligns consumption with actual demand. They will only be wasteful when wasting system resources is the only way to avoid wasting time, which leads us to our next cloud attribute:

Application Image Management – if there is no mechanism for developing and configuring the application offline and then uploading a ready-to-run image onto the infrastructure at the moment demand arises, you do not have an IaaS cloud. Loading a standard OS template that does not reflect the application-plus-OS configuration used for testing and development prevents rapid application scaling, because configuration can cost the owner hours, days, weeks, or months of runtime cycles. Too much latency associated with setup results in over-allocation of resources in order to be responsive to demand (i.e. no one takes down running images even with slack demand because getting the capacity back is too slow). See my post on Single Minute Exchange of Applications.

Network Policy Enforcement – if an application owner cannot allow or deny network access to their virtual machine images without involving a network administrator or being constrained to a particular subset of infrastructure systems, you do not have a cloud. This requirement is related to the Self Service API, and it also speaks to the requirement for low latency in application setup in order to be dynamic in meeting application demand fluctuations. True clouds provide unrestricted multi-tenancy (and therefore higher utilization) without compromising compliance policies that mandate network isolation for certain types of workloads.
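To make the first two requirements concrete, here is a toy in-memory sketch of self-service allocation plus metered chargeback. None of this is any real provider's API; the class, image names, and rates are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Allocation:
    owner: str
    image: str           # a ready-to-run application image, not a bare OS template
    started: float       # seconds since epoch, passed in for determinism
    stopped: float = None

class MiniCloud:
    """Toy model of the first two requirements above: self-service
    allocation upon presentation of credentials, and metered chargeback."""

    def __init__(self, hourly_rate):
        self.hourly_rate = hourly_rate
        self.allocations = []

    def launch(self, credentials, image, now):
        # Self-service: valid credentials are all that is required --
        # no phone call to a system owner.
        if not credentials:
            raise PermissionError("no credentials presented")
        alloc = Allocation(owner=credentials, image=image, started=now)
        self.allocations.append(alloc)
        return alloc

    def release(self, alloc, now):
        alloc.stopped = now

    def bill(self, owner, now):
        # Metering and accountability: charge for the hours actually held.
        hours = sum(
            ((a.stopped if a.stopped is not None else now) - a.started) / 3600
            for a in self.allocations if a.owner == owner)
        return hours * self.hourly_rate

cloud = MiniCloud(hourly_rate=0.10)
vm = cloud.launch("alice", "billing-app-v7.img", now=0)
cloud.release(vm, now=7200)           # held for two hours
print(cloud.bill("alice", now=7200))  # two hours at $0.10/hr -> 0.2
```

Because the meter runs only while the allocation is held, the owner has a direct incentive to release capacity when demand slackens, which is exactly the behavior that prevents sprawl.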

There may be other requirements that I have missed, but in my mind, this is the minimum set. Any lesser implementation leads to the poor outcomes that are currently the bane of IT existence – VM sprawl and rogue cloud deployments.

VM sprawl results from lack of accountability for resource consumption (or over-consumption), and it also results from inefficient setup and configuration capabilities. If I don't have to pay for over-consumption, or if I cannot respond quickly to demand, I am not going to bother with giving back resources for others to use.

Rogue cloud deployments, unsanctioned by IT, to Amazon or other service providers result from lack of self service or high latency in requests for network or system resource configuration. Getting infrastructure administrators involved in every resource transaction discourages the type of dynamic changes that should occur based upon the fluctuations in application demand. People give up on the internal process because Amazon has figured it out.

True clouds go beyond virtualization by providing the necessary services to transform static virtual infrastructure to dynamic cloud capacity. These extra services eliminate the friction and latency between the demand signal and the supply response. The result is a much more efficient market for matching application demand with infrastructure supply.

Thursday, October 22, 2009

EC2 Value Shines at Amazon

On the heels of Randy Bias' excellent analysis of the market adoption of EC2 (well, three weeks later, but I only read it this week), I thought I would publish the findings of the survey that we conducted on AWS value. While we do not have a huge sample size (24 responses), I do believe the answers provide some insight into the terrific uptake that Randy describes.

The large majority of respondents (92%) identified themselves as providers of some type of technology application as opposed to enterprise users. I think this mostly reflects these folks' friendliness to survey requests versus enterprise users – not a lack of enterprise consumption of AWS. Those in the market with services are more likely to answer surveys due to their empathy for the pursuit of market information. Enterprise users typically have little empathy for the pursuit of information that makes them easier targets for marketers such as myself.

A majority of respondents (61%) identified themselves as senior management in their organizations, with 9% claiming middle management and the remaining 30% “breaking their back for the man.” Interestingly, the distribution of the AWS experience curve was not as skewed to the near term as I would have expected. Typically, for a hot/new service, you would expect the majority of the users to be early in their experiences. I consider early to be 6 months to a year. For our respondents, 50% had been using the service for more than a year, with the remaining 50% split 10/20/20 at the 3 month, 6 month, and 12 month experience intervals. I would have anticipated a curve more skewed to 3 to 6 months.

The most popular service of the five we surveyed is S3 (92%), with EC2 (88%) just behind. EBS trailed at 58%, with SimpleDB and SQS bringing up the rear at 17% and 21%, respectively. Since every EC2 user must use S3, I find the popularity of EC2 to be the most interesting, but not surprising, finding in the survey. It supports Randy's analysis, and it reflects the market generally. The amount of compute cycles sold in the form of hardware on a dollar basis far outstrips the amount of storage sold on a dollar basis. Also, while there are several storage offerings in the market comparable to S3, few providers have cracked the code on providing compute cycles via a web API with hourly granularity and massive scale in the manner of EC2. EC2 is delivering the big value to Amazon within the AWS portfolio.

To summarize the rest of the responses: scalability was the most important competitive feature, followed closely by low cost and pay-as-you-go pricing. Content delivery applications were the most popular workload, with no clear-cut number two coming close. Users are spending between $100 and $1000 per month, and two-thirds (67%) plan to add more workloads in the future. Many would like to see an API-compatible AWS capability running on other networks, ranging from “my laptop” to their private network to service providers that might be competitive with Amazon. Check out the details for yourself.

My bottom line on this and other indicators in the market is that Amazon's approach to IaaS is effectively the modern datacenter architecture. The market growth of cloud services for compute, storage, messaging, database, etc. will largely reflect the current market for those capabilities as represented by respective sales of existing hardware and licensed solutions. But the availability of these lower friction “virtual” versions of hardware and licensed software will dramatically increase the total market for technology by eliminating the hidden, but real, costs of procurement inefficiencies. When services similar to EC2 run on every corporate and service provider network, we will have more computing and more value from computing. And the world will be a better place.

Tuesday, September 8, 2009

Cloud Attraction Survey

I labeled this blog The Cloud Option because of my belief that the best reason to build an application with a cloud architecture is to manage application demand risk. The cloud option allows you to align application demand with infrastructure supply to protect against a demand forecast that is certain to be inaccurate. While I believe my value hypothesis will prove correct in the long term, I suspect far more basic near-term attractions are driving cloud demand today.

With that in mind, I have built a short (12 question) survey that I offer to AWS users (as I believe AWS represents by far the most successful, highest-demand implementation of cloud computing to date). If you are a current AWS consumer, please take 2 minutes to fill out the survey. I'll post the results on the blog in a week or two. Thanks for helping me assign value to the cloud option!

Wednesday, September 2, 2009

Latest Gmail Outage Again Fuels Cloud Computing Luddites

By Steve Bobrowski

What's a Luddite? I remember looking this up once. Webster's defines one, roughly, as a person opposed to technological change, after the 19th-century English workmen who destroyed labor-saving machinery in protest.

Change scares people, making them feel uncomfortable and uneasy for many reasons. But in the world of technology, we must embrace change, for it is inevitable and fast-paced. And IT change usually happens for the better, not the worse.

In this context, my news wires on cloud computing today have been flooded with countless stories written by a bunch of, well, Luddites insisting that the latest Gmail outage is proof that system outages threaten the cloud computing paradigm shift. Really?

The truth is that system outages are a fact of life. We all hear about the public system outages, but we rarely hear about those that occur behind the firewall. Rather than trouncing cloud services when they go down, shouldn't the focus be on how long it took them to return to service and then comparing the impact of this event to similar on-premise outages?

For example, when was the last time that your organization's email system went down? Did your IT department have the training, staff, and resources to quickly identify the problem and then fix it? My personal recollection of an internal email system outage: lots of squabbles and finger-pointing among the parties involved, all leading to two days without email and some lost messages. Never mind the ripple effects of lost time and money while our IT staff needed to suspend work on other internal projects.

Google's team of highly specialized administrators took less than two hours to fix things, and I didn't lose any of my mail! In my experience, that's outstanding service. Furthermore, fixing the outage did not require work from any of my company's in-house resources, which were free to continue being productive on internal projects that lead to revenue generation.
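The back-of-the-envelope arithmetic is stark. Here is a sketch with entirely invented staffing and rate figures:

```python
def outage_cost(hours_down, staff, loaded_hourly_rate, productivity_hit):
    """Crude estimate of the drag an email outage puts on a workforce."""
    return hours_down * staff * loaded_hourly_rate * productivity_hit

# Two lost working days in-house versus a two-hour hosted outage,
# for the same hypothetical 200-person firm.
internal = outage_cost(hours_down=16, staff=200, loaded_hourly_rate=60,
                       productivity_hit=0.25)
hosted = outage_cost(hours_down=2, staff=200, loaded_hourly_rate=60,
                     productivity_hit=0.25)
print(internal, hosted)  # 48000.0 vs 6000.0
```

The absolute numbers are made up; the point is the ratio, which is driven entirely by time-to-recovery.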

Should your organization worry about relying on a cloud application or cloud platform? The answer is simple—applications should reside where it makes the most sense. In some cases, cloud wins, in others, data center wins. But the trend is undeniable that more and more enterprises are outsourcing common business applications such as email and CRM to the cloud because it provides their workers with a better service and more time to work on core business functions.

In summary, don't let the Luddites scare you—the inevitable world of utility-based computing will improve the enterprise's standard of living in many cases, not the opposite, as they would have you believe.

Thursday, August 27, 2009

Amazon Aims for Enterprises - Pooh-Poohs Internal Clouds

Amazon's announcement yesterday regarding an enterprise feature for linking existing datacenter operations to Amazon's AWS via a Virtual Private Network did not surprise me. It is an obvious extension of their value proposition, and folks had already been accomplishing a similar capability with work-arounds that were simply a bit more cumbersome than Amazon's integrated approach. The more surprising piece of news, in my opinion, is the subtle ratcheting up of the rhetoric by Amazon regarding their disdain for the notion of “internal” cloud. Werner Vogels' blog post explaining the rationale for the new VPN feature is a case in point. Here are a few tasty excerpts:

Private Cloud is not the Cloud

These CIOs know that what is sometimes dubbed "private [internal] cloud" does not meet their goal as it does not give them the benefits of the cloud: true elasticity and capex elimination. Virtualization and increased automation may give them some improvements in utilization, but they would still be holding the capital, and the operational cost would still be significantly higher. . . .

What are called private [internal] clouds have little of these benefits and as such, I don't think of them as true clouds. . .

[Cloud benefits are]

* Eliminates Cost. The cloud changes capital expense to variable expense and lowers operating costs. The utility-based pricing model of the cloud combined with its on-demand access to resources eliminates the need for capital investments in IT infrastructure. And because resources can be released when no longer needed, effective utilization rises dramatically and our customers see a significant reduction in operational costs.

* Is Elastic. The ready access to vast cloud resources eliminates the need for complex procurement cycles, improving the time-to-market for its users. Many organizations have deployment cycles that are counted in weeks or months, while cloud resources such as Amazon EC2 only take minutes to deploy. The scalability of the cloud no longer forces designers and architects to think in resource-constrained ways and they can now pursue opportunities without having to worry how to grow their infrastructure if their product becomes successful.

* Removes Undifferentiated "Heavy Lifting." The cloud lets its users focus on delivering differentiating business value instead of wasting valuable resources on the undifferentiated heavy lifting that makes up most of IT infrastructure. Over time Amazon has invested over $2B in developing technologies that could deliver security, reliability and performance at tremendous scale and at low cost. Our teams have created a culture of operational excellence that powers some of the world's largest distributed systems. All of this expertise is instantly available to customers through the AWS services.

Elasticity is one of the fundamental properties of the cloud that drives many of its benefits. While virtualization has tremendous benefits to the enterprise, certainly as an important tool in server consolidation, it by itself is not sufficient to give the benefits of the cloud. To achieve true cloud-like elasticity in a private cloud, such that you can rapidly scale up and down in your own datacenter, will require you to allocate significant hardware capacity. While to your internal customers it may appear that they have increased efficiency, at the company level you still own all the capital expense of the IT infrastructure. Without the diversity and heterogeneity of the large number of AWS cloud customers to drive a high utilization level, it can never be a cost-effective solution.

OK. Let's examine Werner's sales proposition without the pressure to sell anything (as I am not currently trying to sell anyone anything). Clearly, Amazon is now attacking the vendors such as VMware that seem intent on attacking them by proclaiming that Amazon cannot give you enterprise features. Not only is Amazon delivering features targeted at the enterprise, but they are also scaling up the war of words by pooh-poohing the value proposition of these classic vendors – namely the notion of an internal cloud. Werner makes two assertions in dissing internal clouds:

First, he asserts that an internal cloud is not elastic. Well, why not? Just because your IT department has historically been labeled the NO department doesn't mean that it always must be that way. Indeed, the very pressure of Amazon providing the terrific services they provide without the mind-numbing procurement and deployment friction of your IT department is going to lead to massive changes on the part of IT. They are going to virtualize, provide self provisioning tools, and more closely align business application chargebacks to actual application usage. If the application owners are thoughtful about their architecture, they will be able to scale up and scale back based upon the realities of demand, and their IT transfer costs will reflect their thoughtfulness. Other business units will benefit from the release of resources, and server hoarding will be a thing of the past. All this is not to say that an IT department should “own” every bit of compute capacity they use. They don't. They won't. And there will probably be an increasing shift toward owning less.

But Werner claims that ownership is generally a bad thing in his second assertion that capex is bad and opex is good. Werner writes that cloud eliminates costs by eliminating capital spending. Well, it might - depending on the scenario. But his insinuation that capex is bad and opex is good is silliness. They are simply different, and the measurement that any enterprise must take is one relating to risk of demand and cost of capital. For a capital constrained startup with high risk associated with application demand, laying out precious capital for a high demand scenario in the face of potential demand failure makes no sense at all. However, for a cash rich bank with years of operating history relative to the transaction processing needs associated with servicing customer accounts, transferring this burden from capital expense to operating expense is equally senseless. Paying a premium for Amazon's gross profit margin when demand is fairly deterministic and your cost of capital is low is certainly a losing proposition.
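A rough sketch of the bank scenario, with invented figures, shows why trading capex for a pay-as-you-go premium is a losing proposition when demand is flat and capital is cheap:

```python
def owned_annual_cost(units, capex_per_unit, cost_of_capital, life_years):
    """Annualized cost of owned capacity: straight-line capital recovery
    plus a carrying charge at the firm's cost of capital."""
    return units * capex_per_unit * (1 / life_years + cost_of_capital)

def rented_annual_cost(units, hourly_rate):
    """Renting the same capacity by the hour, around the clock, all year."""
    return units * hourly_rate * 24 * 365

# A cash-rich bank with a flat, well-understood 100-server workload
# and cheap capital (6%) versus renting equivalent servers hourly.
# Every figure here is hypothetical.
owned = owned_annual_cost(100, capex_per_unit=3000,
                          cost_of_capital=0.06, life_years=3)
rented = rented_annual_cost(100, hourly_rate=0.40)
print(round(owned), round(rented))  # roughly 118000 vs 350400
```

With deterministic demand there is no option value to harvest, so the rental premium is pure cost. Flip the demand profile to something volatile and the comparison reverses, which is the whole argument of this blog.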

The challenge and the opportunity of cloud for any enterprise is moving applications to an architecture that can exercise the cloud option for managing demand risk while simultaneously striking the right balance between capex and opex relative to the cost of capital. I find it funny that Amazon's new VPN feature is designed to make this opportunity a reality, while the blog post of their CTO announcing the feature proclaims that internal operations are too costly. Maybe they are viewing the VPN as a temporary bridge that will be burned when capex to opex nirvana is attained. Personally, I see it as the first of many permanent linkages that will be built to exercise the cloud option for managing demand risk. Lower costs associated with a proper portfolio balance of capex and opex is just icing on the cake.

Monday, August 24, 2009

VMware Springs Big for SpringSource

In a blog post back in May, I described why I believed a SpringSource and Hyperic combination was a good thing. In the new world of virtualized infrastructure and cloud computing, the application delivery and management approach is going to be lightweight and lean. At the time, however, I never imagined lightweight and lean would be worth $420M to VMware. While I have no doubt that a lightweight and agile approach to application delivery and management is going to replace the outdated heavy approach of J2EE and EJB, I am not quite convinced that VMware is getting in this deal what they want us to believe they are getting – general purpose operating system irrelevance.

VMware has done an incredible job abstracting the hardware away from the general purpose operating system. Now they have moved to the other end of the stack in an attempt to abstract the application away from the operating system. If the operating system is not responsible for hardware support and it is likewise not responsible for application support, then it is irrelevant, right? It is a good theory, but it is not quite true.

While the majority of application code will certainly be written in languages that can be supported by SpringSource (Java, Grails), there will remain lots and lots of application utilities and services that are provided by various programs that are not, and will never be, written in Java or the related languages supported by SpringSource. All of these various programs will still need to be assembled into the system images that represent a working application. And while I absolutely believe the general purpose operating system should die an ugly death in the face of virtualized infrastructure and cloud computing, I do not believe that operating systems can be rendered irrelevant to the application. I simply believe they become lighter and more application specific. I also believe that we are going to see a proliferation of application language approaches, not a consolidation to Java alone.

Acquiring SpringSource puts VMware on the path to providing not only Infrastructure as a Service technology, but also Platform as a Service technology. From what I have seen to date in the market, PaaS lags far, far behind IaaS in acceptance and growth. I have written multiple posts praising the Amazon approach and decrying the Google and Salesforce approach for cloud because the latter requires developers to conform to the preferences of the platform provider while the former allows developers to exercise creativity in the choice of languages, libraries, data structures, etc. That's not to say that PaaS cannot be a valuable part of the application developer toolkit. It's just that the market will be much more limited in size due to the limitations in the degrees of freedom that can be exercised. And if developers love one thing more than anything else, it is freedom.

VMware's acquisition of SpringSource moves them into the very unfamiliar territory of developer tools and runtimes. It is a different sale to a different audience. Developers are notoriously fickle, and it will be interesting to see how a famously insular company like VMware manages to maintain the developer momentum built by the SpringSource team.

Thursday, August 13, 2009

The Cloud Option

A few months back, I participated in a panel on the evolution of cloud computing that was hosted by Union Square Advisors. Alongside me on the panel were executives from Amazon, Adobe, Cisco, and NetApp. Someone in the audience claimed that their economic analysis of running an application on Amazon AWS indicated the services were not cost competitive relative to an internal deployment. My response was that the analysis was likely based upon a simple, non-volatile application demand scenario. I said that the analysis should have instead considered the option value of Amazon's services subject to some level of demand volatility. What is an option worth that allows you to quickly scale up and scale down with application demand with costs scaling (or descaling) proportionately? How many applications in your portfolio could benefit from this type of risk management hedge? What type of premium should you be willing to pay for a cost profile that is correlated more closely to your demand profile? To capture without big capital outlays the benefits of terrific demand while simultaneously avoiding the costs of over-provisioning when demand fails?

My response to the simplistic Amazon cost analysis struck a chord with the audience, and I have since been thinking quite a bit about the metaphor of financial options as applied to the value of cloud computing. A financial option gives the holder the right to buy or sell a particular asset at some future date at a price fixed today, in exchange for a fee (the premium). Aside from their value as a tool for market speculation, options provide a low cost way to manage the risk associated with sudden and significant swings in the market for important portfolio assets. The cloud option provides just this risk management function for the portfolio of applications that any given enterprise must execute and manage in the course of delivering on the promises of its business. In exchange for a cloud architecture premium, the owner of the application gets both upside and downside protection associated with a demand forecast (and its related budget) that is almost certain to be inaccurate.
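The hedge can be sketched numerically. With an invented, spiky monthly demand profile, paying even a 30% per-unit premium for elasticity beats provisioning for the forecast peak:

```python
# Hypothetical demand, in server-months: mostly quiet, one big spike.
monthly_demand = [10, 12, 80, 15, 9, 11, 10, 12, 14, 9, 8, 10]

def peak_provisioned_cost(demand, annual_cost_per_server):
    """Buy for the forecast peak and carry it all year, used or not."""
    return max(demand) * annual_cost_per_server

def cloud_option_cost(demand, monthly_cost_per_server, premium):
    """Pay each month only for what demand actually required, at a
    premium over the equivalent owned unit cost."""
    return sum(demand) * monthly_cost_per_server * premium

owned = peak_provisioned_cost(monthly_demand, annual_cost_per_server=1200)
elastic = cloud_option_cost(monthly_demand, monthly_cost_per_server=100,
                            premium=1.3)
print(owned, round(elastic))  # peak provisioning 96000 vs ~26000
```

The spikier the demand and the worse the forecast, the more the option is worth; flatten the demand curve and the premium becomes dead weight, as the Amazon post above argues for the bank.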

The objective of this blog, The Cloud Option, is to discover the various costs and benefits associated with the premium of a cloud architecture. By analyzing the structure of the various cloud offerings and the technologies which underpin them (e.g. virtualization, programming APIs), we will provide application owners with a context for evaluating which cloud services and technology might provide the best option for managing their demand risks. At the level of the enterprise, IT planners will be able to more effectively undertake an analysis of their application portfolio in order to lay out a broad demand-risk management strategy based upon cloud technology and services.

Contributing to this blog alongside me will be Steve Bobrowski. Steve is the former CTO of SaaS at Computer Sciences Corporation, former Director of SaaS Technology at BEA Systems, and currently freelances as a technical, strategic, and marketing consultant to prominent cloud vendors. Because of the variety and breadth of our experiences, we should be able to cover the material fairly broadly and with a compelling level of depth. To provide some context on my historical perspective of cloud, I have posted below the cloud related entries from my open source blog dating back to November of 2006.

Cloud technology and services are certainly going to change the landscape of enterprise computing, and I believe they can substantially lower the risk-adjusted cost of delivering applications. We hope to help elucidate the cloud option – ensuring that the premium paid to adopt the architecture truly helps manage cost and risk instead of simply making a technology fashion statement.

Tuesday, August 11, 2009

IBM Cloud Fizzles

From June 30, 2009

Based on my positive review below of IBM's CloudBurst technology for building internal clouds, I tuned into the IBM webinar for the external cloud companion product with high hopes. I was hoping to hear about a consistent architecture across the two products that would allow an enterprise to federate workloads seamlessly between the internal and external cloud. Boy, was I disappointed.

It seems the IBM external cloud is nothing more than an IBM hosted capability for running virtual appliances of IBM Rational Software products. Among my many disappointments:

- no ability to run virtual appliances defined by me. They don't even publish a specification.

- no federation between internal and external. They are not even the same architecture because one runs Xen and the other runs VMware, and they do not provide a conversion utility.

- private beta (alpha maybe?) for invited customers only. Why make an announcement?

- no timetable for general availability of a product. Why make an announcement?

This announcement was a terrible showing by IBM to say the least. It is obvious to me that the CloudBurst appliance folks (call them “left hand”) and the Smart Business cloud folks (call them “right hand”) were two totally different teams. And the left hand had no idea what the right hand was doing. But each was intent not to be outdone by the other in announcing “something” with cloud in the title. And they were told to “cooperate” by some well-meaning marketing and PR person from corporate. And this mess of a situation is the outcome. Good grief!

IBM CloudBurst Hits the Mark

From June 29, 2009

IBM rolled out a new infrastructure offering called CloudBurst last week. Aimed at development and test workloads, it is essentially a rack of x86 systems pre-integrated with VMware’s virtualization technology along with IBM software technology for provisioning, management, metering, and chargeback. I believe IBM, unlike Verizon, has hit the cloud computing mark with this new offering.

First, IBM is targeting the offering at a perfect application workload for cloud – development and test. The transient nature of development and test workloads means that an elastic computing infrastructure with built-in virtualization and chargeback will be attractive to IT staff currently struggling to be responsive to line of business application owners. The line of business application owners are holding the threat of Amazon EC2 over the head of the IT staff if they cannot get their act together with frictionless, elastic compute services for their applications. By responding with a development and test infrastructure that enables self-service, elasticity, and pay-as-you-go chargeback capability, the IT staff will take a step in the right direction to head off the Amazon threat. Moving these dev/test workloads to production with the same infrastructure will be a simple flick of the switch when the line of business owners who have become spoiled by CloudBurst for dev/test complain that the production infrastructure is not flexible, responsive, or cost competitive.

Second, IBM embraced virtualization to enable greater self-service and elasticity. While they do not detail the use of VMware’s technology on their website (likely to preserve the ability to switch it out for KVM or Xen at some future date), IBM has clearly taken an architectural hint from Amazon by building virtualization into the CloudBurst platform. Virtualization allows the owners of the application to put the infrastructure to work quickly via virtual appliances, instead of slogging through the tedious process of configuring some standard template from IT (which is never right) to meet the needs of their application – paying for infrastructure charges while they fight through incompatibilities, dependency resolution, and policy exception bureaucracy. CloudBurst represents a key shift in the way IT will buy server hardware in the future. Instead of either a bare-metal unit or a system pre-loaded with a bloated general purpose OS (see the complaint about tedious configuration above), the systems will instead come pre-configured with virtualization and self-service deployment capability for the application owners - a cloud-computing infrastructure appliance, if you will. Cisco has designs on the same type of capability with their newly announced Unified Computing System.

Third, it appears that IBM is going to announce a companion service to the CloudBurst internal capability tomorrow. From the little information that is available today, I surmise that IBM is likely going to provide a capability through their Rational product to enable application owners to “federate” the deployment of their applications across local and remote CloudBurst infrastructure. With this federated capability across local (fixed capital behind the firewall) and remote sites (variable cost operating expense from infrastructure hosted by IBM), the IBM story on cloud will be nearly complete.

The only real negatives I saw in this announcement were that IBM did not include an option for an object storage array for storing and cataloging the virtual appliances, nor did they include any utilities for taking advantage of existing catalogs of virtual appliances from VMware and Amazon. While it probably hurt IBM’s teeth to include VMware in the offering, perhaps they could have gone just a bit further and included another EMC cloud technology for the object store. Atmos would be a perfect complement to this well considered IBM cloud offering. And including a simple utility for accessing/converting existing virtual appliances really would not be that difficult. Maybe we’ll see these shortcomings addressed in the next version. All negatives aside, I think IBM made a good first showing with CloudBurst.