Cloud migrations: Don’t settle for just some operational savings

I’ve stopped thinking of a simple migration to the public cloud as “success.”

Yes, businesses do benefit. You decrease operational costs by a certain amount, if you plan correctly, and you certainly increase the convenience of not having to deal with hardware and software. But all that gets you only 10 to 20 percent in savings.

And those savings come with a big price tag: Migration projects are very labor-intensive, and they often run into internal politics, cost overruns, and compliance problems as you’re looking to drive platform changes.

Moreover, you have to consider the cost of risk. If you bother to calculate it, the risk is high considering the issues I just mentioned, and that could remove any benefit gained for at least a few years.

Of course, I am not arguing against migration to the cloud. But most enterprises need to think more deeply about why they are migrating to the cloud, and then how.

Unfortunately, most enterprises consider cloud to be a tactical technology, and the CFOs and CEOs are glad to see the cost reductions. But if the use of cloud computing is not transformative to the core business, it’s really not providing you the ROI you seek.

“Transformative” means that you leverage the innovation and disruption that cloud computing provides. For example, a car company that can remove all friction from its supply chain by using cloud-based technologies, or a bank that can finally use its systems to gain access to key customer data that lets it provide better products and increase market share.

These are tricks we’ve done with technology for years, but the cloud removes much of the complexity and cost of onboarding these technologies through traditional mechanisms. For example, in the cloud you can access machine learning technology and advanced analytics within a few hours or even a few minutes, as well as databases that can store many petabytes.

The agility aspect of cloud computing is another clear benefit that most enterprises don’t consider, but it’s a key reason why many businesses remain with the cloud.

The transformative nature of this technology makes it an effective weapon for owning your market. Doesn’t that sound better than a 10 to 20 percent cost reduction?

Database decisions: AWS has changed the game for IT

You may not have heard of OpenSCG, but Amazon Web Services has. A week ago, AWS quietly acquired the PostgreSQL migration services company founded by PostgreSQL veteran Denis Lussier. While some PostgreSQL fans weren’t happy about the move, the OpenSCG acquisition is emblematic of a much larger move by AWS to serve a wide array of database needs.

At the recent AWS Summit, Amazon CTO Werner Vogels said as much, declaring that “what makes AWS unique is the data we have, and the quality of that data.” Taking a slap at Oracle in particular, Vogels derided the “so-called database company” for offering far fewer relational database services than AWS, and just a fraction of the array of database services that AWS offers (including NoSQL offerings).

With more than 64,000 databases migrated to AWS in just the last two years, AWS looks set to hold even more enterprise data.

AWS doesn’t tend to announce its acquisitions. They’re invariably small, not triggering any legal requirements to announce them, and while some companies acquire so they have products to sell, AWS only acquires complements to the services it builds in-house.

Nor is it surprising that AWS would be interested in the PostgreSQL sponsor. As one Reddit commenter mentions, “True PostgreSQL expertise is difficult to come by and OpenSCG has a lot of it. If you combine that with Amazon’s clear support of deploying Postgres-related products (RDS/Aurora/Redshift) and its message of #DatabaseFreedom, … it becomes pretty clear why AWS was interested in OpenSCG.” Although OpenSCG has been an AWS partner for some time, OpenSCG has particular expertise in helping companies migrate to PostgreSQL.

Which is, of course, perfect for an AWS that is intent on moving orders of magnitude more database workloads than the current 64,000 to AWS.

AWS seeks to be the “every database” store

Not all those database workloads involve PostgreSQL, of course. Although the open source database has experienced a renaissance of popularity over the last few years, it’s just one of the various databases that AWS supports. AWS has been aggressively decomposing applications and infrastructure to give its customers the specialized services that let them develop what they want, Vogels says, “instead of AWS telling them what they must develop.”

You want PostgreSQL? AWS can help with that. How about a NoSQL database with infinite scale and predictable performance? AWS has that, too, with DynamoDB, but also through partners like MongoDB that run a large percentage of their workloads on AWS.

The list goes on.

And on.

All of which leads to the question “What does this mean for IT’s database decisions?”

Database choices aren’t what they used to be

Oracle and Microsoft’s trump cards to date have been that they collectively own three of the world’s most popular databases: Oracle Database, MySQL (also owned by Oracle), and Microsoft SQL Server. As data has changed, however, these trump cards have lost some of their luster, serving as an almost unwelcome crutch at times. Oracle has missed the market transition to big data applications.

By contrast, Microsoft has not rested on its laurels, releasing a spate of database options, including CosmosDB. Although Microsoft Azure has fewer database alternatives than AWS, it’s a strong No. 2 to AWS’s leadership position. So far, developers have preferred AWS’s approach, which is to offer maximum database choice, fitting particular databases to specialized needs. Even so, Microsoft at least has a credible strategy.

Oracle, by contrast, has spent years ignoring or deriding the cloud, then basically fork-lifting its database to the cloud. A year ago, it made the silly move of trying to raise the price of running Oracle on AWS, hoping to get customers to defect from AWS and run those workloads on Oracle’s struggling cloud. It hasn’t worked.

Nor will Oracle have much hope if AWS continues to move more database services into its arsenal of serverless functions. As industry expert Simon Wardley posits, “As Amazon’s serverless ecosystem grows, the more metadata it can mine, the faster its rates of innovation, customer focus, and efficiency. Once it gets to around 2 percent of the market then it’s game over for all those not playing at scale.”

Microsoft and Google are sprinting to add database services, including serverless options. Oracle keeps muddling through a 1980s way of thinking about the database, and it’s going to cost the database hegemon its lofty market position.

Meanwhile, AWS keeps steadily building out the database services developers require for next-generation applications, all while improving its abilities to migrate existing workloads to AWS.

Cross-cloud software development reaches to Azure

Back in the early 2000s, while working as an architect at an IT consulting company, I became fascinated by the promise of service-oriented architectures. Taking an API-first approach to application development made a lot of sense to me, as did the idea of using a message- and event-driven approach to application integration. But that dream was lost in a maze of ever-more-complex standards. SOAP’s relatively simple take on remote procedure calls vanished as a growing family of WS-* protocols added more and more features.

It’s not surprising, then, that I find much of what’s happening in the world of cloud-native platforms familiar. Today, we’re using many of the same concepts as part of building microservice architectures on top of platforms like Kubernetes. Like SOAP, the underlying concept is an open set of tools that can connect applications and services: within one public cloud, from on-premises systems to a public cloud, and from cloud to cloud. It’s that cross-cloud option that’s most interesting: Each of the three big public cloud providers does different things well, so why not build your applications around the best of Azure, AWS, and Google Cloud Platform?

Introducing the Open Service Broker

One of the key technologies for enabling this cross-cloud world is the open service broker. Building on the SOA concept of the service broker, the Open Service Broker API provides a way to take information from a platform’s list of available services, automate the process of subscribing to a service, provision it, and connect it to an application. It can also handle the reverse: When you no longer want to use a service, it removes the connection from your application instance and deprovisions the service.

Developed by a team drawn from several cloud-native platform providers, including Pivotal and Google, the Open Service Broker API has implementations for common platforms like Cloud Foundry, Kubernetes, and OpenShift. Microsoft has developed its own implementation of the Open Service Broker (OSB), with support for a selection of key Azure services, including Cosmos DB, Azure SQL, Azure Container Instances, and the Azure Service Bus.

OSB comes to Azure

Available on GitHub, the Open Service Broker for Azure (OSBA) installs on any platform that supports the Open Service Broker API, wherever that platform runs. That’s a big advantage for developers who want to take advantage of tools like Cosmos DB from applications running on AWS’s Kubernetes implementation or from an on-premises Cloud Foundry. It replaces Azure’s existing service brokers with one common tool that’s developed in the open, rather than inside Microsoft.

Published under an MIT license, OSBA is an active project, with more than 340 commits and eight releases to date. The code is still alpha, so while it’s close to usable in production, there could be breaking changes between releases.

Getting the Open Service Broker for Azure working is easy enough: The project has a series of quick-start documents to help bootstrap your projects. These samples include working with a local Minikube test instance, a Cloud Foundry installation, and AWS Kubernetes clusters, as well as with Microsoft’s own Azure Container Instances. Microsoft’s OSBA builds on work done by the Deis team, especially the Helm package manager, so you’ll need to start with Helm installed on your Kubernetes cluster, ready to install the service catalog and OSBA.
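As a sketch of that bootstrap sequence, the commands below follow the pattern of the project’s quick starts for a Helm 2-era setup; the chart repository URLs, release names, and namespaces are illustrative and may have changed since:

```shell
# Install the Kubernetes service catalog from its Helm chart repository
helm repo add svc-cat https://svc-catalog-charts.storage.googleapis.com
helm install svc-cat/catalog --name catalog --namespace catalog

# Install Open Service Broker for Azure, passing in credentials
# gathered from an Azure service principal (see the Azure CLI steps)
helm repo add azure https://kubernetescharts.blob.core.windows.net/azure
helm install azure/open-service-broker-azure --name osba --namespace osba \
  --set azure.subscriptionId=$AZURE_SUBSCRIPTION_ID \
  --set azure.tenantId=$AZURE_TENANT_ID \
  --set azure.clientId=$AZURE_CLIENT_ID \
  --set azure.clientSecret=$AZURE_CLIENT_SECRET
```

Both installs require a running Kubernetes cluster, so run them from a machine with kubectl access to the cluster you’re targeting.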

Using OSBA to manage service instances

Once you’ve installed OSBA, you can use the Kubernetes command-line tools to add new service instances. One important tool is the Azure CLI, which gives you access to Azure resources from your computer, with support for MacOS, Windows, and Linux. Once it’s installed, you can use the CLI to collect the information you’ll need to work with OSBA, starting by logging in to Azure and listing available resources. You can simplify things by creating environment variables for the login details and keys needed to provision Azure services, making it easier to automate operations without storing Azure credentials publicly. Once you’ve got this information, you can manage OSBA services running on Azure or check that services provisioned from elsewhere are set up and running.
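Collecting those details with the Azure CLI might look like the following; the service principal name is hypothetical, and the exported values are placeholders you’d copy from the command’s output:

```shell
# Log in and record the subscription you'll provision services into
az login
export AZURE_SUBSCRIPTION_ID=$(az account show --query id --output tsv)

# Create a service principal for OSBA to authenticate with
# (the name "osba-demo" is just an example)
az ad sp create-for-rbac --name osba-demo

# Export the credentials reported by the previous command
# (placeholders shown; substitute the tenant, appId, and password values)
export AZURE_TENANT_ID="<tenant>"
export AZURE_CLIENT_ID="<appId>"
export AZURE_CLIENT_SECRET="<password>"
```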

With command-line access to Kubernetes, you can provision your Azure services directly from the service catalog before binding them to your application. Don’t forget that the process is asynchronous and can take some time, so any automation will need to check for completion before deploying and starting applications. A Kubernetes secret stores connection data for your service, ready for use in an application. Services can be deprovisioned the same way, first unbinding and then deprovisioning.
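To make that concrete, a provisioning request and its binding can be expressed as service catalog resources like the manifests below; the service class and plan names (`azure-mysql`, `basic50`) and the resource names are assumptions drawn from the style of the OSBA quick starts:

```yaml
# Ask the service catalog to provision an Azure Database for MySQL instance
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceInstance
metadata:
  name: example-mysql
  namespace: default
spec:
  clusterServiceClassExternalName: azure-mysql
  clusterServicePlanExternalName: basic50
  parameters:
    location: eastus
    resourceGroup: example-group
---
# Bind to the instance; connection details land in the named secret
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceBinding
metadata:
  name: example-mysql-binding
  namespace: default
spec:
  instanceRef:
    name: example-mysql
  secretName: example-mysql-secret
```

Because provisioning is asynchronous, automation should apply the manifests and then poll the instance’s status until it reports ready before starting the application.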

The same processes work across public and private cloud platforms, giving you a common environment for working with Azure services no matter where your code is running. Cloud portability is an important requirement for modern applications; using OSBA to provision access to Azure services from anywhere goes a long way to fulfilling that promise—making Microsoft’s cloud platform more accessible.

Getting your service APIs right

While the Azure implementation of Open Service Broker is clearly for use with Azure services, there’s nothing to stop you from using an installation of the general-purpose OSB with your own services. That does mean you’ll need to think about how you’ll implement your own APIs, and how you’ll manage them. You can include OSBA calls in Kubernetes manifests or in Helm charts, so a single command line can deploy an application from the general service catalog, provision supporting services, and then launch the application. That way, an application that needs MySQL support can run on Azure’s MySQL service.
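One way to wire that together is to have the application read its database settings from the secret a service binding created; in this sketch, the image, secret name, and key names are all assumptions, since the actual secret keys depend on the broker:

```yaml
# A Deployment that reads its database connection details from the
# secret written by a service binding (names here are illustrative)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
      - name: app
        image: example/app:1.0   # hypothetical application image
        env:
        - name: DB_HOST          # key names depend on the broker's secret layout
          valueFrom:
            secretKeyRef:
              name: example-mysql-secret
              key: host
        - name: DB_USER
          valueFrom:
            secretKeyRef:
              name: example-mysql-secret
              key: username
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: example-mysql-secret
              key: password
```

Packaging this Deployment in the same Helm chart as the provisioning resources is what lets one command deploy the application and its services together.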

That’s a big issue for any modern application, because it’s not only a matter of application design; it’s also one of application life cycles and lifespan. You’re no longer writing code for yourself; you’re writing it for every developer who’s going to use your service. You need to think about API design and development, choosing the appropriate approach (RESTful, RPC, or GraphQL) and considering versioning and deprecation.

While every API has its own unique use case, once you make it public your role changes: You’re no longer just a developer, you’re also a caretaker. Publishing services for use with Open Service Broker means you’re now committed to working on someone else’s timetable. As Okta’s Keith Casey points out, “Developers want to do something useful and then go home,” so your APIs need to be rock-solid and ready to go before you make them available through service catalogs and tools like the Open Service Broker.