This is the sixth part of the Building Effective API Programs blog post series. In the previous parts we covered the benefits of APIs; the alignment between API programs, strategy, and business models; and API design and implementation. In this part, we discuss aspects of API operations.
Textbooks define operations management as “the activities, decisions and responsibilities of managing the production and delivery of products and services” (Slack, et al., 2013). In line with this, API operations is all about managing APIs once they are live to make sure that they are accessible and deliver according to developers’ expectations. As such, API operations has two functions:
- Internally, the processes need to be streamlined and efficient to reduce cost.
- Externally, API operations need to be effective at meeting developers’ expectations.
This notion ties in to John Musser’s fourth key to a great API, which we covered in part 3: It should be managed and measured. In a previous post, API Gold Standard III, Steve analyzed in detail what that means and how it can be achieved. In this post, we cover an additional tool to help you get API operations right: the API Operations Donut.
API Operations Donut
Operations management theory suggests five key performance objectives: dependability, flexibility, quality, speed, and cost.
The donut can be used to define operations tactics to achieve an organization’s API strategy. The inner circle of the donut represents an organization’s internal activities and effects; everything outside the ring represents external effects.
Dependability

Dependability asks: what is the actual availability of the API to developers? Downtime is a useful metric, and you can improve it through redundancy or spike arresting. Another metric is a quota (rate limit), which defines how many API calls a developer may make within a given time frame. A quota protects the API and makes its management more predictable. Some API providers’ business models and price plans are also based on quotas.
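To make the quota idea concrete, here is a minimal sketch of a fixed-window quota check, not any particular gateway’s implementation; the class name, limits, and key format are illustrative assumptions:

```python
import time

class QuotaCounter:
    """Fixed-window quota: allow at most `limit` calls per `window` seconds per API key."""

    def __init__(self, limit, window):
        self.limit = limit
        self.window = window
        self.counts = {}  # api_key -> (window_start, calls_made)

    def allow(self, api_key, now=None):
        now = time.time() if now is None else now
        start, used = self.counts.get(api_key, (now, 0))
        if now - start >= self.window:       # window expired: start a fresh one
            start, used = now, 0
        if used >= self.limit:               # quota exhausted for this window
            self.counts[api_key] = (start, used)
            return False
        self.counts[api_key] = (start, used + 1)
        return True
```

A gateway would call `allow(key)` on every incoming request and reject with an error (typically HTTP 429) when it returns `False`; real products also persist counters and expose them for billing.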
Flexibility

Flexibility relates to the options developers have in adopting your APIs. These could be technical options (see API Design and Implementation) or business options, such as the possibility or ease of switching between price plans or of cancelling. Internally, the main means are version control and versioning. In general, the more flexibility you provide, the more effort (and cost) the organization bears internally.
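Versioning as a flexibility measure can be sketched as dispatching each request to the handler for the version the client asked for, with a fallback to the latest version; the handlers and version labels below are hypothetical:

```python
# Hypothetical version dispatch: each supported API version keeps its own
# handler, so old integrations keep working while new ones evolve.
HANDLERS = {
    "v1": lambda params: {"version": "v1", "items": params.get("items", [])},
    "v2": lambda params: {"version": "v2", "data": params.get("items", []),
                          "paging": {"offset": 0}},
}
LATEST = "v2"

def dispatch(requested_version, params):
    # Unknown versions fall back to the latest supported one.
    handler = HANDLERS.get(requested_version, HANDLERS[LATEST])
    return handler(params)
```

The design choice here is that supporting each extra version multiplies internal maintenance effort, which is exactly the flexibility-versus-cost trade-off described above.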
Quality

You can think of quality as consistent conformance to developers’ expectations, which directly influences their satisfaction. As such, quality is an overarching performance objective that is closely tied to the other four. You can conform to expectations by clearly defining and meeting service level agreements (SLAs). Streamlined, purposeful, automated processes improve internal efficiency and contribute positively to quality, too.
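An availability SLA translates directly into a downtime budget your operations must stay within; a quick back-of-the-envelope calculation (the function name and 30-day month are assumptions for illustration):

```python
def downtime_budget_minutes(sla_percent, days=30):
    """Maximum downtime per period that still meets an availability SLA."""
    total_minutes = days * 24 * 60
    return total_minutes * (1 - sla_percent / 100)

# A 99.9% SLA over a 30-day month leaves roughly 43 minutes of downtime;
# 99.99% leaves only about 4.3 minutes.
```

Seeing the budget in minutes makes it easier to judge whether your deployment and incident-response processes can realistically honor the SLA you publish.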
Speed

Important aspects of speed include access latency and throughput. Both can be influenced by throttling or caching. Throttling in particular (like quotas) can be used to define an API provider’s business model.
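Throttling is commonly implemented as a token bucket, which permits short bursts while capping sustained throughput; this is a minimal sketch under assumed rate and capacity parameters, not a specific product’s implementation:

```python
class TokenBucket:
    """Allow `rate` requests/second sustained, with bursts up to `capacity`."""

    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity   # start with a full bucket
        self.last = 0.0          # timestamp of the previous call

    def allow(self, now):
        # Refill tokens for the elapsed time, then spend one if available.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

Unlike a fixed quota, which hard-stops a developer for the rest of the window, a token bucket shapes traffic continuously, which is why the two are often combined: quotas for the business model, throttling for protecting latency and throughput.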
Cost

The cost objective means providing the best value for money for developers. Internally, that means optimizing costs wherever possible without hampering the customer experience (such as perceived value and quality). Depending on context and implementation, each of the other four performance objectives contributes to the cost objective either directly or indirectly.
For a minimum configuration, we suggest:
- Access Control: authentication and authorization systems to identify the originator of incoming traffic and ensure only permitted access.
- Rate Limits and Usage Policies: usage quotas and restrictions on incoming traffic volumes or other metrics to keep traffic loads predictable.
- Analytics: data capture and analysis of traffic patterns to track how the API is being used.
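The three minimum pieces above can be pictured as a single per-request pipeline; this sketch uses an assumed in-memory key store, hourly limit, and log rather than a real gateway’s components:

```python
import time

VALID_KEYS = {"key-abc": "dev1"}   # hypothetical API key -> developer mapping
USAGE = {}                          # developer -> calls made this period
ANALYTICS_LOG = []                  # captured traffic records
HOURLY_LIMIT = 1000

def handle_request(api_key, endpoint):
    """Minimal gateway pipeline: authenticate, enforce the quota, record analytics."""
    developer = VALID_KEYS.get(api_key)
    if developer is None:
        return 401                  # access control: unknown or revoked key
    USAGE[developer] = USAGE.get(developer, 0) + 1
    if USAGE[developer] > HOURLY_LIMIT:
        return 429                  # usage policy: rate limit exceeded
    ANALYTICS_LOG.append({"developer": developer,
                          "endpoint": endpoint,
                          "ts": time.time()})
    return 200                      # request may proceed to the backend
```

The ordering matters: authentication comes first so that rate limits and analytics are always attributed to an identified developer, which is what makes the traffic predictable and measurable.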
It’s important that the API operations strategy fits into the overall API and business strategy. API management efforts and resources should be in line with the importance and scale of the API itself.
Example: The Slice API
Slice has built a powerful data-extraction engine that connects to any email inbox, identifies the ecommerce receipts contained in that inbox, and extracts item-level purchase information from those receipts. This data-extraction engine has powered the Slice consumer apps (available from www.slice.com) for five years. In 2014, Slice officially launched the Slice API (at developer.slice.com), opening the same engine up to third-party developers building new experiences around their users’ purchase data. In addition to supporting several large financial institutions, the Slice API has powered such diverse use cases as Gone!, a service that helps consumers sell their old stuff; IFTTT, a service that connects APIs together; TheFind, an aggregated ecommerce search engine; and many more.
Slice’s key performance objective in building the API was flexibility. Since the applications of this technology are so diverse, it was important to be able to support everybody from large banks, which have substantial development resources and long time horizons, to tiny startups and hackathon projects, which are quick and nimble but strapped for time and resources.
Slice found that their development partners were divided into two camps: some that wanted complete control over their user experience and were willing to invest the time to do a full white-label of the Slice platform, and others that wanted a quick integration and were comfortable using OAuth to “link an existing Slice account.” Initially, Slice expected to have to pick one integration method to support at the expense of the other, but they realized that the two were almost the same except for the authorization method that they would use. In fact, the API requirements for both groups of developers were almost exactly the same: both simply needed a way to retrieve orders, purchased items, and shipment information for specified users that had authorized Slice to share their data.
Ultimately, Slice decided to support two types of authorization: vanilla OAuth 2.0, and a signature-based method for power white-label integrations. Since this decision added significant complexity to the API, Slice implemented its developer portal in such a way that most developers would only ever be aware of the OAuth integration method. Furthermore, Slice’s API team made an extra push on simplicity elsewhere, described by a product manager as “scalpel-driven design” because his first step was to delete 75% of the fields in the original API spec. This ensured that, for the majority of smaller developers who were interested in an OAuth integration, the API would be simple and straightforward, while also maintaining flexibility to support larger partners who were willing to make the investment for a white-label integration.
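The source does not describe Slice’s actual signature scheme, but a signature-based method of the kind mentioned is typically an HMAC over request details shared between provider and partner; a generic sketch with assumed message fields:

```python
import hashlib
import hmac

def sign(secret, method, path, timestamp):
    """Compute an HMAC-SHA256 signature over the request method, path, and timestamp."""
    message = f"{method}\n{path}\n{timestamp}".encode()
    return hmac.new(secret.encode(), message, hashlib.sha256).hexdigest()

def verify(secret, method, path, timestamp, signature):
    # compare_digest avoids leaking information through timing differences.
    return hmac.compare_digest(sign(secret, method, path, timestamp), signature)
```

Compared with OAuth, this puts more integration work on the partner (they must manage a secret and sign every request), which is why exposing it only to white-label partners, as Slice did, keeps the portal simple for everyone else.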
To summarize, here are our recommendations for getting API operations right:

- Make sure you understand the objectives and expectations of the API program, because the design of API operations is a consequence of them. API operations is the actual delivery of the promise you make via the API program.
- Based on those objectives, prioritize the key performance objectives of your API operations: dependability, flexibility, quality, speed, cost.
- Define what “internal” and “external” means in your context. Different API programs have different levels of scope: private, partner, public.
- Get a good overview of the metrics for each API operations objective and the means to influence them.
- Make vs. buy: off-the-shelf API management solutions (especially 3scale!) cover most of the API operations very well and cost-effectively.
- If you don’t do anything else in terms of API operations, at least make sure to have some sort of access control, rate limits or usage policies, and analytics.
I covered similar topics in “How to use Donuts and Onions for Scaling API Programs” at APIStrat Chicago 2014 (see slides + video).
In the next part of this series, we’ll cover API marketing.