{"API Orchestration"}

Github As The API Life Cycle Engine

I am playing around with some new features from the SDK-generation-as-a-service provider APIMATIC, including the ability to deploy my SDKs to Github. This is just one of the many ways Github, and more importantly Git, is being used as what I'd consider an engine in the API economy. Deploying your SDKs is nothing new, but when you're auto-generating SDKs from API definitions, deploying them to Github, and then using that to drive deployment, virtualization, containers, serverless, documentation, testing, and other stops along the API life cycle--it is pretty significant.

Increasingly we are publishing API definitions to Github, along with the server-side code that serves up an API, the Docker image for deploying and scaling our APIs, the documentation that tells us what an API does, the tests that validate our continuous integration, as well as the clients and SDKs. I've long been advocating for the use of Github as part of API operations, but with the growth in the number of APIs we are designing, deploying, and managing--Github definitely seems like the progressive way forward for API operations.
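
To give you an idea of how lightweight this can be, here is a rough sketch of publishing an API definition to Github using the Github contents API--the repo, file path, and token are just placeholders, not anything from my own setup:

    import base64, json, urllib.request

    token = "YOUR_GITHUB_TOKEN"
    url = "https://api.github.com/repos/your-org/your-api/contents/swagger.json"

    with open("swagger.json", "rb") as definition:
        payload = json.dumps({
            "message": "publish latest API definition",
            "content": base64.b64encode(definition.read()).decode("ascii"),
        }).encode("utf-8")

    # Updating a file that already exists in the repo also requires passing its current sha.
    request = urllib.request.Request(url, data=payload, method="PUT",
                                     headers={"Authorization": "token " + token})
    urllib.request.urlopen(request)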

I will keep tracking on which service providers allow for importing from Github, as well as publishing to Github--whether it's definitions, server images, configuration, or code. As these features continue to become available in these companies' APIs, I predict we will see the pace of continuous integration and API orchestration dramatically pick up, as we become more easily able to automate the importing and exporting of the essential definitions, configurations, and code that make our businesses and organizations function.

See The Full Blog Post


API Aggregation, Reciprocity, and Orchestration

I struggle a lot with how I separate out my research areas--there are a lot of reasons why I will break off, or group, information in a certain way. Really it all comes down to some layer of separation in my head, or possibly what I perceive will be in my readers' heads. For example, I broke off hypermedia into its own research project, but now I'm considering just weaving it into my API design research.

This is one of the reasons I conduct my research the way I do--it lets me spin out research when I feel it is necessary, but I can just as easily combine projects when I want. As I move API aggregation and reciprocity out of my "trends" category, and into my primary bucket of research, I'm considering the addition of a third area dedicated just to orchestration. Right now I'm thinking aggregation stays focused on providing APIs that bring together multiple APIs into a single interface, and reciprocity stays about moving things between two API-driven services--orchestration will be more about the bigger picture, involving automation, scheduling, events, jobs, logging, and much more.

I enjoy my research being like my APIs, keeping each area the smallest possible unit. When one starts getting too big, I can carve off a piece into its own area. I can also easily daisy chain them together, like API design, definitions, and hypermedia are. Some companies I track on will only enable API reciprocity at the consumer level, like IFTTT, where others like Cloud Elements will live in aggregation, reciprocity, and orchestration. I also think orchestration will always deal with business or industrial grade API usage, where individual users can look to some of the lighter weight, more focused solutions available in reciprocity.

Who knows? I might change my tune in the future, but for now I have enough curated stories, and companies focused on API orchestration, to warrant spinning it off into its own research. Once added, I will link it off the home page of API Evangelist with the other 35+ research projects into how APIs are being put to work. I'm hoping that, just like my research into API monitoring, testing, and performance produced a critical Venn diagram for me, API aggregation, reciprocity, and orchestration will help me better see the overlap in these areas for both API providers and consumers.

See The Full Blog Post


Reconciling My API Orchestration Research With the Evolution of IDE, SDK, and HTTP Clients

I've been tagging companies that I come across in my research, and stories that I find, with the term "orchestration" for some time now. Some of this overlaps with what we know as cloud-centric orchestration using Puppet or Chef, but I am specifically looking at how we orchestrate across the API life-cycle, which I feel overlaps with cloud orchestration, but pushes into some new realms.

As I'm carving off my orchestration research, I am also spending time reviewing a newer breed of what I'm calling API hubs, workspaces, or garages. Over the last year, I've broken out IDE research from my overall API Discovery research, SDK from my API Management research, and client from my API Integration research. In parallel with an API-centric way of life, I want all my research to be as modular as possible, allowing me to link it together in meaningful ways that help me better understand how the space works, or could work.

Now that I'm thinking in terms of orchestration, something that seems to be a core characteristic of these new API hubs, work spaces, or garages--I'm seeing a possibly new vision of the API life-cycle. I'm going to organize these new hubs, work spaces, and garages under my IDE research. I am starting to believe that these new work spaces are just the next generation IDE, meant to span the entire API life-cycle--we will see how this thought evolves.

This new approach to API IDEs gives us design and development capabilities, but also allows us to mock and deploy APIs. You can generate API documentation and SDKs, and I'm seeing hints of orchestration using Github and Docker. I'm seeing popular clients like Postman evolve to be more like an API life-cycle IDE, and I'm also seeing API design tooling like Restlet Studio invest in HTTP clients to expand beyond just design, adding live client interaction, testing, and other vital life-cycle elements.

None of my research is absolute. It is meant to help me make sense of the space, and give me a way to put news I curate, companies I discover, and open source tooling into meaningful buckets that might also help you define a meaningful version of your own API life-cycle. I apologize if this post is a little incoherent, but it is how I work through my thoughts around the API space, how things are expanding and evolving in real-time--something I hope will come into better focus in coming weeks.

See The Full Blog Post


Some Potentially Very Powerful API Orchestration With The Amazon API Gateway

I sat down for a second, more in-depth look at the Amazon API Gateway. When it was first released I took a stroll through the interface and documentation, but this time I got my hands dirty playing with the moving parts, and considering how the solution fits into the overall API deployment picture.

API Design Tools
As soon as you land on the Amazon API Gateway dashboard page, you can get to work adding APIs by defining endpoints, crafting specific resources (paths), then the details of your HTTP methods (verbs), and rounding off your resources with parameters, headers, and underlying data models. You can even map the custom sub-domain of your choosing to your Amazon API Gateway generated API, giving it exactly the base URL you need.
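
Everything you do in the dashboard can also be scripted. Here is a minimal sketch of standing up a resource and method using the AWS SDK for Python--the API name, path, and region are just placeholders:

    import boto3

    apigateway = boto3.client("apigateway", region_name="us-east-1")

    # Create the API, then hang a /notes resource and a GET method off the root path.
    api = apigateway.create_rest_api(name="notes")
    root_id = next(item["id"] for item in
                   apigateway.get_resources(restApiId=api["id"])["items"]
                   if item["path"] == "/")

    notes = apigateway.create_resource(restApiId=api["id"], parentId=root_id,
                                       pathPart="notes")
    apigateway.put_method(restApiId=api["id"], resourceId=notes["id"],
                          httpMethod="GET", authorizationType="NONE")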

API Mapping Templates
One feature provided by the Amazon API Gateway that I find intriguing is mapping templates. Using the data models and the mapping template tool, you can transform data from one schema to another. This is very interesting when you are thinking about evolving your own legacy APIs, but I'm also thinking it could come in real handy for mapping to public APIs, and demonstrating to clients what is possible with a next version--designed from the outside in. Mapping is something I've wanted to see for some time now.
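
To make the idea more concrete, here is a sketch of what one of these mapping templates looks like--a Velocity template that reshapes a legacy payload into a newer schema before it reaches the back-end. The field names are hypothetical:

    # A hypothetical template that renames legacy fields on their way to the new back-end.
    LEGACY_TO_V2_TEMPLATE = """
    {
      "id": "$input.params('id')",
      "title": $input.json('$.note_title'),
      "body": $input.json('$.note_body')
    }
    """

    # The template gets attached to an integration through its requestTemplates parameter:
    # apigateway.put_integration(..., requestTemplates={"application/json": LEGACY_TO_V2_TEMPLATE})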

API Integration Types
Up until now, this review has just been talking about designing APIs, and possibly mapping our data models together. There are many other ways you can gateway existing systems, databases, and other resources using Amazon API Gateway, but the one that seems to be getting the lion's share of the discussion is deploying APIs with Lambda functions as the back-end.

API Integration Using Lambda Functions
Lambda functions give you the ability to create, store, and manage Node.js and Java code snippets, and wire up these resources using the Amazon API Gateway. When you create your first Lambda function, you are given a small selection of blueprints, like a microservice or db connection, and you can edit your code inline, upload a .zip file, or pull a .zip file from Amazon S3 (where is the Github love?).
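
Wiring up one of these functions can also be done programmatically. Here is a rough sketch of deploying a Node.js snippet from a .zip in S3 using the AWS SDK for Python--the bucket, role, and function name are placeholders, and the runtime identifier has changed over time:

    import boto3

    lambda_client = boto3.client("lambda", region_name="us-east-1")

    lambda_client.create_function(
        FunctionName="notes-microservice",
        Runtime="nodejs",          # newer platforms use versioned identifiers like nodejs18.x
        Role="arn:aws:iam::123456789012:role/lambda-execution-role",
        Handler="index.handler",
        Code={"S3Bucket": "my-deploy-bucket", "S3Key": "notes-microservice.zip"},
        MemorySize=128,
        Timeout=10,
    )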

Identity & Access Management (IAM)
The Amazon API Gateway gives you some pretty simple ways to secure your APIs using API keys, but then also gives you the whole AWS IAM platform and its resources to leverage as well. I think most of IAM will be more than many API providers will need, but for those that do need it, I can see this part of the gateway solution sealing the deal.

Scaling The Lambda Functions Behind Your API
Being scalable is one of the promises of a Lambda-backed API deployed using Amazon API Gateway, which I can see being pretty alluring for devops-focused teams. You can allocate each Lambda function the memory it needs, and individually monitor and scale each one as needed. While I see the recent containerization movement taking care of 50% of API back-end needs, I can also see the ability to quickly scale individual functions in the cloud taking care of the other 50%.
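
Tuning a single function is a one-line call, which is a big part of what makes this granular scaling so appealing--the function name here is a placeholder:

    import boto3

    # Give just this one function more memory, leaving every other function's settings alone.
    boto3.client("lambda").update_function_configuration(
        FunctionName="notes-microservice",
        MemorySize=512,
    )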

Events For Lambda Functions
Another powerful aspect of Lambda functions is that you can engineer them to respond to events. Using the interface, command line, or API, you can define one or many event sources for each Lambda function. Amazon provides some pretty interesting sources for triggering each Lambda function.

Those six event sources give you some pretty potent ways of triggering specific functions in your vast Lambda code library. You can rely on running code stored as Lambda functions using the API you deploy with Amazon API Gateway, and / or you can have your code run in response to any of the events you define.
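
As a sketch of how the wiring works, stream-based sources like Kinesis or DynamoDB get attached through an event source mapping, while push-style sources like S3 or SNS invoke the function directly after being granted permission--the ARNs and names here are placeholders:

    import boto3

    lambda_client = boto3.client("lambda", region_name="us-east-1")

    # Poll a Kinesis stream and hand each batch of records to the function.
    lambda_client.create_event_source_mapping(
        EventSourceArn="arn:aws:kinesis:us-east-1:123456789012:stream/notes-activity",
        FunctionName="notes-microservice",
        StartingPosition="TRIM_HORIZON",
        BatchSize=100,
    )

    # Push-style sources are instead granted the right to invoke the function,
    # using lambda_client.add_permission() with the source's ARN.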

Beyond Lambda
When it comes to defining a back-end for the APIs you deploy using Amazon API Gateway, Lambda is just the beginning. Amazon provides three other really interesting ways to power APIs. I see a lot of potential in managing code using Lambda, and using it to scale the back-end of many APIs pretty quickly, but these areas provide some pretty interesting potential as well.

HTTP Proxy
A quick way to put Amazon API Gateway to use is as a proxy for an existing API. The potential in this area really shows when you put mapping templates to work, transforming the methods, resources, and models. I haven't mapped it to any existing APIs yet, but will make sure to do so soon, to better understand the HTTP proxy potential.
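
A sketch of what that proxying looks like with the AWS SDK for Python--the API and resource IDs, and the legacy URL, are placeholders:

    import boto3

    apigateway = boto3.client("apigateway", region_name="us-east-1")

    # Point an existing GET method at a legacy HTTP back-end.
    apigateway.put_integration(
        restApiId="abc123",
        resourceId="def456",
        httpMethod="GET",
        type="HTTP",
        integrationHttpMethod="GET",
        uri="https://legacy.example.com/v1/notes",
    )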

Mock Integration
Another way to quickly deploy an API is to mock your integration, providing a quick API that can be hacked on, making sure an API will meet developers' needs. You may even want to mock an existing public API, rather than use a live resource while you are developing an application. There are many uses for mock integration.
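
Here is a sketch of a mock integration--the MOCK type skips the back-end entirely, the request template selects a status code, and the integration response supplies the canned body (the IDs and sample payload are placeholders):

    import boto3

    apigateway = boto3.client("apigateway", region_name="us-east-1")

    apigateway.put_integration(
        restApiId="abc123", resourceId="def456", httpMethod="GET",
        type="MOCK",
        requestTemplates={"application/json": '{"statusCode": 200}'},
    )

    # A matching put_method_response() for the 200 status code is also required.
    apigateway.put_integration_response(
        restApiId="abc123", resourceId="def456", httpMethod="GET",
        statusCode="200",
        responseTemplates={"application/json": '[{"id": 1, "title": "example note"}]'},
    )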

AWS Service Proxy
The final way Amazon provides for powering your API(s) is by proxying an existing AWS service. This opens up the entire AWS cloud stack for exposing as API resources, using the Amazon API Gateway. This reminds me of other existing API gateway solutions, except instead of your on-premise, legacy infrastructure, this is your more recent, in-the-cloud infrastructure. I'm guessing this will incentivize many companies to migrate their legacy infrastructure into the cloud, or at least make it cloud friendly, so they can put the AWS service proxy to use--lots of possibilities here.
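
As a sketch, here is what proxying a method straight through to a DynamoDB Query action looks like--the IDs, region, and execution role are placeholders:

    import boto3

    apigateway = boto3.client("apigateway", region_name="us-east-1")

    apigateway.put_integration(
        restApiId="abc123", resourceId="def456", httpMethod="POST",
        type="AWS",
        integrationHttpMethod="POST",
        uri="arn:aws:apigateway:us-east-1:dynamodb:action/Query",
        credentials="arn:aws:iam::123456789012:role/apigateway-dynamodb-role",
    )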

Defining The Stages Of Your Lifecycle
Going beyond the types of integration you can employ when crafting and deploying APIs using the Amazon API Gateway, the platform also provides a way to define the stages that APIs will exist in, from design, to development, QA, production, or any other stage you wish. I like the concept of having a stage defined for each API, designating where it exists in the API life-cycle. I tend to just have dev and prod, but this might make me think about it a little more deeply, as it seems to be a big part of defining the API journey.
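
Deployments target a named stage, so pushing the same API through dev and prod is just two calls--the API ID here is a placeholder:

    import boto3

    apigateway = boto3.client("apigateway", region_name="us-east-1")

    apigateway.create_deployment(restApiId="abc123", stageName="dev",
                                 description="working copy")
    apigateway.create_deployment(restApiId="abc123", stageName="prod",
                                 description="live traffic")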

API Monitoring By Default
Amazon has built monitoring into the API Gateway and Lambda functions by default. You can connect APIs, and their designated integration back-end, to CloudTrail, and monitor everything about your operations. CloudTrail is very much a cloud infrastructure logging solution rather than an API analytics solution, but I could see it evolving into something more than just monitoring and logging, providing an overall awareness of API consumption. Maybe an opportunity for the ecosystem to step in via the API(s).

CLI And API For The API Gateway
You have to admit, Amazon gets interfaces, making sure every service on the platform has a command line interface as well as an API. This is where a lot of the API orchestration magic will come into play, in my opinion. The ability to automate every aspect of API design, deployment, management, and monitoring, across your whole stack, using an API, is the future.
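
As a small taste of that automation, here is a sketch that inventories every API and its stages in an account using the AWS SDK for Python:

    import boto3

    apigateway = boto3.client("apigateway", region_name="us-east-1")

    # Walk every API in the account and list the stages each one is deployed to.
    for api in apigateway.get_rest_apis()["items"]:
        stages = apigateway.get_stages(restApiId=api["id"])["item"]
        print(api["name"], [stage["stageName"] for stage in stages])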

There Are Some Limitations
There are some current limitations to the Amazon API Gateway. It limits things to a maximum of 60 APIs per AWS account, 300 resources per API, 10 stages per API, and a 10-second timeout for both AWS Lambda and HTTP back-end integration. They are just getting going, so I'm sure they are still learning how people will be using this API deployment and management infrastructure in the cloud, and we'll see it evolve considerably.

What Will This Cost?
Lambda provides the first 1 million requests per month for free, and charges $0.20 per 1 million requests thereafter, or $0.0000002 per request. The Amazon API Gateway costs $3.50 per million API calls received, plus the cost of data transfer out, in gigabytes. It will be interesting to see what this costs at scale, but I'm sure overall it will be very inexpensive to operate, like other AWS services, and with time the cost will come down even further as they dial it all in.
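
As a back-of-the-envelope estimate using just the request prices quoted above (Lambda compute time and data transfer out are billed separately, and are not included here):

    calls_per_month = 10000000   # a hypothetical 10 million API calls

    gateway_cost = calls_per_month / 1000000.0 * 3.50        # $35.00
    billable_lambda = max(calls_per_month - 1000000, 0)      # first 1 million requests are free
    lambda_cost = billable_lambda / 1000000.0 * 0.20         # $1.80

    print(gateway_cost + lambda_cost)                        # 36.80 per month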

AWS API Gateway Has Me Thinking
I won't be adopting AWS right away, I'd prefer to watch it evolve some more, but overall I like where they are taking things. The ability to quickly deploy code with Lambda, and use blueprints to clone and deploy the code behind APIs, has a lot of potential. Most of my APIs are just simple code that either returns data from a database, or conducts some sort of programmatic function, making Lambda pretty attractive, especially when it comes to helping you scale and monitor everything by default.

My original criticism of the platform still stands. Amazon is courting the enterprise with this, providing the next generation of API gateway, in the cloud, for the legacy resources we have all accumulated. It really doesn't help large companies sort through their technical debt, allowing them to just grow it, and manage it in the cloud. It is a win for AWS, so honestly it makes sense, even though it doesn't deliver the critical API life-cycle lessons the enterprise will need along the way to actually make change.

This is a reason I won't be getting hooked on Lambda + Amazon API Gateway anytime soon--I really don't want to be locked into their services. I'm a big fan of my platform employing common, open server tooling (Linux, Apache, NGINX, MySQL, PHP), and not relying on specialty solutions to make things efficient--I rely on my skills, experience, and knowledge of the resources I'm deploying to deliver efficiency at scale. My farm-to-table approach to deploying APIs keeps me in tune with my supply chain, something that may not work for everyone.

While the tooling I use may not be the most exciting, it is something I can move off of AWS and run anywhere. All of my APIs can easily be recreated in any hosting environment, and I can find skills to help me with this work almost anywhere in the world. After 25 years of managing infrastructure, I'm hyper-aware of lock-in, even the subtle kind that happens over time. However, my infrastructure is much smaller than that of many of the companies who will be attracted to AWS Lambda + API Gateway, which for me is another big part of the API lesson and journey, but if you don't know this already, I'll keep it to myself.

I'd say AWS gives a healthy nod to the type of platform portability I'm looking for, with the ability to import and export your back-end code using Lambda, and the ability to use API definitions like Swagger as part of Amazon API Gateway. These two things will play a positive role in the overall portability and interoperability of the platform, but the deeper connections made with other AWS services will be a lot harder to evolve away from if you ever have to migrate off of AWS.

For now, I'll keep playing with Amazon API Gateway, because it definitely holds a lot of potential for some very powerful API orchestration, and while the platform may not work for me 100%, AWS is putting some really interesting concepts into play.

See The Full Blog Post


API Management Infrastructure And Service Composition Is Key To Orchestration With Microservices In A Containerized World

As I work to redefine my world using microservices, I have this sudden realization of how important my API management infrastructure is to all of this. Each one of my microservices is a little API that does one thing, and does it well, relying on my API management infrastructure to know who should be accessing it, and exactly how much of the resource they should have access to.

My note API shouldn't have to know anything about my users. It is just trained to ask my API management infrastructure whether each user has the proper credentials to access the resource, and what the service composition will allow them to do with it (aka read, write, how much, etc.). My note API does what it does best, store notes, and relies on my API management layer to do what it does best--manage access to the microservice.

This approach to API management has allowed me to deploy any number of microservices, using my API management infrastructure to compose my various service packages--this is called service composition. I employ 3Scale infrastructure for all my API / microservice management, which I use to define different service tiers like retail, wholesale, internal, and other service-specific groupings. When users sign up for API access, I add them to one of the service tiers, and my API service composition layer handles the rest.
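
To show what I mean by the management layer doing the heavy lifting, here is a minimal sketch of service composition--the tiers, keys, and helper are hypothetical, and this is not 3Scale's actual API, just the shape of the question my microservices ask:

    # Hypothetical service tiers and API keys, standing in for the management layer.
    SERVICE_TIERS = {
        "retail":    {"read": True, "write": False, "rate_limit_per_hour": 1000},
        "wholesale": {"read": True, "write": True,  "rate_limit_per_hour": 100000},
        "internal":  {"read": True, "write": True,  "rate_limit_per_hour": None},
    }

    API_KEYS = {"abc123": "retail", "def456": "wholesale"}

    def authorize(api_key, action):
        """Answer the only question the microservice asks: can this key do this, and how much?"""
        tier = API_KEYS.get(api_key)
        if tier is None:
            return None
        permissions = SERVICE_TIERS[tier]
        return permissions if permissions.get(action) else None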

Modern API management service composition is the magic hand-waving in my microservice orchestration, and without it, it would be much more work for me to compose using microservices in this containerized API world that is unfolding.

Disclosure: 3Scale is an API Evangelist partner.

See The Full Blog Post


The Emerging Landscape Of API Orchestration Platforms

I've been exploring one possible API future more and more lately, a future which centers around the concept of being able to deploy virtual API stacks, employing the power of deploying API resources in virtualized containers--something that will free individual API resources up for orchestration in new and exciting ways, doing for APIs what APIs have been doing for companies.

Driven by a recent evolution in cloud computing introduced by Docker.io, we are beginning to see new services emerge that get us closer to this vision of API orchestration. Last week I wrote about how StrongLoop is providing one look at the future of API deployment, using Node.js. This week I was introduced to another API deployment solution, from BlockSpring, that also resembles some of these earlier thoughts I have had on API orchestration.

Blockspring deploys a containerized API for any Python, Ruby, or R script, so all you do is take some code resource, deploy it using Blockspring, and it generates API endpoints, and a form interface for working with the resource. Blockspring provides documentation to help you understand how to craft your code, and a library to publish your API to when it is ready.
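
The pattern is easy to picture. Here is a generic sketch of wrapping a single-purpose script in a tiny HTTP endpoint--this is not Blockspring's actual library, just an illustration of the script-in, API-out idea, using Flask:

    from flask import Flask, request, jsonify

    app = Flask(__name__)

    def sentiment_score(text):
        # Stand-in for the real script logic, here a naive word count.
        words = text.lower()
        positive = sum(word in words for word in ("good", "great", "love"))
        negative = sum(word in words for word in ("bad", "awful", "hate"))
        return positive - negative

    @app.route("/sentiment", methods=["POST"])
    def sentiment():
        payload = request.get_json(force=True)
        return jsonify({"score": sentiment_score(payload.get("text", ""))})

    if __name__ == "__main__":
        app.run(host="0.0.0.0", port=8080)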

When you look at their API library, it even drops you into a folder called /blocks, which gives you a list of APIs deployed using Blockspring, doing a range of functions from screen capture, to applying image filters, to sentiment and text analysis. Blockspring enables, and encourages, the design and deployment of very granular API resources that do one thing, and hopefully do it well, providing an interesting, and very modular, way of deploying APIs--think legos.

This deployment of simple, programmatic API resources, using a very containerized architecture, is an important new layer to the growing world of API orchestration, complementing the simpler content and data driven API resources we are seeing deployed from spreadsheets, databases, and common file stores. I'm not seeing a pure spreadsheet-to-API, or database-to-API, solution which solely employs virtual containers, but I'm sure it's not far off.

Seeing what Strongloop and Blockspring are up to makes me happy. Imagine a world where you can deploy exactly the data, content, and programmatic resources you will need for your web, mobile, or single page application. This ability to define, deploy, scale, and manage all of your API resources in such a granular way will contribute significantly to the evolution of how we build apps, and connect our devices to the Internet.

This emerging landscape of new API orchestration platforms will change how we deploy and consume APIs, making APIs much more remixable and composable, bringing APIs out of their silos. I also hope this new approach will expand the wholesale opportunities for API providers, and continue to change how we monetize our content, data, and programmatic resources in the API economy.

See The Full Blog Post