RSS

API Orchestration News

These are the news items I've curated in my monitoring of the API space that have some relevance to the API orchestration conversation, and that I wanted to include in my research. I'm using all of these links to better understand how the space is orchestrating their APIs, going beyond just monitoring, and understanding the details of each request and response.

Adding Ping Events To my Webhooks And API Research

I am adding another building block to my webhooks research out of Github. As I continue this work, it is clear that Github will continue to play a significant role in my webhook research and storytelling, because they seem to be the most advanced when it comes to orchestration via API and webhooks. I'm guessing this is a by-product of continuous integration (CI) and continuous deployment (CD), which Github is at the heart of. The API platforms that have embraced automation and orchestration as part of what they do always have the most advanced webhook implementations, and provide the best examples of webhooks in action, which we can all consider as part of our operations.

Today's webhook building block is the ping event. "When you create a new webhook, we'll send you a simple ping event to let you know you've set up the webhook correctly. This event isn't stored so it isn't retrievable via the Events API. You can trigger a ping again by calling the ping endpoint." A pretty simple, but handy feature when it comes to getting up and going with webhooks, making sure everything is working properly out of the gate--something that clearly comes from experience, and from listening to the problems your consumers are encountering.
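To make the value concrete, here is a minimal sketch of what handling that ping might look like on the consumer's end. The `X-GitHub-Event` header and the `zen` field follow Github's documented webhook delivery format, but the function name and response shape are just illustration:

```python
import json

def handle_delivery(headers, body):
    """Route an incoming webhook delivery, answering pings so you can
    confirm the hook was set up correctly before real events arrive."""
    event = headers.get("X-GitHub-Event")
    payload = json.loads(body)
    if event == "ping":
        # Echo the zen string back so your logs show the hook is wired up.
        return {"status": 200, "message": payload.get("zen", "pong")}
    # Any other event type gets handed off for real processing.
    return {"status": 202, "message": "queued {}".format(event)}
```

Answering the ping with a 200 is all Github needs to mark the hook as working, which is exactly the kind of out-of-the-gate feedback loop this building block is about.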

These types of subtle webhook features are exactly the types of building blocks I'm looking to aggregate as part of my research. As I do with other areas of my research, at some point I will publish all of these into a single, (hopefully) coherent guide to webhooks. After going through the webhook implementations across leading providers like Github, I should have a wealth of common patterns in use. Since webhooks aren't governed by any formal standard, they are yet another aspect of doing business with APIs we have to learn from the healthy practices already in use across the space. It helps to emulate providers like Github, because developers are pretty familiar with how Github works--when your webhooks behave in similar ways, it reduces the cognitive load API consumers face when they are getting started.

One other thing to note in this story–my link to Github’s documentation goes directly to the section on webhook ping events, because they use anchors for all titles and subtitles. This is something that makes storytelling around API operations soooooooooo much easier, and more precise. Please, please, please emulate this in your API operations. If I can directly link to something interesting within your API documentation, the chances are much greater I will tell a story, and publish a blog post about it. If I have to make a user search for whatever I’m talking about, I’m probably just gonna pass on it. One more trick for your toolbox, when it comes to getting me to tell more stories about what you are up to.


Kubernetes JSON Schema Extracted From OpenAPI

I've been doing my regular trolling of Github lately, looking for anything interesting. I came across a repository this week that contained JSON Schema for Kubernetes. Something that is interesting by itself, but I also thought the fact that they had autogenerated the individual JSON Schema files from the Kubernetes OpenAPI was worth a story. It demonstrates for me the growing importance of schema in all of this, and shows that having them readily available on Github is becoming more important for API providers and consumers.

Creating schema is an important aspect of crafting an OpenAPI, but I find that many API providers, or the consumers who are creating OpenAPIs and publishing them to Github, are not always investing the time into making sure the definitions, or schema portion of them, are complete. Another aspect, as Gareth Rushgrove, the author of the Github repo where I found these Kubernetes schema, points out, is that the JSON Schema support in OpenAPI often leaves much to be desired. Until version 3.0 it hasn't supported everything you need, and many of the ways you will put these schema to use won't be able to read them from inside an OpenAPI--you will need them as individual schema files, like Gareth has done.

I just published the latest version of the OpenAPI for my Human Services Data API (HSDA) work, and one of the things I've done is extracted the JSON Schema into separate files so I can use them in schema validation, and other services and tooling I will be using throughout the API lifecycle. I've set up an API that automatically extracts and generates them from the OpenAPI, but I'm also creating a Github repo that does this automatically for any OpenAPI I publish into the data folder of the Github repository. This way all I have to do is publish an OpenAPI, and there is automatically a page that tells me how complete or incomplete my schema are, as well as generates individual representations that I can use independent of the OpenAPI.
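The extraction step itself is small. Here is a sketch of the idea, assuming a Swagger 2.0 style document with a definitions section (OpenAPI 3.0 moves these under components.schemas), and ignoring the internal $ref rewriting a complete tool like Gareth's would also need to handle:

```python
import json

def extract_schemas(openapi_doc):
    """Pull each named schema out of a parsed OpenAPI document, returning
    a mapping of filename to standalone JSON Schema text."""
    schemas = openapi_doc.get("definitions") or \
        openapi_doc.get("components", {}).get("schemas", {})
    out = {}
    for name, schema in schemas.items():
        standalone = dict(schema)
        # Mark each file as a JSON Schema document in its own right.
        standalone.setdefault("$schema", "http://json-schema.org/draft-04/schema#")
        standalone.setdefault("title", name)
        out[name + ".json"] = json.dumps(standalone, indent=2)
    return out
```

Each resulting file can then be fed to a validator, published to Github, or used independently of the OpenAPI it came from.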

I am hoping this is the beginning of folks investing more into getting their schema act together. I’m also hoping this is something that OpenAPI 3.0 will help us focus on more as well. Pushing API designers, architects, and developers to get their schema house in order, and publish them not just as OpenAPI, but individual JSON Schema, so they can be used independently. I’m investing more cycles into helping folks learn about JSON Schema as I’m pushing my own awareness forward, and will be creating more tooling, training material, and stories that help out on this front. I’m a big fan of OpenAPI, and defining our APIs, but as an old database guy I’m hoping to help stimulate the schema side of the equation, which I think is often just as important.


Bringing The API Deployment Landscape Into Focus

I am finally getting the time to invest more into the rest of my API industry guides, which involves deep dives into core areas of my research like API definitions, design, and now deployment. The outline for my API deployment research has begun to come into focus and looks like it will rival my API management research in size.

With this release, I am looking to help onboard some of my less technical readers with API deployment. Not the technical details, but the big picture, so I wanted to start with some simple questions, to help prime the discussion around API deployment.

  • Where? - Where are APIs being deployed? On-premise, and in the cloud. Traditional website hosting, and even containerized and serverless API deployment.
  • How? - What technologies are being used to deploy APIs? From using spreadsheets, document and file stores, or the central database. Also thinking smaller with microservices, containers, and serverless.
  • Who? - Who will be doing the deployment? Of course, IT and developer groups will be leading the charge, but increasingly business users are leveraging new solutions to play a significant role in how APIs are deployed.

The Role Of API Definitions
While not every deployment will be auto-generated using an API definition like OpenAPI, API definitions are increasingly playing a lead role as the contract that doesn't just deploy an API, but sets the stage for API documentation, testing, monitoring, and a number of other stops along the API lifecycle. I want to make sure to point out in my API deployment research that API definitions aren't just overlapping with deploying APIs, they are essential to connecting API deployments with the rest of the API lifecycle.

Using Open Source Frameworks
Early on in this research guide I am focusing on the most common way for developers to deploy an API: using an open source API framework. This is how I deploy my APIs, and there are an increasing number of open source API frameworks available out there, in a variety of programming languages. In this round I am taking the time to highlight at least six separate frameworks in the top programming languages where I am seeing sustained deployment of APIs using a framework. I don't take a stance on any single API framework, but I do keep an eye on which ones are still active, and enjoying usage by developers.
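To ground what these frameworks are doing under the hood, here is a framework-free sketch of the simplest possible JSON API using just the Python standard library--everything a real framework adds (routing tables, validation, middleware) layers on top of this basic request/response cycle. The endpoint path and payload are made up for illustration:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class ApiHandler(BaseHTTPRequestHandler):
    """A single hand-rolled JSON endpoint; frameworks generalize this."""

    def do_GET(self):
        if self.path == "/status":
            body = json.dumps({"status": "ok"}).encode("utf-8")
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

    def log_message(self, fmt, *args):
        pass  # keep the example quiet
```

Serving it is one line--`HTTPServer(("127.0.0.1", 8080), ApiHandler).serve_forever()`--which is exactly the boilerplate a framework hides from you.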

Deployment In The Cloud
After frameworks, I am making sure to highlight some of the leading approaches to deploying APIs in the cloud, going beyond just a server and framework, and leveraging the next generation of API deployment service providers. I want to make sure that both developers and business users know that there are a growing number of service providers who are willing to assist with deployment, and with some of them, no coding is even necessary. While I still like hand-rolling my APIs using my preferred framework, when it comes to simpler, more utility APIs, I prefer offloading the heavy lifting to a cloud service, saving me the time of getting my hands dirty.

Essential Ingredients for Deployment
Whether in the cloud, on-premise, or even on devices and the network, there are some essential ingredients to deploying APIs. In my API deployment guide I wanted to make sure and spend some time focusing on the essential ingredients every API provider will have to think about.

  • Compute - The base ingredient for any API, providing the compute under the hood. Whether it's bare metal, cloud instances, or serverless, you will need a consistent compute strategy to deploy APIs at any scale.
  • Storage - Next, I want to make sure my readers are thinking about a comprehensive storage strategy that spans all API operations, and hopefully multiple locations and providers.
  • DNS - Then I spend some time focusing on the frontline of API deployment--DNS. In today's online environment DNS is more than just addressing for APIs, it is also security.
  • Encryption - I also make sure encryption is baked into all API deployment by default, in both transit and storage.

Some Of The Motivations Behind Deploying APIs
In previous API deployment guides I usually just listed the services, tools, and other resources I had been aggregating as part of my monitoring of the API space. Slowly I have begun to organize these into a variety of buckets that help speak to many of the motivations I encounter when it comes to deploying APIs. While not a perfect way to look at API deployment, it helps me think about the many reasons people are deploying APIs, craft a narrative, and provide a guide for others to follow that is potentially aligned with their own motivations.

  • Geographic - Thinking about the increasing pressure to deploy APIs in specific geographic regions, leveraging the expansion of the leading cloud providers.
  • Virtualization - Considering the fact that not all APIs are meant for production and there is a lot to be learned when it comes to mocking and virtualizing APIs.
  • Data - Looking at the simplest of Create, Read, Update, and Delete (CRUD) APIs, and how data is being made more accessible by deploying APIs.
  • Database - Also looking at how APIs are being deployed from relational, NoSQL, and other data sources--providing the most common way for APIs to be deployed.
  • Spreadsheet - I wanted to make sure and not overlook the ability to deploy APIs directly from a spreadsheet, putting APIs within reach of business users.
  • Search - Looking at how document and content stores are being indexed and made searchable, browsable, and accessible using APIs.
  • Scraping - Another often overlooked way of deploying an API, from the scraped content of other sites–an approach that is alive and well.
  • Proxy - Evolving beyond early gateways, using a proxy is still a valid way to deploy an API from existing services.
  • Rogue - I also wanted to think more about some of the rogue API deployments I’ve seen out there, where passionate developers reverse engineer mobile apps to deploy a rogue API.
  • Microservices - Microservices has provided an interesting motivation for deploying APIs–one that potentially can provide small, very useful and focused API deployments.
  • Containers - One of the evolutions in compute that has helped drive the microservices conversation is the containerization of everything, something that complements the world of APIs very well.
  • Serverless - Augmenting the microservices and container conversation, serverless is motivating many to think differently about how APIs are being deployed.
  • Real Time - Thinking briefly about real time approaches to APIs, something I will be expanding on in future releases, and thinking more about HTTP/2 and evented approaches to API deployment.
  • Devices - Considering how APIs are being deployed on devices, when it comes to Internet of Things, industrial deployments, as well as even at the network level.
  • Marketplaces - Thinking about the role API marketplaces like Mashape (now RapidAPI) play in the decision to deploy APIs, and how other cloud providers like AWS, Google, and Azure will play in this discussion.
  • Webhooks - Thinking of API deployment as a two way street. Adding webhooks into the discussion and making sure we are thinking about how webhooks can alleviate the load on APIs, and push data and content to external locations.
  • Orchestration - Considering the impact of continuous integration and deployment on API deployment specifically, and looking at it through the lens of the API lifecycle.

I feel like API deployment is still all over the place. The mandate for API management was much better articulated by API service providers like Mashery, 3Scale, and Apigee, while nobody has taken a similar lead when it comes to API deployment. Service providers like DreamFactory and Restlet have kicked ass when it comes to not just API management, but making sure API deployment was also part of the puzzle. Newer API service providers like Tyk are also pushing the envelope, but I still don't have the number of API deployment providers I'd like when it comes to referring my readers. It isn't a coincidence that DreamFactory, Restlet, and Tyk are API Evangelist partners--it is because they have the services I want to be able to recommend to my readers.

This is the first time I have felt like my API deployment research has been in any sort of focus. I carved this layer of research off of my API management research some years ago, but I really couldn't articulate it very well beyond just open source frameworks, and the emerging cloud service providers. After I publish this edition of my API deployment guide I'm going to spend some time in the 17 areas of my research listed above. All these areas are heavily focused on API deployment, but I also think they are all worth looking at individually, so that I can better understand where they also intersect with other areas like management, testing, monitoring, security, and other stops along the API lifecycle.


Setting The Rules For API Automation

Twitter released some automation rules this spring, laying the ground rules when it comes to building bots using the Twitter API. Some of the rules overlap with their existing terms of service, but they mark an interesting evolution in how platform providers need to provide direction for API consumers in a bot-driven conversational landscape.

They begin by laying the ground rules for automation using the Twitter API:

Do!

  • Build solutions that automatically broadcast helpful information in Tweets
  • Run creative campaigns that auto-reply to users who engage with your content
  • Build solutions that automatically respond to users in Direct Messages
  • Try new things that help people (and comply with our rules)
  • Make sure your application provides a good user experience and performs well — and confirm that remains the case over time

Don’t!

  • Violate these or other policies. Be extra mindful of our rules about abuse and user privacy!
  • Abuse the Twitter API or attempt to circumvent rate limits
  • Spam or bother users, or otherwise send them unsolicited messages

Twitter is just evolving their operation by providing an automation update to the Twitter rules and the developer agreement and policy, outlining what is expected of automated activity when it comes to engaging with user accounts, when bots are tweeting, sending direct messages, and taking other actions involving Tweets or Twitter accounts. It provides an interesting look at the shift in API platform terms of service, as the definition of what is an application continues to evolve.

While there were many automated aspects to the classic interpretation of web or mobile applications, bots are definitely bringing an entirely new meaning to what automation can bring to a platform. I think any API driven platform that is opening up their resources to automation is going to have to run down their list of available resources and think deeply about the positive and negative consequences of automation in the current landscape. Whether it is bots, voice, iPaaS, CI, CD, or any other type of API driven automation, the business and politics of API operations are shifting rapidly, and the threats, risks, and stakes are only going to get higher.


Github As The API Life Cycle Engine

I am playing around with some new features from the SDK-generation-as-a-service provider APIMATIC, including the ability to deploy my SDKs to Github. This is just one of the many ways Github, and more importantly Git, is being used as what I'd consider an engine in the API economy. Deploying your SDKs is nothing new, but when you're autogenerating SDKs from API definitions, deploying to Github, and then using that to drive deployment, virtualization, containers, serverless, documentation, testing, and other stops along the API life cycle--it is pretty significant.

Increasingly we are publishing to Github the API definitions, the server side code that serves up an API, the Docker image for deploying and scaling our APIs, the documentation that tells us what an API does, the tests that validate our continuous integration, as well as the clients and SDKs. I've long been advocating for the use of Github as part of API operations, but with the growth in the number of APIs we are designing, deploying, and managing--Github definitely seems like the progressive way forward for API operations.

I will keep tracking which service providers allow for importing from Github, as well as publishing to Github--whether it's definitions, server images, configuration, or code. As these features continue to become available in these companies' APIs, I predict we will see the pace of continuous integration and API orchestration dramatically pick up, as we become more easily able to automate the importing and exporting of the essential definitions, configurations, and code that make our businesses and organizations function.


API Aggregation, Reciprocity, and Orchestration

I struggle a lot with how I separate out my research areas--there are a lot of reasons why I will break off, or group, information in a certain way. Really it all comes down to some layer of separation in my head, or possibly what I perceive will be in my readers' heads. For example, I broke off hypermedia into its own research project, but now I'm considering just weaving it into my API design research.

This is one of the reasons I conduct my research the way I do--it lets me spin out research if I feel it necessary, but I can easily combine projects when I want as well. As I move API aggregation and reciprocity out of my "trends" category, and into my primary bucket of research, I'm considering the addition of a third area dedicated to just orchestration. Right now I'm thinking aggregation stays focused on providing APIs that bring together multiple APIs into a single interface, and reciprocity is about moving things between two API driven services--I'm thinking orchestration will be more about the bigger picture that will involve automation, scheduling, events, jobs, logging, and much more.

I enjoy my research being like my APIs, keeping them in the smallest units possible. When they start getting too big, I can carve off a piece into its own area. I can also easily daisy chain them together, like API design, definitions, and hypermedia are. Some companies I track on will only enable API reciprocity at the consumer level, like IFTTT, where others like Cloud Elements will live in aggregation, reciprocity, and orchestration. I also think orchestration will always deal with business or industrial grade API usage, where my individual users can look to some of the lighter weight, more focused solutions available in reciprocity.

Who knows? I might change my tune in the future, but for now I have enough curated stories, and companies who are focused on API orchestration, to warrant spinning it off into its own research. Once added, I will link off the home page of API Evangelist to it, along with the other 35+ research projects into how APIs are being put to work. I'm hoping that, just as my research into API monitoring, testing, and performance has produced a critical Venn diagram for me, API aggregation, reciprocity, and orchestration will better help me see the overlap in these areas for both API providers and consumers.


Reconciling My API Orchestration Research With the Evolution of IDE, SDK, and HTTP Clients

I've been tagging companies that I come across in my research, and stories that I find with the term "orchestration" for some time now. Some of this overlaps with what we know as cloud-centric orchestration using Puppet or Chef, but I am specifically looking for how we orchestrate across the API lifecycle which I feel overlaps with cloud orchestration, but pushes into some new realms.

As I'm carving off my orchestration research, I am also spending time reviewing a newer breed of what I'm calling API hubs, workspaces, or garages. Over the last year, I've broken out IDE research from my overall API Discovery research, and SDK from my API Management research, and client from my API Integration research. In parallel with an API-centric way of life, I want all my research to be as modular as possible, allowing me to link it together into meaningful ways that help me better understand how the space works, or could work.

Now that I'm thinking terms of orchestration, something that seems to be a core characteristic of these new API hubs, work spaces, or garages--I'm seeing a possibly new vision of the API life-cycle. I'm going to organize these new hubs, work spaces, and garages under my IDE research. I am starting to believe that these new work spaces are just the next generation IDE meant to span the entire API life-cycle--we will see how this thought evolves.

This new approach to API IDEs gives us design, and development capabilities, but also allows us to mock and deploy APIs. You can generate API documentation, and SDKs, and I'm seeing hints of orchestration using Github and Docker. I'm seeing popular clients like Postman evolve to be more like a API life-cycle IDE, and I'm also seeing API design tooling like Restlet Studio invest in HTTP clients to expand beyond just design, adding live client interaction, testing, and other vital life-cycle elements.

None of my research is absolute. It is meant to help me make sense of the space, and give me a way to put news I curate, companies I discover, and open source tooling into meaningful buckets that might also help you define a meaningful version of your own API life-cycle. I apologize if this post is a little incoherent, but it is how I work through my thoughts around the API space, how things are expanding and evolving in real-time--something I hope will come into better focus in coming weeks.


Some Potentially Very Powerful API Orchestration With The Amazon API Gateway

I sat down for a second, more in-depth look at the Amazon API Gateway. When it first released I took a stroll through the interface, and documentation, but this time, I got my hands dirty playing with the moving parts, and considering how the solution fits into the overall API deployment picture.

API Design Tools
As soon as you land on the Amazon API Gateway dashboard page, you can get to work adding APIs by defining your endpoints, crafting specific resources (paths), detailing the methods (verbs) for each, and rounding off your resources with parameters, headers, and underlying data models. You can even map the custom sub-domain of your choosing to your Amazon API Gateway generated API, giving it exactly the base URL you need.

API Mapping Templates
One feature provided by the Amazon API Gateway that I find intriguing is the mapping templates. Using the data models and the mapping template tool, you can transform data from one schema to another. This is very interesting when you are thinking about evolving your own legacy APIs, but I'm also thinking it could come in real handy for mapping to public APIs, and demonstrating to clients what is possible with a next version, designed from the outside in--mapping is something I've wanted to see for some time now.
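The gateway expresses these transformations in Apache Velocity templates, but the idea itself is simple: reshape one schema into another at the gateway layer, without touching the back-end. A plain Python sketch of the concept, with hypothetical field names standing in for a legacy and a next-version contract:

```python
def map_legacy_contact(legacy):
    """Reshape a legacy payload into the schema a newer API contract
    expects--the kind of transform a mapping template performs inline."""
    return {
        "fullName": "{} {}".format(legacy["first_name"], legacy["last_name"]),
        "email": legacy.get("email_address"),
    }
```

Because the transform lives at the gateway, the legacy system keeps emitting its old schema while consumers only ever see the new one.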

API Integration Types
Up until now, in this review, we are just talking about designing APIs, and possibly mapping our data models together. There are many other ways you can gateway existing systems, databases, and other resources using Amazon API Gateway, but the one that seems to be getting the lion's share of the discussion is deploying APIs with Lambda functions as the back-end.

API Integration Using Lambda Functions
Lambda functions give you the ability to create, store, and manage Node.js and Java code snippets, and wire up these resources using the Amazon API Gateway. When you create your first Lambda function, you are given a small selection of blueprints, like a microservice, or db connection. You can edit your code inline, upload a .zip file, or pull a .zip file from Amazon S3 (where is the Github love?).
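The handler shape itself is tiny: a function that takes the request event as mapped by the gateway, plus a runtime context, and returns a value the gateway maps back into an HTTP response. The post mentions Node.js and Java blueprints, but the same shape sketched in Python looks like this (the event field is a hypothetical example, not part of any blueprint):

```python
def handler(event, context):
    """Minimal Lambda-style function wired behind an API Gateway method.
    `event` carries the request as mapped by the gateway; the return
    value is mapped back into the HTTP response."""
    name = event.get("name", "world")
    return {"message": "Hello, " + name}
```

Everything else--scaling, monitoring, wiring the function to a method--is handled by the gateway and Lambda around this one function.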

Identity & Access Management (IAM)
The Amazon API Gateway gives you some pretty simple ways to secure your APIs using API keys, but then also gives you the whole AWS IAM platform, and its resources, to leverage as well. I think most of IAM will be more than many API providers will need, but for those that need it, I can see this part of the gateway solution sealing the deal.

Scaling Lambda Functions Behind Your APIs
Being scalable is one of the promises of a Lambda-backed API deployed using Amazon API Gateway, which I can see being pretty alluring for devops focused teams. You can allocate to each Lambda function the memory it needs, and individually monitor and scale as needed. While I see the recent containerization movement taking care of 50% of API back-end needs, I can also see being able to quickly scale individual functions as needed in the cloud taking care of the other 50%.

Events For Lambda Functions
Another powerful aspect of a Lambda function is that you can engineer it to respond to events. Using the interface, command line, or API, you can define one or many event sources for each Lambda function. Amazon provides some pretty interesting sources for triggering each Lambda function.

Those six event sources provide some pretty potent triggers for specific functions in your vast Lambda code library. You can rely on running code stored as Lambda functions using the API you deploy with Amazon API Gateway, and/or you can have your code run in response to a variety of the events you define.

Beyond Lambda
When it comes to defining a back-end for the APIs you deploy using Amazon API Gateway, Lambda is just the beginning. Amazon provides three other really interesting ways to power APIs. I see a lot of potential in managing code using Lambda, and using it to scale the back-end of many APIs pretty quickly, but these areas provide some pretty interesting potential as well.

HTTP Proxy
A quick way to put Amazon API Gateway to use is as a proxy for an existing API. There is a lot of potential in this area when you put mapping templates to work, transforming the methods, resources, and models of an existing API. I haven't mapped it to any existing APIs yet, but will make sure and do so soon, to better understand the HTTP proxy potential.

Mock Integration
Another way to quickly deploy an API is to mock your integration, providing a quick API that can be used to hack on, making sure an API will meet developers' needs. You may even want to mock an existing public API, rather than use a live resource, as you are developing an application. There are many uses for mock integration.
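A mock integration boils down to returning canned responses for known paths, so client developers can build against the contract before the real back-end exists. A sketch of the idea, with made-up paths and payloads:

```python
# Canned payloads keyed by path--the whole "back-end" of a mock API.
CANNED_RESPONSES = {
    "/contacts": [{"id": 1, "fullName": "Jane Doe"}],
    "/status": {"status": "ok"},
}

def mock_handler(path):
    """Return (status, body) for a mocked path, or a 404 if unmocked."""
    body = CANNED_RESPONSES.get(path)
    if body is None:
        return 404, {"error": "not mocked"}
    return 200, body
```

Swapping the mock out for a real integration later changes nothing for the client, which is exactly what makes mocking useful while an application is still being developed.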

AWS Service Proxy
The final way Amazon provides for you to power your API(s) is by proxying an existing AWS service. This opens up the entire AWS cloud stack for exposing as API resources, using the Amazon API Gateway. This reminds me of other existing API gateway solutions, except instead of your on-premise, legacy infrastructure, this is your more recent, in-the-cloud infrastructure. I'm guessing this will incentivize many companies to migrate their legacy infrastructure into the cloud, or at least make it cloud friendly, so they can put the AWS service proxy to use--lots of possibilities here.

Defining The Stages Of Your Lifecycle
Going beyond the types of integration you can employ when crafting and deploying APIs using the Amazon API Gateway, the platform also provides a way to define the stages an API will exist in, whether design, development, QA, production, or any other stage you wish. I like the concept of having a stage defined for each API, designating where it exists on the API life-cycle. I tend to just have dev and prod, but this might make me think about stages a little more deeply, as they seem to be a big part of defining the API journey.

API Monitoring By Default
Amazon has built in monitoring by default into the API Gateway, and Lambda functions. You can connect APIs, and their designated integration back-end to CloudTrail, and monitor everything about your operations. CloudTrail is very much a cloud infrastructure logging solution over API analytics solutions, but I could see it evolve into something more than just monitoring and logging, providing an overall awareness of API consumption. Maybe an opportunity for the ecosystem to step in via the API(s).

CLI An API For The API Gateway
You have to admit, Amazon gets interfaces, making sure every service on the platform has a command line interface as well as an API. This is where a lot of the API orchestration magic will come into play in my opinion. The ability to automate every aspect of API design, deployment, management, and monitoring, across your whole stack, using an API is the future. 

There Are Some Limitations
There are some current limitations of the Amazon API Gateway. They limit things to 60 APIs maximum per AWS account, 300 resources maximum per API, 10 stages maximum per API, and 10-second timeout for both AWS Lambda and HTTP back-end integration. They are just getting going, so I'm sure they are just learning how people will be using the API deployment and management infrastructure in the cloud, and we'll see this evolve considerably.

What Will This Cost?
Lambda is providing the first 1 million requests per month for  free, and $0.20 per 1 million requests thereafter, or $0.0000002 per request. The Amazon API Gateway costs $3.50 per million API calls received, plus the cost of data transfer out, in gigabytes. It will be interesting to see what this costs at scale, but I'm sure overall, it will be very inexpensive to operate like other AWS services, and with time the cost will come down even further as they dial it all in.

AWS API Gateway Has Me Thinking
I won't be adopting AWS right away, I'd prefer to watch it evolve some more, but overall I like where they are taking things. The ability to quickly deploy code with Lambda, and use blueprints to clone, and deploy the code-behind APIs, has a lot of potential. Most of my APIs are just simple code that either returns data from a database, and conducts some sort of programmatic function, making Lambda pretty attractive, especially when it comes to helping you scale and monitor everything by default. 

My original criticism of the platform still stands. Amazon is courting the enterprise with this, providing the next generation of API gateway for the legacy resources we have all accumulated in the cloud. Something that really doesn't help large companies sort through their technical debt, allowing them to just grow it, and manage it in the cloud. Win for AWS, so honestly it makes sense, even though it doesn't deliver critical API life-cycle lessons the enterprise will need along way to actually make change.

This is a reason I won't be getting hooked on Lambda + Amazon API Gateway anytime soon, because I really don't want to be locked into their services. I'm a big fan of my platform employing common, open server tooling (Linux, Apache, NGINX, MySQL, PHP), and not relying on specialty solutions to make things efficient--I rely on my skills, and experience and knowledge of the resources I'm deploying, to deliver efficiency at scale. My farm to table approach to deploying APIs, keeps me in tune with my supply chain, something that may not work for everyone.

While the tooling I use may not be the most exciting, it is something I can move from AWS, and run anywhere. All of my APIs can easily be recreated on any hosting environment, and I can find skills to help me with this work almost anywhere in the world. After 25 years of managing infrastructure, I'm hyper-aware of lock-in, even the subtle moves that happen over time. However, my infrastructure is much smaller than many of the companies who will be attracted to AWS Lambda + API Gateway, which actually for me, is another big part of the API lesson and journey, but if you don't know this already, I'll keep it to myself.

I'd say AWS gives a healthy nod to the type of platform portability I'm looking for, with the ability to import and export your back-end code using Lambda, and the ability to use API definitions like Swagger as part of Amazon API Gateway emerge. These two things will play a positive role in the overall portability, and interoperability of the platform, but doing this for the deeper connections made with other AWS services, will be a lot harder to evolve from if you ever have to migrate from AWS.

For now, I'll keep playing with Amazon API Gateway, because it definitely holds a lot of potential for some very powerful API orchestration, add while the platform may not work for me 100%, AWS is putting some really interesting concepts into play.


Some Potentially Very Powerful API Orchestration With The Amazon API Gateway

I sat down for a second, more in-depth look at the Amazon API Gateway. When it was first released, I took a stroll through the interface and documentation, but this time I got my hands dirty playing with the moving parts, and considered how the solution fits into the overall API deployment picture.

API Design Tools
As soon as you land on the Amazon API Gateway dashboard page, you can get to work adding APIs by defining endpoints, crafting specific resources (paths), detailing your HTTP methods (verbs), and rounding off your resources with parameters, headers, and underlying data models. You can even map the custom sub-domain of your choosing to your Amazon API Gateway generated API, giving it exactly the base URL you need.

API Mapping Templates
One feature provided by the Amazon API Gateway that I find intriguing is the mapping templates. Using the data models and the mapping template tool, you can transform data from one schema to another. This is very interesting when you are thinking about evolving your own legacy APIs, but I'm also thinking it could come in real handy for mapping to public APIs, and demonstrating to clients what is possible with a next version, designed from the outside in. Mapping is something I've wanted to see for some time now.
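
In practice API Gateway mapping templates are written in Apache Velocity Template Language, but the schema-to-schema transformation they perform can be sketched in plain Python; the legacy and public field names below are hypothetical:

```python
# Sketch of the kind of schema-to-schema transformation an API Gateway
# mapping template performs, expressed in Python for illustration. The
# legacy and public field names below are hypothetical.

def map_legacy_to_new(legacy_record):
    """Transform a hypothetical legacy back-end record into a cleaner public schema."""
    return {
        "id": legacy_record["REC_ID"],
        "title": legacy_record["REC_TITLE"].strip(),
        "created": legacy_record["CREATE_DT"],
    }

legacy = {"REC_ID": 42, "REC_TITLE": "  My Note ", "CREATE_DT": "2015-07-09"}
print(map_legacy_to_new(legacy))
# {'id': 42, 'title': 'My Note', 'created': '2015-07-09'}
```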

API Integration Types
Up until now in this review, we have just been talking about designing APIs, and possibly mapping our data models together. There are many other ways you can gateway existing systems, databases, and other resources using Amazon API Gateway, but the one that seems to be getting the lion's share of the discussion is deploying APIs with Lambda functions as the back-end.

API Integration Using Lambda Functions
Lambda functions give you the ability to create, store, and manage Node.js and Java code snippets, and wire up these resources using the Amazon API Gateway. When you create your first Lambda function, you are given a small selection of blueprints, like a microservice or db connection, and you can edit your code inline, upload a .zip file, or pull a .zip file from Amazon S3 (where is the Github love?).
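
While the runtimes on offer here are Node.js and Java, the handler contract is the same event-in, response-out shape everywhere; here is a minimal sketch in Python, purely for illustration, with a hypothetical greeting function:

```python
# Sketch of the Lambda handler contract: a function receiving an event and
# a context, returning a response. Shown in Python for illustration only;
# the same event-in, response-out shape applies to the Node.js and Java
# runtimes mentioned above. The greeting logic is hypothetical.

def handler(event, context):
    name = event.get("name", "world")
    return {"statusCode": 200, "body": "Hello, " + name + "!"}

# Invoking it locally the way the platform would:
print(handler({"name": "API Evangelist"}, None))
# {'statusCode': 200, 'body': 'Hello, API Evangelist!'}
```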

Identity & Access Management (IAM)
The Amazon API Gateway gives you some pretty simple ways to secure your APIs using API keys, but then also gives you the whole AWS IAM platform and resources to leverage as well. I think the full IAM will be more than many API providers need, but for those that do need it, I can see this part of their gateway solution sealing the deal.

Scaling Lambda Functions Behind Your API
Being scalable is one of the promises of a Lambda-backed API deployed using Amazon API Gateway, which I can see being pretty alluring for devops focused teams. You can allocate each Lambda function the memory it needs, and individually monitor and scale each one as needed. While I see the recent containerization movement taking care of 50% of API back-end needs, I can also see being able to quickly scale individual functions as you need, using the cloud, taking care of the other 50%.

Events For Lambda Functions
Another powerful aspect of a Lambda function is that you can engineer them to respond to events. Using the interface, command line, or API, you can define one or many event sources for each Lambda function. Amazon provides some pretty interesting sources for triggering each Lambda function.
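
As a sketch of the event-driven side, here is a hypothetical function reacting to an S3-style event; the sample payload is a trimmed-down illustration of the S3 event shape, not a complete one, and the "processing" is just a placeholder:

```python
# Sketch of an event-driven Lambda function reacting to an S3-style event.
# The sample payload below is a trimmed-down illustration of the S3 event
# shape, not a complete payload.

def on_s3_event(event, context):
    processed = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        processed.append(bucket + "/" + key)  # e.g. resize, index, or log the object
    return {"processed": processed}

sample_event = {"Records": [{"s3": {"bucket": {"name": "my-bucket"},
                                    "object": {"key": "uploads/photo.jpg"}}}]}
print(on_s3_event(sample_event, None))
# {'processed': ['my-bucket/uploads/photo.jpg']}
```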

These event sources provide some pretty potent triggers for specific functions in your vast Lambda code library. You can run code stored as Lambda functions using the API you deploy with Amazon API Gateway, and / or you can have your code run in response to a variety of the events you define.

Beyond Lambda
When it comes to defining a back-end for the APIs you deploy using Amazon API Gateway, Lambda is just the beginning. Amazon provides three other really interesting ways to power APIs. I see a lot of potential in managing code using Lambda, and using it to scale the back-end of many APIs pretty quickly, but these areas provide some pretty interesting potential as well.

HTTP Proxy
A quick way to put Amazon API Gateway to use is as a proxy for an existing API. There is a lot of potential in this area when you put mapping templates to work, transforming the methods, resources, and models. I haven't mapped it to any existing APIs yet, but will make sure and do so soon, to better understand the HTTP proxy potential.

Mock Integration
Another way to quickly deploy an API is to mock your integration, providing a quick API that can be used to hack on, making sure an API will meet developers' needs. You may even want to mock an existing public API, rather than use a live resource as you are developing an application. There are many uses for mock integration.
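
What mock integration boils down to can be sketched as canned responses keyed by method and path, so client developers can hack on an API before the real back-end exists; the paths and payloads below are hypothetical:

```python
# Minimal sketch of mock integration: canned responses keyed by method and
# path, letting client developers build against the API before the real
# back-end exists. The paths and payloads are hypothetical.

MOCK_RESPONSES = {
    ("GET", "/notes"): (200, [{"id": 1, "text": "hello"}]),
    ("GET", "/notes/1"): (200, {"id": 1, "text": "hello"}),
    ("POST", "/notes"): (201, {"id": 2}),
}

def mock_api(method, path):
    """Return a (status, body) tuple, falling back to a 404."""
    return MOCK_RESPONSES.get((method, path), (404, {"error": "not found"}))

print(mock_api("GET", "/notes"))     # (200, [{'id': 1, 'text': 'hello'}])
print(mock_api("DELETE", "/notes"))  # (404, {'error': 'not found'})
```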

AWS Service Proxy
The final way Amazon provides for you to power your API(s) is by proxying an existing AWS service. This opens up the entire AWS cloud stack for exposure as API resources, using the Amazon API Gateway. This reminds me of other existing API gateway solutions, except instead of fronting your on-premises, legacy infrastructure, this fronts your in-the-cloud, more recent infrastructure. I'm guessing this will incentivize many companies to migrate their legacy infrastructure into the cloud, or at least make it cloud friendly, so they can put the AWS service proxy to use--lots of possibilities here.

Defining The Stages Of Your Lifecycle
Going beyond the types of integration you can employ when crafting and deploying APIs using the Amazon API Gateway, the platform also provides a way to define stages that APIs will exist in, from design, development, QA, and production, to any other stage you wish. I like the concept of having a stage defined for each API, designating where it exists in the API life-cycle. I tend to just have dev and prod, but this might make me consider it a little more deeply, as it seems to be a big part of defining the API journey.

API Monitoring By Default
Amazon has built monitoring by default into the API Gateway and Lambda functions. You can connect APIs, and their designated integration back-end, to CloudTrail, and monitor everything about your operations. CloudTrail is very much a cloud infrastructure logging solution rather than an API analytics solution, but I could see it evolve into something more than just monitoring and logging, providing an overall awareness of API consumption. Maybe an opportunity for the ecosystem to step in via the API(s).

A CLI And API For The API Gateway
You have to admit, Amazon gets interfaces, making sure every service on the platform has a command line interface as well as an API. This is where a lot of the API orchestration magic will come into play in my opinion. The ability to automate every aspect of API design, deployment, management, and monitoring, across your whole stack, using an API is the future. 

There Are Some Limitations
There are some current limitations of the Amazon API Gateway. They limit things to 60 APIs maximum per AWS account, 300 resources maximum per API, 10 stages maximum per API, and 10-second timeout for both AWS Lambda and HTTP back-end integration. They are just getting going, so I'm sure they are just learning how people will be using the API deployment and management infrastructure in the cloud, and we'll see this evolve considerably.

What Will This Cost?
Lambda provides the first 1 million requests per month for free, and $0.20 per 1 million requests thereafter, or $0.0000002 per request. The Amazon API Gateway costs $3.50 per million API calls received, plus the cost of data transfer out, in gigabytes. It will be interesting to see what this costs at scale, but I'm sure overall it will be very inexpensive to operate, like other AWS services, and with time the cost will come down even further as they dial it all in.
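
Using the pricing quoted above, a quick back-of-the-envelope monthly estimate (ignoring data transfer) looks like this:

```python
# Back-of-the-envelope monthly estimate using the pricing quoted above:
# Lambda's first 1 million requests are free, then $0.20 per million;
# API Gateway charges $3.50 per million calls. Data transfer out is
# ignored here for simplicity.

def monthly_cost(requests):
    lambda_cost = max(0, requests - 1_000_000) * 0.20 / 1_000_000
    gateway_cost = requests * 3.50 / 1_000_000
    return round(lambda_cost + gateway_cost, 2)

print(monthly_cost(1_000_000))   # 3.5  -- Lambda is still free at 1M requests
print(monthly_cost(10_000_000))  # 36.8
```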

AWS API Gateway Has Me Thinking
I won't be adopting AWS right away, I'd prefer to watch it evolve some more, but overall I like where they are taking things. The ability to quickly deploy code with Lambda, and use blueprints to clone and deploy the code behind APIs, has a lot of potential. Most of my APIs are just simple code that either returns data from a database, or conducts some sort of programmatic function, making Lambda pretty attractive, especially when it comes to helping you scale and monitor everything by default.

My original criticism of the platform still stands. Amazon is courting the enterprise with this, providing the next generation of API gateway for the legacy resources we have all accumulated, now in the cloud. It really doesn't help large companies sort through their technical debt, allowing them to just grow it, and manage it in the cloud. It is a win for AWS, so honestly it makes sense, even though it doesn't deliver the critical API life-cycle lessons the enterprise will need along the way to actually make change.

This is a reason I won't be getting hooked on Lambda + Amazon API Gateway anytime soon, because I really don't want to be locked into their services. I'm a big fan of my platform employing common, open server tooling (Linux, Apache, NGINX, MySQL, PHP), and not relying on specialty solutions to make things efficient--I rely on my skills, experience, and knowledge of the resources I'm deploying to deliver efficiency at scale. My farm-to-table approach to deploying APIs keeps me in tune with my supply chain, something that may not work for everyone.

While the tooling I use may not be the most exciting, it is something I can move from AWS, and run anywhere. All of my APIs can easily be recreated on any hosting environment, and I can find the skills to help me with this work almost anywhere in the world. After 25 years of managing infrastructure, I'm hyper-aware of lock-in, even the subtle moves that happen over time. However, my infrastructure is much smaller than that of many of the companies who will be attracted to AWS Lambda + API Gateway, which, for me, is another big part of the API lesson and journey, but if you don't know this already, I'll keep it to myself.

I'd say AWS gives a healthy nod to the type of platform portability I'm looking for, with the ability to import and export your back-end code using Lambda, and the emerging ability to use API definitions like Swagger as part of Amazon API Gateway. These two things will play a positive role in the overall portability and interoperability of the platform, but the deeper connections made with other AWS services will be a lot harder to evolve away from if you ever have to migrate from AWS.

For now, I'll keep playing with Amazon API Gateway, because it definitely holds a lot of potential for some very powerful API orchestration, and while the platform may not work for me 100%, AWS is putting some really interesting concepts into play.


API Management Infrastructure And Service Composition Is Key To Orchestration With Microservices In A Containerized World

As I work to redefine my world using microservices, I have this sudden realization of how important my API management infrastructure is to all of this. Each one of my microservices is a little API that does one thing, and does it well, relying on my API management infrastructure to know who should be accessing it, and exactly how much of the resource they should have access to.

My note API shouldn’t have to know anything about my users; it is just trained to ask my API management infrastructure whether each user has the proper credentials to access the resource, and what the service composition will allow them to do with it (aka read, write, how much, etc.). My note API does what it does best, store notes, and relies on my API management layer to do what it does best--manage access to the microservice.

This approach to API management has allowed me to deploy any number of microservices, using my API management infrastructure to compose my various service packages--this is called service composition. I employ 3Scale infrastructure for all my API / microservice management, which I use to define different service tiers like retail, wholesale, internal, and other service-specific groupings. When users sign up for API access, I add them to one of the service tiers, and my API service composition layer handles the rest.
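
The service composition pattern described here can be sketched as a simple lookup: the API itself stays dumb, while the management layer maps each consumer's key to a tier defining what they may do. The tier names, limits, and keys below are hypothetical, standing in for what a platform like 3Scale manages for you:

```python
# Sketch of service composition at an API management layer: each consumer's
# key maps to a service tier that defines what they may do. Tier names,
# limits, and keys here are hypothetical.

SERVICE_TIERS = {
    "internal":  {"read": True, "write": True,  "rate_limit": None},
    "wholesale": {"read": True, "write": True,  "rate_limit": 100_000},
    "retail":    {"read": True, "write": False, "rate_limit": 10_000},
}

CONSUMERS = {"key-abc123": "retail", "key-def456": "internal"}

def authorize(api_key, operation):
    """Return the consumer's tier policy if the operation is allowed, else None."""
    tier = CONSUMERS.get(api_key)
    if tier is None:
        return None
    policy = SERVICE_TIERS[tier]
    return policy if policy.get(operation) else None

print(authorize("key-abc123", "read"))   # retail policy dict -- reads allowed
print(authorize("key-abc123", "write"))  # None -- retail is read-only
```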

Modern API management service composition is the magic hand-waving in my microservice orchestration, and without it, it would be much more work for me to compose using microservices in this containerized API world that is unfolding.

Disclosure: 3Scale is an API Evangelist partner.


The Emerging Landscape Of API Orchestration Platforms

I’ve been exploring one possible API future more and more lately, a future which centers around the concept of being able to deploy virtual API stacks, employing the power of API resources deployed in virtualized containers, something that will free individual API resources up for orchestration in new and exciting ways--doing for APIs what APIs have been doing for companies.

Driven by a recent evolution in cloud computing introduced by Docker.io, we are beginning to see new services emerge that get us closer to this vision of API orchestration. Last week I wrote about how StrongLoop is providing one look at the future of API deployment, using Node.js. This week I was introduced to another API deployment solution that also resembles some of the earlier thoughts I have had on API orchestration, from BlockSpring.

Blockspring deploys a containerized API for any Python, Ruby, or R script, so all you do is take some code resource and deploy it using BlockSpring, and it generates API endpoints, and a form interface for working with the resource. Blockspring provides documentation to help you understand how to craft your code, and a library to publish your API to, when ready.

When you look at their API library, it even drops you into a folder called /blocks, which gives you a list of APIs deployed using Blockspring, doing a range of functions from screen capture, to applying image filters, to sentiment and text analysis. Blockspring enables, and encourages, the design and deployment of very granular API resources that do one thing, and hopefully do it well, providing an interesting, and very modular, way of deploying APIs--think legos.
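
The granular, do-one-thing block idea can be sketched as a small registry of single-purpose functions behind a uniform dict-in, dict-out contract, similar in spirit to how Blockspring wraps scripts as API blocks; the block name and the naive sentiment logic below are purely illustrative:

```python
# Sketch of the "lego" idea: a registry of single-purpose functions exposed
# behind a uniform dict-in, dict-out contract. The block name and the naive
# sentiment scoring are purely illustrative, not Blockspring's actual API.

BLOCKS = {}

def block(name):
    """Decorator registering a plain function as a named API block."""
    def register(fn):
        BLOCKS[name] = fn
        return fn
    return register

@block("sentiment")
def sentiment(params):
    words = params.get("text", "").lower().split()
    score = sum(w in {"good", "great"} for w in words) \
          - sum(w in {"bad", "awful"} for w in words)
    return {"score": score}

def call_block(name, params):
    return BLOCKS[name](params)

print(call_block("sentiment", {"text": "a good great day"}))
# {'score': 2}
```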

This deployment of simple, programmatic API resources, using a very containerized architecture, is an important new layer in the growing world of API orchestration, complementing the simpler content and data driven API resources we are seeing deployed from spreadsheets, databases, and common file stores. I’m not seeing a pure spreadsheet-to-API, or database-to-API solution which solely employs virtual containers, but I’m sure it’s not far off.

Seeing what StrongLoop and Blockspring are up to makes me happy. Imagine a world where you can deploy exactly the data, content, and programmatic resources you will need for your web, mobile, or single page application. This ability to define, deploy, scale, and manage all of your API resources in such a granular way will contribute significantly to the evolution of how we build apps, and connect our devices to the Internet.

This emerging landscape of new API orchestration platforms will change how we deploy and consume APIs, making APIs much more remixable and composable, bringing APIs out of their silos. I also hope this new approach will expand the wholesale opportunities for API providers, and continue to change how we monetize our content, data, and programmatic resources in the API economy.


If you think there is a link I should have listed here feel free to tweet it at me, or submit as a Github issue. Even though I do this full time, I'm still a one person show, and I miss quite a bit, and depend on my network to help me know what is going on.