As you read in my last post, defining API guidelines for your company is a great first step to ensure success. For effective adoption, you have to disseminate the guidelines across teams through training and support the intent behind them with consistent messaging. Making API guidelines compliance part of a release’s checklist can help to get a team’s attention early and ensure work gets started on the right foot.

It’s helpful for API developers to get feedback on whether their API design and implementation comply with company guidelines. Checklists can be useful for tracking a small number of items, but for a list with hundreds of line items, they can become unwieldy.

Going through the list manually and assessing each item isn’t easy. That’s where automating validation and feedback can come in handy.

I recommend the following three validation approaches to check whether your APIs comply with company guidelines:

  • Static Validation – Use static validation for things that can be checked by introspecting the API spec, which is typically written as an OpenAPI/RAML/Blueprint doc. Static tests can check for document validity (e.g., well-formed OpenAPI), header names, URL templates, HTTP methods, supported schemes, response codes, error description schemas, supported authentication, model schemas, naming conventions, date and time formats, and more.
  • Dynamic Validation – Use dynamic validation for things that you can validate only with functional endpoints. It helps you check for consistency between the API spec and the actual requests/responses, API response time, TLS 1.1 or above, rate limits, etc.
  • Manual Review – You would use a manual review for things that require subjective evaluation. Do the resources accurately model the domain objects? Are there overlaps with other resources that can be normalized? Are nouns, plurals, and verbs used correctly in designing resources? Is the API surface area minimal? Does the API cover all use cases? Answering these questions requires not only manual review, but also good domain knowledge and API design awareness.

You can manage static and dynamic validation through automation, and you can handle the manual review with a well-defined process. Let’s take a look at each in more detail.

Static Validation

Static validation automation is most valuable during the design phase, when it’s important to provide feedback to the API designer on the API spec. The goal is to have a tool like JSONLint, where the API developer can paste the API spec and get feedback on what’s inconsistent with the guidelines (and even tips to correct common mistakes). A GUI-based tool like JSONLint is easy to use, but the validation service also needs first-class API support so that checks can be integrated into continuous integration pipelines.

One approach we’ve taken is shown here:

  • Static validation of API is implemented as a service deployed on Azure.
  • An API-first design, which takes an OpenAPI document as input and returns output (JSON) describing how many guidelines were checked and how many passed or failed. For failed checks, the issue location is pinpointed.
  • Simple, web-based UI (similar to JSONLint) implemented using Angular.
  • All tests implemented using the popular and powerful JavaScript-based Mocha framework.
  • An Express (Node.js) app implements the API. Azure tables are used for storage and Azure message queues for job scheduling.
  • The input OpenAPI doc is stored in the database, and the job ID is inserted into the queue. One of the available workers (WebJobs) picks up the doc and runs the Mocha tests on it. The result is pushed into the database, and once it’s available, the API responds with it. Please note that an asynchronous version of the API is coming next.
  • Splunk is used for logging and telemetry.
  • The UI/API access is protected through Azure Active Directory, which is linked to Citrix.

Dynamic Validation

Once the API implementation begins and there are tests written for it, we can start running the dynamic validation tests. Unlike static validation, which works from the API spec alone, dynamic validation needs actual API endpoints, tests to exercise those endpoints, and introspection of the input/output payloads. This validation approach is not amenable to manual input/output, so the ideal way to run it is to integrate it with the functional testing of the API.

One approach we’ve taken is shown here:

  • The API gateway has a policy that does an (async) post of the API request/response (aka the API Callobj) to the service.
  • The service API is implemented as an Azure Function, which stores the API Callobj in a cache and runs the validation tests. Results of the validation (JSON) are stored in Cosmos DB, including the number of tests run and how many passed and failed. For failures, the exact part of the Callobj that failed is identified.
  • Validation tests are implemented using Mocha.
  • A Report API is implemented on Azure, which can retrieve the validation results by service name and time range.
  • A summarization job periodically clears old results, retaining only the summary of passes/failures per service.
  • A summary API returns the service-level validation summary (daily/weekly/monthly).

Manual Review

With manual review, we need to define a process and workflow in the tracking tool. As with any other process, it should be simple, and it should scale as more APIs are brought into the fold. For example, instead of having a centralized approval committee, which can become a bottleneck in large organizations, it’s better to have a distributed and federated approval process.

Our approach for defining a workflow is shown below.

  • A custom workflow was introduced in JIRA for tracking API reviews.
  • It’s easy to know how many APIs across services are in review and the current status.
  • API champions (nominated from respective teams) are responsible for driving the review.
  • The review should include both technical and business (product) aspects.

Keep an eye out for our next post, where we’ll take a look at other capabilities that constitute a complete API platform and how they relate to each other. And check out my posts on API experience and defining API guidelines for your company.