API Test Design Techniques & Standards
API test strategy
The test strategy is the high-level description of the test requirements from which a detailed test plan can later be derived, specifying individual test scenarios and test cases. Our first goal is functional testing — ensuring that the API functions correctly.
The main objectives in functional testing of the API are:
to ensure that the implementation is working correctly as expected — no bugs!
to ensure that the implementation is working as specified according to the requirements specification
API as a contract — first things first!
An API is essentially a contract between the client and the server or between two applications. Before any implementation test can begin, it is important to make sure that the contract is correct.
This can be done by:
Inspecting the Swagger (OpenAPI) interface and making sure that endpoints are correctly named
Verifying that the resources and their types correctly reflect the object model
Checking that there is no missing functionality or duplicate functionality
Confirming that the relationships between resources are reflected in the API correctly.
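These contract checks can themselves be automated. Below is a minimal Python sketch, assuming a hypothetical service that serves its OpenAPI (Swagger) document at https://api.example.com/v1/openapi.json; the /users endpoints and the User schema are illustrative names, not part of any real spec:

```python
import requests

# Hypothetical spec location; replace with the service's real Swagger/OpenAPI URL.
SPEC_URL = "https://api.example.com/v1/openapi.json"

def test_contract_exposes_expected_endpoints():
    spec = requests.get(SPEC_URL, timeout=10).json()
    paths = spec.get("paths", {})

    # Endpoints the contract is expected to expose (names are illustrative).
    assert "/users" in paths
    assert "/users/{id}" in paths

    # No duplicate functionality: the same resource must not appear under two names.
    assert "/user" not in paths

    # Resource types should reflect the object model (OpenAPI 3 layout assumed).
    user_schema = spec["components"]["schemas"]["User"]
    assert user_schema["properties"].keys() >= {"id", "name"}
```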
Links: https://uniphore.atlassian.net/wiki/spaces/RedboxHome/pages/2562752836
Aspects of the API Test
After validating the API contract, we are ready to think about what to test. Whether it's test automation or manual testing, our functional test cases share the same test actions, fall into wider test scenario categories, and belong to three kinds of test flows.
API test actions
Each test is composed of test actions. These are the individual actions a test needs to take per API test flow. For each API request, the test would need to take the following actions:
Verify correct HTTP status code. For example, creating a resource should return 201 CREATED and unpermitted requests should return 403 FORBIDDEN, etc.
Verify response payload. Check valid JSON body and correct field names, types, and values — including in error responses.
Verify response headers. HTTP server headers have implications on both security and performance.
Verify basic performance sanity. If an operation was completed successfully but took an unreasonable amount of time, the test fails.
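These four actions map directly to assertions. The following sketch (Python with requests, runnable under pytest) illustrates a single-request test against a hypothetical GET /users endpoint; the base URL and the two-second latency threshold are assumptions for illustration, not part of any spec:

```python
import requests

BASE_URL = "https://api.example.com/v1"  # hypothetical base URL

def test_get_users_basic_checks():
    response = requests.get(f"{BASE_URL}/users", timeout=10)

    # 1. Verify correct HTTP status code.
    assert response.status_code == 200

    # 2. Verify response payload: valid JSON with the expected shape.
    body = response.json()  # raises if the body is not valid JSON
    assert isinstance(body, list)

    # 3. Verify response headers: correct content type, no information leakage.
    assert response.headers["Content-Type"].startswith("application/json")
    assert "X-Powered-By" not in response.headers

    # 4. Verify basic performance sanity (threshold is an assumed SLA).
    assert response.elapsed.total_seconds() < 2.0
```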
All API response status codes are divided into five classes (1xx through 5xx) by the HTTP standard.
The most common API response status code classes are:
2xx (Successful): Success codes returned when the request was received, understood, and processed by the server.
4xx (Client Error): Client error codes indicating that there was a problem with the request.
5xx (Server Error): Server error codes indicating that the request was accepted, but that an error on the server prevented the fulfillment of the request.
However, the actual response status code of an API is specified by the development team that built the API. So as a tester, you need to verify whether:
The code follows the global standard classes
The code matches what is specified in the requirements.
The table below defines the common HTTP status codes used widely across Conversa:
Code | Description |
200 | OK |
201 | Created |
202 | Accepted |
204 | No Content |
400 | Bad Request |
401 | Unauthorized |
403 | Forbidden |
404 | Not Found |
415 | Unsupported Media Type |
429 | Too Many Requests |
500 | Internal Server Error |
501 | Not Implemented |
504 | Gateway Timeout |
Note: HTTP status code 500 is a generic error response indicating that the server failed to fulfill a valid request because of a server-side error. Check your infrastructure to investigate the server errors, or contact the DevOps administrator.
Test scenario categories
The test cases fall into the following general test scenario groups:
Basic positive tests (happy paths)
Extended positive testing with optional parameters
Negative testing with valid input
Negative testing with invalid input
Destructive testing
Security, authorization, and permission tests
Happy path tests check basic functionality and the acceptance criteria of the API. Extended positive tests add optional parameters and extra functionality. The next group is negative testing, where we expect the application to gracefully handle problem scenarios with both valid user input (for example, trying to add an existing username) and invalid user input (trying to add a username which is null). Destructive testing is a deeper form of negative testing in which we intentionally attempt to break the API to check its robustness (for example, sending a huge payload body in an attempt to overflow the system).
Test flows
There are three kinds of test flows which comprise our test plan:
1. Testing requests in isolation (Manual) – Executing a single API request and checking the response accordingly. Such basic tests are the minimal building blocks we should start with, and there’s no reason to continue testing if these tests fail.
2. Multi-step workflow with several requests (E2E Automation Test) – Testing a series of requests which are common user actions, since some requests rely on others. For example, we execute a POST request that creates a resource and returns an auto-generated identifier in its response. We then use this identifier to check if this resource is present in the list of elements received by a GET request. Then we use a PUT endpoint to update the data, and we again invoke a GET request to validate the new data. Finally, we DELETE that resource and use GET again to verify it no longer exists. A sketch of this flow appears after this list.
3. Combined API and web UI tests (UI Testing) – This is mostly relevant to manual testing, where we want to ensure data integrity and consistency between the UI and API.
We execute requests via the API and verify the actions through the web app UI, and vice versa. The purpose of these integrity test flows is to ensure that, although the resources are affected via different mechanisms, the system still maintains the expected integrity and consistent flow.
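As referenced in flow 2 above, a minimal sketch of such a multi-step workflow against a hypothetical /users resource might look as follows; the paths, field names, and expected status codes are illustrative and must follow the actual spec:

```python
import requests

BASE_URL = "https://api.example.com/v1"  # hypothetical base URL

def test_user_crud_workflow():
    # POST: create a resource; the server returns the auto-generated identifier.
    created = requests.post(f"{BASE_URL}/users", json={"name": "alice"}, timeout=10)
    assert created.status_code == 201
    user_id = created.json()["id"]

    # GET: the new resource appears in the collection.
    users = requests.get(f"{BASE_URL}/users", timeout=10).json()
    assert any(u["id"] == user_id for u in users)

    # PUT: update the resource, then GET to validate the new data.
    updated = requests.put(f"{BASE_URL}/users/{user_id}", json={"name": "alice2"}, timeout=10)
    assert updated.status_code in (200, 202, 204)
    assert requests.get(f"{BASE_URL}/users/{user_id}", timeout=10).json()["name"] == "alice2"

    # DELETE: remove the resource, then GET to verify it no longer exists.
    assert requests.delete(f"{BASE_URL}/users/{user_id}", timeout=10).status_code == 204
    assert requests.get(f"{BASE_URL}/users/{user_id}", timeout=10).status_code == 404
```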
Validation and response messaging should be provided via the API layer, not built into the UI. This ensures that API integrations are easy for our customers and that the messaging from the API makes clear where a validation error occurs.
Examples of Test Scenarios:
Test cases derived from the scenarios below should cover different test flows according to our needs, resources, and priorities.
Scenario 1 – Basic positive tests (happy paths)
Test action: Execute API call with valid required parameters.
Validate status code: 1. All requests should return a 2XX HTTP status code. 2. Returned status code is according to spec: 200 OK for GET requests; 201 for POST requests creating a new resource; 202 for PUT requests updating an existing resource; 204 for a DELETE operation; and so on.
Validate payload: 1. Response is a well-formed JSON object. 2. Response structure is according to the data model (schema validation: field names and field types are as expected, including nested objects; field values are as expected; non-nullable fields are not null, etc.).
Validate headers: Verify that HTTP headers are as expected, including content-type, connection, cache-control, expires, access-control-allow-origin, keep-alive, and other standard header fields, according to spec. Verify that information is NOT leaked via headers (e.g. the X-Powered-By header is not sent to the user).
Performance sanity: Response is received in a timely manner (within the reasonable expected time), as defined.
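The payload checks above lend themselves to JSON Schema validation. A minimal sketch, assuming a hypothetical User model and the third-party jsonschema package; in practice the schema should come from the spec:

```python
import requests
from jsonschema import validate  # pip install jsonschema

BASE_URL = "https://api.example.com/v1"  # hypothetical base URL

# Assumed data model for illustration; derive the real one from the spec.
USER_SCHEMA = {
    "type": "object",
    "required": ["id", "name"],
    "properties": {
        "id": {"type": "string"},
        "name": {"type": "string"},           # non-nullable: null is not allowed
        "email": {"type": ["string", "null"]},
    },
    "additionalProperties": False,            # catch unexpected field names
}

def test_get_user_payload_matches_schema():
    response = requests.get(f"{BASE_URL}/users/123", timeout=10)
    assert response.status_code == 200
    validate(instance=response.json(), schema=USER_SCHEMA)  # raises on any mismatch
```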
Scenario 2 – Positive tests with optional parameters
Test action: Execute API call with valid required parameters AND valid optional parameters. Run the same tests as in #1, this time including the endpoint's optional parameters (e.g., filter, sort, limit, skip, etc.).
Validate status code: As in #1.
Validate payload: Verify response structure and content as in #1. In addition, check the following parameters:
filter: ensure the response is filtered on the specified value.
sort: specify the field on which to sort; test ascending and descending options; ensure the response is sorted according to the selected field and sort direction.
skip: ensure the specified number of results from the start of the dataset is skipped.
limit: ensure the dataset size is bounded by the specified limit.
limit + skip: test pagination.
Check combinations of all optional fields (filter + sort + limit + skip) and verify the expected response.
Validate headers: As in #1.
Performance sanity: As in #1.
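A sketch of these optional-parameter checks, assuming hypothetical sort, limit, and skip query parameters on a /users collection (the parameter names must follow the actual spec):

```python
import requests

BASE_URL = "https://api.example.com/v1"  # hypothetical base URL

def test_list_users_with_optional_parameters():
    # sort: response is ordered by the selected field (ascending assumed here).
    resp = requests.get(f"{BASE_URL}/users", params={"sort": "name"}, timeout=10)
    names = [u["name"] for u in resp.json()]
    assert names == sorted(names)

    # limit: dataset size is bounded by the specified limit.
    resp = requests.get(f"{BASE_URL}/users", params={"limit": 5}, timeout=10)
    assert len(resp.json()) <= 5

    # limit + skip: pagination check, page 2 must not repeat page 1.
    page1 = requests.get(f"{BASE_URL}/users", params={"limit": 5, "skip": 0}, timeout=10).json()
    page2 = requests.get(f"{BASE_URL}/users", params={"limit": 5, "skip": 5}, timeout=10).json()
    assert not {u["id"] for u in page1} & {u["id"] for u in page2}
```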
Scenario 3 – Negative testing with valid input
Test action: Execute API calls with valid input that attempts illegal operations, i.e.:
Attempting to create a resource with a name that already exists (e.g., a user configuration with the same name)
Attempting to delete a resource that doesn't exist (e.g., a user configuration with no such GUID)
Attempting to update a resource with illegal valid data (e.g., renaming a configuration to an existing name)
Attempting an illegal operation (e.g., deleting a user configuration without permission)
Validate status code: 1. Verify that an erroneous HTTP status code 400 or 404 is sent (NOT 2XX). 2. Verify that the HTTP status code is in accordance with the error case as defined in the spec.
Validate payload: 1. Verify that an error response is received. 2. Verify that the error format is according to spec, e.g., the error is a valid JSON object or a plain string (as defined in the spec). 3. Verify that there is a clear, descriptive error message/description field. 4. Verify that the error description is correct for this error case and in accordance with the spec. 5. Verify that no system exceptions are displayed in the response; exceptions should be replaced with the custom user-friendly error messages defined in the spec.
Validate headers: As in #1.
Performance sanity: Ensure the error is received in a timely manner (within the reasonable expected time).
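A sketch of one such negative case, the duplicate-name scenario; the endpoint, the accepted error codes, and the message field name are assumptions to be aligned with the spec:

```python
import requests

BASE_URL = "https://api.example.com/v1"  # hypothetical base URL

def test_create_duplicate_user_fails_gracefully():
    payload = {"name": "alice"}
    requests.post(f"{BASE_URL}/users", json=payload, timeout=10)

    # Second create with the same name is valid input but an illegal operation.
    duplicate = requests.post(f"{BASE_URL}/users", json=payload, timeout=10)

    # Erroneous status code, NOT 2XX (the exact code must follow the spec).
    assert duplicate.status_code in (400, 409)

    # Error body is well formed and descriptive, with no leaked system exception.
    error = duplicate.json()
    assert "message" in error                 # assumed error field name
    assert "Traceback" not in duplicate.text  # no raw stack trace in the response
```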
Scenario 4 – Negative testing with invalid input
Test action: Execute API calls with invalid input, e.g.:
Missing or invalid authorization token
Missing required parameters
Invalid values for endpoint parameters, e.g.: an invalid GUID in path or query parameters; a payload with an invalid model (violates the schema); a payload with an incomplete model (missing fields or required nested entities); invalid values in nested entity fields
Validate status code: As in #3.
Validate payload: As in #3.
Validate headers: As in #1.
Performance sanity: As in #1.
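Invalid-input cases are a natural fit for parametrized tests. A sketch assuming the same hypothetical /users endpoint; which concrete 4XX code applies to each case is defined by the spec:

```python
import pytest
import requests

BASE_URL = "https://api.example.com/v1"  # hypothetical base URL

@pytest.mark.parametrize("payload", [
    {},                       # missing required parameters
    {"name": None},           # non-nullable field set to null
    {"name": 42},             # wrong field type (violates schema)
    {"unexpected": "field"},  # invalid model
])
def test_create_user_rejects_invalid_input(payload):
    response = requests.post(f"{BASE_URL}/users", json=payload, timeout=10)
    assert 400 <= response.status_code < 500   # client error, never 2XX or 5XX

def test_missing_authorization_token_is_rejected():
    # Assumes the endpoint requires an Authorization header.
    response = requests.get(f"{BASE_URL}/users", headers={}, timeout=10)
    assert response.status_code == 401
```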
Scenario 5 – Destructive testing
Test action: Intentionally attempt to make the API fail in order to check its robustness:
Malformed content in the request
Wrong content-type in the payload
Content with the wrong structure
Overflow parameter values, e.g.: attempting to create a user configuration with a title longer than 200 characters; attempting to GET a user with an invalid GUID that is 1000 characters long; overflow payload: a huge JSON body in the request
Empty payloads
Empty sub-objects in the payload
Special and foreign characters in parameters or payload
Incorrect HTTP headers (e.g. Content-Type)
Validate status code: As in #3. The API should fail gracefully.
Validate payload: As in #3. The API should fail gracefully.
Validate headers: As in #3. The API should fail gracefully.
Performance sanity: As in #3. The API should fail gracefully.
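A sketch of a few destructive cases against the hypothetical /users endpoint; the expected codes (400, 415) follow the status code table above, but the actual behavior is defined by the spec:

```python
import requests

BASE_URL = "https://api.example.com/v1"  # hypothetical base URL

def test_api_survives_destructive_input():
    # Malformed content: body is not valid JSON at all.
    malformed = requests.post(
        f"{BASE_URL}/users", data="{not json",
        headers={"Content-Type": "application/json"}, timeout=10)
    assert malformed.status_code == 400

    # Wrong content-type for the payload.
    wrong_type = requests.post(
        f"{BASE_URL}/users", data="name=alice",
        headers={"Content-Type": "text/plain"}, timeout=10)
    assert wrong_type.status_code == 415

    # Overflow parameter value: a name far beyond any documented limit.
    overflow = requests.post(
        f"{BASE_URL}/users", json={"name": "x" * 10_000}, timeout=30)
    assert 400 <= overflow.status_code < 500   # graceful failure, no 5XX crash
```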
Security and Authorization
Check that the API is designed according to correct security principles: deny by default, fail securely, the principle of least privilege, and rejection of all illegal inputs.
Positive: ensure the API responds correctly to requests authorized via the agreed auth method (Bearer token), as defined in the spec
Negative: ensure the API refuses all unauthorized requests
Role permissions: ensure that specific endpoints are exposed to the user based on role. The API should refuse calls to endpoints which are not permitted for the user's role
Protocol: check HTTP/HTTPS according to spec
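A sketch covering the positive, negative, and role-permission checks above; the token values and the admin-only DELETE endpoint are assumptions for illustration:

```python
import requests

BASE_URL = "https://api.example.com/v1"  # hypothetical base URL

def auth_header(token: str) -> dict:
    return {"Authorization": f"Bearer {token}"}

def test_authorization_rules():
    # Positive: a valid Bearer token is accepted (token value is assumed).
    ok = requests.get(f"{BASE_URL}/users", headers=auth_header("VALID_TOKEN"), timeout=10)
    assert ok.status_code == 200

    # Negative: no token and an invalid token are both refused (deny by default).
    assert requests.get(f"{BASE_URL}/users", timeout=10).status_code == 401
    bad = requests.get(f"{BASE_URL}/users", headers=auth_header("garbage"), timeout=10)
    assert bad.status_code == 401

    # Role permissions: a read-only role may not call a destructive endpoint.
    forbidden = requests.delete(f"{BASE_URL}/users/123",
                                headers=auth_header("READ_ONLY_TOKEN"), timeout=10)
    assert forbidden.status_code == 403
```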
Usability Tests
For public APIs: a manual "Product"-level test that goes through the entire developer journey (documentation, login, authentication, code examples, etc.) to ensure the usability of the API for users without prior knowledge of Conversa.
Sign Off