This article is outdated. Please click here to see the newer version.
We recently released our test case generation feature, which allows you to define test cases in the API description using our online editor. When an SDK is generated for a particular language, the test cases are automatically generated for that language as well. Furthermore, we generate CI configuration files that build and invoke the test cases seamlessly. This tutorial covers the following:
- An introduction to the languages that support this feature and the testing frameworks you can use to run the generated test cases
- A detailed example to familiarize you with writing and generating test cases
- Additional test configurations
- Some details on our test validations
- FAQ
Supported Languages and Testing Frameworks
The currently supported languages for test case generation are C#, Java, Android, Objective-C, PHP, Python, and Ruby.
C#
For C#, you can either generate a Portable Class Library (PCL) or a Universal Windows Platform (UWP) library at the time of SDK generation.
The Portable Class Library (PCL) can run on a range of platforms such as Windows, Silverlight, and Windows Phone. To run the test cases generated along with the library, you need a testing framework. One such framework is “NUnit”, whose current release version is 3.0. If you use Visual Studio, you can easily install it using the Extension Manager.
After a successful build of the generated solution in Visual Studio, you can run the tests from the Test Explorer and a generated report will tell you various statistics including the number of tests that passed or failed.
The Universal Windows Platform (UWP) library supports many Windows platforms, including Windows 10, Windows 10 Mobile, and Xbox One. The unit testing framework you can use for running the test cases generated along with this library is MSTest. Just like NUnit, MSTest allows the tests to be run from within Visual Studio.
Android
You can generate a Gradle-based Android library from your API description. You can then use it inside an IDE like Android Studio (which comes with Gradle) or with a command-line Gradle build. An Android library compiles into an Android Archive (AAR) file that you can use as a dependency for an Android app module.
The generated tests are defined as JUnit tests. JUnit is the most popular and widely used unit testing framework for Java. You can run these generated tests from within Android Studio, which internally uses the Android JUnit runner. The tests are instrumented tests that run on an emulator or on hardware.
Java
The generated Java SDK is a Java library that can be used with JRE 7. You can use an IDE like Eclipse equipped with Maven to consume this library, e.g. the library can be added as a dependency of a Java project.
To run the generated tests you need a testing framework. For Java, the most widely used framework is JUnit, which acts both as a framework and as a runner. Inside Eclipse, select the library project and choose to run the JUnit tests.
This will run all the tests present in the “tests” directory and display relevant statistics.
Objective C
Generating an SDK for iOS produces a Cocoa Touch Static Library, i.e. a static library written in Objective-C. The advantage of generating a static library is that it supports iOS versions as old as iOS 6.
You can run the generated tests from within an IDE like Xcode, which provides extensive software testing capabilities. It includes a built-in test framework called “XCTest” that runs the generated tests smoothly.
PHP
When you generate an SDK for PHP you obtain a PHP library that requires PHP 5.3 or greater and the Composer dependency manager. You can use this library as a dependency in your project. The test cases generated during SDK generation can be run using a testing framework like PHPUnit.
Python
SDK generation in Python results in a Python library based on Python 2.7, which uses pip as the dependency manager.
A popular testing framework is “unittest”, Python’s xUnit-style framework, which is a test module bundled with the Python standard library. “nose” extends “unittest” to make testing easier by providing automatic test discovery. For the generated SDK, “unittest” is used as the testing framework and “nose” as the test runner; invoking the “nosetests” command on the SDK runs the tests.
Ruby
SDK generation in Ruby results in a Ruby gem based on Ruby version 2.0.0 or greater. The automatically generated tests can be run using a testing framework like Test::Unit, which includes the appropriate test runners.
Defining your first Test Case (With Example)
We will now describe how to define your first test case. First, define your API by following the article at https://apimatic.zendesk.com/hc/en-us/articles/205614517-Defining-your-first-API . After this, you will have an API called Calculator API.
Click on “Edit” to go to the API description. You will see a sidebar similar to this:
There are currently no test cases listed under the “Test Cases” menu.
Suppose you want to check that the Calculate endpoint returns the correct value for the “SUM” operation, e.g. 2+4 must return 6. We need to create a test case to verify that, so let’s proceed to create one.
Before we create the test case, note that the endpoint you defined involves 3 parameters:
1) “operation” specifies the operation you want to perform on the input values. The valid values defined are “SUM”, “ADD”, “SUBTRACT”, and “DIVIDE”.
2) “x” is the first operand
3) “y” is the second operand
Defining a test case to check if 2+4 gives us 6 will result in the above parameters taking the following values:
Name | Value
operation | SUM
x | 2
y | 4
Now, there are two ways to create a test case for the “Calculate” endpoint, as described below:
1) If there are currently no test cases for your endpoint group, clicking the "Test Cases" menu shows an option to create your first test case for the chosen endpoint group. Just select an endpoint from the drop-down menu and create a test case for it.
Since we want to create a test for the "Calculate" endpoint, we choose "Calculate" from the endpoints list and click "Create Test Case".
2) The other way is to open the desired endpoint from the Endpoints menu ("Calculate" in this case) and scroll down to view a list of already created and enabled/disabled test cases. Clicking "Create Test Case" creates the new test case.
In both the above cases, you will be taken to a new page where you can then define your test case.
Step 1: Define Test Case Name
Details:
Specify a unique name for your test case under the “Name” box. You can describe what your test case does under the “Description” box. Toggling the “Enable Test” button will enable or disable the test. If the test case is not enabled it will not be generated during code generation.
Calculate Endpoint Test Case Example:
As can be seen in the figure above, the details for the Calculate endpoint test case are as follows:
“Name” specified is “TestSum”
“Description” specified is “Check if the endpoint returns correct sum for any two inputs”
“Enable Test” is set to true
Step 2: Define Input Parameters
Details:
The input parameters consist of Name-Value pairs that define values for the parameters of the endpoint that the test case belongs to. The “Name” of an input parameter MUST correspond to the name of an endpoint parameter specified in the definition of that endpoint. The “Value” must match the type of that endpoint parameter.
Some examples of Name-Value pairs are given below:
Name | Value
someInteger | 1231
someString | Hello, world!
someIntegerArray | [1,2,3,4,5]
someBoolean | True
someModel | {\"name\":\"Bob\",\"age\":22,\"uploadCount\":12}
someEnumeration | [123,22,125]
Note: If the endpoint has optional query parameters enabled, such parameters can be provided within the input parameters by preceding the Name with an asterisk (*). Similarly, if the endpoint has optional field parameters enabled, such parameters can be provided by preceding the Name with a plus sign (+).
If the “Is Null” flag is enabled, or a required parameter is not given an input value, the input parameter is assigned a null or zero value according to its type.
Calculate Endpoint Test Case Example:
As already discussed, the Calculate endpoint has 3 parameters. Their names will already be listed under “Name”. You just need to specify their values and decide whether to enable or disable the “Is Null” flag.
In the figure above, we specified the values for the parameters. We have disabled “Is Null” for all the parameters because we do not want them to take null values as input.
Step 3: Specify Expected Status Code
Details:
The “Status Code” refers to the expected status code of the response. The expected status can be given as an exact value such as "200" or as a range such as "20X", which matches any status between 200 and 208 (inclusive), all of which are valid HTTP status codes. Other possible HTTP status ranges are "30X", "4XX" and "41X". Note that only valid HTTP status codes within each range are checked.
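As an illustration of how such a range pattern can be interpreted, here is a minimal C# sketch. It is illustrative only, not the code APIMATIC generates, and unlike the real validation it does not filter out unassigned codes within a range.

using System.Text.RegularExpressions;

public static class StatusCodePattern
{
    // True if the actual status code matches a pattern such as "200", "20X" or "4XX",
    // where each uppercase 'X' stands for any single digit.
    public static bool Matches(string expectedPattern, int actualCode)
    {
        string regex = "^" + expectedPattern.Replace("X", @"\d") + "$";
        return Regex.IsMatch(actualCode.ToString(), regex);
    }
}

For example, Matches("20X", 204) returns true, while Matches("4XX", 511) returns false.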
Calculate Endpoint Test Case Example:
We expect the status code to be 200 if the operation is successful, hence we enter the value 200.
Step 4: Specify Expected Headers
Details:
If expected headers are specified, the response is tested to check that it contains these headers. Just like with input parameters, header values are specified as “Name-Value” pairs. The “Check Value” flag, if enabled, will not only check for the presence of a response header with the name given in the Name field but will also check that the value of that response header matches the specified expected value.
If the “Allow Extra Headers” flag is disabled, the test case fails if the response contains headers other than those listed in the expected headers list.
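To make the effect of these two flags concrete, here is a hedged C# sketch of the header comparison logic. It is illustrative only, not the actual generated code; a real implementation would also treat header names case-insensitively.

using System.Collections.Generic;

public static class HeaderMatcher
{
    public static bool Matches(
        IDictionary<string, string> expected,   // expected header Name-Value pairs
        IDictionary<string, string> actual,     // headers received in the response
        bool checkValue,                        // the "Check Value" flag
        bool allowExtraHeaders)                 // the "Allow Extra Headers" flag
    {
        foreach (var header in expected)
        {
            string actualValue;
            if (!actual.TryGetValue(header.Key, out actualValue))
                return false;                   // an expected header is missing
            if (checkValue && actualValue != header.Value)
                return false;                   // the header value does not match
        }
        // With extra headers disallowed, the response must not contain unlisted headers
        if (!allowExtraHeaders && actual.Count > expected.Count)
            return false;
        return true;
    }
}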
Calculate Endpoint Test Case Example:
For our Calculate endpoint test case, we will keep the expected headers empty.
Step 5: Specify Expected Body
Details:
The expected body is used to verify that the response body matches the one specified. Whatever you expect the response to be, you enter it into the “Expected Body” box. This could be as simple as a number or a string, or as complex as an array or some valid JSON, etc.
The “Body Match Mode” dropdown menu lists the various modes supported by APIMATIC for body matching. Which modes are applicable depends on the response type of the endpoint. The modes are:
Match Mode | Valid for Types | Description
NONE | All | The expected body is ignored and the response body is not tested.
NATIVE | Number, Long, Precision, Boolean and DateTime | Tests the response body as a primitive type using a simple equality test. The response must match exactly, except in the case of arrays, where array ordering and strictness can be controlled via the array flags described below.
KEYS | Enumerations and Arrays of Number, Long, Precision, Boolean and DateTime types | Checks whether the response body contains the same keys as those specified in the expected body. The keys provided can be a subset of the response being received; if any expected key is absent from the response body, the test fails. The generated test performs deep checking, which means that if the response object contains nested objects, their keys are tested as well.
KEYSANDVALUES | Models, Dynamic and Arrays of Number, Long, Precision, Boolean and DateTime types | Same as the KEYS mode except that values are tested as well and must match. Since a deep comparison is performed, nested objects must also contain the correct values. For nested arrays, ordering and strictness depend on the array flags.
RAW | All | The response body is compared with the expected body via a simple string check. For a Binary response, a byte-by-byte comparison is performed; in this case the expected body takes a URI path to a remote file to compare with the binary response, and it must be a valid URI path.
There are two flags related to testing arrays. “Check array order”, if enabled, ensures that the response body contains the array elements in the same order as the expected body. “Check array count”, if enabled, ensures that the response body contains the same number of array elements as the expected body. If both flags are enabled, the arrays are strictly checked for equality, i.e. matching order as well as matching lengths.
Example 1 of Body Match Modes “KEYS”, “KEYSANDVALUES”:
Expected body:
{
"name" : "bob",
"address" : {
"city" : "ABC"
}
}
Response body:
{
"name" : "bob",
"age" : 100,
"alive" : true,
"address" : {
"city" : "ABC",
"postcode" : "21333"
}
}
✅ This will pass for both KEYS and KEYSANDVALUES modes.
Note: A response body that passes KEYSANDVALUES mode for a given expected body will also pass KEYS mode.
Example 2 of Body Match Modes “KEYS”, “KEYSANDVALUES”:
Expected body:
{
"name" : "xxxx",
"address" : {
"city" : "aaaaa"
}
}
Response body:
{
"name" : "bob",
"age" : 100,
"alive" : true,
"address" : {
"city" : "ABC",
"postcode" : "21333"
}
}
✅ This passes for KEYS mode.
❌ This fails for KEYSANDVALUES mode.
Example 3 of Body Match Modes “KEYS”, “KEYSANDVALUES” and Array flags:
Expected body:
{
"name" : "bob",
"workingDays" : ["Tuesday", "Monday"]
}
Response body:
{
"name" : "bob",
"age" : 100,
"alive" : true,
"workingDays" : ["Monday", "Tuesday", "Wednesday"]
}
✅ This passes for KEYS and KEYSANDVALUES modes only if both ExpectedArrayCheckCount and ExpectedArrayOrderedMatching are false.
Example 4 of Body Match Modes “KEYS”, “KEYSANDVALUES” and Array flags:
Expected body:
[
{
"name" : "alice"
},
{
"name" : "bob"
}
]
Response body:
[
{
"name" : "frank",
"age" : 100,
"alive" : true
},
{
"name" : "alice",
"age" : 70,
"alive" : true
},
{
"name" : "bob",
"age" : 90,
"alive" : false
}
]
✅ This passes for KEYS and KEYSANDVALUES modes only if both ExpectedArrayCheckCount and ExpectedArrayOrderedMatching are false.
Note: Each object from the expected body is checked for presence in the response body. Note that two objects from the expected body may match the same object in the response body if they are both subsets of that object.
Example 5 of Body Match Modes “KEYS”, “KEYSANDVALUES” and Array flags:
Expected body:
[
{
"name" : "bob"
},
{
"name" : "alice"
}
]
Response body:
[
{
"name" : "alice",
"age" : 70,
"alive" : true
},
{
"name" : "bob",
"age" : 90,
"alive" : false
}
]
✅ This passes for KEYS and KEYSANDVALUES modes whether ExpectedArrayCheckCount is true or false.
❌ This fails for KEYSANDVALUES mode if ExpectedArrayOrderedMatching is true.
Note: In the case of an object array, the elements are compared using the subset method, i.e. either KEYSANDVALUES or KEYS. For primitives, a simple equality comparison is used.
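To summarize the behaviour shown in the examples above, here is a minimal C# sketch of the subset-matching idea behind KEYS and KEYSANDVALUES for objects, using Newtonsoft.Json. It is illustrative only and not the code APIMATIC generates; array handling and the array flags are omitted.

using Newtonsoft.Json.Linq;

public static class BodyMatcher
{
    // Returns true if every key in 'expected' is present in 'actual'.
    // With checkValues = true (KEYSANDVALUES), the corresponding values must match too.
    public static bool IsSubsetOf(JObject expected, JObject actual, bool checkValues)
    {
        foreach (var property in expected.Properties())
        {
            JToken actualValue;
            if (!actual.TryGetValue(property.Name, out actualValue))
                return false;                                 // expected key missing: fails KEYS
            if (property.Value is JObject && actualValue is JObject)
            {
                // Deep checking: nested objects are compared recursively
                if (!IsSubsetOf((JObject)property.Value, (JObject)actualValue, checkValues))
                    return false;
            }
            else if (checkValues && !JToken.DeepEquals(property.Value, actualValue))
            {
                return false;                                 // value mismatch: fails KEYSANDVALUES
            }
        }
        return true;
    }
}

In these terms, KEYS mode corresponds to IsSubsetOf(expectedBody, responseBody, false) and KEYSANDVALUES to IsSubsetOf(expectedBody, responseBody, true).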
Calculate Endpoint Test Case Example:
We expect the result of 2+4 to be 6, hence the “Expected Body” is given as “6”. The body matching should perform a simple equality check between the response body and the “Expected Body” to verify that it contains the number “6”. This behavior corresponds to the “NATIVE” mode, hence the mode chosen is “NATIVE”.
Step 6: Save your Test Case
Just click on “Save Test Case” at the end of the Test Case settings page and your test case along with all the settings will be saved.
Step 7: Generate your SDK
Click on the box as shown in the figure below to generate your code.
Choose the platform of your choice:
For our example we will be choosing the Windows platform to generate a Portable Class Library in C#.
Once code generation is successful, download the ZIP file.
You will see a “Calculator-CSharp” zip file. Extract its files in the same folder.
You will have something similar inside the extracted folder:
Open the “Calculator.sln” file in Visual Studio.
Step 8: Run the Test Case Using NUnit in Visual Studio
In order to run the test case in Visual Studio you need to have NUnit 3.0 installed. Rebuild your solution and open “Test Explorer”.
If the build was successful, you will see our test case “TestTestSum” listed in the Test Explorer.
Click on “Run All” to run the test. You will see the statistics displayed about the number of tests that passed or failed and the time taken, etc.
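For reference, the body of such a generated test is roughly equivalent to the hedged C# sketch below. The controller class, method, and enum names used here are illustrative assumptions rather than the exact names produced by the code generator.

using NUnit.Framework;

[TestFixture]
public class CalculatorTests
{
    [Test]
    public void TestTestSum()
    {
        // Hypothetical controller exposed by the generated SDK; actual names may differ
        var controller = new CalculatorController();

        // Input parameters from the test case definition: operation = SUM, x = 2, y = 4
        var result = controller.Calculate(OperationTypeEnum.SUM, 2, 4);

        // NATIVE body match: the response must equal the expected body, 6
        Assert.AreEqual(6, result);
    }
}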
You successfully created and ran your first test case!
Additional Test Configurations
You can also specify additional test configurations, which affect how test cases are generated. These can also be used to configure the SDK with values such as the baseUri and apikey, which usually differ between the test environment and the production environment. All of these configurations are optional. The details are explained below:
Test Environment Settings
The “Test Timeout” is the number of seconds after which a test is forced to fail if the endpoint has not returned a response, e.g. a timeout of 60 means that any test still waiting for a response after 60 seconds fails.
The “Precision Delta” is the tolerance used when comparing precision types in tests, e.g. a precision delta of 0.1 means that all precision values are compared to one decimal place only.
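As an illustration, in an NUnit-based C# test these two settings could surface as a per-test timeout and an assertion tolerance, roughly as in the hedged sketch below (the test body and values are hypothetical, not generated code):

using NUnit.Framework;

[TestFixture]
public class EnvironmentSettingsExample
{
    [Test]
    [Timeout(60000)]  // a "Test Timeout" of 60 seconds, expressed in milliseconds
    public void TestWithTimeoutAndPrecisionDelta()
    {
        double expected = 6.0;
        double actual = 6.04;  // hypothetical value returned by an endpoint call

        // A "Precision Delta" of 0.1: values are considered equal if they differ by at most 0.1
        Assert.AreEqual(expected, actual, 0.1);
    }
}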
Configuration Parameters
The configuration parameters allow you to provide values for the SDK's configuration for use in the test environment, e.g. you may specify a new URL for the “baseUri” parameter. This is useful if you want to test your endpoints against a sandbox environment instead of the production environment (which would otherwise use the “baseUri” described in the API description). Similarly, you can also define values for test authentication parameters within these configuration parameters.
Example:
Name | Value
baseUri | http://example.com
apikey | 972938472934234
Here, the generated test code will configure the baseUri and apikey with the given values.
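In a generated C# SDK, this typically amounts to assigning the test values to the SDK's configuration before any tests run. The sketch below illustrates the idea; the Configuration class and its property names are assumptions, not the exact generated API.

using NUnit.Framework;

[SetUpFixture]
public class TestConfigurationSetup
{
    [OneTimeSetUp]
    public void ConfigureSdkForTests()
    {
        // Hypothetical SDK configuration class; property names vary per generated SDK
        Configuration.BaseUri = "http://example.com";
        Configuration.ApiKey = "972938472934234";
    }
}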
Test Validation
Before code generation, the API description is validated. This includes validation of the test cases you have defined. Test validation may fail, for example, if your test case name is invalid (contains unaccepted characters, is too long, etc.), if you have entered invalid HTTP status codes, or if an input parameter value is not valid for the parameter's type. In these and many other cases, appropriate errors and warnings are displayed to the user.
FAQ
1) I am getting the error “The test input for parameter (parameter Name) in test case (Testcase Name) is invalid”. What should I do?
Please verify that the value you have specified for this parameter is indeed a valid value for the type of the parameter defined in the endpoint definition. For example, if your parameter is defined as a “Number” and the test value you give it is a string, validation will fail and you will get this error.
2) I am getting the error “Test input (parameter Name) does not correspond to an endpoint parameter in test case (Testcase Name)”. What should I do?
Please ensure that the endpoint for which you are defining the test case indeed contains a definition of a parameter with that name. If not, please define that parameter in the endpoint definition and then try again.
3) I am getting the warning “Query params/ Field params are provided in test (Testcase Name) when endpoint does not allow it”. What is causing this warning?
You seem to have defined some query parameters or field parameters in the input parameters of the test case even though you did not enable the use of these parameters in your endpoint definition. Please enable query parameters using the “Allow dynamic query params” option and field parameters using the “Allow dynamic form fields” option in the endpoint definition.
4) I am getting the error “Body match mode (Mode Name) is not allowed for response of type (Type Name) in test case (Test Case Name)”. What should I do?
A particular body match mode may not be applicable to responses of certain types, in which case you will encounter this error. To see which modes are available for a particular response type, please refer to “Defining your first Test Case (With Example) -> Step 5: Specify Expected Body”.