TestComplete Automation Best Practices to Follow:
After completing a few engagements using TestComplete, we have compiled these best practices, which we would like to share with you. They might help improve automated testing processes and get releases out on time.
Optimize Test Scripts so They Resist UI Changes:
Your automated tests should be reusable, maintainable, and resistant to changes in the application’s UI. We recommend that automated tests not rely on screen coordinates to locate controls, as coordinate-based lookup is fragile and breaks easily.
While writing test scripts, we should make sure they are not overly verbose, contain no dead code, and meet the agreed coding standards.
Be cautious with test data loops: ensure the loop's end condition is always reached so the loop cannot run infinitely.
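As a minimal illustration (plain Python rather than TestComplete script code; the function and marker names are hypothetical), a data-driven loop can pair an explicit end condition with a safety cap so bad data cannot make it run forever:

```python
MAX_ITERATIONS = 1000  # safety cap; tune to your expected data volume

def run_data_driven_loop(rows):
    """Process test-data rows; stop at the end of data or at the cap."""
    processed = 0
    for i, row in enumerate(rows):
        if i >= MAX_ITERATIONS:
            # Fail loudly instead of looping forever on a bad data source
            raise RuntimeError("Loop cap reached - check the end condition")
        if row is None:  # explicit end-of-data marker
            break
        processed += 1
    return processed

print(run_data_driven_loop(["a", "b", None, "c"]))  # stops at the marker: 2
```

The cap never fires in normal runs; it only turns a silent infinite loop into an immediate, diagnosable failure.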
Test scripts should also use consistent indentation and code-level comments so that they are readable for reviewers and easy to maintain and update in future runs.
Use key TestComplete features: utilize the OCR-enabled Intelligent Quality add-on to detect and test application components that were previously undetectable, including PDFs.
Record Test Script before using Descriptive Programming:
It is always a good idea to record a test script before enhancing it with descriptive programming.
This gives you skeleton code to work from, so you do not have to write everything from scratch.
This saves a lot of coding time and is especially useful for those who don't have much experience.
When you record actions against the tested application, TestComplete automatically recognizes those actions and converts them into a test script.
One of the best practices while recording a test is to keep the help notes handy.
Test Complete Interaction:
How TestComplete interacts with your application depends on whether you have prepared the application for testing.
For a white-box (Open) application, a tester can access its methods, fields, and properties; for a black-box application, image-based tests are created.
Choosing the right method to test your app results in accurate scripts and, hence, accurate test results. Images can be saved in the image repository as shown in the figure below.
Operate pointer devices such as the mouse carefully: during test execution, the tester should not move the mouse or press any key, since doing so can interfere with the running test actions and produce unexpected behavior. Leave the screen as is while the test runs for accurate results.
Regular housekeeping of test logs: after each test execution cycle, results are stored as log files, which can consume a lot of disk space when tests are run repeatedly. To avoid this, the tester can either delete the log files manually or limit the number of log files created. Each log file lists all the actions with the time stamp at which the action completed.
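As a sketch of such housekeeping (plain Python; the file names and flat folder layout are assumptions, not TestComplete's actual log storage format), a small script can keep only the newest N log files:

```python
import os, tempfile, time

def prune_logs(folder, keep=5):
    """Delete all but the `keep` most recently modified files in `folder`."""
    files = [os.path.join(folder, f) for f in os.listdir(folder)]
    files = [f for f in files if os.path.isfile(f)]
    files.sort(key=os.path.getmtime, reverse=True)  # newest first
    for old in files[keep:]:
        os.remove(old)
    return len(files[:keep])

# Demo against a throwaway directory with 8 fake log files
d = tempfile.mkdtemp()
for i in range(8):
    path = os.path.join(d, f"log_{i}.txt")
    with open(path, "w") as fh:
        fh.write("run")
    os.utime(path, (1_000_000 + i, 1_000_000 + i))  # distinct, increasing mtimes
print(prune_logs(d, keep=5), len(os.listdir(d)))  # 5 5
```

Such a script could run after each execution cycle, or on a schedule, so log growth never has to be managed by hand.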
Check that all preconditions are met: before running a recorded Android test, make sure all the preconditions are met; otherwise the test may fail. For example, say the app was already running when you recorded the test, but you closed it before executing the recorded test. The run would fail because the script expects the app to be running.
Running tests on multiple devices: to run one test on multiple devices, it is very important to prepare each device for test execution. Always make sure the device setting "Stay awake" is enabled for smooth test execution.
Other Tips & Tricks for Test Complete
Disable Test Visualizer to increase the speed of your tests and save hard disk space.
To speed up automated test execution, use operation-specific optimized waits in your tests, such as WaitProperty, WaitWindow, WaitChild, and WaitProcess.
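These Wait* methods poll until a condition holds or a timeout elapses, instead of sleeping for a fixed time. A minimal plain-Python sketch of the same polling pattern (not the TestComplete API itself; the "window" example is hypothetical):

```python
import time

def wait_for(condition, timeout=5.0, interval=0.1):
    """Poll `condition` until it returns a truthy value or `timeout`
    seconds elapse; return the value, or None on timeout. Mirrors the
    idea behind WaitProperty/WaitWindow-style calls: wait exactly as
    long as needed rather than a fixed, worst-case sleep."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(interval)
    return None

# Example: a "window" that becomes ready on the third check
state = {"checks": 0}
def window_ready():
    state["checks"] += 1
    return "window" if state["checks"] >= 3 else None

print(wait_for(window_ready, timeout=2.0, interval=0.01))  # window
```

The payoff is that fast machines stop waiting as soon as the object appears, while slow machines still get the full timeout.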
Avoid NameMapping if you have dynamically changing environments, as the static values it stores will cause failures. Instead, we recommend using alternative identification methods, such as searching for objects by their properties, in your automated tests.
Work with the Object Browser when dealing with the Page object. Any browser window requires working with process windows, buttons, and navigation elements, all of which are outside of the Page object.
Since each object in a process is treated as a test node in TestComplete, they all share certain basic properties and a set of functions that can be utilized.
Avoid dependent settings.
Dependent settings can be found in paths to tested applications, page addresses in web testing, and database checkpoints.
Use relative paths where possible, and move the tested applications along with the project.
If your web page resides locally, its address may contain localhost instead of a server name, making your code system-dependent. Likewise, your database connection string may contain computer-specific data, making it system-dependent. To avoid this, move the database to a shared folder and specify the network path to the database.
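A small Python sketch of the idea (the DB_HOST/DB_PATH variable names and the share path are hypothetical): build the connection string from configuration with a network-path default, instead of hard-coding a local, machine-specific path:

```python
import os

def build_connection_string(env=os.environ):
    """Assemble a connection string from environment configuration;
    default to a shared network location rather than a local drive."""
    host = env.get("DB_HOST", r"\\fileserver\shared")  # hypothetical share
    db = env.get("DB_PATH", "testdata.mdb")
    return f"Provider=Microsoft.Jet.OLEDB.4.0;Data Source={host}\\{db}"

# The system-dependent version this replaces would hard-code something like:
#   "Provider=...;Data Source=C:\\Users\\me\\testdata.mdb"
print(build_connection_string({}))
```

Any machine that can reach the share now runs the same test unchanged; per-machine differences live in the environment, not the script.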
It is advisable not to run TestComplete and TestExecute at the same time on your machine, since the two test engines will conflict and produce unexpected results. To avoid this, run them separately, one after the other.
Install the latest TestComplete updates whenever possible to get the additional benefits they provide.
Enhancing Test Complete Performance
You can increase the speed of test runs in several ways:
Disable Features You Do Not Use
Prepare for Automated Testing
Optimize Your Tests
Helpful Resources
Disable Features You Do Not Use
Test Complete includes multiple features allowing it to run in a wide variety of situations and provide extensive feedback on events happening during tests. However, if you need to test a particular application and have already optimized your test, these features may slow down your test execution instead. In this case, we recommend that you disable or limit them.
You can reduce the size of the Name Mapping repository:
During test recording, Test Complete automatically captures images of mapped objects and adds them to the Name Mapping repository. If you do not want to store the images or if you need to decrease memory consumption, select Do not store in the Store images section of the Store Name Mapping Data dialog, in the Name Mapping editor:
This will remove images of all mapped objects from the Name Mapping repository.
By default, the Name Mapping repository stores complete information on properties and methods of mapped objects. This information is relevant only at design time and does not affect the test runs.
You can disable storing object methods and properties to reduce the Name Mapping repository. To do this, select Do not store in the Store Code Completion data section of the Store Name Mapping Data dialog, in the Name Mapping editor:
If you do not use Name Mapping in your project, disable the Map object names automatically option and remove the NameMapping project item from the Project Explorer.
You can disable Test Visualizer. Capturing images takes additional time and requires a lot of disk space.
As an alternative, you can configure Test Visualizer only to capture images and not to collect data on test objects whose images it captures.
You can also configure Test Visualizer not to update frames during test runs.
In the File | Install Extensions dialog, you can disable plugins that your tests do not use. This will narrow down the Test Complete functionality and speed up searching for objects and methods.
We recommend that you disable the following plugins (if you do not use them in your tests):
Visual C++ Application Support
Delphi and C++ Builder Application Support
UI Automation Support
Microsoft Active Accessibility Support
Text Recognition
As an alternative, if you use UI Automation, Microsoft Active Accessibility or Text Recognition in your tests, you can configure your project to recognize only the objects necessary for your tests. See below.
You can also disable other plugins you do not use in your tests. This will not affect the test run speed, but Test Complete will start faster.
If you use supplementary object recognition approaches (UI Automation, Microsoft Active Accessibility or Text Recognition) or if you test JavaFX applications, we recommend that you configure your project to recognize only the objects necessary for your tests.
For instance, we recommend that you not use the * (all objects) preset in the list of recognized objects that you specify in your projects:
Otherwise, it can take a long time for TestComplete to expose all the objects, which can slow down your test performance.
In addition, if the accessibility information or the UI Automation information in your tested application is implemented incorrectly, the application may fail when Test Complete tries to access that information. If, in your test, you do not need the active accessibility information or the UI Automation information to recognize the tested objects, disable the appropriate plugins or configure your projects to exclude objects you do not need in your tests.
We recommend that you use presets that explicitly specify objects you use in your tests.
We also recommend that you configure your project to recognize the needed objects properly before you start creating tests. Otherwise, if you modify the recognition properties after creating tests, those tests may fail during the run.
You can filter applications with which you do not work in your tests by using the Process Filter. This will narrow down the number of applications Test Complete treats as Open and speed up working with them:
Open the project for which you want to filter applications and select Tools | Current Project Properties from the Test Complete main menu.
Click Open Applications | Process Filter on the resulting Properties page.
Select the needed process filter mode.
In the Process list, specify the applications Test Complete should treat as Open or black-box ones (depending on the process filter mode you have chosen):
For example, you can configure Test Complete only to treat applications stored in the Tested Application collection of your project as Open.
If your automated test does not use internal objects of the application, you can disable the Debug Info Agent, a special subsystem used to get access to them through debug information. Reading and parsing debug information can take some time, and this can significantly slow down the automated test execution.
To test Silverlight applications, you can use the Silverlight Open Application Support plugin or the Microsoft UI Automation technology. The plugin exposes all controls in Silverlight applications, including those that may be unnecessary for testing. If you do not need the methods provided by the plugin, you can increase the test run speed by using the UI Automation technology (see Testing Silverlight Applications - Overview). As an alternative, you can reduce the number of objects the plugin exposes by configuring the Object Mapping options of your project.
Prepare for Automated Testing
Unless specified otherwise, Test Complete performs tests with default options and in default operating system conditions. These settings are optimized for stability and usability. However, when running automated tests, they may be suboptimal. You can change them to increase the test execution speed.
Note: Please be careful: changing these settings can cause errors during the test run.
You can optimize the time during which Test Complete waits until the needed window or control becomes available:
By specifying the Auto-wait timeout in project options.
By setting a timeout value in the Auto-wait timeout column in keyword tests.
By calling the Wait method in script tests.
Set a shorter timeout interval to increase the test run speed.
Note: Using very short timeouts may cause errors, because your tests will not wait for objects to appear.
You can adjust the delay between actions using the Playback group of Test Complete project properties.
The lower the delay values are, the faster the test execution is.
Note: Using the lowest settings may cause errors during the test execution.
You can increase the speed of double-clicking in your operating system. This will also increase the speed of single clicks since Test Complete pauses the execution until the double-click timeout elapses.
Note: Fast double-clicks may be inconvenient for the users working on the computer.
You can disable extra UI visual effects of your operating system. Using settings that provide the best performance will increase the test execution speed.
Note: Changing UI settings may affect your application appearance. Make sure to adjust your tests appropriately.
Optimize Your Tests
Test Complete provides multiple ways to simulate user actions. In some cases, modifying your tests will increase their execution speed:
Recorded tests usually use high-level methods because they are more concise, easy to read and encapsulate more verifications. However, basic commands provide faster playback.
For example, the Sys.Desktop.MouseDown and Sys.Desktop.MouseUp methods work on a lower level than the Click method, and are simulated faster. The Click method, in its turn, works on a lower level as compared to the ClickItem method.
You can replace high-level methods in your tests with lower-level ones to increase the test run speed.
Note: Using lower-level methods can make tests more complex and less stable.
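To see why the layering costs time, here is a plain-Python simulation (not TestComplete's actual implementation; the item table is hypothetical): each higher level wraps the lower one and adds extra work, such as looking the item up by name:

```python
# Simulated layering: click_item -> click -> mouse_down/mouse_up,
# mirroring ClickItem -> Click -> MouseDown/MouseUp.
events = []

def mouse_down(x, y): events.append(("down", x, y))
def mouse_up(x, y):   events.append(("up", x, y))

def click(x, y):
    """Mid-level: a click is a down/up pair at the same point."""
    mouse_down(x, y)
    mouse_up(x, y)

def click_item(items, name):
    """High-level: find the item by name first, then click it.
    The lookup is the extra work the low-level calls skip."""
    x, y = items[name]
    click(x, y)

click_item({"OK": (10, 20)}, "OK")
print(events)  # [('down', 10, 20), ('up', 10, 20)]
```

The trade-off is visible in the simulation: the low-level calls are faster but know nothing about the item, so they break if its coordinates change.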
You can optimize your recorded low-level actions. When recording these, TestComplete often captures extra, unneeded delays or actions. You can delete these delays and extra actions to increase the playback speed. Be careful: deleting important actions may cause your test to behave in unexpected ways.
Avoiding Performance Problems
Distributing the testing of large projects between multiple Test Items will reduce the load on the testing machine.
Creating large script tests is not recommended. Using tests with over 15 MB of script code per gigabyte of available RAM may cause performance decay or a Test Complete crash.
Note: Due to its technical implementation, TestComplete 32-bit cannot access over 2 GB of RAM on a 32-bit Windows edition, or 3 GB of RAM on a 64-bit Windows edition. This allows using 30 and 45 MB of script code, respectively.
If you need to use more script code in your project, creating multiple suites and distributing the tests among them is recommended.
Keyword tests do not share this limitation.
Set up project NameMapping before recording the main test. Avoiding extra items in project NameMapping will prevent performance decay.
Running TestComplete on a computer that does not meet the recommended system requirements may reduce the test execution speed.
To increase the test execution speed, follow the advice in the Enhancing Test Complete Performance topic.
A large number of log items may cause performance decay, so we recommend removing all unnecessary items from the Project Suite Logs.
Continuous testing generates large amounts of data stored on the disk. If your test runs for an extended period of time, more than 2 GB disk space may be required.
Collecting data on test objects whose images the Test Visualizer captures during the test run (and test recording) may reduce the performance of your test (test recording). To avoid possible issues, we recommend that you configure Test Visualizer not to collect data on test objects.
Updating Test Visualizer frames during the test run may slow down the test playback. To avoid possible problems, we recommend that you configure Test Visualizer not to update frames during test runs. See Updating Visualizer Frames.
The Name Mapping repository in your project and the Find, FindAll, FindChild, and FindAllChildren methods in your tests use object properties to find the needed object in the application. We do not recommend using the VisibleOnScreen property to search for objects. It can take much time for Test Complete to get this property value, and Test Complete will have to get the value of every object it checks during the search. As a result, it can decrease your test performance significantly.
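The reasoning can be sketched in plain Python with a mocked object tree (the node structure and names are hypothetical): filter on cheap properties first, and pay for an expensive check only on the survivors:

```python
def find_all(node, cheap_filter, expensive_filter=None):
    """Recursively collect nodes matching the filters. The expensive
    filter (think VisibleOnScreen) runs only on cheap-filter survivors,
    so its cost is not paid for every node in the tree."""
    matches = []
    if cheap_filter(node):
        if expensive_filter is None or expensive_filter(node):
            matches.append(node)
    for child in node.get("children", []):
        matches += find_all(child, cheap_filter, expensive_filter)
    return matches

# Mocked object tree standing in for an application's window hierarchy
tree = {"name": "form", "children": [
    {"name": "btnOK", "children": []},
    {"name": "btnCancel", "children": []},
]}
hits = find_all(tree, lambda n: n["name"].startswith("btn"))
print([n["name"] for n in hits])  # ['btnOK', 'btnCancel']
```

When a search criterion is cheap (a name or class), every node can afford it; when it is expensive, restricting it to a pre-filtered subset keeps the search fast.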
Reducing System Impact on Test Result
If there are long pauses between test actions in your distributed tests, make sure that the screen saver, power saving options and lock screen are disabled. Otherwise, Test Complete will fail to continue the test run after the computer is locked.
If your test interacts with tray icons, make sure that they are always displayed. If the icon is not visible when you try to simulate a click over it, the test will fail. You can avoid this issue by enabling the Always show all icons and notifications on the task bar option.
When testing under a user account, check whether the user running the test has the required permissions.
Consider the specifics of the operating systems.
When performing Distributed or Parallel testing, configure Firewall on all test computers.
Using Windows environment variables (%windir%, %programfiles%, and others) instead of direct paths (such as "C:\Windows") allows your tests to run on machines where Windows is installed on a different drive.
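A minimal Python sketch of the same idea (the fallback path is an assumption for illustration): resolve the location through an environment variable rather than hard-coding the drive:

```python
import os

def resolve_windir(env=os.environ):
    """Prefer the machine's own %windir% value; fall back to the
    conventional C:\\Windows only when the variable is absent."""
    return env.get("windir", r"C:\Windows")

print(resolve_windir({"windir": r"D:\Windows"}))  # D:\Windows
print(resolve_windir({}))                         # C:\Windows
```

A test built this way follows each machine's actual Windows location instead of assuming drive C:.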
Helpful Resources
You can find more information on improving your experience with Test Complete on our website:
Best Practices of UI Test Maintenance
· Define naming conventions
  · Test case number
  · Component
· Test Attributes
  · Test case types
  · Priority
  · Description
  · Test Data
  · Assumption
Rules of Test Maintenance:
Keep tests granular and independent (create tests in the proper flow)
Avoid brittle and hard-to-maintain test scenarios (avoid adding test cases in the middle of a flow)
Otherwise, if one test in the sequence fails, all the following tests are impacted
Other Tips for Better Test Maintenance:
Design from the end-user point of view
Follow test creation best practices for effective test writing
Add details to steps and required test data (use logging functions such as log messages to flag important steps)
Make it reusable (turn repeated actions into reusable functions)
Test case clean-up
Delete duplicate tests (delete test cases that are not useful)
Copy-paste tests
Tests that check the same thing "just to be safe"
Delete uninformative test cases (delete test cases without proper failure reasons)
Not enough information on failure
Checking too many operations
Delete unreliable tests
Inability to run everywhere, anytime
Flaky tests
Unstable platform
Delete misleading tests (unclear test sets and incorrect steps let broken functions pass)
Tests that always pass
Changing data sets
Incorrect data sets
Unclear naming conventions
Review tests (after merging, or before a merge, review test cases and identify reusable ones)
Find problems early
Enforce best practices
Knowledge sharing
What to Review?
Readability (other users should be able to read the test cases)
Dependency on other systems to run (do not use unidentified functions in test cases)
Performance
Impact on other systems
Organize Test Cases:
We can run multiple test cases by organizing them into folders in the execution plan.
Name Mapping object update:
We can update an on-screen object in the Name Mapping repository by clicking the update button; it will recapture the object from your screen.
Merge objects from other projects:
We can merge objects used in other projects with the Merge with option in the Name Mapping section.
We can also rename object aliases for specific functional objects.
Name Mapping update using a page object identifier:
Objects on a page can be identified using a wildcard (*) in the dynamic URL, so the mapping matches the page even when parts of the URL change.
Behavioral Data-Driven Tests:
Tests can be created using script functions that address objects by name.
We can recognize objects at a deeper level by enabling the Text Recognition and UI Automation plugins; for text recognition, a specific language can be configured.
Sharing Test Results and Logs:
We can share test results and logs by exporting them from the log panel.
Unused Objects in Name Mapping:
We can identify unused objects by right-clicking in the Name Mapping panel and selecting Show Usage; it will show which objects are used and which are unused.
Validate Checkpoints:
Region checkpoints are used to verify images after a specific operation completes.
We can also post screenshots after completing test cases.
Other functions in TestComplete:
Generate random data
Cross browser testing
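As an illustration of random test-data generation (plain Python, not TestComplete's built-in data generators; the field names are hypothetical), seeding the generator keeps the data reproducible across runs:

```python
import random, string

def random_user(seed=None):
    """Generate one fake user record; a fixed seed makes the
    'random' data identical on every run, which keeps data-driven
    test failures reproducible."""
    rng = random.Random(seed)
    name = "".join(rng.choice(string.ascii_lowercase) for _ in range(8))
    return {"name": name,
            "age": rng.randint(18, 80),
            "email": f"{name}@example.com"}

a, b = random_user(seed=42), random_user(seed=42)
print(a == b)  # True: the same seed yields the same record
```

Leaving the seed as None gives fresh data each run for exploratory coverage; pinning it reproduces a failing data set exactly.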