Does your team have automated tests?
- Do you run automation as part of your tests?
- How do you schedule and execute your automated scripts?
- Do you generate reports out of your automation? Do you do this together with your manual tests?
It’s great to see how test automation is becoming more common in today’s testing organizations. Something that used to be mainly a buzzword in the industry is becoming a helpful tool for many teams around the world.
Regardless of whether you use Selenium, QTP, or even home-brewed scripts, automation is more and more a part of everyday QA life. Even though it is still far from being the silver bullet many tool vendors want us to believe, it allows us to streamline our processes and develop more stable products, helping us release high-quality products faster than before.
The problem is that, in order to make proper use of automation, you need to manage it and integrate it into your overall testing process. For example, you need to understand which of your tests are automated vs. manual; or sometimes you are asked to generate reports that show your stakeholders the overall testing status of your product (regardless of whether each test is automated or not!).
What were we looking to solve?
After getting a number of requests from users to expand PractiTest’s functionality to also manage automated tests, we started searching for the best possible solution.
Basically we wanted to come up with an approach that would allow users to:
- Manage their tests and runs, whether manual or automated, in one place.
- Increase the visibility of the automation efforts and teams, eliminating any communication or coordination issues.
- See the results of all their tests in one place, creating comprehensive reports.
- Get a solution that is simple to deploy and flexible enough to manage as many tools as possible.
Our solution – the xBot automation agent
So, after talking to our users and reviewing many possible approaches, we chose to develop what we call the xBot automation agent: a small Java utility that runs on every OS (Windows, Mac, Linux) and is able to execute scripts written with any tool or in any language.
Users who want to run automated tests simply map their automation scripts to tests in the Test Library, then create a Test Set with their scripts and schedule the time when they want their tests to start running. When this time arrives, the xBot gets the order from PractiTest to execute the scripts locally. Finally, after each test is run, the agent uploads the results to PractiTest as part of the test execution log.
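To make the “execute locally, then report back” step more concrete, here is a minimal sketch of what such an agent does for each mapped script. Everything in it is our own illustration (the function name, the result fields and the pass/fail convention are hypothetical, not the xBot’s actual code): run the script as a local process, derive a status from its exit code, and collect the entry that would then be uploaded to the test execution log.

```python
import subprocess
import sys

def run_mapped_script(command):
    """Execute one mapped automation script locally and build the
    result entry an agent would upload to the execution log.
    (Illustrative sketch only, not the actual xBot implementation.)"""
    completed = subprocess.run(command, capture_output=True, text=True)
    return {
        "exit_code": completed.returncode,
        # Convention assumed here: exit code 0 means the script passed.
        "status": "PASSED" if completed.returncode == 0 else "FAILED",
        "output": completed.stdout.strip(),
    }

# Simulate two mapped scripts: one that passes and one that fails.
passing = run_mapped_script([sys.executable, "-c", "print('login test ok')"])
failing = run_mapped_script([sys.executable, "-c", "import sys; sys.exit(1)"])
print(passing["status"], failing["status"])  # PASSED FAILED
```

Because the agent only launches processes and reads exit codes, it stays agnostic to the tool behind each script, which is what lets one agent drive Selenium, QTP, or home-brewed frameworks alike.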
Public release of the xBot agent
In the beginning of the year we started a private beta with a limited number of customers who had asked for this functionality. Later on, in February, we released the public beta in order to get more feedback on the solution.
In May and June we published a number of posts in our QAblog, LinkedIn and other forums, asking testers what their biggest challenges are when coordinating automated and manual tests.
Now, after taking the feedback from the Beta and the inputs from the web, we are proud to release this new solution to support automated testing via PractiTest.
We hope you’ll like our solution and invite you to keep providing us your feedback. Here’s more information about the xBot and PractiTest’s support for Automated Tests.
Do you know a company that may benefit from PractiTest? Now you can earn money by recommending users to work with PractiTest.
PractiTest will pay a referral fee of twenty percent (20%) of net revenues for the lifetime of your referral, subject to minimum requirements.
PractiTest Affiliate Program is simple:
- In order to receive a referral fee you need to refer at least one new paying customer every twelve months, generating a minimum of US $400 in revenues per month.
- As long as this minimum milestone is kept, you will receive the 20% referral fee for all the accounts you referred, cumulatively!
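For illustration, here is how the fee arithmetic works out. The function and numbers are our own sketch, not an official calculator, and we read the $400 minimum as applying to the combined monthly revenues of your referrals; check the official program terms for the exact conditions.

```python
def monthly_referral_fee(referred_monthly_revenues, rate=0.20, minimum=400):
    """Hypothetical illustration of the program's terms: a 20% fee on
    net revenues, paid only while referred accounts generate at least
    US $400 of combined revenue per month."""
    total = sum(referred_monthly_revenues)
    if total < minimum:
        return 0.0  # below the minimum milestone, no fee is paid
    return total * rate

# Two referred accounts paying $300 and $250 a month: $550 combined,
# above the $400 minimum, so the fee is 20% of $550 = $110.
print(monthly_referral_fee([300, 250]))  # 110.0
```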
As you can see, this is an awesome affiliate program, so go ahead and check out the details. This is your chance to benefit both yourself and your colleagues by recommending PractiTest.
This weekend we released a new version of PractiTest with many features requested during the last couple of months.
– Added the option to create a Private Dashboard tab where each user can define the graphs and information they want to see.
– Dashboard graphs are now clickable and will allow users to “zoom-in” and see the data behind the specific pie or graph displayed. So, for example, in a pie chart of the bugs for your release you can click and see the specific ones that are in status rejected or open.
– We added the ability to personalize which dashboard tab you want to be displayed by default on each of your projects.
– Added new dashboard items for test instances.
Exporting, Reporting and Printing
– We added the ability to print specific issues, tests, test sets and requirements straight from the form.
– Expanded the support to export specific custom views from all the modules in PractiTest.
– Users can now see the attachments defined in the test library as part of the test instances running within a test set.
– We added a visual indication when adding test instances to your test set whenever a specific test is already part of the given test set.
– Improvements to the Multiselect fields.
– Plus a number of additional smaller features and fixes.
As the amount of feature requests grows from month to month we’d like to remind and encourage you to use our User Feedback Forum in order to let us know what additional functionality you’d like to see in PractiTest. Remember that this forum is there not only to add your own requests, but also to vote for the features requested by other users that you’d also want to see in PractiTest. This in turn will help us plan and prioritize our work based on your priorities.
Feel free to let us know what you think by leaving comments and sending us your feedback!
This weekend we released a new version of PractiTest with one feature in particular that has been requested by a number of customers – the public release of our Automation XBot Plugin.
Using the XBot, you can run automated tests and scripts created with virtually any tool (Selenium, Watir, QTP, etc.) or any homegrown testing framework. You can learn more about the XBot from our Automated Tests documentation, and if you want to take part in this Public Beta just send us an email and we will activate this feature in your projects.
This version comes with a number of small additions in the Report Center, such as the ability to export your reports to PDF format as well as to work with Views on Requirement Reports.
The current release also fixes a number of smaller issues that were reported by some of you in the last couple of weeks.
We’ll be happy to get your feedback on these or any other part of PractiTest; you can mail us directly or use our User Feedback Forum.
Last night we released a new version of PractiTest with many of the features you’ve requested either using our feedback forum, or by contacting us directly.
Here’s a recap of this release’s main new features:
- Dashboard customization – control the information you display as part of your PractiTest Dashboard. Customize the graphs and tables you show in your project’s dashboard, when you log in to the system, according to your personal and specific needs and constraints.
- Run your tests in either Compact or Extended mode – we introduced a new way to run your tests in “Compact mode”. This mode allows you to see many steps together, displayed in a convenient grid format. There are still some functions that are available only in “Extended mode”, such as attaching files or manually linking issues, but if you have long and short steps, or if you are running Exploratory Testing Sessions, you may find this mode more convenient.
- Step Count field in the Test Library
- This version includes other small enhancements such as a “Run Next Test” button in the Test Run Window, information tooltips, enlarged comment fields, and some bug fixes.
We hope you find it useful!
Don’t be shy and use our User Feedback Forum to help us develop more great features.
Last month IDC published a Private Vendor Profile on PractiTest.
The report includes: Company Overview, Company Snapshot, PractiTest Watch Factor Score Versus Watch Factor Average Score, Potential Market, Market Disruption, Competitive Landscape, Technology/Solution and more.
We cannot tell you whether to get it or not (since it costs US $3,000), but it’s nice to know that after our previous product review, which was very positive, we are now being reviewed and followed by no less than IDC.
Yesterday (Nov 16th, 1:00AM GMT) we experienced some availability issues with PractiTest. Within a short time the team managed to handle everything and we were able to return to full service as usual.
We believe in honesty and openness; that’s the reason we log this and every issue we experience in our company blog. To serve as a reference, our last availability issue was back in June 2009 (17 months ago!), and as a result of it we decided to shift to Amazon EC2, mainly due to the better SLA we were able to receive from them.
We’re still investigating the current incident, but it was reassuring to see that our monitoring services and fast resolution procedures were able to kick-in promptly and effectively.
We are also very proud that our previous availability issue was 17 months ago, and we will do our best to keep this level of service availability going forward.
Please feel free to contact us if you have any questions.
This week we rolled out a new PractiTest update. This version comes with a large number of features and improvements that came directly from our customers’ feedback.
One of the biggest additions is the expansion of the Views & Filters functionality into the Requirements module. This feature was requested by a number of users who realized Hierarchical Views are what they need in order to manage their requirements more effectively.
As an example, Views can allow you to classify requirements as a hierarchical product tree and also organize them based on the Versions and Releases you have in your project plan; you will also be able to group them based on the customer that requested the requirement, or on the cross-over functionality it covers. As you can see, the possibilities and flexibility are endless – and so, sometimes, are our users’ requirements 🙂
Some additional functionality added as part of this version is:
- Expansion of the Actual Results field in the test runs. Giving you more space to add results and format them to provide a better description of your findings.
- Save & New button in the Requirements, Tests and Issues. This button will assist you when entering multiple entities for a single product area, by allowing you to save the entity (e.g. a test) while keeping all the fields in the general tab, so that you don’t need to enter them again from the beginning.
- Hide / Show all Step Details in the Test Library Steps. Providing a better and faster way to go over your steps when reviewing or updating your tests.
- And a large number of smaller features and fixes around views, exporting, emailing, search, tags, etc.
We greatly appreciate your feedback and encourage you to continue sharing your requests via our Users Feedback Forum.
It’s been more than a month since we introduced the new scalable Test Management model with TestSets and Test Instances.
In this release we improved the functionality around this new model:
- Reporting for Test Instances
- Sorting Test Instances inside a TestSet
- Adding a whole Test-view to a TestSet
- Improved GUI when adding the Instances
In addition to the above and the bug fixes we also added a one-way integration with Trac, like we already have with Bugzilla, Jira, and Fogbugz.
The change that we believe will be most appreciated by you is the change in the notification mechanism. Until today, notifications were sent to each user according to their preferences.
From this update on, notifications will no longer be sent to the user who made the change. There is no reason to send me an email notifying me of a change in a Test if I’m the one who made that change. So let’s assume there’s a new Issue. The system checks who should be notified: all the users whose preference is “for all issue changes”, plus the author if the author checked the “where I am the author” preference, plus the assignee if the assignee’s preferences say to be notified. Before sending, PractiTest removes the current user from this list (who, in this case, is the user who created the new issue). In a way, from now on, PractiTest is smart enough to really send only the notifications you need.
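The recipient rule above can be sketched in a few lines. This is our own reconstruction of the described behavior, not PractiTest’s actual code, and all the names are hypothetical: collect the “all changes” subscribers, add the author and assignee when their preferences ask for it, then drop whoever made the change.

```python
def notification_recipients(all_changes_subscribers, author, assignee,
                            author_wants_own, assignee_wants_assigned,
                            actor):
    """Compute who gets notified about a change (illustrative sketch
    of the rule described in the post, not actual PractiTest code)."""
    recipients = set(all_changes_subscribers)
    if author_wants_own:
        recipients.add(author)
    if assignee_wants_assigned:
        recipients.add(assignee)
    recipients.discard(actor)  # never notify the user who made the change
    return recipients

# Alice files a new issue assigned to Bob; Carol subscribes to all changes.
# Alice is the actor, so she is dropped even though she is the author.
recipients = notification_recipients(
    all_changes_subscribers={"carol"},
    author="alice", assignee="bob",
    author_wants_own=True, assignee_wants_assigned=True,
    actor="alice",
)
print(sorted(recipients))  # ['bob', 'carol']
```

Using a set makes the union of the three preference sources free of duplicates, and `discard` safely removes the actor whether or not they were in the list to begin with.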