Yesterday (Saturday 23rd) we upgraded our integration with both JIRA Server and JIRA Cloud.
End to end traceability of your Jira Requirements and Issues in PractiTest
See JIRA issues directly in PractiTest
JIRA users can now see their JIRA Issues directly in PractiTest, including all the relevant testing information.
Linked issues are synced to PractiTest in real time.
Integrate Requirements into PractiTest
Your requirements/user stories are now a part of your testing.
Users can now link requirements directly from JIRA to PractiTest and have testing traceability from end to end, including:
Requirements, tests, issues and the relevant reports.
Requirements are synced from JIRA so the information in PractiTest stays up to date at all times.
Users can choose to link or unlink requirements between PractiTest and JIRA.
New projects are now easier to set up than ever before.
Project, issue and PT IDs are automatically populated from your JIRA projects.
Start working with this integration today
JIRA Cloud users: the update was applied automatically!
JIRA Server users: Please update your PractiTest-JIRA add-on and contact us.
For more information, please review our JIRA integration page.
As part of ongoing operations to improve the functionality and performance of PractiTest, we will be carrying out maintenance operations on our servers during the weekend of August 9th, 2014. Even though most of our maintenance does not usually cause any downtime, this specific operation will require some downtime in our service.
In order to minimize the impact of this operation we have scheduled our maintenance to happen on Saturday, August 9th, 2014 at the following time:
- GMT – 5:00 to 9:00
- CEST – 7:00 to 11:00
- Eastern US – 1:00 to 5:00
- Pacific US – 22:00 (Friday Aug 8) to 2:00 AM
- Eastern Australia – 15:00 to 19:00
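The local times above are direct conversions of the 5:00-9:00 GMT window. As a quick sanity check, here is a small Python sketch using the standard `zoneinfo` module (the IANA zone names chosen for each region are our own assumptions):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # Python 3.9+

# Maintenance window: Saturday, August 9th, 2014, 5:00 to 9:00 GMT
start = datetime(2014, 8, 9, 5, 0, tzinfo=timezone.utc)
end = datetime(2014, 8, 9, 9, 0, tzinfo=timezone.utc)

# Representative IANA zones for each region listed above (assumed)
zones = {
    "CEST": "Europe/Paris",
    "Eastern US": "America/New_York",
    "Pacific US": "America/Los_Angeles",
    "Eastern Australia": "Australia/Sydney",
}

for label, tz in zones.items():
    local_start = start.astimezone(ZoneInfo(tz))
    local_end = end.astimezone(ZoneInfo(tz))
    print(f"{label}: {local_start:%a %H:%M} to {local_end:%a %H:%M}")
```

Running this reproduces the schedule above, including the Pacific US window starting at 22:00 on Friday, August 8th.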
You can follow our live updates for this operation via PractiTest’s Twitter account.
As always, we will be more than happy to provide additional information and answer any questions you may have. Feel free to send your questions to our support.
The PractiTest Team
Agile Development and Agile Testing
Today, more and more teams are shifting over to AGILE. One of the interesting facts about agile development is that it comes in many variations, from Scrum to Extreme Programming and more; but regardless of the approach you follow, there are a number of principles in agile development that everyone agrees on:
- Customer satisfaction is achieved by frequent delivery of useful software
- Changes in requirements are part of the software development process
- Regular adaptation is needed to comply with the changing circumstances
When you work based on frequent deliveries, constant changes and regular adaptations, how can you still manage an efficient testing process?
Part of the philosophy behind agile development & agile testing talks about shifting towards automated testing as the means for covering regression testing, and test-driven development to achieve more stable code from the beginning. This advice is helpful, but it still falls short of providing the solution and the guidance a QA team needs to cope with the challenges of shifting to AGILE.
There are books that talk about this, and Joel wrote about agile testing in his blog. But in the end there is nothing like the experience of working with many organizations that shifted to AGILE with the help of PractiTest, and the knowledge we’ve gained from our own experience developing PractiTest as an agile team. The tools you use will help you achieve your testing and development goals in the same way that a hammer and nails help the carpenter make his furniture, and flexibility & adaptability are a must when you are looking for a tool to help you manage your agile process.
PractiTest Test Management Solution for Agile Development
A quick search on the Internet will show there are many solutions specifically designed to handle agile development, and some of them even claim to support agile testing, but we still see many customers who check them out and find they are missing important functionality needed to cover the testing process.
As a methodological test-management solution, from time to time we are asked to show how we support agile development and testing, and an interesting fact is that we support Agile Testing without having any specific feature developed solely for this purpose!
How do we do it? The answer is simple, we believe in flexibility. PractiTest enables YOU, the user, to customize the system based on your process, your product and your needs.
If you are working based on sprints, for example, you can customize the system by creating a custom field called SPRINT and adding it to your requirements (or user stories), to your tests and to the issues in your project. Once you have this field in place you can organize all your data and work based on the sprints you defined. You can then create views and reports that make it easy for everyone to gain visibility into their tasks and those of the team, and allow the whole team to manage their work quickly and easily.
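The idea of organizing work by a custom SPRINT field can be sketched in a few lines of Python. The item structure and field names here are purely illustrative, not PractiTest's actual data model:

```python
from collections import defaultdict

# Hypothetical requirements, tests and issues, each tagged with a
# custom "Sprint" field as described above (illustrative data only).
items = [
    {"type": "requirement", "name": "Login via SSO", "Sprint": "Sprint 12"},
    {"type": "test", "name": "Verify SSO redirect", "Sprint": "Sprint 12"},
    {"type": "issue", "name": "Redirect loops on logout", "Sprint": "Sprint 13"},
]

def view_by_sprint(items):
    """Group requirements, tests and issues into per-sprint views."""
    views = defaultdict(list)
    for item in items:
        views[item["Sprint"]].append(item)
    return dict(views)

for sprint, entries in sorted(view_by_sprint(items).items()):
    print(sprint, "->", [e["name"] for e in entries])
```

Adding a new sprint is just a new field value; no schema or workflow changes are needed, which is the flexibility argument made above.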
What’s even better is that you can modify the values in the fields, and even the fields themselves, with a small number of “clicks” and in a matter of seconds (without the need for complicated customizations or processes).
Another aspect common to Agile Testing is the popularity of Exploratory Testing among testers. This approach works mainly by defining testing charters up front and documenting your testing steps at the same time you execute the tests. In PractiTest this is easily achieved with the functionality that lets you edit your test steps even while you are running them within the Test Sets & Runs module.
There are many aspects that define Agile Development & Agile Testing, and the truth is that each organization, and even each team, will approach Agile in its own individual way (based on its needs and constraints). For this same reason, we believe it is not correct, or even possible, to try to define for you how you should manage your agile testing process. The best approach, the one we believe in, is to give you the freedom to decide how to work, and to support your process, your way!
“Thanks for a great product!”
Michele Williams, Core Apps
The latest news headlines regarding the healthcare.gov debacle are an excellent example of how not to launch a new website, application or any other IT product. The common development pitfalls, lack of visibility and communication, were evident throughout the whole development process.
As many of you will agree, lack of visibility and communication are illnesses suffered by many software development and IT projects. The problematic signs are usually there, but they are not communicated to higher management.
Without the proper visibility into the process and without the ability to understand the status of the project at all times, management cannot make the correct decisions to ensure the success of the project. Healthcare.gov is an unfortunate example of the catastrophic consequences of noise interfering with the message communicated to higher management.
How can such problems be avoided? … More on the subject in our latest PR release
Hello to all PractiTesters!
Recently we performed another update of PractiTest with important additions to make your work better and more effective. Let’s start with some additions to make your work clearer, especially when you are only getting started with the system. We believe that each tester (or non-tester) working with PT should understand our methodology (and read the page at least once).
As part of our Testing Methodology you can see that:
- A Test Set is a collection of Test Instances
- An Instance is linked to a Test (in the Test Library), so that each Test can be linked to multiple Instances
- Each Instance can have multiple Test Runs (where the status of the Instance is the status of its last Test Run).
The Test Legend
To avoid confusion (“where am I right now?”), we added a Test Legend in all testing related entities, indicating the location in the application. We believe that especially for new PractiTesters, this legend will help understand our methodology and the links between the different entities.
The standard Testing Scenario
- Each Test (in the Test Library) is defined only once, with all its relevant steps
- Test Sets, and their respective Instances, are created whenever there’s a reason to start running these tests (e.g. a new version/release, new functionality, etc.)
- Whenever a user goes to the Instance, and presses the Run button, a new Test Run is created, copying all the steps to the test run (so if the Test in the Library is changed later, the steps in the run will not change)
- If for any reason an Instance needs to be re-run in a specific Test Set, the tester creates an additional Run rather than overwriting the previous one, ensuring all run history is saved.
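The entity relationships and the copy-on-run behavior described above can be sketched as a small Python model. The class and field names here are ours, chosen for illustration, not PractiTest's internal implementation:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Test:                      # lives once in the Test Library
    name: str
    steps: List[str]

@dataclass
class Run:                       # a single execution of an Instance
    steps: List[str]             # snapshot of the steps at run time
    status: str = "Not Run"

@dataclass
class Instance:                  # links a library Test into a Test Set
    test: Test
    runs: List[Run] = field(default_factory=list)

    def start_run(self) -> Run:
        # Copy the steps, so later edits to the library Test
        # do not change what this Run recorded.
        run = Run(steps=list(self.test.steps))
        self.runs.append(run)
        return run

    @property
    def status(self) -> str:
        # The Instance reports the status of its last Run.
        return self.runs[-1].status if self.runs else "Not Run"

login = Test("Login", ["Open page", "Enter credentials", "Submit"])
instance = Instance(login)
run = instance.start_run()
run.status = "Passed"
login.steps.append("Check welcome banner")  # library change...
print(run.steps)        # ...does not affect the recorded run
print(instance.status)  # "Passed" - the status of the last Run
```

Starting a second run simply appends to `instance.runs`, so the full run history stays available, matching the last point above.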
Better Test Sets and Runs navigation
In addition to the above we decided to streamline the testing process, making the run button much more powerful, and skipping some screens when they’re not required:
- If the user starts a Run from the instance grid, and the Instance has no required fields or older runs, it goes straight to the run window (with the steps), enabling you to start testing right away.
- If there are required fields in the instance that are not filled, it will go to the instance window.
- And if there are older runs (that you may want to review before starting your new Run!), it will go to the instance window, showing the previous runs and enabling you to check or update one of them.
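The routing rules above boil down to a simple decision function. This is only a sketch of the logic as described, with an assumed dictionary shape for the instance, not PractiTest's actual code:

```python
def run_button_target(instance):
    """Decide which screen the Run button opens, per the rules above.

    `instance` is an assumed dict with a "fields" mapping of required
    field names to values (None = not filled) and a "runs" list.
    """
    missing_required = [name for name, value in instance.get("fields", {}).items()
                        if value is None]
    if missing_required:
        return "instance window"   # fill the required fields first
    if instance.get("runs"):
        return "instance window"   # review the previous runs first
    return "run window"            # nothing in the way: straight to the steps

# No missing required fields, no older runs: go straight to the run window.
print(run_button_target({"fields": {"Tester": "dana"}, "runs": []}))
```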
We are sure these changes will make your testing experience more productive. And we invite you to share with us additional ideas to make your work even more effective in the future.
“When you post an issue they get back to you promptly and their turnaround is second to none. They’ve even included some features we requested in their releases. You can’t get better than that.”
Kfir Hemed, Head of the Verification Unit at Radwin
“The PractiTest team has always been great to work with. If I have found an issue they respond back quickly and very professionally. Without spending hundreds of thousands of dollars a year, we are getting a great product with great support that fits the needs of our department.”
Stephen Musal, Freeman’s Lead QA Tester
Since we released the new GUI last month, we got some amazing feedback from our users.
We ran an online survey to explore more about the things you liked and those you thought we could still improve. We found that although most of you really loved the changes we made, you still thought some things could be improved.
Many users found it annoying that when they moved their mouse over the ID in the grid, the popovers would open automatically; we also had a couple of bugs in this feature (which we fixed in one of those patches we’re doing all the time…). We even added an option in the personal settings to stop those popovers from opening automatically.
But we really think this is a great feature, so we decided to solve the issues that were “annoying” some users. We started by changing the functionality: when the mouse hovers over the ID, we now show an “info” button, so the preview opens only with a mouse click, and another click closes it.
In addition, we added some useful information to those previews:
- Tests: the first 5 steps
- Test Sets: the first 5 instances
- Requirements: the traceability
We are sure this feature is now a lot better and it provides even more value than before.
Note: the disable-popover flag in the personal settings still works (turns off the popover) in the places where the feature opens in the “old way”, without pressing the popover button (i.e. not in the grids). But we foresee that in the next couple of releases this flag will become irrelevant, as we migrate all the popover functionality to work as it does in the grids today.