The ISTQB glossary belongs in every QA engineer's toolkit
Picture this: a new QA engineer joins the team and asks where to find the "test scripts." One person points to the automation repository. Another sends them a folder of manual test procedures. A third says, "We don't have scripts, just test cases." Everyone's technically correct — and completely talking past each other.
This scenario plays out constantly in QA teams. The testing industry lacks enforced terminology standards, so the same words mean different things depending on who's talking. That's where the ISTQB glossary comes in — and no, using it doesn't require getting certified.
What ISTQB actually provides
The International Software Testing Qualifications Board (ISTQB) is best known for its certification program. As of 2025, they've issued over one million certifications across 130+ countries. But certification is just one part of what they do.
More relevant for day-to-day work: ISTQB maintains a free, searchable glossary of software testing terminology. This glossary has become the de facto standard for how the industry defines testing concepts. Whether or not anyone on the team holds an ISTQB certificate, the terminology shows up in job postings, vendor documentation, and technical discussions everywhere.
Terms that cause the most confusion
Some terminology debates come up over and over. Here are a few that the ISTQB glossary clarifies:
Test case vs test script vs test procedure
According to the ISTQB glossary, a test case is "a set of input values, execution preconditions, expected results and execution postconditions, developed for a particular objective or test condition." It defines what to test.
A test procedure (or test script, in a manual testing context) defines how to execute the test — the specific steps to follow. In automation, "test script" typically refers to the code that implements the test.
The distinction matters. A test case might say "verify login with valid credentials succeeds." The test procedure spells out each click and field entry. The automated test script contains the actual code.
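To make the distinction concrete, here is a minimal sketch in Python. The `login` function is a hypothetical stand-in for the system under test, not a real API; the dictionary plays the role of the test case, and the function below it plays the role of the automated test script.

```python
def login(username: str, password: str) -> bool:
    """Hypothetical stand-in for the system under test."""
    return username == "alice" and password == "s3cret"

# Test case: WHAT to test -- inputs, preconditions, expected result.
TEST_CASE = {
    "objective": "verify login with valid credentials succeeds",
    "inputs": {"username": "alice", "password": "s3cret"},
    "expected": True,
}

# Test script (automation): code that implements the test case.
# A manual test procedure would instead spell out these steps in prose:
# "1. Open the login page. 2. Enter 'alice' in the username field..."
def test_valid_login():
    actual = login(**TEST_CASE["inputs"])
    assert actual == TEST_CASE["expected"]

test_valid_login()
print("test passed")
```

The same test case could be implemented by several scripts (one per platform, say), which is exactly why conflating the two terms causes confusion.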
Error vs defect vs failure
These three terms describe different points in a chain of events:
- Error (or mistake): A human action that produces an incorrect result — a developer writes buggy code, a tester misunderstands a requirement
- Defect (or bug): A flaw in the software that can cause incorrect behavior
- Failure: The actual incorrect behavior observed during execution
The chain works like this: someone makes an error, which introduces a defect into the code, which may eventually cause a failure when the right conditions trigger it. Not every defect causes a failure — some bugs sit dormant until specific scenarios activate them.
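The dormant-defect case is worth seeing in code. In this illustrative sketch, the developer's error (assuming the input list is never empty) introduces a defect that produces no failure until the triggering input arrives:

```python
def average(values):
    # Defect: the developer's error was assuming the list is never
    # empty, so there is no guard before dividing by len(values).
    return sum(values) / len(values)

# No failure yet: the defect exists, but these inputs never trigger it.
assert average([2, 4, 6]) == 4

# Failure: the incorrect behavior finally becomes observable when the
# triggering condition (an empty list) occurs during execution.
try:
    average([])
    failure_observed = False
except ZeroDivisionError:
    failure_observed = True

print("dormant defect triggered:", failure_observed)
```

Every run before the empty list arrives looks healthy, which is precisely why "no failures observed" never proves "no defects present."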
Test automation framework
This one gets thrown around loosely. The ISTQB definition is specific: "a tool that provides an environment for test automation. It usually includes a test harness and test libraries."
So when someone says "we built a test automation framework," the question becomes: did they build tooling that provides an environment for writing and running tests? Or did they just organize their tests into folders and add some helper functions? Both are valid work — but calling the latter a "framework" creates confusion when discussing architecture decisions.
The communication cost of inconsistent terminology
Terminology mismatches create friction in several ways:
Onboarding takes longer. New team members spend weeks decoding what the team actually means by common terms. "Test case" might mean a Jira ticket, a row in a spreadsheet, or a method in the test codebase depending on who's explaining.
Cross-team collaboration suffers. When the QA team, developers, and product managers use the same words differently, requirements get misunderstood. "We need more test coverage" means different things to different people.
Vendor and contractor relationships get messy. External teams often use ISTQB terminology by default. If the internal team uses different definitions, proposals and estimates won't align with expectations.
Technical discussions go in circles. Debates about testing strategy waste time when participants aren't even talking about the same concepts.
The certification debate — and why it's separate
The QA community has strong opinions about ISTQB certification itself. Critics argue the exams test memorization over practical skills. Supporters point to the value of a shared credential and structured learning path. That debate won't get settled here.
But here's the thing: the terminology question is separate from the certification question.
Knowing that ISTQB defines "regression testing" as "a type of change-related testing to detect whether defects have been introduced or uncovered in unchanged areas of the software" doesn't require sitting for an exam. It just requires reading the glossary. The definitions exist independently of the certification program.
A team can collectively decide ISTQB certification isn't worth pursuing while still adopting ISTQB terminology as a shared vocabulary. The two decisions don't have to be linked.
A practical starting point
For QA engineers — whether focused on manual testing, automation, or both — spending an hour with the ISTQB glossary pays dividends. Not to memorize every term, but to get familiar with how the industry defines common concepts.
A few terms worth looking up beyond the ones discussed above:
- Test oracle — a source for determining expected results, used to judge whether a test passed or failed
- Test basis — the documents and information used to create test cases
- Test condition — a testable aspect of a component or system
- Test harness — the tools and code that support test execution
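The test oracle concept in particular benefits from an example. One common pattern is using a trusted independent implementation as the oracle; in this sketch, Python's built-in `sorted` serves as the oracle for a hand-rolled insertion sort (both function names are illustrative):

```python
def insertion_sort(items):
    """Implementation under test: a simple insertion sort."""
    out = []
    for x in items:
        i = len(out)
        while i > 0 and out[i - 1] > x:
            i -= 1
        out.insert(i, x)
    return out

# The oracle tells us what the expected result SHOULD be, so the
# test can judge pass/fail without hard-coding every expected output.
inputs = [[3, 1, 2], [], [5, 5, 1], [9, -2, 0, 4]]
for data in inputs:
    assert insertion_sort(data) == sorted(data)  # oracle comparison

print("oracle agrees on all inputs")
```

Without some oracle — a reference implementation, a specification, a known-good dataset — a test has no principled way to decide what "correct" means.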
When the whole team shares vocabulary — even loosely — technical discussions get more productive. Requirements reviews catch ambiguities earlier. Documentation actually communicates.
The glossary is free, searchable, and doesn't require registration. Bookmark it. Whether certification ever makes sense is a personal career decision, but the terminology belongs in every QA engineer's toolkit regardless.