TestNG Framework Notes
TestNG is a testing framework for Java, inspired by JUnit but with additional features and
improvements that support a wider range of testing needs. It is widely used because it offers
advanced functionality not found in classic JUnit, such as:
• Parallel Execution: Allows running tests in parallel, which speeds up the testing
process and optimizes resource usage.
• Flexible Configuration: Supports configuration through XML files, making it easier
to manage complex test setups and dependencies.
• Test Grouping: Enables grouping of test methods into groups, which helps in
executing specific sets of tests.
• Data-Driven Testing: Supports @DataProvider for running the same test with
different sets of data, facilitating comprehensive testing.
• Dependency Testing: Allows specifying dependencies between test methods,
ensuring that tests are executed in a specified order.
• Detailed Reporting: Provides detailed and customizable test reports, which help in
analyzing test results effectively.
Test-Driven Development (TDD) helps ensure that code is thoroughly tested from the
beginning, encourages simpler designs, and facilitates early detection of bugs.
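As a tiny, plain-Java illustration of the TDD cycle (class and method names are hypothetical, not from any project): the assertion against the desired behavior is written first, and the implementation is then added to make it pass.

```java
// Implementation written to satisfy a pre-written expectation (TDD's red-green step).
public class Calculator {
    public int add(int a, int b) {
        return a + b;
    }

    public static void main(String[] args) {
        // The "test" existed first: add(2, 3) must return 5.
        Calculator calc = new Calculator();
        System.out.println(calc.add(2, 3)); // prints 5
    }
}
```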
• Parallel Test Execution: TestNG supports running tests in parallel, which can
significantly reduce test execution time. JUnit 4 does not natively support parallel
execution (JUnit 5 added opt-in parallel execution support).
• Test Configuration: TestNG provides more flexible configuration options through its
XML file, allowing complex test setups and dependencies. JUnit primarily uses
annotations for configuration.
• Data-Driven Testing: TestNG’s @DataProvider allows for more flexible and
comprehensive data-driven testing than JUnit’s parameterized tests (the
Parameterized runner in JUnit 4, @ParameterizedTest in JUnit 5).
• Test Grouping: TestNG supports grouping tests into categories that can be executed
selectively. JUnit has only limited equivalents (@Category in JUnit 4, @Tag in JUnit 5).
• Dependency Management: TestNG allows specifying dependencies between test
methods, ensuring tests are executed in a specific order. JUnit does not offer this
feature natively.
4. Advantages of TestNG:
• Parallel Execution: TestNG can execute tests in parallel, which improves efficiency
and reduces test runtime.
• Flexible Test Configuration: Allows detailed configuration using XML files,
making it easier to manage large test suites.
• Advanced Reporting: Provides detailed and customizable test reports, which help in
tracking test results and debugging issues.
• Group and Dependency Management: Supports test grouping and method
dependencies, enabling more organized and manageable test execution.
• Data-Driven Testing: Facilitates running tests with multiple data sets using
@DataProvider, ensuring extensive test coverage.
• Annotation Support: Offers a wide range of annotations for different phases of test
execution, providing flexibility in test management.
Disadvantages of not using TestNG:
• Limited Parallel Execution: Running tests in parallel may not be supported or would
require additional setup and libraries.
• Lack of Advanced Configuration: Configuring complex test scenarios and
dependencies may become cumbersome and less flexible.
• No Built-in Grouping: Test grouping and selective execution would be harder to
implement.
• Limited Data-Driven Testing: Performing data-driven testing would be less efficient
or require additional libraries.
• Basic Reporting: Reporting capabilities might be less detailed or customizable
compared to TestNG’s offerings.
• Dependency Management Issues: Managing dependencies between tests would be
more challenging without built-in support.
Annotations in Java provide metadata about the code. In TestNG, annotations are used to
define test methods, setup methods, and teardown methods. Key TestNG annotations include:
• @Test: Marks a method as a test method. You can also specify parameters, priorities,
and groups.
@Test
public void testMethod() {
// test code here
}
• @BeforeMethod: Runs before each test method. Useful for setting up preconditions.
@BeforeMethod
public void setUp() {
// setup code here
}
• @AfterMethod: Runs after each test method. Used for cleanup operations.
@AfterMethod
public void tearDown() {
// cleanup code here
}
• @BeforeClass: Runs once before any test methods in the class. Used for class-level
setup.
@BeforeClass
public void setUpClass() {
// class-level setup code here
}
• @AfterClass: Runs once after all test methods in the class. Used for class-level
teardown.
@AfterClass
public void tearDownClass() {
// class-level teardown code here
}
• @BeforeSuite: Runs before any tests in the suite. Used for suite-level setup.
@BeforeSuite
public void setUpSuite() {
// suite-level setup code here
}
• @AfterSuite: Runs after all tests in the suite. Used for suite-level teardown.
@AfterSuite
public void tearDownSuite() {
// suite-level teardown code here
}
Batch execution refers to running a set of tests together in one go. It is useful for running
multiple tests or test suites in an organized manner. Achieve batch execution by configuring
test suites in testng.xml, which can group and execute multiple tests together.
<suite name="SuiteName">
<test name="TestName">
<classes>
<class name="com.example.TestClass1"/>
<class name="com.example.TestClass2"/>
</classes>
</test>
</suite>
testng.xml is used to configure and manage test execution. Here is a basic example:
<!DOCTYPE suite SYSTEM "http://testng.org/testng-1.0.dtd">
<suite name="SuiteName">
<test name="TestName">
<classes>
<class name="com.example.TestClass"/>
</classes>
</test>
</suite>
This file specifies the suite and test names, and lists the classes to be executed.
Grouping execution allows organizing tests into different categories or groups and running
them selectively. Achieve group execution by using the groups attribute in @Test
annotations and defining groups in testng.xml.
Example:
@Test(groups = {"smoke"})
public void smokeTest() {
// smoke test code here
}
@Test(groups = {"regression"})
public void regressionTest() {
// regression test code here
}
In testng.xml:
<suite name="SuiteName">
<test name="SmokeTests">
<groups>
<run>
<include name="smoke"/>
</run>
</groups>
<classes>
<class name="com.example.TestClass"/>
</classes>
</test>
<test name="RegressionTests">
<groups>
<run>
<include name="regression"/>
</run>
</groups>
<classes>
<class name="com.example.TestClass"/>
</classes>
</test>
</suite>
Parallel execution allows running multiple tests or test methods simultaneously to speed up
the testing process. Achieve parallel execution by configuring testng.xml with the
parallel attribute or using @Test annotations with threadPoolSize.
Example testng.xml:
<suite name="SuiteName" parallel="methods" thread-count="5">
<test name="TestName">
<classes>
<class name="com.example.TestClass"/>
</classes>
</test>
</suite>
In @Test:
@Test(threadPoolSize = 5, invocationCount = 10)
public void testMethod() {
// test code here
}
To execute specific test methods, use the <methods> tag within testng.xml to include or
exclude methods.
Example testng.xml:
<suite name="SuiteName">
<test name="TestName">
<classes>
<class name="com.example.TestClass">
<methods>
<include name="specificTestMethod"/>
</methods>
</class>
</classes>
</test>
</suite>
Cross-browser testing ensures that your application works correctly on different browsers.
Achieve this by configuring Selenium WebDriver with different browser drivers (e.g.,
ChromeDriver, FirefoxDriver) and running tests on each browser.
Example:
WebDriver driver;
@BeforeMethod
@Parameters("browser")
public void setUp(String browser) {
if (browser.equals("chrome")) {
driver = new ChromeDriver();
} else if (browser.equals("firefox")) {
driver = new FirefoxDriver();
}
// Additional setup code
}
@Test
public void testMethod() {
driver.get("https://example.com");
// Test code here
}
@AfterMethod
public void tearDown() {
driver.quit();
}
In testng.xml (note that <parameter> must be nested inside the <suite> or <test> element):
<suite name="SuiteName">
<parameter name="browser" value="chrome"/>
...
</suite>
• Selenium Grid: Configure nodes to run tests in parallel across different machines or
browsers.
• Cloud Platforms: Use services like BrowserStack or Sauce Labs to run tests across
different environments and browsers in parallel.
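Both approaches are usually driven from testng.xml. As a sketch (suite, test, and class names are illustrative), one suite can run the same class against two browsers in parallel by passing a different browser parameter to each <test> block:

```xml
<suite name="CrossBrowserSuite" parallel="tests" thread-count="2">
    <test name="ChromeTests">
        <parameter name="browser" value="chrome"/>
        <classes>
            <class name="com.example.TestClass"/>
        </classes>
    </test>
    <test name="FirefoxTests">
        <parameter name="browser" value="firefox"/>
        <classes>
            <class name="com.example.TestClass"/>
        </classes>
    </test>
</suite>
```

With parallel="tests", TestNG runs each <test> block on its own thread, so the Chrome and Firefox runs proceed simultaneously.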
17. Whenever we get a new build, how to start with smoke tests followed by
regression tests in automation?
1. Create Separate Test Suites: Define separate test suites for smoke and regression
tests in testng.xml.
2. Configure Test Execution: First execute the smoke tests. If they pass, proceed with
the regression tests.
Example (each suite lives in its own XML file, since a testng.xml file may contain only one
<suite> root element):
smoke.xml:
<suite name="SmokeSuite">
<test name="SmokeTests">
<classes>
<class name="com.example.SmokeTestClass"/>
</classes>
</test>
</suite>
regression.xml:
<suite name="RegressionSuite">
<test name="RegressionTests">
<classes>
<class name="com.example.RegressionTestClass"/>
</classes>
</test>
</suite>
A master testng.xml can then run them in order:
<suite name="MasterSuite">
<suite-files>
<suite-file path="smoke.xml"/>
<suite-file path="regression.xml"/>
</suite-files>
</suite>
18. How to disable TestNG test scripts when one class contains multiple @Test
methods?
To disable specific test methods, use the enabled attribute in the @Test annotation.
Example:
@Test(enabled = false)
public void disabledTestMethod() {
// This test method will be skipped
}
19. How to execute the same test multiple times with the same data?
To execute the same test method multiple times with the same data, use invocationCount in
the @Test annotation.
Example:
@Test(invocationCount = 5)
public void testMethod() {
// This test will run 5 times
}
20. How to execute the same test multiple times with different data?
Use @DataProvider to provide different sets of data for the same test method.
Example:
@DataProvider(name = "dataProvider")
public Object[][] dataProvider() {
return new Object[][] {
{"data1"}, {"data2"}, {"data3"}
};
}
@Test(dataProvider = "dataProvider")
public void testMethod(String data) {
// This test will run with different data
}
21. What is Assertion/CheckPoints, and how many assertions have you used in
real-time Selenium test scripts? Explain with a real-time example.
Assertions/CheckPoints validate actual outcomes against expected results. Common TestNG
assertions include Assert.assertEquals, Assert.assertNotEquals, Assert.assertTrue,
Assert.assertFalse, Assert.assertNotNull, and Assert.fail.
Example:
@Test
public void testLogin() {
WebDriver driver = new ChromeDriver();
driver.get("https://example.com/login");
// Verify the expected page loaded (the title value here is illustrative)
Assert.assertEquals(driver.getTitle(), "Login Page");
driver.quit();
}
The @Parameters annotation allows passing parameters from testng.xml to test methods.
Example:
<parameter name="browser" value="chrome"/>
@Test
@Parameters("browser")
public void testMethod(String browser) {
if (browser.equals("chrome")) {
driver = new ChromeDriver();
} else if (browser.equals("firefox")) {
driver = new FirefoxDriver();
}
// Test code here
}
The @Listeners annotation registers listener classes that receive test events (start, success,
failure, skip) and act on them.
Example:
@Listeners(TestListener.class)
public class TestClass {
// Test methods here
}
• TestNG Listener: Listens to TestNG test events, such as test start, success, failure,
and skips. It is used for test reporting and logging.
Example:
public class TestListener implements ITestListener {
@Override
public void onTestSuccess(ITestResult result) {
// Handle test success event
}
// Other methods
}
• WebDriver Event Listener: Listens to WebDriver events such as clicks and navigation.
Example (Selenium 3 style; in Selenium 4, AbstractWebDriverEventListener was removed in
favor of the WebDriverListener interface used with EventFiringDecorator):
public class WebDriverEventListener extends AbstractWebDriverEventListener {
@Override
public void beforeClickOn(WebElement element, WebDriver driver) {
// Handle before click event
}
// Other methods
}
25. How to execute only failed tests when batch execution is done?
After a batch run, TestNG generates a testng-failed.xml file in the output directory
(test-output by default) containing only the failed tests; re-run that file to execute just the
failures. To retry failing tests automatically within the same run, implement IRetryAnalyzer.
Example:
public class RetryAnalyzer implements IRetryAnalyzer {
private int retryCount = 0;
private static final int maxRetryCount = 3;
@Override
public boolean retry(ITestResult result) {
if (retryCount < maxRetryCount) {
retryCount++;
return true;
}
return false;
}
}
Attach the analyzer to a test with @Test(retryAnalyzer = RetryAnalyzer.class).
26. How to specify dependencies between test methods?
Use the dependsOnMethods attribute in the @Test annotation so that a method runs only after
the methods it depends on have passed.
Example:
@Test
public void testMethod1() {
// Method 1 code here
}
@Test(dependsOnMethods = {"testMethod1"})
public void testMethod2() {
// Method 2 code here, will run after testMethod1
}
27. How to skip the second test if the first test is failed?
Use dependsOnMethods in the @Test annotation. If the first test fails, the second test will be
skipped.
Example:
@Test
public void firstTest() {
// Some code
}
@Test(dependsOnMethods = {"firstTest"})
public void secondTest() {
// This will be skipped if firstTest fails
}
28. Whenever we get a build, which test scripts will you execute first?
• Smoke Tests: Start with smoke tests to ensure the basic functionality of the build is
working.
• Regression Tests: If smoke tests pass, proceed with regression tests to validate
existing functionalities.
To avoid duplicating common code across test scripts:
• Creating Base Classes: Define common setup and teardown methods in a base class
that other test classes extend.
Example:
@BeforeClass
public void setUp() {
// Common setup code
}
@AfterClass
public void tearDown() {
// Common teardown code
}
• Using @BeforeSuite and @AfterSuite: For suite-level setup and teardown, place
common annotations in suite-level configuration.
To execute multiple test classes in a specific order, define them in testng.xml and list them in
the desired order using <class> tags.
Example testng.xml:
<suite name="SuiteName">
<test name="TestName">
<classes>
<class name="com.example.TestClass1"/>
<class name="com.example.TestClass2"/>
<class name="com.example.TestClass3"/>
</classes>
</test>
</suite>
This configuration ensures that TestClass1 runs before TestClass2, and TestClass2 runs
before TestClass3.
Framework architecture (high-level):
+--------------------+
| Test Scripts |
+--------------------+
|
v
+--------------------+
| Test Runner |
+--------------------+
|
v
+--------------------+
| Framework Core |
|--------------------|
| Utilities |
| Configuration |
| Logging |
+--------------------+
|
v
+--------------------+
| WebDriver |
| (Selenium) |
+--------------------+
|
v
+--------------------+
| Application |
| Under Test |
+--------------------+
Explanation:
• Test Scripts: These are individual test cases written using the framework.
• Test Runner: Manages the execution of test scripts (e.g., TestNG or JUnit).
• Framework Core: Contains core components like utilities (WebDriverUtility, Java
Utility), configuration management, and logging.
• WebDriver (Selenium): The tool used for interacting with the web application.
• Application Under Test: The actual application being tested.
In my project, I have used a hybrid framework. This framework combines different types of
frameworks (like Data-Driven and Keyword-Driven) to leverage their strengths. It is
designed to be flexible and reusable, integrating various utilities and best practices.
• Efficiency: Automates repetitive testing tasks, reducing manual effort and increasing
test execution speed.
• Consistency: Ensures consistent execution of test cases across different test runs and
environments.
• Coverage: Allows for extensive testing of different scenarios, including edge cases
that are challenging to cover manually.
• Early Detection: Identifies defects early in the development cycle, facilitating faster
bug fixes and improving software quality.
• Reusability: Components and utilities can be reused across different test cases and
projects.
• Maintainability: Frameworks provide a structured approach, making it easier to
update and maintain test scripts.
• Scalability: Facilitates the addition of new tests and functionalities without
significant changes to the existing structure.
• Consistency: Ensures a consistent approach to test design, execution, and reporting.
• Efficiency: Reduces the time and effort required to write and manage test scripts by
automating repetitive tasks.
• Reporting: Provides built-in mechanisms for generating detailed test reports and
logs.
• TestNG Reporters: Using built-in TestNG reporters to generate standard HTML and
XML reports.
• Custom Reporting: Implementing custom reporting mechanisms to capture
additional details, such as screenshots on failure, execution times, and detailed logs.
• Integration with CI/CD: Configuring reports to be generated and published as part
of the Continuous Integration/Continuous Deployment (CI/CD) pipeline.
• Enhanced Visuals: Custom HTML reports with visual indicators for test status,
execution time, and error details.
• Screenshots: Automatically capturing and embedding screenshots in the report when
a test fails.
• Logs and Metrics: Detailed logs and metrics for each test run, providing insights into
performance and potential issues.
• Data Management: Storing and managing test data, such as key-value pairs for
configuration settings and test parameters.
• Caching: Caching frequently accessed elements or data to improve test execution
speed.
• Dynamic Data Storage: Handling dynamic data where the keys represent unique
identifiers, and values store related information.
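The data-management use of a HashMap can be sketched in plain Java (the class and key names below are hypothetical, not part of any real project):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical key-value store for test configuration and parameters.
public class TestDataStore {
    private static final Map<String, String> data = new HashMap<>();

    public static void put(String key, String value) {
        data.put(key, value);
    }

    public static String get(String key) {
        return data.get(key);
    }

    public static void main(String[] args) {
        put("browser", "chrome");
        put("baseUrl", "https://example.com");
        System.out.println(get("browser")); // prints "chrome"
    }
}
```

A real framework would typically load these pairs from a properties or Excel file rather than hard-coding them.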
• Page Object Model (POM): Defining common methods in a base page class and
overriding them in specific page classes to handle page-specific actions.
• Test Methods: Implementing test methods with different implementations based on
input parameters or configurations.
• Page Objects: Casting specific page object classes to their base class type to enable
polymorphic behavior and reuse common methods.
Example:
BasePage basePage = new LoginPage(driver); // Upcasting LoginPage to BasePage
basePage.performAction();
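The same upcasting idea can be shown as a runnable, Selenium-free sketch (class names are illustrative):

```java
// Hypothetical base page exposing a common action.
abstract class BasePage {
    abstract String performAction();
}

// Hypothetical page class overriding the common action.
class HomePage extends BasePage {
    @Override
    String performAction() {
        return "navigated home";
    }
}

public class PolymorphismDemo {
    public static void main(String[] args) {
        BasePage page = new HomePage(); // upcasting HomePage to BasePage
        System.out.println(page.performAction()); // prints "navigated home"
    }
}
```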
• Page Object Model (POM): Creating base page classes with common methods and
properties that are inherited by specific page classes.
• Base Test Classes: Defining common setup and teardown methods in a base test class
that all test scripts inherit.
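The base-test pattern above can be sketched in plain Java (names are hypothetical; real tests would layer TestNG annotations such as @BeforeClass on top of the shared methods):

```java
// Hypothetical base test class holding common setup and teardown.
class BaseTest {
    protected String environment;

    protected void setUp() {
        environment = "qa"; // common setup shared by all tests
    }

    protected void tearDown() {
        environment = null; // common cleanup
    }
}

// A concrete test inherits the shared lifecycle methods.
public class LoginTest extends BaseTest {
    public String run() {
        setUp();
        String result = "login executed on " + environment;
        tearDown();
        return result;
    }

    public static void main(String[] args) {
        System.out.println(new LoginTest().run()); // prints "login executed on qa"
    }
}
```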
• Page Object Model (POM): Defining abstract base classes with abstract methods
that specific page classes implement, hiding the complex interactions and exposing
only necessary methods.
• Page Objects: Hiding the internal details of web element interactions and providing
public methods to perform actions or retrieve data.
Example:
public class LoginPage {
private WebElement usernameField;
private WebElement passwordField;

public void login(String username, String password) {
usernameField.sendKeys(username);
passwordField.sendKeys(password);
}
}
Example (overriding a base-page method):
@Override
public void performAction() {
// Specific implementation for a particular page
}
Example (overloaded utility methods; the second waits for the element to be clickable):
public void clickElement(WebElement element) {
element.click();
}
public void clickElement(WebElement element, WebDriverWait wait) {
wait.until(ExpectedConditions.elementToBeClickable(element)).click();
}
18. Why is your framework hybrid, and why not another framework?
The framework is hybrid because it combines features of both Data-Driven and Keyword-
Driven frameworks to leverage their strengths. This approach allows:
• Flexibility: Handling different types of data-driven testing and supporting various test
scenarios.
• Reusability: Reusing keywords and test data across multiple test cases and projects.
Collections are used to manage groups of objects and can be particularly useful in Selenium
for handling test data and results. Some common collections used in Selenium are:
• List:
o Example: Storing a list of web elements retrieved from a page.
List<WebElement> links = driver.findElements(By.tagName("a"));
for (WebElement link : links) {
System.out.println(link.getText());
}
• Set:
o Example: Storing unique values, such as unique identifiers or links.
Set<String> uniqueLinks = new HashSet<>();
for (WebElement link : links) {
uniqueLinks.add(link.getAttribute("href"));
}
• Map:
o Example: Storing key-value pairs, such as mapping user roles to their
permissions.
Map<String, String> userRoles = new HashMap<>();
userRoles.put("Admin", "Full access");
userRoles.put("User", "Limited access");
• Queue:
o Example: Handling tasks that need to be executed in order, such as test steps.
Queue<String> testSteps = new LinkedList<>();
testSteps.add("Step 1: Open browser");
testSteps.add("Step 2: Login");
while (!testSteps.isEmpty()) {
System.out.println(testSteps.poll());
}
Disadvantages of not using a framework:
• Lack of Structure: Without a framework, test scripts may lack consistency and
organization, making them harder to maintain.
• Reusability Issues: Reusing code and test data becomes challenging, leading to
duplicated efforts and increased maintenance.
• Difficulty in Scaling: Managing and scaling test automation becomes more complex
without a structured approach.
• Inconsistent Reporting: Reporting and logging might be inconsistent or incomplete,
making it difficult to analyze test results.
• Increased Manual Effort: Without automation best practices, manual effort and
errors are more prevalent.