Master Python integration testing with examples, best practices, and cloud execution to ensure reliable module, API, and database interactions.
Published on: September 29, 2025
Integration testing validates how various components of software work together as a complete system. Python, with its simplicity and strong testing ecosystem, makes it easier to design and automate different types of testing, from integration to regression. Python integration testing ensures reliable communication between modules, APIs, and databases, helping deliver stable and high-quality applications.
What Is Python Integration Testing?
Python integration testing verifies that different parts of an application, such as modules, APIs, and databases, interact correctly when combined. Unlike unit tests that focus on isolated pieces, integration tests validate complete workflows, uncovering issues with data flow, dependencies, and communication across components. This ensures applications behave reliably in real-world scenarios.
What’s the Best Way to Write Python Integration Tests?
Writing Python integration tests effectively ensures reliable module interactions, API communication, and consistent data handling across environments.
How to Troubleshoot Python Integration Tests?
Effective troubleshooting of Python integration tests ensures stable execution, consistent data, and reliable module interactions across environments.
Why Is It Important to Isolate External Dependencies in Integration Tests?
Isolating external dependencies ensures tests are deterministic and reproducible. You can mock databases, APIs, or message queues so that failures are only due to your code, not network or service instability. This increases reliability and speeds up test execution.
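As a minimal sketch of this idea, the hypothetical `fetch_user` helper below receives its HTTP client as a parameter, so a test can pass in a `Mock` and never touch the network. All names here are illustrative, not part of the Weather App built later in this article.

```python
from unittest.mock import Mock

def fetch_user(http, user_id):
    # Hypothetical helper: `http` is any object with a .get() method,
    # e.g. a requests.Session in production or a Mock in tests.
    response = http.get(f"https://api.example.com/users/{user_id}")
    response.raise_for_status()
    return response.json()

def test_fetch_user_isolated():
    # With a fake client, a failure can only come from our own code,
    # never from the network or the remote service.
    fake_response = Mock(status_code=200)
    fake_response.json.return_value = {"id": 1, "name": "Alice"}
    fake_http = Mock()
    fake_http.get.return_value = fake_response
    assert fetch_user(fake_http, 1) == {"id": 1, "name": "Alice"}
    fake_http.get.assert_called_once_with("https://api.example.com/users/1")

test_fetch_user_isolated()
```

Passing the dependency in explicitly (rather than importing it inside the function) is what makes the isolation this easy.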
Python integration testing verifies interactions between modules, external dependencies, and data flow using frameworks such as pytest. It checks that the various components of a Python-based system work correctly together as a whole.
Note: Run your Python Integration tests at scale across 3000+ browsers and OS combinations. Try LambdaTest Now!
Writing Python integration testing involves verifying how different modules, services, or external dependencies in your application work together.
Using frameworks like pytest or Nose2, you can automate tests to check data flow, API communication, and interactions between components, ensuring the system behaves correctly as a whole.
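For instance, here is a self-contained sketch of a pytest-style integration test that exercises two components together rather than each in isolation. Both classes are invented for the example:

```python
# Two invented components and one test exercising their interaction.
class InMemoryStore:
    def __init__(self):
        self._data = {}

    def save(self, key, value):
        self._data[key] = value

    def load(self, key):
        return self._data.get(key)

class GreetingService:
    def __init__(self, store):
        self.store = store

    def greet(self, name):
        message = f"Hello, {name}!"
        self.store.save(name, message)  # side effect crosses the component boundary
        return message

def test_service_and_store_integration():
    store = InMemoryStore()
    service = GreetingService(store)
    # Verify the return value AND the data that flowed into the store.
    assert service.greet("Ada") == "Hello, Ada!"
    assert store.load("Ada") == "Hello, Ada!"

test_service_and_store_integration()
```

The key difference from a unit test is the second assertion: it confirms that data actually crossed the boundary between the two components.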
Prerequisites:
To successfully conduct the integration testing process, a few dependencies are required:
Setting up Python for Integration Testing:
python --version
pip --version
mkdir python_integration_tests
python3 -m venv env
source env/bin/activate
pip install pytest requests responses
The terminal should look like this during installation:
pytest==8.3.5
requests==2.32.3
responses==0.25.0
Here you will learn how to write integration tests in Python using a Weather App. The app includes different modules that work together to fetch and process weather information from the OpenWeather API. Integration tests will validate that the WeatherService and WeatherDataProcessor classes interact correctly, ensuring smooth data flow and reliable output.
This Python integration test for the Weather App will be executed on a cloud testing platform because integration testing often depends on external APIs and real-world environments; the cloud makes it easier to validate data flow, handle dependencies, and ensure consistent communication between components.
One such cloud testing platform is LambdaTest, a GenAI-native test execution platform that allows you to perform manual and Python automation testing at scale across 3000+ browsers and OS combinations.
To get started with the LambdaTest platform, you need to follow a few steps given below:
capabilities = {
    "browserName": "Chrome",  # Browsers allowed: Chrome, MicrosoftEdge, pw-chromium, pw-firefox and pw-webkit
    "browserVersion": "latest",
    "LT:Options": {
        "platform": "Windows 11",
        "build": "Integration Test Build",
        "name": "Python Integration Test (Pytest)",
        "user": os.getenv("LT_USERNAME"),
        "accessKey": os.getenv("LT_ACCESS_KEY"),
        "network": True,
        "video": True,
        "console": True,
        "headless": True,
        "tunnel": False,  # Add tunnel configuration if testing locally hosted webpage
        "tunnelName": "",  # Optional
        "geoLocation": "",  # country code can be fetched from https://www.lambdatest.com/capabilities-generator/
    },
}

lt_cdp_url = (
    "wss://cdp.lambdatest.com/playwright?capabilities="
    + urllib.parse.quote(json.dumps(capabilities))
)
Code Implementation:
The conftest.py file sets up a remote connection to the LambdaTest cloud platform. It includes a capabilities Python dictionary with key-value pairs for configuring the remote browser, using your Username and Access Key stored in an .env file.
The file also defines fixture functions:
import json
import os
import urllib.parse
import subprocess

import pytest
from playwright.sync_api import sync_playwright

capabilities = {
    "browserName": "Chrome",  # Browsers allowed: Chrome, MicrosoftEdge, pw-chromium, pw-firefox and pw-webkit
    "browserVersion": "latest",
    "LT:Options": {
        "platform": "Windows 11",
        "build": "Integration Test Build",
        "name": "Python Integration Test (Pytest)",
        "user": os.getenv("LT_USERNAME"),
        "accessKey": os.getenv("LT_ACCESS_KEY"),
        "network": True,
        "video": True,
        "console": True,
        "headless": True,
        "tunnel": False,  # Add tunnel configuration if testing locally hosted webpage
        "tunnelName": "",  # Optional
        "geoLocation": "",  # country code can be fetched from https://www.lambdatest.com/capabilities-generator/
    },
}

# Pytest browser fixture (for cloud testing)
@pytest.fixture(name="browser", scope="module")
def browser():
    with sync_playwright() as playwright:
        playwrightVersion = (
            str(subprocess.getoutput("playwright --version")).strip().split(" ")[1]
        )
        capabilities["LT:Options"]["playwrightClientVersion"] = playwrightVersion
        lt_cdp_url = (
            "wss://cdp.lambdatest.com/playwright?capabilities="
            + urllib.parse.quote(json.dumps(capabilities))
        )
        browser = playwright.chromium.connect(lt_cdp_url, timeout=30000)
        yield browser
        browser.close()

# Pytest page fixture (for cloud testing)
@pytest.fixture
def page(browser):
    page = browser.new_page()
    yield page
    page.close()

# Sets the status of a test case (passed or failed) on the LambdaTest dashboard
@pytest.fixture
def set_test_status(page):
    def _set_test_status(status, remark):
        page.evaluate(
            "_ => {}",
            'lambdatest_action: {"action": "setTestStatus", "arguments": {"status": "'
            + status
            + '", "remark": "'
            + remark
            + '"}}',
        )
    yield _set_test_status
Code Walkthrough:
Now that you have the LambdaTest configuration file ready, let's proceed with creating an integration test for the Weather App and executing it on the LambdaTest platform.
The Weather App showcases the interaction between the WeatherDataProcessor class and the WeatherService class, which uses the OpenWeather API to get weather information about different cities across the globe.
The integration test will evaluate the proper data exchange between the WeatherService class and the WeatherDataProcessor class.
To get started, let's take a test scenario.
Test Scenario:
Code Implementation:
The weather_service.py file holds the WeatherService class that retrieves the weather information about any given city using the OpenWeather API endpoint.
The requests library's get() method sends a request to the API endpoint with the required parameters. A JSON response is returned from the get_weather() method of the WeatherService class.
import requests

class WeatherService:
    BASE_URL = "https://api.openweathermap.org/data/2.5/weather"

    def __init__(self, api_key):
        self.api_key = api_key

    def get_weather(self, city):
        params = {
            'q': city,
            'appid': self.api_key,
            'units': 'metric'
        }
        response = requests.get(self.BASE_URL, params=params)
        if response.status_code != 200:
            raise ValueError(f"Weather API error: {response.text}")
        return response.json()
Code Walkthrough:
You have the WeatherDataProcessor class in the data_processor.py file. The class includes two methods: collect_city_weather and export_to_csv.
You use the collect_city_weather() method to gather weather information from different cities, and the export_to_csv() method to export the collected weather data into a CSV file.
import csv
from datetime import datetime

class WeatherDataProcessor:
    def __init__(self, weather_service):
        self.weather_service = weather_service

    def collect_city_weather(self, cities):
        weather_data = []
        for city in cities:
            try:
                weather = self.weather_service.get_weather(city)
                processed_data = {
                    'city': city,
                    'temperature': weather['main']['temp'],
                    'humidity': weather['main']['humidity'],
                    'timestamp': datetime.now().isoformat()
                }
                weather_data.append(processed_data)
            except Exception as e:
                print(f"Error fetching weather for {city}: {e}")
        return weather_data

    def export_to_csv(self, weather_data, filename):
        if not weather_data:
            return False
        keys = weather_data[0].keys()
        with open(filename, 'w', newline='') as output_file:
            dict_writer = csv.DictWriter(output_file, keys)
            dict_writer.writeheader()
            dict_writer.writerows(weather_data)
        return True
Code Walkthrough:
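One detail worth noting: `WeatherDataProcessor` only ever calls `get_weather(city)` on the service it is given, so any object with that method can stand in for `WeatherService` during testing. The condensed, self-contained sketch below illustrates that seam with a stub service; the `collect` helper is a miniature of `collect_city_weather`, not the app's actual code.

```python
import csv
import io
from datetime import datetime

# Any object exposing get_weather(city) satisfies the processor's contract.
class StubWeatherService:
    def get_weather(self, city):
        return {"main": {"temp": 20.5, "humidity": 65}}

def collect(service, cities):
    # A miniature of collect_city_weather, inlined so this sketch runs alone.
    rows = []
    for city in cities:
        weather = service.get_weather(city)
        rows.append({
            "city": city,
            "temperature": weather["main"]["temp"],
            "humidity": weather["main"]["humidity"],
            "timestamp": datetime.now().isoformat(),
        })
    return rows

rows = collect(StubWeatherService(), ["London", "Tokyo"])

# Writing to an in-memory buffer mirrors export_to_csv's DictWriter usage.
buffer = io.StringIO()
writer = csv.DictWriter(buffer, rows[0].keys())
writer.writeheader()
writer.writerows(rows)
print(buffer.getvalue().splitlines()[0])  # the CSV header row
```

This duck-typed seam is exactly what the integration tests in the next section exploit when they substitute mocked API responses.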
To perform an integration test for the Weather App, you will create the test_weather_integration.py file. The test will check that your WeatherService and WeatherDataProcessor classes, along with the OpenWeather API, work seamlessly together.
You will verify that weather information for different cities is collected correctly and exported to a CSV file.
You will use the responses library to mock HTTP requests made through the requests library. The os module allows you to interact with your operating system and manage file paths during testing.
import os

import pytest
import responses

from samples.weather_app.weather_service import WeatherService
from samples.weather_app.data_processor import WeatherDataProcessor

# Note: the set_test_status fixture defined in conftest.py is discovered
# automatically by pytest; it does not need to be imported here.

@pytest.fixture
def mock_weather_service():
    # Use responses to mock external API calls
    weather_service = WeatherService(api_key='test_key')
    return weather_service

@responses.activate
def test_weather_data_collection(mock_weather_service, set_test_status):
    # Mock the OpenWeatherMap API response
    responses.add(
        responses.GET,
        "https://api.openweathermap.org/data/2.5/weather",
        json={
            'main': {
                'temp': 20.5,
                'humidity': 65
            }
        },
        status=200
    )
    try:
        # Create processor with mocked service
        processor = WeatherDataProcessor(mock_weather_service)

        # Test data collection
        cities = ['London', 'New York', 'Tokyo']
        weather_data = processor.collect_city_weather(cities)

        assert len(weather_data) == 3
        assert all('temperature' in data for data in weather_data)
        assert all('humidity' in data for data in weather_data)
        set_test_status(status="passed", remark="Weather data collected for all cities")
    except AssertionError as e:
        set_test_status(status="failed", remark="Weather data collection failed")
        raise e

@responses.activate
def test_weather_data_export(mock_weather_service, tmp_path, set_test_status):
    # Mock API response
    responses.add(
        responses.GET,
        "https://api.openweathermap.org/data/2.5/weather",
        json={
            'main': {
                'temp': 20.5,
                'humidity': 65
            }
        },
        status=200
    )
    try:
        processor = WeatherDataProcessor(mock_weather_service)
        cities = ['London']
        weather_data = processor.collect_city_weather(cities)

        # Export to temporary CSV
        output_file = tmp_path / "weather_data.csv"
        result = processor.export_to_csv(weather_data, output_file)

        assert result is True
        assert os.path.exists(output_file)

        # Verify CSV contents: header + one data row
        with open(output_file, 'r') as f:
            lines = f.readlines()
        assert len(lines) == 2
        set_test_status(status="passed", remark="Weather data exported to CSV")
    except AssertionError as e:
        set_test_status(status="failed", remark="CSV export failed")
        raise e
Code Walkthrough:
Test Execution:
To execute the integration tests, in the root folder of the project, run the command below in your terminal:
pytest tests
When this command runs, pytest automatically discovers all test functions in the tests directory and executes them.
To start writing Python integration tests with Playwright, follow this support documentation on Python with Playwright for guidance.
Even well-designed integration tests can fail in unexpected ways, from environment mismatches to hidden data issues or asynchronous glitches. Here are the most common problems Python automation testers face, along with practical fixes to get them back on track.
Problem: A common issue occurs when tests run perfectly on a local machine but fail once pushed to Jenkins, GitHub Actions, or GitLab pipelines. Differences in environment variables, OS, Python versions, or dependencies often cause this.
Solution:
Problem: Integration tests often depend on asynchronous operations (e.g., database commits, API responses). If the test checks results too early, it may intermittently fail.
Solution:
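A common fix is to poll for the expected state with a timeout instead of asserting immediately; a minimal sketch, where the `wait_until` helper is illustrative rather than a library function:

```python
import time

def wait_until(condition, timeout=5.0, interval=0.1):
    # Poll until condition() is truthy, instead of asserting immediately
    # and failing when the asynchronous work has not finished yet.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return False

# Simulate an operation that only completes after a short delay.
started = time.monotonic()
finished = lambda: time.monotonic() - started > 0.3

assert wait_until(finished, timeout=2.0)
```

The timeout keeps a genuinely broken system from hanging the suite, while the polling interval absorbs normal timing jitter.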
Problem: Integration tests that use a real database may leave behind records, leading to test failures in subsequent runs.
Solution:
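One way to guarantee a clean slate is to give each test its own throwaway database via a fixture; a sketch using an in-memory SQLite database, with a table schema invented for the example:

```python
import sqlite3

import pytest

@pytest.fixture
def db():
    # Each test gets a fresh in-memory database; nothing survives the
    # test, so leftover records cannot break subsequent runs.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
    yield conn
    conn.close()

def test_insert_user(db):
    db.execute("INSERT INTO users (name) VALUES (?)", ("Alice",))
    count = db.execute("SELECT COUNT(*) FROM users").fetchone()[0]
    assert count == 1
```

Against a real database server, the equivalent pattern is to open a transaction in the fixture and roll it back in the teardown.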
Problem: Python asyncio services can swallow exceptions if not awaited properly, making tests appear to pass when they shouldn’t.
Solution:
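The sketch below contrasts a fire-and-forget task, whose exception can go unnoticed, with `asyncio.gather`, which re-raises the failure so the test fails loudly. The task names are illustrative:

```python
import asyncio

async def failing_task():
    raise RuntimeError("boom")

async def fire_and_forget():
    # BAD: the exception stays inside the task object and may never surface.
    asyncio.ensure_future(failing_task())
    await asyncio.sleep(0)

async def awaited_properly():
    # GOOD: gather() re-raises the task's exception at the await point.
    await asyncio.gather(failing_task())

def test_exception_surfaces():
    try:
        asyncio.run(awaited_properly())
    except RuntimeError as exc:
        assert str(exc) == "boom"
    else:
        raise AssertionError("expected RuntimeError to propagate")

test_exception_surfaces()
```

Always awaiting (or gathering) every task you spawn is the simplest way to keep async integration tests honest.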
Problem: The test data set in staging differs from production, causing false positives or negatives during integration.
Solution:
Problem: Long-running integration test suites may hang due to unclosed sockets, threads, or file descriptors.
Solution:
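Context managers are the simplest defence: resources opened with `with` are released even when an assertion fails mid-test. A small self-contained demonstration with a socket:

```python
import socket
from contextlib import closing

# Resources opened in a `with` block are closed even if an assertion
# fails inside it, so long-running suites cannot leak file descriptors.
with closing(socket.socket(socket.AF_INET, socket.SOCK_STREAM)) as sock:
    assert sock.fileno() >= 0  # the socket is open inside the block

assert sock.fileno() == -1  # and closed once the block exits
```

The same pattern applies to files, database connections, and thread pools; in pytest, the teardown half of a yield fixture plays the same role.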
Problem: Third-party APIs or internal microservices may return different response times or formats, breaking integration tests unexpectedly.
Solution:
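When you must hit the real service, wrapping calls in a small retry helper with exponential backoff absorbs transient variability; a sketch, where both the helper and the simulated flaky service are invented for illustration:

```python
import time

def call_with_retries(func, attempts=3, delay=0.1):
    # Retry a flaky call a few times before letting the error propagate.
    last_error = None
    for attempt in range(attempts):
        try:
            return func()
        except Exception as exc:  # narrow this to network errors in real code
            last_error = exc
            time.sleep(delay * (2 ** attempt))  # exponential backoff
    raise last_error

# Simulate a service that fails twice, then succeeds.
calls = {"count": 0}

def flaky_service():
    calls["count"] += 1
    if calls["count"] < 3:
        raise ConnectionError("temporarily unavailable")
    return {"status": "ok"}

assert call_with_retries(flaky_service) == {"status": "ok"}
```

For format changes (rather than timing flakiness), validating the response against a schema before asserting on its contents gives a clearer failure message.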
After unit testing is complete, integration testing is usually the next step before end-to-end testing. Integration tests evaluate whether modules or components work together without faults. Python integration testing captures how the different parts of a Python application interact and exchange data.
The Python programming ecosystem provides several tools and frameworks to help developers and QA testers perform integration tests. Before writing integration tests in Python, ensure that all the needed dependencies are installed for a seamless testing experience. To scale Python integration testing, using a cloud testing platform is advised for easy collaboration, report generation, CI/CD pipelines integration, etc.