- Test Coverage
- Running Test Cases with Coverage
- Factories and Fakes
- Mocking
- Mocking with Patch
- Mocking with Mock Objects
- Practicing Test Driven Development
Test Coverage
- Definition: Test coverage measures the percentage of executable code lines that tests run.
- Purpose: High test coverage provides confidence in code functionality.
- Functionality: Coverage reports show which lines of code were or were not executed by tests.
- Actionable Insights: They highlight where to write additional tests for untested code.
- Identifying Untested Code: Use the `-m` option with coverage tools to reveal untested lines.
- Developing Test Cases: Create tests to execute untested lines, aiming to increase the coverage percentage.
- Happy Paths: Test cases where the expected, correct outcomes occur.
- Sad Paths: Test cases that handle errors or unexpected inputs.
- Comprehensive Testing: Coverage must include both paths for thorough validation.
- Beyond the Percentage: 100% test coverage doesn't guarantee bug-free code.
- Testing with Bad Data: Continue to challenge code with unexpected inputs and edge cases.
Running Test Cases with Coverage
- Test coverage indicates how much of the code is executed by tests.
- It is measured as the percentage of total executable lines that tests run.
- High coverage offers more confidence that the code works correctly.
- Coverage reports identify both tested and untested lines of code.
- They guide developers on where to focus when writing additional tests.
- Run Coverage Tool: Begin with running the coverage tool to get a baseline report.
- Identify Missing Lines: Use the `-m` option to highlight lines without test coverage.
- Review Code: Examine the untested lines to understand their function within the code.
- Write Test Cases: Develop tests specifically to cover these lines, both happy and sad paths.
- Start by creating an account and asserting its creation.
- Update the account and assert changes are reflected.
- Test deletion of an account and assert it no longer exists.
- For each function (e.g., string representation, dictionary conversion, update, delete), write tests that ensure coverage.
- Happy Paths: Standard operations where functions perform as expected.
- Sad Paths: Error handling and edge cases that might not be covered by initial tests.
- Comprehensive Testing: Achieve 100% coverage by testing all paths, ensuring robustness.
- Even at 100% coverage, continue testing with varied data and scenarios.
- Full coverage does not mean the absence of bugs, so keep challenging the code.
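
To make the two paths concrete, here is a minimal sketch of a happy-path and a sad-path test; the `Account` model, its `create`/`update`/`find` methods, and the `DataValidationError` exception are hypothetical names for illustration:

```python
from unittest import TestCase

from models import Account, DataValidationError  # hypothetical module and names


class TestAccountPaths(TestCase):
    """Exercise both the happy and the sad path of Account.update()"""

    def test_update_an_account(self):
        """Happy path: updating a saved account succeeds"""
        account = Account(name="Ada Lovelace", email="ada@example.com")
        account.create()
        account.name = "Ada Byron"
        account.update()
        self.assertEqual(Account.find(account.id).name, "Ada Byron")

    def test_update_without_id_raises(self):
        """Sad path: updating an unsaved account raises an error"""
        account = Account(name="Ada Lovelace", email="ada@example.com")
        account.id = None
        self.assertRaises(DataValidationError, account.update)
```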
Factories and Fakes
- Factories help in creating realistic test data.
- Fakes are simulated versions of classes with realistic data for testing.
- FactoryBoy is a Python tool used to generate fake data, similar to Ruby's FactoryGirl.
- SQLAlchemy ORM is used to define data models.
- Model Definition: Includes attributes like id, name, email, phone, disabled status, and date joined.
- Fake Data Requirements: The fields in the Account data model serve as a blueprint for the fake data.
- Import FactoryBoy and define a factory class, such as `AccountFactory`.
- Use `Faker` for attributes that have a corresponding provider (e.g., name, email, phone number).
- Use `FuzzyChoice` for attributes without a direct provider, such as Booleans.
- `LazyFunction` and `datetime` can generate timestamps when creating fake data instances.
- Instantiate `AccountFactory` and use it like a real model instance.
- Create, update, and assert operations on `AccountFactory` as if it were the real `Account` class.
- Factories allow for testing with a variety of realistic data scenarios.
- Faker offers a range of standard providers for generating data like addresses, companies, jobs, etc.
- Custom and community providers expand the possibilities of fake data generation.
- FactoryBoy's `FuzzyChoice` and other fuzzy attributes provide random data for various types (see the sketch below).
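
Putting these pieces together, an `AccountFactory` might be defined like the following sketch; the `models` import path is an assumption, and the field list follows the model described above:

```python
from datetime import date

import factory
from factory.fuzzy import FuzzyChoice, FuzzyDate

from models import Account  # assumed location of the SQLAlchemy model


class AccountFactory(factory.Factory):
    """Creates fake accounts that look like real ones"""

    class Meta:
        model = Account

    id = factory.Sequence(lambda n: n)              # sequential ids
    name = factory.Faker("name")                    # Faker providers exist for these
    email = factory.Faker("email")
    phone_number = factory.Faker("phone_number")
    disabled = FuzzyChoice(choices=[True, False])   # Booleans have no Faker provider
    date_joined = FuzzyDate(date(2008, 1, 1))       # random date since a start date
    # Alternative for a creation timestamp (requires `from datetime import datetime`):
    # date_joined = factory.LazyFunction(datetime.utcnow)
```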
- Factories help generate dynamic test data, replacing the need for fixed test fixtures.
- Fakes are simulated versions of classes that can be used in tests.
- FactoryBoy is used in Python to create factories similar to Ruby's FactoryGirl.
- Start by creating a base factory class with FactoryBoy.
- Define attributes in the factory corresponding to the model class attributes.
- Use Faker within FactoryBoy to generate realistic attribute values.
- ID: Use FactoryBoy's `Sequence` to generate a sequence of numbers.
- Name, Email, Phone Number: Use `Faker` to create fake names, emails, and phone numbers.
- Disabled (Boolean): Use `FuzzyChoice` to randomly select between `True` and `False`.
- Date Joined: Use `FuzzyDate` to generate random dates from a specified start date.
- Import `AccountFactory` from the factory module.
- Replace instances where fixed data from JSON fixtures is used with `AccountFactory`.
- Create, update, and assert operations on `AccountFactory` instances just like with real models.
- Eliminate the need for a pre-existing JSON fixture by directly using the `AccountFactory`.
- Update test cases to instantiate `AccountFactory` and perform test assertions (a sketch follows this list).
- Use `nosetests` to run the tests and validate the successful integration of the factory.
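
For instance, a fixture-based test can be rewritten along these lines; this sketch assumes the `AccountFactory` above lives in a `factories` module and that the model exposes `create`/`update` methods:

```python
from unittest import TestCase

from factories import AccountFactory  # assumed module name


class TestAccountModel(TestCase):
    """Exercise the Account model with factory data instead of JSON fixtures"""

    def test_create_an_account(self):
        """It should create an account from factory data"""
        account = AccountFactory()
        account.create()
        self.assertIsNotNone(account.id)

    def test_update_an_account(self):
        """It should update an account created from factory data"""
        account = AccountFactory()
        account.create()
        account.name = "Something Known"
        account.update()
        self.assertEqual(account.name, "Something Known")
```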
- Generate large volumes of realistic test data on the fly.
- Mimic real-world data models without the overhead of setting up and maintaining extensive test fixtures.
- Enhance testing by ensuring no dependency on the order of tests and allowing for randomization.
- Factories and fakes provide a powerful way to generate dynamic test data.
- They allow for more flexible and comprehensive testing scenarios.
- FactoryBoy and Faker offer extensive capabilities to customize test data generation.
Mocking
- Mocking involves creating objects that simulate the behavior of real objects.
- Useful when your code depends on external systems (e.g., APIs, databases).
- Avoids issues like overloading external services or handling service downtime during tests.
- Isolates the test to focus solely on your code.
- Allows testing of your code’s interaction with the external system.
- Gives control over the data returned from the mocked system for testing various scenarios.
- Enables testing of error handling by simulating failures and unexpected behavior.
- To isolate tests from remote components or external systems.
- When a part of the application isn't available during testing.
- Patching: Changes the behavior of function calls, including those from third-party libraries.
- Mock Objects: Stand in for entire objects, not just function calls, changing the object’s behavior.
- Python's `unittest.mock` library includes `Mock` and `MagicMock` objects for this purpose.
- Mock objects mimic real objects' behaviors for testing purposes.
- Mocking is essential for isolating tests from dependencies on external systems.
- Developers can use patches to simulate different conditions and change function behaviors.
- Mock objects can replace entire objects to verify interactions and behaviors.
Mocking with Patch
- Patching is a mocking technique used to change the behavior of a function call.
- It is particularly useful for simulating interactions with external systems or for creating error conditions during testing.
- When the function calls an external system not under your control.
- When simulating error conditions without causing actual errors.
- Patching a function's return value:
  - Allows you to control the return value of a function.
  - Useful for testing error handlers by returning error condition codes.
  - Controls the data returned from a function call by returning any data structure or object the program expects.
- Replacing a function with another function (Side Effect):
  - Enables you to replace the actual function with a custom one to simulate different behaviors.
  - Useful when you need to simulate more complex behaviors or a series of function calls or effects.
- Patching a function's return value: using `with patch(...)` to patch the `imdb_info` function to return a status code of 200, confirming that the actual function code is bypassed.
- Patching a third-party library function: patching `requests.get()` to return a specified value without making a real API call.
- Using a side effect: defining custom functions `bye()` and `hello()` and using patching to replace the call to `hello()` with `bye()` during testing (see the sketch after this list).
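
A minimal sketch of all three techniques, assuming it is run as a standalone script (so the patch targets live in `__main__`); `imdb_info`, `hello`, and `bye` are stand-ins modeled on the video's examples:

```python
from unittest.mock import patch

import requests


def imdb_info():
    """Pretend to call the real IMDb service"""
    return requests.get("https://example.com/imdb").status_code


def hello():
    return "hello"


def bye():
    return "bye"


# 1. Patch a function's return value: the real function body is never executed.
with patch("__main__.imdb_info", return_value=200):
    assert imdb_info() == 200

# 2. Patch a third-party library function: no real API call is made.
with patch("requests.get") as mock_get:
    mock_get.return_value.status_code = 200
    assert imdb_info() == 200

# 3. Use a side effect to replace one function with another.
with patch("__main__.hello", side_effect=bye):
    assert hello() == "bye"
```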
- Patching allows for precise control over the behavior of functions during testing.
- By using patching, developers can simulate both successful and error conditions.
- Patching can be applied to both functions you've written and third-party library functions.
- Python's `unittest.mock` library provides both `return_value` patching and `side_effect` patching for comprehensive testing scenarios.
Mocking with Mock Objects
- Mock objects simulate the behavior of real objects, allowing control over their actions and returns.
- Useful when the function return value is an object with multiple values and methods.
- In Python's `unittest.mock` library, the two main mock objects are `Mock` and `MagicMock`.
- `MagicMock` includes all magic methods, useful for mimicking containers or objects that implement Python protocols; `Mock` is suitable when magic methods are not needed.
- Create an instance of `Mock` or `MagicMock` as needed.
- Mock objects can have methods called on them without error, even if they aren't defined, making them flexible for tests.
- Attributes can be added during or after the creation of a mock object.
- To mimic a specific class, use the `spec` parameter when creating a mock.
  - For example, you can mock the `Response` class from the `requests` package, setting expected attributes like `status_code` (see the sketch after this list).
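
For example, a `Response`-shaped mock can be built like this (the attribute values are arbitrary test data):

```python
from unittest.mock import MagicMock

from requests import Response

# A mock constrained to the Response interface via spec
mock_response = MagicMock(spec=Response)
mock_response.status_code = 200                    # attribute added after creation
mock_response.json.return_value = {"results": []}  # json() is part of the Response spec

assert isinstance(mock_response, Response)  # spec makes isinstance checks pass
assert mock_response.status_code == 200
assert mock_response.json() == {"results": []}
```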
- Use `patch` to replace a function call with a mock object.
- Import `patch` and `MagicMock` from `unittest.mock`.
- Rewrite the function to be robust, implementing actual logic and external calls.
- Patch the external call, like `requests.get()`, with a mock object.
- Set up the mock to behave like the expected object, with correct status codes and methods.
- Call the patched function with the mock in place, and it behaves as if the real object was called, allowing complete control over the test conditions.
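
Combining `patch` with a spec'd mock, a test can exercise a function that calls `requests.get` without touching the network; the `get_imdb_status` function and URL here are hypothetical:

```python
from unittest.mock import MagicMock, patch

import requests
from requests import Response


def get_imdb_status(url):
    """Function under test: calls an external service and returns its status"""
    response = requests.get(url)
    return response.status_code


def test_get_imdb_status():
    """Patch requests.get so no real network call is made"""
    fake_response = MagicMock(spec=Response)
    fake_response.status_code = 200
    with patch("requests.get", return_value=fake_response):
        assert get_imdb_status("https://example.com/api") == 200
```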
- Achieve complete control over test scenarios, simulating both positive and negative conditions.
- Create specific conditions for testing that might be hard or impossible to reproduce in real systems.
- Ensure that testing focuses on your code and not on external systems or dependencies.
- Use mocks judiciously to ensure the code is being tested, not the mocks themselves.
- Mocks should be used to create necessary conditions for testing but shouldn't replace the need to test with real objects and scenarios when possible.
- The lab demonstrates how to mock calls to the IMDb database using the `unittest.mock` library in Python.
- The process begins with running `nosetests` to assess the initial state, which shows low test coverage due to the absence of tests.
- The `IMDb.py` file contains a class that makes calls to the IMDb database with methods like `search_titles`, `get_movie_reviews`, and `get_movie_ratings`.
- The class uses the `requests.get` method to call the IMDb service, checking for a `200` status code and returning JSON data.
- The `test_IMDb.py` file is prepared with the necessary imports (`patch`, `mock`, `Response`, `IMDb`) and a global variable `IMDb_data` loaded with JSON responses from the IMDb database.
- The JSON responses include various scenarios like good searches, invalid API keys, and so on, which will be used to mock the IMDb calls.
- The lab instructions guide you through creating test cases by mocking different responses using `@patch` (a sketch follows this list).
- The `@patch` decorator is used to replace the behavior of the `search_titles` method within the `IMDb` class.
- It's crucial to patch the correct namespace to mock the method calls accurately.
- A mock object is configured to simulate the behavior of the IMDb service, with the capability to return both good and bad data.
- The mock object can be specified to mimic the `Response` class from the `requests` package, complete with status codes and methods like `json()`.
- The video demonstrates how to patch a function call and set up mock objects to return predefined data or behaviors, such as a `200` status code for successful calls or a `404` error for not found scenarios.
- The video concludes by encouraging viewers to apply the techniques to their projects, mocking calls to external systems to ensure test cases are robust and always working as intended.
- The goal is to gain control over test cases by being able to simulate both successful and error responses, and to check error handlers and other system behaviors.
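
Based on the description above, a lab test case might look roughly like this sketch; the `models.imdb` import path, the `IMDb` constructor signature, the sample JSON, and the empty-dict error behavior are all assumptions for illustration, not the lab's actual code:

```python
from unittest import TestCase
from unittest.mock import MagicMock, patch

from requests import Response

from models import IMDb  # assumed import path for the lab's IMDb class

GOOD_SEARCH = {"results": [{"id": "tt0133093"}]}  # made-up sample data


class TestIMDbDatabase(TestCase):
    """Mock the IMDb service so tests never call the real API"""

    @patch("models.imdb.IMDb.search_titles")  # patch in the namespace where it is used
    def test_search_titles_success(self, mock_search):
        """It should return good search results without calling IMDb"""
        mock_search.return_value = GOOD_SEARCH
        results = IMDb("fake-api-key").search_titles("Bandersnatch")
        self.assertEqual(results["results"][0]["id"], "tt0133093")

    @patch("models.imdb.requests.get")
    def test_search_titles_not_found(self, mock_get):
        """It should handle a 404 from the IMDb service"""
        mock_get.return_value = MagicMock(spec=Response, status_code=404)
        results = IMDb("fake-api-key").search_titles("no-such-title")
        self.assertEqual(results, {})  # assumed error-handling behavior
```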
Practicing Test Driven Development
- TDD involves three main steps:
  - Write test cases for the desired code.
  - Write the minimum code required to pass the test cases.
  - Refactor the code for robustness and maintainability, with test cases ensuring behavior remains unchanged.
- The TDD cycle is known as "Red, Green, Refactor":
  - Start with test cases (Red).
  - Write code to pass tests (Green).
  - Refactor for improvement (keeping tests Green).
- Developing a RESTful web service for counters:
  - API endpoint: `/counters`
  - POST requests create a counter, specified in the path.
  - Duplicate names must return `409 Conflict`.
- Creating Test Cases Based on Requirements:
  - POST to `/counters/<name>` should return `201 Created` and a counter starting at zero.
  - A second POST with the same name should return `409 Conflict`.
- Test cases drive development by verifying application behavior against requirements.
- Writing test cases first clarifies how code should behave, making coding more straightforward.
- TDD leads to higher code quality and ensures functionality is preserved through changes.
- TDD is a disciplined approach to development, demanding test cases before coding.
- This workflow fosters a clear focus on functionality and leads to better, more reliable code.
- Goal: Demonstrate the Test-Driven Development process.
- Starting Point: A folder called `practice TDD` with requirements installed, a `status.py` module, and a `counters.py` file with requirements documented but no code.
- Write a Test: Create a test case for the desired feature based on the requirements.
- Run the Test: Execute the test to see it fail (Red phase).
- Write the Code: Develop the minimum amount of code to pass the test.
- Run Tests Again: Confirm the new code passes the test (Green phase).
- Refactor: Improve the code while keeping the tests passing.
- Requirements:
  - The service should track multiple counters.
  - RESTful API with an endpoint called `/counters`.
  - Creating a counter is done by specifying the name in the path (`/counters/<name>`).
  - Duplicate counter names should return a `409 Conflict` error.
- Setup for Tests:
  - Import the `TestCase` class from `unittest`.
  - Create a `CounterTest` class with a docstring for the tests.
  - Import the Flask `app` from a module called `counter`.
  - Use Flask's `app.test_client` to create a test client for API calls.
- Writing Test Cases:
  - Write tests for creating a counter and handling duplicates (see the sketch after this list).
  - Use `assertEqual` to check the HTTP status codes returned by API calls.
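
A minimal version of those two tests might look like this sketch, assuming the `counter` module described below; `foo` and `bar` are arbitrary counter names:

```python
from unittest import TestCase

from counter import app  # the Flask app under test

HTTP_201_CREATED = 201
HTTP_409_CONFLICT = 409


class CounterTest(TestCase):
    """Test cases for the counter service"""

    def test_create_a_counter(self):
        """It should create a counter and return 201 Created"""
        client = app.test_client()
        result = client.post("/counters/foo")
        self.assertEqual(result.status_code, HTTP_201_CREATED)

    def test_duplicate_counter(self):
        """It should return 409 Conflict for a duplicate name"""
        client = app.test_client()
        client.post("/counters/bar")
        result = client.post("/counters/bar")
        self.assertEqual(result.status_code, HTTP_409_CONFLICT)
```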
- Creating the Counter Module (`counter.py`):
  - Define a Flask route `/counters/<name>` accepting only POST requests.
  - Use a global dictionary to store counters.
  - Check if the counter already exists before creating a new one (a sketch of the module follows).
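
Here is a minimal `counter.py` that makes those tests pass (a sketch following the outline above):

```python
from flask import Flask, jsonify

app = Flask(__name__)

COUNTERS = {}  # global dictionary that stores the counters


@app.route("/counters/<name>", methods=["POST"])
def create_counter(name):
    """Create a counter, or return 409 Conflict if it already exists"""
    if name in COUNTERS:
        return jsonify({"message": f"Counter {name} already exists"}), 409
    COUNTERS[name] = 0
    return jsonify({name: COUNTERS[name]}), 201
```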
- Running Tests:
  - Execute the tests using `nosetests`.
  - Ensure tests initially fail (Red phase), indicating missing functionality.
  - Implement the functionality to pass the tests (Green phase).
- Refactoring Tests:
  - Refactor the test setup by creating a `setUp` method to avoid repetition (see the sketch below).
  - Modify tests to use the new `setUp` method.
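
The repeated test-client creation can move into a `setUp` method, which `unittest` runs before every test (a sketch):

```python
from unittest import TestCase

from counter import app


class CounterTest(TestCase):
    """Test cases for the counter service"""

    def setUp(self):
        """Runs automatically before each test"""
        self.client = app.test_client()

    def test_create_a_counter(self):
        """It should create a counter and return 201 Created"""
        result = self.client.post("/counters/foo")
        self.assertEqual(result.status_code, 201)
```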
- Successfully demonstrated the TDD workflow.
- Created a basic RESTful API for counters.
- Tests written drive the development of the API functionality.