
Code snippets

When to patch

The patch method of the unittest.mock library has excellent documentation on how it works.

Recently a co-worker asked me when to use it.

Patching is something to use in unit tests, or more specifically:

  • When you want to test only function/class/method A, and not the other functions/classes used inside A, you should patch.
test_unit.py
class TestCsvWriteFactory:
    @patch("app.download.service.generator.csv")
    def test_csv_writer_factory(self, mock_csv: MagicMock) -> None:
        """Patch the csv module so it is not called and is excluded from the test,
        but assert that its writer method is called
        """
        csv_writer_factory()
        mock_csv.writer.assert_called_once()
unit.py
import csv
from typing import Any

def csv_writer_factory():
    class FileLike:
        """An object that implements just the write method of the file-like
        interface.
        """

        def write(self, value: Any) -> Any:
            """Write the value by returning it"""
            return value

    return csv.writer(FileLike(), delimiter=",", quotechar='"')
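The FileLike trick works because csv.writer's writerow returns whatever the underlying file object's write method returns. A quick sketch of the factory in use:

```python
import csv
from typing import Any

class FileLike:
    """Implements only write(), which echoes back the formatted row."""

    def write(self, value: Any) -> Any:
        return value

writer = csv.writer(FileLike(), delimiter=",", quotechar='"')
# writerow() returns the return value of FileLike.write()
row = writer.writerow(["id", "name"])
print(row)  # "id,name\r\n" (csv's default lineterminator is "\r\n")
```

This makes the factory handy for formatting single rows as CSV strings without touching the filesystem.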

Horrible schema names in fastapi

Reusing pydantic models with type generics can cause horrible schema names in the resulting openapi spec.

This will show up in your openapi spec as a schema named ResponseList_MyResponse_MyMetaResponse_:

@router.get("/myroute")
async def myroute() -> ResponseList[MyResponse, MyMetaResponse]:
    return ResponseList(data=[MyResponse(id="1")], meta=MyMetaResponse(copyright="2024"))

A quick fix is to define a model that extends your reusable base model.

from pydantic import BaseModel

class ResponseList[Data, Meta](BaseModel):
    data: list[Data]
    meta: Meta

class MyResponse(BaseModel):    
    id: str

class MyMetaResponse(BaseModel):    
    copyright: str    

class MyListResponse(
    ResponseList[MyResponse, MyMetaResponse]
):
    pass


@router.get("/myroute")
async def myroute() -> MyListResponse:
    return MyListResponse(data=[MyResponse(id="1")], meta=MyMetaResponse(copyright="2024"))
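If you don't want to hand-write a subclass per parametrization, pydantic v2 also lets a generic model name its own parametrizations by overriding the model_parametrized_name classmethod. A sketch, shown with the classic TypeVar syntax (the naming scheme here is just one possible convention):

```python
from typing import Any, Generic, TypeVar

from pydantic import BaseModel

Data = TypeVar("Data")
Meta = TypeVar("Meta")

class ResponseList(BaseModel, Generic[Data, Meta]):
    data: list[Data]
    meta: Meta

    @classmethod
    def model_parametrized_name(cls, params: tuple[type[Any], ...]) -> str:
        # "ResponseList[MyResponse, MyMetaResponse]" becomes "MyResponseList"
        return f"{params[0].__name__}List"

class MyResponse(BaseModel):
    id: str

class MyMetaResponse(BaseModel):
    copyright: str
```

With this in place, ResponseList[MyResponse, MyMetaResponse] shows up as MyResponseList in the schema, and no extra subclass is needed.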

Loading big data fixtures: Showing progress

Recently I felt the need to show progress during a multiprocessing command loading a large set of fixtures.

Here is a slimmed-down version without the data loading itself.

The command is executed with the help of typer.

import time
from multiprocessing import Pool
from rich.progress import Progress
import random
import typer

def worker(_: int) -> None:
    time.sleep(random.randint(0, 5) / 10)

def main(processes: int = 4) -> None:
    tasks = range(100)
    with Progress() as progress:
        task_id = progress.add_task("[cyan]Completed...", total=len(tasks))
        with Pool(processes) as pool:
            results = pool.imap(
                worker,
                tasks
            )
            for _ in results:
                progress.advance(task_id)


if __name__ == "__main__":
    typer.run(main)
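One detail worth knowing: imap yields results in input order, so a single slow task can stall the bar. If you don't care about ordering, imap_unordered advances the bar as soon as any worker finishes. A minimal sketch (worker and load_all are illustrative names, with the fixture loading replaced by a sleep):

```python
import time
from multiprocessing import Pool

def worker(n: int) -> int:
    # stand-in for real fixture loading; returns a value to prove completion
    time.sleep(0.01)
    return n * n

def load_all(processes: int = 4) -> list[int]:
    with Pool(processes) as pool:
        # imap_unordered yields each result as soon as any worker finishes,
        # so a progress bar driven by this loop advances steadily
        # even when individual task durations vary
        return sorted(pool.imap_unordered(worker, range(20)))
```

In the progress loop above you would simply swap pool.imap for pool.imap_unordered; the advance-per-result logic stays the same.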