pydantic-tes
============

.. image:: https://badge.fury.io/py/pydantic-tes.svg
   :target: https://pypi.python.org/pypi/pydantic-tes/
   :alt: pydantic-tes on the Python Package Index (PyPI)

A collection of pydantic_ models for the `GA4GH Task Execution Service`_. In addition to the models, this package contains a lightweight TES client built on them using requests_, as well as utilities for working with it and testing it against Funnel_, a TES implementation.

This Python project can be installed from PyPI using pip. ::

    $ python3 -m venv .venv
    $ . .venv/bin/activate
    $ pip install pydantic-tes

Check out py-tes_ for an alternative set of Python models and an API client based on the more lightweight attrs_ package.

.. _Funnel: https://ohsu-comp-bio.github.io/funnel/
.. _requests: https://requests.readthedocs.io/en/latest/
.. _GA4GH Task Execution Service: https://github.com/ga4gh/task-execution-schemas
.. _pydantic: https://pydantic-docs.helpmanual.io/
.. _py-tes: https://github.com/ohsu-comp-bio/py-tes
.. _attrs: https://www.attrs.org/en/stable/

History
-------

0.2.0 (2025-04-10)
~~~~~~~~~~~~~~~~~~

* Allow creating a TES client with extra headers (e.g. for auth), thanks to @BorisYourich.
* Fixes for running against TESK, thanks to @mvdbeek.

0.1.5 (2022-10-06)
~~~~~~~~~~~~~~~~~~

* Messed up the 0.1.4 release, retrying.

0.1.4 (2022-10-06)
~~~~~~~~~~~~~~~~~~

* Further hacking around Funnel responses to produce validating responses.

0.1.3 (2022-10-05)
~~~~~~~~~~~~~~~~~~

* Another attempt at publishing types.

0.1.2 (2022-10-05)
~~~~~~~~~~~~~~~~~~

* Add support for Python 3.6. Add py.typed to package.

0.1.1 (2022-09-29)
~~~~~~~~~~~~~~~~~~

* Fixes to project publication scripts and process.

0.1.0 (2022-09-29)
~~~~~~~~~~~~~~~~~~

* Initial version.
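The README above does not demonstrate the models themselves. As a rough, unverified sketch, building a task might look like the following — note that the model names ``TesTask`` and ``TesExecutor`` are assumptions taken from the GA4GH TES schema, not confirmed against this package's exports, and the pydantic v1-style ``.json()`` call is likewise an assumption:

.. code-block:: python

    # Hypothetical sketch: assumes pydantic-tes mirrors the GA4GH TES schema
    # object names (TesTask, TesExecutor). Check the package's actual exports.
    from pydantic_tes import TesExecutor, TesTask

    task = TesTask(
        name="hello-world",
        executors=[
            TesExecutor(image="alpine:latest", command=["echo", "hello"]),
        ],
    )
    # Serialize for POSTing to a TES server's /tasks endpoint.
    # On pydantic v2 this would be task.model_dump_json(exclude_none=True).
    print(task.json(exclude_none=True))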
Latest version: 0.2.0 Released: 2025-04-10
# pydantic-csv

Pydantic CSV makes working with CSV files easier and much better than working with dicts. It uses pydantic BaseModels to store the data of every row of the CSV file, and its type annotations enable proper type checking and validation.

## Table of Contents

- Main features
- Installation
- Getting started
  - Using the BasemodelCSVReader
  - Error handling
  - Default values
  - Mapping BaseModel fields to columns
  - Supported type annotations
  - User-defined types
  - Using the BasemodelCSVWriter
  - Modifying the CSV header
- Copyright and License
- Credits

## Main features

- Uses `pydantic.BaseModel` instead of dictionaries to represent the rows in the CSV file.
- Takes advantage of the BaseModel properties' type annotations. `BasemodelCSVReader` uses the type annotations to perform validation on the data of the CSV file.
- Automatic type conversion. `BasemodelCSVReader` supports `str`, `int`, `float`, `complex`, `datetime` and `bool`, as well as any type whose constructor accepts a string as its single argument.
- Helps you troubleshoot issues with the data in the CSV file. `BasemodelCSVReader` will show exactly which line of the CSV file contains errors.
- Extracts only the data you need. It will only parse the properties defined in the BaseModel.
- Familiar syntax. The `BasemodelCSVReader` is used almost the same way as the `DictReader` in the standard library.
- Uses BaseModel features that let you define `Field` properties or a `Config` so the data can be parsed exactly the way you want.
- Makes the code cleaner. No more extra loops to convert data to the correct type, perform validation or set default values; the `BasemodelCSVReader` will do all this for you.
- In addition to the `BasemodelCSVReader`, the library also provides a `BasemodelCSVWriter` which enables creating a CSV file from a list of instances of a BaseModel.
- Because sqlmodel uses `pydantic.BaseModel` too, you can directly fill a database with data from a CSV.

## Installation

```shell
pip install pydantic-csv
```

## Getting started

### Using the BasemodelCSVReader

First, add the necessary imports:

```python
from pydantic import BaseModel

from pydantic_csv import BasemodelCSVReader
```

Assuming that we have a CSV file with the contents below:

```text
firstname,email,age
Elsa,elsa@test.com,26
Astor,astor@test.com,44
Edit,edit@test.com,33
Ella,ella@test.com,22
```

Let's create a BaseModel that will represent a row in the CSV file above:

```python
class User(BaseModel):
    firstname: str
    email: str
    age: int
```

The BaseModel `User` has 3 properties: `firstname` and `email` are of type `str`, and `age` is of type `int`.

To load and read the contents of the CSV file we do the same thing as if we were using the `DictReader` from the `csv` module in Python's standard library. After opening the file we create an instance of the `BasemodelCSVReader`, passing two arguments. The first is the file and the second is the BaseModel that we wish to use to represent the data of every row of the CSV file.
Like so:

```python
import io

from pydantic_csv import BasemodelCSVReader


# using a file on disk
with open("user.csv") as csv:
    reader = BasemodelCSVReader(csv, User)
    for row in reader:
        print(row)

# using a buffer (has to be a string buffer -> convert beforehand)
buffer = io.StringIO()  # fill the buffer with the CSV contents first
buffer.seek(0)  # ensure that we read from the beginning
reader = BasemodelCSVReader(buffer, User)
for row in reader:
    print(row)
```

If you run this code you should see an output like this:

```python
User(firstname='Elsa', email='elsa@test.com', age=26)
User(firstname='Astor', email='astor@test.com', age=44)
User(firstname='Edit', email='edit@test.com', age=33)
User(firstname='Ella', email='ella@test.com', age=22)
```

The `BasemodelCSVReader` internally uses the `DictReader` from the `csv` module to read the CSV file, which means that you can pass the same arguments that you would pass to the `DictReader`. The complete argument list is shown below:

```python
BasemodelCSVReader(
    file_obj: Any,
    model: Type[BaseModel],
    *,  # Note that you can't provide any value without specifying the parameter name
    use_alias: bool = True,
    validate_header: bool = True,
    fieldnames: Optional[Sequence[str]] = None,
    restkey: Optional[str] = None,
    restval: Optional[Any] = None,
    dialect: str = "excel",
    **kwargs: Any,
)
```

All keyword arguments supported by `DictReader` are supported by the `BasemodelCSVReader`, with the addition of `use_alias` and `validate_header`. Those two change the behaviour of the `BasemodelCSVReader` as follows:

- `use_alias` - The `BasemodelCSVReader` will search for column names identical to the aliases of the BaseModel fields (if set; otherwise their names). To avoid this behaviour and always use the field names, set `use_alias=False` when creating an instance of the `BasemodelCSVReader`, as in the example below:

```python
reader = BasemodelCSVReader(csv, User, use_alias=False)
```

- `validate_header` - The `BasemodelCSVReader` will raise a `ValueError` if the CSV file contains columns with the same name. This validation is performed to avoid data being overwritten. To skip it, set `validate_header=False` when creating an instance of the `BasemodelCSVReader`, as in the example below:

```python
reader = BasemodelCSVReader(csv, User, validate_header=False)
```

**Important:** If two or more columns with the same name exist, it tries to instantiate the BaseModel with the data from the rightmost column.

### Error handling

One of the advantages of using the `BasemodelCSVReader` is that it makes it easy to detect when the type of data in the CSV file is not what your application's model is expecting. And the `BasemodelCSVReader` shows errors that help to identify the rows with problems in your CSV file.

For example, say we change the contents of the CSV file shown in the Getting started section and modify the age of the user Astor, changing it to a string value:

```text
firstname,email,age
Elsa,elsa@test.com,26
Astor,astor@test.com,not a number
Edit,edit@test.com,33
Ella,ella@test.com,22
```

Remember that in the BaseModel `User` the `age` property is annotated with `int`. If we run the code again, an exception from the pydantic validation will be raised with the message below:

```text
pydantic_csv.exceptions.CSVValueError: [Error on CSV Line number: 3]
1 validation error for UserOptional
age
  Input should be a valid integer, unable to parse string as an integer [type=int_parsing, input_value='not a number', input_type=str]
    For further information visit https://errors.pydantic.dev/2.7/v/int_parsing
```

Note that, apart from telling what the error was, the `BasemodelCSVReader` also shows which line of the CSV file contains the data with errors.
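If you prefer to handle invalid rows instead of letting the exception propagate, a minimal sketch (assuming `CSVValueError` is importable from `pydantic_csv.exceptions`, as the traceback above suggests) could look like this:

```python
from pydantic_csv import BasemodelCSVReader
from pydantic_csv.exceptions import CSVValueError  # path taken from the traceback above

with open("user.csv") as csv:
    reader = BasemodelCSVReader(csv, User)
    try:
        users = list(reader)  # rows are validated lazily, so consume inside the try block
    except CSVValueError as exc:
        # The exception message includes the offending CSV line number.
        print(f"CSV contains invalid data: {exc}")
```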
### Default values

The `BasemodelCSVReader` also handles properties with default values. Let's modify the BaseModel `User` and add a default value for the field `email`:

```python
from pydantic import BaseModel


class User(BaseModel):
    firstname: str
    email: str = 'Not specified'
    age: int
```

And we modify the CSV file, removing the email for the user Astor:

```text
firstname,email,age
Elsa,elsa@test.com,26
Astor,,44
Edit,edit@test.com,33
Ella,ella@test.com,22
```

If we run the code we should see the output below:

```text
User(firstname='Elsa', email='elsa@test.com', age=26)
User(firstname='Astor', email='Not specified', age=44)
User(firstname='Edit', email='edit@test.com', age=33)
User(firstname='Ella', email='ella@test.com', age=22)
```

Note that now the object for the user Astor has the default value `Not specified` assigned to the `email` property.

Default values can also be set using `pydantic.Field`, like so:

```python
from pydantic import BaseModel, Field


class User(BaseModel):
    firstname: str
    email: str = Field(default='Not specified')
    age: int
```

### Mapping BaseModel fields to columns

The mapping between a BaseModel field and a column in the CSV file is done automatically if the names match. However, there are situations where the name of the header for a column is different. We can easily tell the `BasemodelCSVReader` how the mapping should be done using the method `map`.\
Assuming that we have a CSV file with the contents below:

```text
First Name,email,age
Elsa,elsa@test.com,26
Astor,astor@test.com,44
Edit,edit@test.com,33
Ella,ella@test.com,22
```

Note that now the column is called `First Name` and not `firstname`. We can use the method `map`, like so:

```python
reader = BasemodelCSVReader(csv, User)
reader.map('First Name').to('firstname')
```

Now the `BasemodelCSVReader` knows how to extract the data from the column `First Name` and add it to the BaseModel property `firstname`.

### Supported type annotations

At the moment the `BasemodelCSVReader` supports `int`, `str`, `float`, `complex`, `datetime`, and `bool`. pydantic_csv doesn't parse the date(time)s itself; it relies on pydantic's datetime parsing, which supports some common formats and unix timestamps. If you have a more exotic format you can use a pydantic validator.

Assuming that the CSV file has the following contents:

```text
name,email,birthday
Edit,edit@test.com,"Sunday, 6. January 2002"
```

The model would look like this:

```python
from datetime import datetime

from pydantic import BaseModel, field_validator


class User(BaseModel):
    name: str
    email: str
    birthday: datetime

    @field_validator("birthday", mode="before")
    def parse_birthday_date(cls, value):
        return datetime.strptime(value, "%A, %d. %B %Y")
```

### User-defined types

You can use any type for a field as long as its constructor accepts a string:

```python
import re

from pydantic import BaseModel


class SSN:
    def __init__(self, val):
        if re.match(r"\d{9}", val):
            self.val = f"{val[0:3]}-{val[3:5]}-{val[5:9]}"
        elif re.match(r"\d{3}-\d{2}-\d{4}", val):
            self.val = val
        else:
            raise ValueError(f"Invalid SSN: {val!r}")


class User(BaseModel):
    name: str
    ssn: SSN
```
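Putting the reader pieces together before moving on to writing, here is a compact, runnable sketch combining a model, a header mapping, and iteration — the file name `users_mapped.csv` is just an illustrative choice, not something the library prescribes:

```python
from pydantic import BaseModel

from pydantic_csv import BasemodelCSVReader


class User(BaseModel):
    firstname: str
    email: str
    age: int


# Hypothetical file whose header reads: First Name,email,age
with open("users_mapped.csv") as csv:
    reader = BasemodelCSVReader(csv, User)
    reader.map('First Name').to('firstname')  # CSV column -> model field
    for user in reader:
        print(user.firstname, user.age)
```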
### Using the BasemodelCSVWriter

Reading a CSV file using the `BasemodelCSVReader` is great and gives us the type safety of pydantic's BaseModels and type annotations. However, there are situations where we would like to use BaseModels for creating CSV files. That's where the `BasemodelCSVWriter` comes in handy, and using it is quite simple.

Given that we have a BaseModel `User`:

```python
from pydantic import BaseModel


class User(BaseModel):
    firstname: str
    lastname: str
    age: int
```

And in our program we have a list of users:

```python
users = [
    User(firstname="John", lastname="Smith", age=40),
    User(firstname="Daniel", lastname="Nilsson", age=23),
    User(firstname="Ella", lastname="Fralla", age=28),
]
```

In order to create a CSV file, import the `BasemodelCSVWriter` from pydantic_csv:

```python
from pydantic_csv import BasemodelCSVWriter
```

Initialize it with the required arguments and call the method `write`:

```python
import io


# using a file on disk
with open("user.csv", "w") as csv:
    writer = BasemodelCSVWriter(csv, users, User)
    writer.write()

# using a buffer (has to be a StringIO buffer)
buffer = io.StringIO()
writer = BasemodelCSVWriter(buffer, users, User)
writer.write()
buffer.seek(0)  # ensure that the next working steps start at the beginning of the "file"

# if you need a bytes buffer, just convert it:
bytes_buffer: io.BytesIO = io.BytesIO(buffer.read().encode("utf-8"))
bytes_buffer.name = buffer.name  # assumes a name was assigned to the buffer beforehand
bytes_buffer.seek(0)  # ensure that the next working steps start at the beginning of the "file"
```

That's it! Let's break down the snippet above. First, we open a file called `user.csv` for writing. After that, an instance of the `BasemodelCSVWriter` is created. To create a `BasemodelCSVWriter` we need to pass the `file_obj`, the list of `User` instances, and lastly the type, which in this case is `User`. The type is required since the writer uses it when trying to figure out the CSV header. By default it uses the alias of each field, or otherwise its name as defined in the BaseModel; in the case of the BaseModel `User` the title of each column will be `firstname`, `lastname` and `age`. See below the CSV created from the list of `User` instances:

```text
firstname,lastname,age
John,Smith,40
Daniel,Nilsson,23
Ella,Fralla,28
```

The `BasemodelCSVWriter` also takes `**fmtparams`, which accepts the same parameters as `csv.writer`. For more information see: https://docs.python.org/3/library/csv.html#csv-fmt-params

Now, there are situations where we don't want to write the CSV header. In this case, the method `write` of the `BasemodelCSVWriter` accepts an extra argument called `skip_header`. The default value is `False`, and when set to `True` it will skip the header.

### Modifying the CSV header

As previously mentioned, the `BasemodelCSVWriter` uses the aliases or names of the fields defined in the BaseModel as the CSV header titles. If you don't want the `BasemodelCSVWriter` to use the aliases but only the names, you can set `use_alias` to `False`:

```python
writer = BasemodelCSVWriter(file_obj, users, User, use_alias=False)
```

However, depending on your use case it may make sense to set custom headers and not use the aliases or names at all. The `BasemodelCSVWriter` has a `map` method just for this purpose.

Using the `User` BaseModel with the properties `firstname`, `lastname` and `age`, the snippet below shows how to change `firstname` to `First Name` and `lastname` to `Last Name`:

```python
with open("user.csv", "w") as file:
    writer = BasemodelCSVWriter(file, users, User)

    # Add mappings for firstname and lastname
    writer.map("firstname").to("First Name")
    writer.map("lastname").to("Last Name")

    writer.write()
```

The CSV output of the snippet above will be:

```text
First Name,Last Name,age
John,Smith,40
Daniel,Nilsson,23
Ella,Fralla,28
```

## Copyright and License

Copyright (c) 2024 Nathan Richard. Code released under the BSD 3-clause license.

## Credits

A huge shoutout to Daniel Furtado (github) and his python package 'dataclass-csv' (pypi | github). Most of the codebase and documentation is from him, just adjusted to use `pydantic.BaseModel`.
Latest version: 0.1.0 Released: 2024-06-28
# Pydantic AST

Pydantic models covering Python AST types.

## Installation

```sh
pip install pydantic-ast
```

## Usage

Use it as a drop-in replacement for `ast.parse` with a more readable representation:

```py
import ast
import pydantic_ast

source = "x = 1"
ast.parse(source)         # opaque repr from the standard library
pydantic_ast.parse(source)
# body=[Assign(targets=[Name(id='x', ctx=Store())], value=Constant(value=1), type_comment=None)] type_ignores=[]
```

Use it on the command line to quickly get an AST of a Python program or a section of one:

```sh
echo '"Hello world"' | pydantic-ast
# ⇣
# body=[Expr(value=Constant(value='Hello world'))] type_ignores=[]
```

Use it on ASTs you got from elsewhere to make them readable, or to inspect parts of them more easily. The `AST_to_pydantic` class is an `ast.NodeTransformer` that converts nodes in the AST to pydantic_ast model types as the tree nodes are visited:

```py
from pydantic_ast import AST_to_pydantic

source = "123 + 345 == expected"
my_mystery_ast = ast.parse(source)
ast_model = AST_to_pydantic().visit(my_mystery_ast)
ast_model
# body=[Expr(value=Compare(left=BinOp(left=Constant(value=123), op=Add(), right=Constant(value=345)), ops=[Eq()], comparators=[Name(id='expected', ctx=Load())]))] type_ignores=[]
ast_model.body
# [Expr(value=Compare(left=BinOp(left=Constant(value=123), op=Add(), right=Constant(value=345)), ops=[Eq()], comparators=[Name(id='expected', ctx=Load())]))]
```

It's simply much easier to drill down into a tree when you can see the fields in a repr:

```py
ast_model.body[0].value
# Compare(left=BinOp(left=Constant(value=123), op=Add(), right=Constant(value=345)), ops=[Eq()], comparators=[Name(id='expected', ctx=Load())])
ast_model.body[0].value.left
# BinOp(left=Constant(value=123), op=Add(), right=Constant(value=345))
ast_model.body[0].value.left.left
# Constant(value=123)
ast_model.body[0].value.left.left.value
# 123
```

## Development

To set up pre-commit hooks (to keep the CI bot happy), run `pre-commit install-hooks` so all git commits trigger the pre-commit checks. I use Conventional Commits. This runs black, flake8, autopep8, pyupgrade, etc.

To set up a dev env, I first create a new conda environment and use it in PDM with `which python > .pdm-python`. To use a virtualenv environment instead of conda, skip that step. Run `pdm install` and a `.venv` will be created if no Python binary path is found in `.pdm-python`.

To run tests, run `pdm run python -m pytest` and the PDM environment will be used to run the test suite.
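Since `pydantic_ast.parse` returns a pydantic model, the standard pydantic serialization methods should apply as well. A small sketch — whether the models expose pydantic v2's `model_dump_json()` or v1's `json()` depends on the installed pydantic version, which is an assumption here, not something the README states:

```py
import pydantic_ast

module = pydantic_ast.parse("x = 1")
# Assumes pydantic v2 models; on pydantic v1 this would be module.json(indent=2).
print(module.model_dump_json(indent=2))
```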
Latest version: 0.2.0 Released: 2023-09-19
# OGC Environmental Data Retrieval (EDR) API Pydantic

This repository contains the edr-pydantic Python package. It provides Pydantic models for the OGC Environmental Data Retrieval (EDR) API. This can, for example, be used to help develop an EDR API using FastAPI.

## Install

```shell
pip install edr-pydantic
```

Or to install from source:

```shell
pip install git+https://github.com/KNMI/edr-pydantic.git
```

## Usage

```python
from edr_pydantic.collections import Collection
from edr_pydantic.data_queries import EDRQuery, EDRQueryLink, DataQueries
from edr_pydantic.extent import Extent, Spatial
from edr_pydantic.link import Link
from edr_pydantic.observed_property import ObservedProperty
from edr_pydantic.parameter import Parameters, Parameter
from edr_pydantic.unit import Unit
from edr_pydantic.variables import Variables

c = Collection(
    id="hrly_obs",
    title="Hourly Site Specific observations",
    description="Observation data for UK observing sites",
    extent=Extent(
        spatial=Spatial(
            bbox=[
                [-15.0, 48.0, 5.0, 62.0]
            ],
            crs="WGS84"
        )
    ),
    links=[
        Link(
            href="https://example.org/uk-hourly-site-specific-observations",
            rel="service-doc"
        )
    ],
    data_queries=DataQueries(
        position=EDRQuery(
            link=EDRQueryLink(
                href="https://example.org/edr/collections/hrly_obs/position?coords={coords}",
                rel="data",
                variables=Variables(
                    query_type="position",
                    output_formats=[
                        "CoverageJSON"
                    ]
                )
            )
        )
    ),
    parameter_names=Parameters({
        "Wind Direction": Parameter(
            unit=Unit(
                label="degree true"
            ),
            observedProperty=ObservedProperty(
                id="https://codes.wmo.int/common/quantity-kind/_windDirection",
                label="Wind Direction"
            ),
            dataType="integer"
        )
    })
)

print(c.model_dump_json(indent=2, exclude_none=True, by_alias=True))
```

Will print:

```json
{
  "id": "hrly_obs",
  "title": "Hourly Site Specific observations",
  "description": "Observation data for UK observing sites",
  "links": [
    {
      "href": "https://example.org/uk-hourly-site-specific-observations",
      "rel": "service-doc"
    }
  ],
  "extent": {
    "spatial": {
      "bbox": [
        [-15.0, 48.0, 5.0, 62.0]
      ],
      "crs": "WGS84"
    }
  },
  "data_queries": {
    "position": {
      "link": {
        "href": "https://example.org/edr/collections/hrly_obs/position?coords={coords}",
        "rel": "data",
        "variables": {
          "query_type": "position",
          "output_formats": [
            "CoverageJSON"
          ]
        }
      }
    }
  },
  "parameter_names": {
    "Wind Direction": {
      "type": "Parameter",
      "data-type": "integer",
      "unit": {
        "label": "degree true"
      },
      "observedProperty": {
        "id": "https://codes.wmo.int/common/quantity-kind/_windDirection",
        "label": "Wind Direction"
      }
    }
  }
}
```

IMPORTANT: The argument `by_alias=True` to `model_dump_json()` or `model_dump()` is required to get the output as shown above. Without `by_alias=True` the attribute `data-type` will wrongly be outputted as `dataType`. This is due to an issue in the EDR spec and Pydantic.

## Contributing

Make an editable install from within the repository root:

```shell
pip install -e '.[test]'
```

Running tests:

```shell
pytest tests/
```

### Linting and typing

Linting and typing (mypy) is done using pre-commit hooks:

```shell
pip install pre-commit
pre-commit install
pre-commit run
```

## Related packages

- CoverageJSON Pydantic
- geojson-pydantic

## Real world usage

This library is used to build an OGC Environmental Data Retrieval (EDR) API, serving automatic weather station data from The Royal Netherlands Meteorological Institute (KNMI). See the KNMI Data Platform EDR API.

## TODOs

Help is wanted in the following areas to fully implement the EDR spec:

- See TODOs in code listing various small inconsistencies in the spec
- In various places there could be more validation on content

## License

Apache License, Version 2.0
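Because these are ordinary Pydantic models, they can validate incoming documents just as well as produce them. A brief sketch using pydantic v2's standard `model_validate_json`, reusing the collection `c` built in the usage example above:

```python
# Round-trip: dump the collection above, then validate it back into a model.
# In practice, collection_json would come from an EDR service response.
collection_json = c.model_dump_json(by_alias=True, exclude_none=True)

collection = Collection.model_validate_json(collection_json)
assert collection.id == "hrly_obs"
```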
Latest version: 0.7.0 Released: 2025-03-05
# Pydantic MQL

This library can parse and evaluate conditions using pydantic. The usage is similar to the MongoDB Query Language, but instead of filtering documents within a database collection you can use the library to filter arbitrary application data in memory.

## Example Usage

### Parsing

```python
from pydantic_mql import Condition

test_json = '{"operator": "$eq", "field": "label", "value": "lab"}'
condition = Condition.model_validate_json(test_json)
print(condition)
```

### Serializing

```python
from pydantic_mql import Condition

condition = Condition(operator='$and', conditions=[
    Condition(operator='$eq', field='label', value='lab'),
    Condition(operator='$lte', field='rating', value=80)
])
print(condition.model_dump())
```

### Condition Eval

```python
test_data = {'rating': 60, 'label': 'lab'}
result = condition.eval(test_data)
print(result)
```
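Combining parsing and evaluation, here is a short sketch that filters an in-memory list with a condition; the records are made up for illustration:

```python
from pydantic_mql import Condition

condition = Condition(operator='$and', conditions=[
    Condition(operator='$eq', field='label', value='lab'),
    Condition(operator='$lte', field='rating', value=80),
])

# Hypothetical application data to filter in memory
records = [
    {'rating': 60, 'label': 'lab'},   # matches both conditions
    {'rating': 95, 'label': 'lab'},   # fails $lte 80
    {'rating': 50, 'label': 'prod'},  # fails $eq 'lab'
]

matches = [record for record in records if condition.eval(record)]
print(matches)  # expect [{'rating': 60, 'label': 'lab'}]
```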
Latest version: 0.1.0 Released: 2023-09-11
# pydantic orm

Asynchronous database ORM using Pydantic.

## Installation

Install using `pip install -U pydantic-orm` or `poetry add pydantic-orm`.
Latest version: 0.1.1 Released: 2021-07-23
Pydantic ODM

Small async ODM for MongoDB, based on Motor and Pydantic.
Latest version: 0.2.5 Released: 2021-01-15
pydantic-sdk Coming Soon.
Latest version: 0.0.2 Released: 2023-07-17
# PydanticRDF

A Python library that bridges Pydantic V2 models and RDF graphs, enabling seamless bidirectional conversion between typed data models and semantic web data. PydanticRDF combines Pydantic's powerful validation with RDFLib's graph capabilities to simplify working with linked data.

## Features

- ✅ Type Safety: Define data models with Pydantic V2 and map them to RDF graphs
- 🔄 Serialization: Convert Pydantic models to RDF triples with customizable predicates
- 📥 Deserialization: Parse RDF data into validated Pydantic models
- 🧩 Complex Structures: Support for nested models, lists, and circular references
- 📊 JSON Schema: Generate valid JSON schemas from RDF models with proper URI handling

## Installation

```bash
pip install pydantic-rdf
```

## Quick Example

```python
from typing import Annotated

from rdflib import Namespace

from pydantic_rdf import BaseRdfModel, WithPredicate

# Define a model
EX = Namespace("http://example.org/")


class Person(BaseRdfModel):
    rdf_type = EX.Person
    _rdf_namespace = EX

    name: str
    age: int
    email: Annotated[str, WithPredicate(EX.emailAddress)]


# Create and serialize
person = Person(uri=EX.person1, name="John Doe", age=30, email="john@example.com")
graph = person.model_dump_rdf()
```

The resulting RDF graph looks like this:

```turtle
@prefix ex: <http://example.org/> .
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .

ex:person1 a ex:Person ;
    ex:age 30 ;
    ex:emailAddress "john@example.com" ;
    ex:name "John Doe" .
```

Deserialize:

```python
loaded_person = Person.parse_graph(graph, EX.person1)
```

## Requirements

- Python 3.11+
- pydantic >= 2.11.3
- rdflib >= 7.1.4

## Documentation

For complete documentation, visit https://omegaice.github.io/pydantic-rdf/

## License

MIT
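To load a model from serialized RDF text rather than from a graph built in code, rdflib's standard parsing applies first. A short sketch, where the turtle string simply mirrors the example output above:

```python
from typing import Annotated

from rdflib import Graph, Namespace

from pydantic_rdf import BaseRdfModel, WithPredicate

EX = Namespace("http://example.org/")


class Person(BaseRdfModel):
    rdf_type = EX.Person
    _rdf_namespace = EX

    name: str
    age: int
    email: Annotated[str, WithPredicate(EX.emailAddress)]


turtle = """
@prefix ex: <http://example.org/> .

ex:person1 a ex:Person ;
    ex:age 30 ;
    ex:emailAddress "john@example.com" ;
    ex:name "John Doe" .
"""

# Parse the serialized RDF into an rdflib Graph, then into the typed model.
graph = Graph().parse(data=turtle, format="turtle")
person = Person.parse_graph(graph, EX.person1)
print(person.name, person.age)  # John Doe 30
```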
Latest version: 0.2.0 Released: 2025-05-03
# Pydantic view helper decorator

## Installation

```bash
pip install pydantic_view
```

## Usage

```python
In [1]: from pydantic import BaseModel, Field
   ...: from pydantic_view import view
   ...:
   ...:
   ...: class User(BaseModel):
   ...:     id: int
   ...:     username: str
   ...:     password: str
   ...:     address: str
   ...:
   ...:
   ...: @view("Create", exclude={"id"})
   ...: class UserCreate(User):
   ...:     pass
   ...:
   ...:
   ...: @view("Update")
   ...: class UserUpdate(User):
   ...:     pass
   ...:
   ...:
   ...: @view("Patch")
   ...: class UserPatch(User):
   ...:     username: str = None
   ...:     password: str = None
   ...:     address: str = None
   ...:
   ...:
   ...: @view("Out", exclude={"password"})
   ...: class UserOut(User):
   ...:     pass

In [2]: user = User(id=0, username="human", password="iamaman", address="Earth")
   ...: user.Out()
Out[2]: UserOut(id=0, username='human', address='Earth')

In [3]: User.Update(id=0, username="human", password="iamasuperman", address="Earth")
Out[3]: UserUpdate(id=0, username='human', password='iamasuperman', address='Earth')

In [4]: User.Patch(id=0, address="Mars")
Out[4]: UserPatch(id=0, username=None, password=None, address='Mars')
```

## FastAPI example

```python
from typing import Optional

from fastapi import FastAPI
from fastapi.testclient import TestClient
from pydantic import BaseModel, ConfigDict, Field

from pydantic_view import view, view_field_validator


class UserSettings(BaseModel):
    model_config = ConfigDict(extra="forbid")

    public: Optional[str] = None
    secret: Optional[str] = None


@view("Out", exclude={"secret"})
class UserSettingsOut(UserSettings):
    pass


@view("Create")
class UserSettingsCreate(UserSettings):
    pass


@view("Update")
class UserSettingsUpdate(UserSettings):
    pass


@view("Patch")
class UserSettingsPatch(UserSettings):
    public: str = None
    secret: str = None


class User(BaseModel):
    model_config = ConfigDict(extra="forbid")

    id: int
    username: str
    password: str = Field(default_factory=lambda: "password")
    settings: UserSettings

    @view_field_validator({"Create", "Update", "Patch"}, "username")
    @classmethod
    def validate_username(cls, v):
        if len(v) < 3:
            raise ValueError
        return v


@view("Out", exclude={"password"})
class UserOut(User):
    pass


@view("Create", exclude={"id"})
class UserCreate(User):
    settings: UserSettings = Field(default_factory=UserSettings)


@view("Update", exclude={"id"})
class UserUpdate(User):
    pass


@view("Patch", exclude={"id"})
class UserPatch(User):
    username: str = None
    password: str = None
    settings: UserSettings = None


app = FastAPI()

db = {}


@app.get("/users/{user_id}", response_model=User.Out)
async def get(user_id: int) -> User.Out:
    return db[user_id]


@app.post("/users", response_model=User.Out)
async def post(user: User.Create) -> User.Out:
    user_id = 0  # generate_user_id()
    db[0] = User(id=user_id, **user.model_dump())
    return db[0]


@app.put("/users/{user_id}", response_model=User.Out)
async def put(user_id: int, user: User.Update) -> User.Out:
    db[user_id] = User(id=user_id, **user.model_dump())
    return db[user_id]


@app.patch("/users/{user_id}", response_model=User.Out)
async def patch(user_id: int, user: User.Patch) -> User.Out:
    db[user_id] = User(**{**db[user_id].model_dump(), **user.model_dump(exclude_unset=True)})
    return db[user_id]


def test_fastapi():
    client = TestClient(app)

    # POST
    response = client.post(
        "/users",
        json={
            "username": "admin",
            "password": "admin",
        },
    )
    assert response.status_code == 200, response.text
    assert response.json() == {
        "id": 0,
        "username": "admin",
        "settings": {"public": None},
    }

    # GET
    response = client.get("/users/0")
    assert response.status_code == 200, response.text
    assert response.json() == {
        "id": 0,
        "username": "admin",
        "settings": {"public": None},
    }

    # PUT
    response = client.put(
        "/users/0",
        json={
            "username": "superadmin",
            "password": "superadmin",
            "settings": {"public": "foo", "secret": "secret"},
        },
    )
    assert response.status_code == 200, response.text
    assert response.json() == {
        "id": 0,
        "username": "superadmin",
        "settings": {"public": "foo"},
    }

    # PATCH
    response = client.patch(
        "/users/0",
        json={
            "username": "guest",
            "settings": {"public": "bar"},
        },
    )
    assert response.status_code == 200, response.text
    assert response.json() == {
        "id": 0,
        "username": "guest",
        "settings": {"public": "bar"},
    }
```
Latest version: 2.0.1 Released: 2024-10-14