Package maintenance

BIDS-pydantic

BIDS-pydantic will pull a specified version (from v1.7.0 onwards, tested up to v1.7.0) of the BIDS metadata schema used in JSON BIDS sidecar files from the official BIDS GitHub page, and create corresponding pydantic models, which provide BIDS data validation using Python type annotations. Alternatively, the BIDS-pydantic-models package will install only the models, for direct use in your Python software. More information on the use of the models can be found here.

How To Contribute

Got a great idea for something to implement in BIDS-pydantic, or maybe you have just found a bug? Create an issue to get in touch with the development team and we'll take it from there.

Quickstart

If you just want to use the models in your project, download the pydantic models file for the BIDS schema version you wish to use from the models directory and add it to your code-base. These files are generated using the bids-pydantic make -a command (see below). Alternatively, you can just run:

```sh
$ pip install bids-pydantic-models
```

More information on the use of the models can be found here. If you want to use the command-line tool to generate the models, keep reading this README.

Installation

Install with:

```sh
$ pip install bids-pydantic
```

BIDS-pydantic can be installed as a module directly from the Python package index. For more information on how to use a Python package in this way, see https://docs.python.org/3/installing/index.html

Python Version: we recommend using the latest version of Python. BIDS-pydantic supports Python 3.9 and newer.

Dependencies: these distributions will be installed automatically when installing BIDS-pydantic: pydantic, datamodel-code-generator.

Usage

The primary commands can be viewed with the command bids-pydantic:

```
usage: bids-pydantic [-h] {list,make} ...

Run one of a set of commands. For example: bids-pydantic list, or
bids-pydantic make. Run either command with -h e.g. bids-pydantic make -h
to get help for that command.

optional arguments:
  -h, --help   show this help message and exit

command:
  {list,make}  subcommand to run
```

The list command help can be viewed with the command bids-pydantic list -h:

```
usage: bids-pydantic list [-h]

Queries the GitHub API and lists the available supported BIDS schema
versions. Only tested up to v1.7.0.

optional arguments:
  -h, --help  show this help message and exit
```

The make command help can be viewed with the command bids-pydantic make -h:

```
usage: bids-pydantic make [-h] [--output OUTPUT] [--output-all OUTPUT_ALL]
                          [--schema-version [SCHEMA_VERSION]]

Make a new python file(s) containing BIDS compliant pydantic models

optional arguments:
  -h, --help            show this help message and exit
  --output OUTPUT, -o OUTPUT
                        The output python filename to create (will output to
                        stdout console if not specified).
  --output-all OUTPUT_ALL, -a OUTPUT_ALL
                        Find all parsable schemas and output each to the
                        provided directory. Will create filenames such as
                        bids_schema_model_v_1_7_0.py, etc. Will overwrite any
                        files in that directory with the same name.
  --schema-version [SCHEMA_VERSION]
                        The BIDS schema version to use. e.g. 1.7.0 -
                        supported versions can be discovered using the list
                        command. If a version is not specified v1.7.0 will be
                        used.
  --input INPUT, -i INPUT
                        Specify an input BIDS metadata (yml) file to use
                        instead of downloading a version from GitHub. Cannot
                        be used with --schema-version or --output-all
```

Development

Development dependencies should be installed using pip install -r requirements/dev.txt -U, and pre-commit install then run to install code-quality Git hooks. Development should be carried out using Python 3.8.

Development must comply with a few code styling/quality rules and processes:

- Before pushing any code, make sure the CHANGELOG.md is updated as per the instructions in the CHANGELOG.md file.
- tox should also be run to ensure that tests and code-quality checks pass.
- The README.md file should be updated with any usage or development instructions.
- Ensure that a good level of test coverage is kept. The test reports will be committed to the CI system when testing is run, and these will be made available during code review. If you wish to view test coverage locally, run coverage report.
- To ensure these code-quality rules are kept to, pre-commit should be installed (see requirements/dev.txt), and pre-commit install run when first cloning this repo. This will install pre-commit hooks that ensure any committed code meets the minimum code quality and is formatted correctly before being committed to Git, so that tests will pass on the CI system after code is pushed. The tools should also be included in any IDEs/editors used, where possible. To run manually, run pre-commit run --all-files. The following tools will be run: mypy, pylint, black, isort, pyupgrade.

Acknowledgements

Conversion from schema to pydantic models is carried out using datamodel-code-generator. Data validation is performed using pydantic.

License

You can check out the full license here. This project is licensed under the terms of the MIT license.
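As a rough sketch of what using one of the generated model files looks like: the module name below matches the make --output-all naming shown above, but the model class SidecarMetadata and its fields are hypothetical placeholders — inspect the generated file for the classes it actually exposes.

```python
import json

# Hypothetical import: the module comes from `bids-pydantic make -a`, and
# SidecarMetadata stands in for whatever model the generated file exposes.
from bids_schema_model_v_1_7_0 import SidecarMetadata

# Validate a JSON BIDS sidecar against the generated pydantic model;
# a pydantic.ValidationError is raised if the metadata is malformed.
with open("sub-01_task-rest_bold.json") as fp:
    sidecar = SidecarMetadata(**json.load(fp))

print(sidecar)
```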

pypi package. Binary | Source

Latest version: 0.0.3 Released: 2023-06-08

pika-pydantic

An opinionated Python implementation of the Producer-Consumer pattern using RabbitMQ on top of pika and pydantic.

Introduction

This pika_pydantic library is a thin wrapper on top of the pika and pydantic libraries that makes it quick and easy to create Producer-Consumer workers that interface with a RabbitMQ message queue. For more information on why this library was created, see the Backstory section below. I was inspired in many ways by what Sebastian created with FastAPI by building on top of good existing libraries; pika_pydantic attempts to follow that method in its own much simpler way for the asynchronous Producer-Consumer pattern using RabbitMQ. If you are creating a long chain of Producers and Consumers, the pika_pydantic library can save quite a lot of boilerplate code and potential errors.

Installation

To install the pika_pydantic package using pip:

```
pip install pika-pydantic
```

Or using poetry:

```
poetry add pika-pydantic
```

Dependencies: requires RabbitMQ. In addition, you need a RabbitMQ instance up and running that can receive and route the messages. If you have some familiarity with Docker, the easiest method is to spin up a Docker container running RabbitMQ and use that as your message service; docker-compose-rabbit.yml provides a simple docker-compose configuration script for this. Alternatively, you can install RabbitMQ natively on your development machine or server, or link to a hosted RabbitMQ instance. More details on RabbitMQ installation can be found in the official RabbitMQ documentation.

Quickstart

This example creates a simple message Producer-Consumer that passes around a message object.

Create the pika connection

First we create a pika connection to the RabbitMQ system:

```python
import pika

parameters = pika.URLParameters("amqp://guest:guest@localhost:5672/")
connection = pika.BlockingConnection(parameters)
```

This creates a normal pika blocking connection. The pika documentation can be found here.

Pika-pydantic Channel vs pika Channel

Now we deviate from the standard pika method. Instead of using connection.channel() or similar to create a pika.BlockingChannel, we use the pika_pydantic.BlockingChannel object instead. This object also initialises queues and adds various other useful methods on top of the standard pika.BlockingChannel object. But before we do that, we need to define the data validation and the queues that will constrain and validate our Producers and Consumers.

Defining data models

We want to pass around a message data object that has a title and text. This data model is defined using pika_pydantic.BaseModel, which is a wrapper around the standard pydantic.BaseModel:

```python
import pika_pydantic

class MyMessage(pika_pydantic.BaseModel):
    """A message."""

    title: str
    text: str
```

pika_pydantic.BaseModel objects are pydantic.BaseModel objects with some additional elements for encoding and decoding the objects for RabbitMQ. See the pydantic documentation for more details.

Defining queues

We also define the single message queue we will use in this example by defining a pika_pydantic.Queues enum. The name on the left defines the enum but also the RabbitMQ queue name. The value on the right defines the data model to use for validation.

```python
class MyQueues(pika_pydantic.Queues):
    MESSAGE = MyMessage
```

This object is the master that defines the valid queues and the corresponding data that all Producers and Consumers must use. Add more elements to this enum as you add queues and data models.

pika_pydantic.Queues objects are a Python enum.Enum class. The RabbitMQ queue name will be set to the same as the enum name (on the left), and the value on the right is the pika_pydantic.BaseModel data model object that all Producers and Consumers on this queue need to use.

Initialise the Channel

Now we can initialise the channel, passing it the pika.connection and the pika_pydantic.Queues enum we just defined:

```python
channel = pika_pydantic.BlockingChannel(connection=connection, queues=MyQueues)
```

pika_pydantic.BlockingChannel is a pika.BlockingChannel object with some additional methods attached that allow simpler creation of Consumers (listen()) and Producers (send()). This object declares all the queues, validates the message data on each queue, and does the necessary encoding and decoding of the data for Consumers and Producers.

Create a Consumer

To create a new Consumer for this message queue we use the new channel.listen(queue, callback) method. This validates the inputs and does the decoding needed for that particular queue. We define a callback as in pika and add the consumer to the channel:

```python
def callback(channel, method, frame, data: MyMessage):
    print(f"Received message with title ({data.title}) and text ({data.text}).")

channel.listen(queue=MyQueues.MESSAGE, callback=callback, auto_ack=True)
```

Create a Producer

To create a Producer we use the new channel.send(queue, data) method. This takes the data object and does all the validation and encoding needed to pass it to the RabbitMQ queue:

```python
message = MyMessage(title="Important", text="Remember to feed the dog")
channel.send(queue=MyQueues.MESSAGE, data=message)
```

Start it running

As with standard pika, the channel can start polling so that the defined Consumers start listening for messages on their queue:

```python
channel.start_consuming()
```

Or, to not block the thread and process the messages currently in the queue, we can use:

```python
connection.process_data_events(time_limit=None)
```

Other examples

The examples folder provides further examples and a suggested project folder structure that reuses the pika_pydantic elements across multiple Consumers and Producers.

The backstory

Asynchronous messaging: good code structure generally separates concerns (jobs) between different modules. Microservices take this one step further and separate jobs into different deployable systems that interact with each other. These different systems are interfaced through various APIs, usually called from one system to another in realtime. But some jobs are long-lasting or resource-hungry, and this is where we can use asynchronous interfaces between the different systems. There is a lot of interest currently in Kafka as a system for managing these asynchronous jobs, but for most projects a simpler message queue such as RabbitMQ will do the job. It provides a way to pass data and a job onto another system, and that other system will pick up the job when it has resources to do so.

The Producer-Consumer pattern: for many purposes, a system does some work and prepares some data, then passes this on as a job for the next system element to work on when it has resources available. This is the Producer-Consumer pattern. In a bit more detail: a Producer completes some job, often resulting in some data artifact to be passed to the next stage. The Producer publishes this data to a message queue. The next job is a Consumer of this message queue; when it has resources available, it picks up the message and the published data and then does its work. This Consumer may itself also be a Producer, publishing its data to a different message queue for the next Consumer in the chain to take forward (see the sketch after this section).

RabbitMQ and the Python pika library: in the Python world there are good libraries for this, most notably the pika library, which interfaces with a RabbitMQ message queue. pika is relatively simple and very flexible. But for my needs I wanted to use stricter software development principles, and I found the flexibility of pika too flexible. Specifically:

- My system has many Consumers and many Publishers. I wanted to be able to define the pika boilerplate code that sets up the connection, the queues and the channel in one central place for all the different jobs.
- I wanted to restrict my Consumers and Publishers to only valid queues, so I defined the valid queues in an Enum to reduce strange bugs.
- I wanted to ensure that each Producer sends data in the right format and each Consumer picks up the data and validates it against the same format. For this, the pydantic library is very helpful to constrain the Producer and Consumer data being passed. This is how the fastapi library ensures that data passed around the API is validated and structured correctly; I wanted to use this pattern.

Contributing

If you find this useful, consider adding a star and contributing. Currently this only uses the pika.BlockingChannel implementation.

Tests

When running tests, a RabbitMQ instance needs to be up and running on your machine, as the tests run live against that RabbitMQ. If using Docker, you can spin up a RabbitMQ instance for testing using docker-compose -f docker-compose-rabbit.yml up. The environment variable PIKA_URL can be overwritten to point to your test RabbitMQ instance. Then run the tests using pytest:

```
pytest
```
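To illustrate the chaining described in the backstory, here is a minimal sketch of a worker that consumes from one queue and publishes to the next. The queue and model names are illustrative only, and it assumes the channel handed to the callback is the pika_pydantic channel (so send() is available):

```python
import pika
import pika_pydantic

class RawJob(pika_pydantic.BaseModel):
    payload: str

class ProcessedJob(pika_pydantic.BaseModel):
    payload: str
    status: str

class Queues(pika_pydantic.Queues):
    RAW = RawJob
    PROCESSED = ProcessedJob

connection = pika.BlockingConnection(
    pika.URLParameters("amqp://guest:guest@localhost:5672/")
)
channel = pika_pydantic.BlockingChannel(connection=connection, queues=Queues)

def handle_raw(channel, method, frame, data: RawJob):
    # This Consumer is also a Producer: consume from RAW, publish to PROCESSED.
    result = ProcessedJob(payload=data.payload, status="done")
    channel.send(queue=Queues.PROCESSED, data=result)

channel.listen(queue=Queues.RAW, callback=handle_raw, auto_ack=True)
channel.start_consuming()
```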

pypi package. Binary | Source

Latest version: 0.1.4 Released: 2022-06-26

pydantic-zarr

Pydantic models for Zarr.

⚠️ Disclaimer ⚠️ This project is under a lot of flux -- I want to add zarr version 3 support to this project, but the reference python implementation doesn't support version 3 yet. Also, the key ideas in this repo may change in the process of being formalized over in this specification (currently just a draft). As the ecosystem evolves I will be breaking things (and versioning the project accordingly), so be advised!

Installation

```
pip install -U pydantic-zarr
```

Help

See the documentation for detailed information about this project.

Example

```python
import zarr
from pydantic_zarr import GroupSpec

group = zarr.group(path='foo')
array = zarr.create(store=group.store, path='foo/bar', shape=10, dtype='uint8')
array.attrs.put({'metadata': 'hello'})

# this is a pydantic model
spec = GroupSpec.from_zarr(group)
print(spec.model_dump())
"""
{
    'zarr_version': 2,
    'attributes': {},
    'members': {
        'bar': {
            'zarr_version': 2,
            'attributes': {'metadata': 'hello'},
            'shape': (10,),
            'chunks': (10,),
            'dtype': '|u1',
            'fill_value': 0,
            'order': 'C',
            'filters': None,
            'dimension_separator': '.',
            'compressor': {
                'id': 'blosc',
                'cname': 'lz4',
                'clevel': 5,
                'shuffle': 1,
                'blocksize': 0,
            },
        }
    },
}
"""
```
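Going the other way — declaring a hierarchy first and then materializing it in storage — might look like the following sketch. It assumes an ArraySpec model alongside GroupSpec and a to_zarr(store, path) method as described in the project docs; check the documentation for exact signatures.

```python
# A sketch of the declarative direction, assuming pydantic-zarr's ArraySpec
# and GroupSpec.to_zarr(store, path); consult the docs for exact signatures.
from pydantic_zarr import ArraySpec, GroupSpec
from zarr.storage import MemoryStore

spec = GroupSpec(
    attributes={"project": "demo"},
    members={
        "bar": ArraySpec(
            attributes={"metadata": "hello"},
            shape=(10,),
            chunks=(10,),
            dtype="|u1",
        ),
    },
)

# Materialize the spec as a real Zarr group in an in-memory store.
group = spec.to_zarr(MemoryStore(), path="foo")
print(group["bar"].attrs["metadata"])  # -> 'hello'
```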

pypi package. Binary

Latest version: 0.7.0 Released: 2024-03-20

pydantic-init

pydantic-init Through-Pydantic: Initialize Python Objects from JSON, YAML, and More

pypi package. Binary

Latest version: 0.1.0 Released: 2024-05-15

bump-pydantic

Bump Pydantic ♻️ is a tool to help you migrate your code from Pydantic V1 to V2.

> [!NOTE]
> If you find bugs, please report them on the issue tracker.

Installation

The installation is as simple as:

```bash
pip install bump-pydantic
```

Usage

bump-pydantic is a CLI tool, hence you can use it from your terminal. It's easy to use. If your project structure is:

```bash
repository/
└── my_package/
    └── ...
```

Then you'll want to do:

```bash
cd /path/to/repository
bump-pydantic my_package
```

Check diff before applying changes

To check the diff before applying the changes, you can run:

```bash
bump-pydantic --diff
```

Apply changes

To apply the changes, you can run:

```bash
bump-pydantic
```

Rules

You can find below the list of rules that are applied by bump-pydantic. It's also possible to disable rules by using the --disable option.

BP001: Add default None to Optional[T], Union[T, None] and Any fields

✅ Add default None to Optional[T] fields.

The following code will be transformed:

```py
class User(BaseModel):
    name: Optional[str]
```

Into:

```py
class User(BaseModel):
    name: Optional[str] = None
```

BP002: Replace Config class by model_config attribute

✅ Replace Config class by model_config = ConfigDict().
✅ Rename old Config attributes to new model_config attributes.
✅ Add a TODO comment in case the transformation can't be done automatically.
✅ Replace Extra enum by string values.

The following code will be transformed:

```py
from pydantic import BaseModel, Extra

class User(BaseModel):
    name: str

    class Config:
        extra = Extra.forbid
```

Into:

```py
from pydantic import ConfigDict, BaseModel

class User(BaseModel):
    name: str

    model_config = ConfigDict(extra="forbid")
```

BP003: Replace Field old parameters with new ones

✅ Replace Field old parameters with new ones.
✅ Replace field: Enum = Field(Enum.VALUE, const=True) by field: Literal[Enum.VALUE] = Enum.VALUE.

The following code will be transformed:

```py
from typing import List

from pydantic import BaseModel, Field

class User(BaseModel):
    name: List[str] = Field(..., min_items=1)
```

Into:

```py
from typing import List

from pydantic import BaseModel, Field

class User(BaseModel):
    name: List[str] = Field(..., min_length=1)
```

BP004: Replace imports

✅ Replace BaseSettings from pydantic to pydantic_settings.
✅ Replace Color and PaymentCardNumber from pydantic to pydantic_extra_types.

BP005: Replace GenericModel by BaseModel

✅ Replace GenericModel by BaseModel.

The following code will be transformed:

```py
from typing import Generic, TypeVar

from pydantic.generics import GenericModel

T = TypeVar('T')

class User(GenericModel, Generic[T]):
    name: str
```

Into:

```py
from typing import Generic, TypeVar

from pydantic import BaseModel

T = TypeVar('T')

class User(BaseModel, Generic[T]):
    name: str
```

BP006: Replace __root__ by RootModel

✅ Replace __root__ by RootModel.

The following code will be transformed:

```py
from typing import List

from pydantic import BaseModel

class User(BaseModel):
    age: int
    name: str

class Users(BaseModel):
    __root__ = List[User]
```

Into:

```py
from typing import List

from pydantic import RootModel, BaseModel

class User(BaseModel):
    age: int
    name: str

class Users(RootModel[List[User]]):
    pass
```

BP007: Replace decorators

✅ Replace @validator by @field_validator.
✅ Replace @root_validator by @model_validator.

The following code will be transformed:

```py
from pydantic import BaseModel, validator, root_validator

class User(BaseModel):
    name: str

    @validator('name', pre=True)
    def validate_name(cls, v):
        return v

    @root_validator(pre=True)
    def validate_root(cls, values):
        return values
```

Into:

```py
from pydantic import BaseModel, field_validator, model_validator

class User(BaseModel):
    name: str

    @field_validator('name', mode='before')
    def validate_name(cls, v):
        return v

    @model_validator(mode='before')
    def validate_root(cls, values):
        return values
```

BP008: Replace con* functions by Annotated versions

✅ Replace constr(*args) by Annotated[str, StringConstraints(*args)].
✅ Replace conint(*args) by Annotated[int, Field(*args)].
✅ Replace confloat(*args) by Annotated[float, Field(*args)].
✅ Replace conbytes(*args) by Annotated[bytes, Field(*args)].
✅ Replace condecimal(*args) by Annotated[Decimal, Field(*args)].
✅ Replace conset(T, *args) by Annotated[Set[T], Field(*args)].
✅ Replace confrozenset(T, *args) by Annotated[Set[T], Field(*args)].
✅ Replace conlist(T, *args) by Annotated[List[T], Field(*args)].

The following code will be transformed:

```py
from pydantic import BaseModel, constr

class User(BaseModel):
    name: constr(min_length=1)
```

Into:

```py
from pydantic import BaseModel, StringConstraints
from typing_extensions import Annotated

class User(BaseModel):
    name: Annotated[str, StringConstraints(min_length=1)]
```

BP009: Mark Pydantic "protocol" functions in custom types with proper TODOs

✅ Mark __get_validators__ as to be replaced by __get_pydantic_core_schema__.
✅ Mark __modify_schema__ as to be replaced by __get_pydantic_json_schema__.

The following code will be transformed:

```py
class SomeThing:
    @classmethod
    def __get_validators__(cls):
        yield from []

    @classmethod
    def __modify_schema__(cls, field_schema, field):
        if field:
            field_schema['example'] = "Weird example"
```

Into:

```py
class SomeThing:
    @classmethod
    # TODO[pydantic]: We couldn't refactor `__get_validators__`, please create the `__get_pydantic_core_schema__` manually.
    # Check https://docs.pydantic.dev/latest/migration/#defining-custom-types for more information.
    def __get_validators__(cls):
        yield from []

    @classmethod
    # TODO[pydantic]: We couldn't refactor `__modify_schema__`, please create the `__get_pydantic_json_schema__` manually.
    # Check https://docs.pydantic.dev/latest/migration/#defining-custom-types for more information.
    def __modify_schema__(cls, field_schema, field):
        if field:
            field_schema['example'] = "Weird example"
```

License

This project is licensed under the terms of the MIT license.
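As a usage footnote to the rules above, a typical migration run previews the diff first and then applies the changes with selected rules switched off. The exact form accepted by --disable (a rule ID here) is an assumption — check bump-pydantic --help:

```bash
cd /path/to/repository
# Preview the rewrite, then apply it with one rule disabled (assumed syntax).
bump-pydantic --diff my_package
bump-pydantic --disable BP009 my_package
```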

pypi package. Binary | Source

Latest version: 0.8.0 Released: 2023-12-28

pydantic-meta

pypi package. Binary

Latest version: 0.3.3 Released: 2023-04-03

pydantic-yaml

Pydantic-YAML adds YAML capabilities to Pydantic, which is an excellent Python library for data validation and settings management. If you aren't familiar with Pydantic, I would suggest you first check out their docs.

Documentation on ReadTheDocs.org

Basic Usage

```python
from enum import Enum
from pydantic import BaseModel, validator
from pydantic_yaml import parse_yaml_raw_as, to_yaml_str

class MyEnum(str, Enum):
    """A custom enumeration that is YAML-safe."""
    a = "a"
    b = "b"

class InnerModel(BaseModel):
    """A normal pydantic model that can be used as an inner class."""
    fld: float = 1.0

class MyModel(BaseModel):
    """Our custom Pydantic model."""
    x: int = 1
    e: MyEnum = MyEnum.a
    m: InnerModel = InnerModel()

    @validator("x")
    def _chk_x(cls, v: int) -> int:  # noqa
        """You can add your normal pydantic validators, like this one."""
        assert v > 0
        return v

m1 = MyModel(x=2, e="b", m=InnerModel(fld=1.5))

# This dumps to YAML and JSON respectively
yml = to_yaml_str(m1)
jsn = m1.json()

# This parses YAML as the MyModel type
m2 = parse_yaml_raw_as(MyModel, yml)
assert m1 == m2

# JSON is also valid YAML, so this works too
m3 = parse_yaml_raw_as(MyModel, jsn)
assert m1 == m3
```

With Pydantic v2, you can also dump dataclasses:

```python
from pydantic import RootModel
from pydantic.dataclasses import dataclass
from pydantic.version import VERSION as PYDANTIC_VERSION
from pydantic_yaml import to_yaml_str

assert PYDANTIC_VERSION >= "2"

@dataclass
class YourType:
    foo: str = "bar"

obj = YourType(foo="wuz")
assert to_yaml_str(RootModel[YourType](obj)) == 'foo: wuz\n'
```

Configuration

Currently we use the JSON dumping of Pydantic to perform most of the magic. This uses the Config inner class, as in Pydantic:

```python
class MyModel(BaseModel):
    # ...
    class Config:
        # You can override these fields, which affect JSON and YAML:
        json_dumps = my_custom_dumper
        json_loads = lambda x: MyModel()
        # As well as other Pydantic configuration:
        allow_mutation = False
```

You can control some YAML-specific options via the keyword options:

```python
to_yaml_str(model, indent=4)  # Makes it wider
to_yaml_str(model, map_indent=9, sequence_indent=7)  # ... you monster.
```

You can additionally pass your own YAML instance:

```python
from ruamel.yaml import YAML

my_writer = YAML(typ="safe")
my_writer.default_flow_style = True
to_yaml_file("foo.yaml", model, custom_yaml_writer=my_writer)
```

A separate configuration for YAML specifically will be added later, likely in v2.

Breaking Changes for pydantic-yaml V1

The API for pydantic-yaml version 1.0.0 has been greatly simplified!

Mixin Class: this functionality has currently been removed! YamlModel and YamlModelMixin base classes are no longer needed. The plan is to re-add it before v1 fully releases, to allow the .yaml() or .parse_*() methods. However, this will be available only for pydantic<2.

Versioned Models: this functionality has been removed, as it's questionably useful for most users. There is an example in the docs that's available.
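pydantic-yaml also ships file-based counterparts of the string helpers shown above. A short round-trip sketch, reusing MyModel and m1 from the basic-usage example:

```python
from pydantic_yaml import parse_yaml_file_as, to_yaml_file

# Write the model to a YAML file, then parse it back as the same type.
to_yaml_file("model.yaml", m1)
m4 = parse_yaml_file_as(MyModel, "model.yaml")
assert m1 == m4
```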

pypi package. Binary | Source

Latest version: 1.4.0 Released: 2024-11-11

pydantic-eval

pypi package. Binary

Latest version: 0.0.1 Released: 2025-03-17

asdf-pydantic

Create ASDF tags with pydantic models.

```py
import asdf
from asdf_pydantic import AsdfPydanticModel

class Rectangle(AsdfPydanticModel):
    _tag = "asdf://asdf-pydantic/examples/tags/rectangle-1.0.0"

    width: float
    height: float

# After creating extension and install ...
af = asdf.AsdfFile()
af["rect"] = Rectangle(width=1, height=1)
```

```yaml
#ASDF 1.0.0
#ASDF_STANDARD 1.5.0
%YAML 1.1
%TAG ! tag:stsci.edu:asdf/
--- !core/asdf-1.1.0
asdf_library: !core/software-1.0.0 {author: The ASDF Developers,
  homepage: 'http://github.com/asdf-format/asdf', name: asdf, version: 2.14.3}
history:
  extensions:
  - !core/extension_metadata-1.0.0
    extension_class: asdf.extension.BuiltinExtension
    software: !core/software-1.0.0 {name: asdf, version: 2.14.3}
  - !core/extension_metadata-1.0.0 {extension_class: mypackage.shapes.ShapesExtension,
    extension_uri: 'asdf://asdf-pydantic/shapes/extensions/shapes-1.0.0'}
rect: !<asdf://asdf-pydantic/examples/tags/rectangle-1.0.0> {height: 1.0, width: 1.0}
...
```

Features

- [x] Create ASDF tags from your pydantic models with batteries (converters) included.
- [x] Validates data models as you create them, not only when reading and writing ASDF files.
- [x] Preserves Python types when deserializing ASDF files.
- [x] All the cool things that come with pydantic (e.g., JSON encoder, JSON schema, pydantic types).
- [ ] Comes with ASDF schemas (TBD).

Installation

```console
pip install asdf-pydantic
```

Usage

Define your data model with AsdfPydanticModel. For pydantic fans, this has all the features of pydantic's BaseModel.

```py
# mypackage/shapes.py
from asdf_pydantic import AsdfPydanticModel

class Rectangle(AsdfPydanticModel):
    _tag = "asdf://asdf-pydantic/examples/tags/rectangle-1.0.0"

    width: float
    height: float
```

Then create an extension with the converter included with asdf-pydantic:

```py
# mypackage/extensions.py
from asdf.extension import Extension
from asdf_pydantic.converter import AsdfPydanticConverter
from mypackage.shapes import Rectangle

AsdfPydanticConverter.add_models(Rectangle)

class ShapesExtension(Extension):
    extension_uri = "asdf://asdf-pydantic/examples/extensions/shapes-1.0.0"
    converters = [AsdfPydanticConverter()]
    tags = [*AsdfPydanticConverter().tags]
```

Install the extension either by entry-point specification or by adding it to asdf.get_config():

```py
import asdf
from mypackage.extensions import ShapesExtension

asdf.get_config().add_extension(ShapesExtension())

af = asdf.AsdfFile()
af["rect"] = Rectangle(width=1, height=1)

with open("shapes.asdf", "wb") as fp:
    af.write_to(fp)
```
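Reading the file back should return a Rectangle instance rather than a plain mapping, per the type-preservation feature above. A small sketch, assuming the extension is registered in the reading process as well:

```python
import asdf

# With ShapesExtension registered, the "rect" node deserializes back into
# the Rectangle pydantic model instead of a plain dict.
with asdf.open("shapes.asdf") as af:
    rect = af["rect"]
    print(type(rect).__name__, rect.width, rect.height)
```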

pypi package. Binary | Source

Latest version: 1.1.0 Released: 2024-11-18

pydantic-pint

Pydantic is a Python library for data validation and data serialization. Pint is a Python library for defining, operating, and manipulating physical quantities. By default, they do not play well with each other. Many projects that need data validation may also need to work with physical quantities. Pydantic Pint aims to bridge that gap by providing Pydantic validation for Pint quantities.

```python
from typing import Annotated

from pint import Quantity
from pydantic import BaseModel
from pydantic_pint import PydanticPintQuantity

class Box(BaseModel):
    length: Annotated[Quantity, PydanticPintQuantity("m")]
    width: Annotated[Quantity, PydanticPintQuantity("m")]

box = Box(
    length="4m",
    width="2m",
)
```

Getting Started

Installation

Pydantic Pint is available as pydantic-pint on PyPI. It requires both Pydantic and Pint to be installed. It also requires typing.Annotated (for older versions of Python, use typing-extensions).

```shell
pip install pydantic-pint
```

Usage

Pydantic Pint provides PydanticPintQuantity, which enables Pydantic validation for Pint quantities. For a field of a Pydantic model to have quantity validation, it must be annotated with a PydanticPintQuantity for a given unit.

```python
from typing import Annotated

from pint import Quantity
from pydantic import BaseModel
from pydantic_pint import PydanticPintQuantity

class Coordinates(BaseModel):
    latitude: Annotated[Quantity, PydanticPintQuantity("deg")]
    longitude: Annotated[Quantity, PydanticPintQuantity("deg")]
    altitude: Annotated[Quantity, PydanticPintQuantity("km")]
```

Users of the model can pass anything to a field with a specified unit that is convertible to the units declared in the annotation. For instance, the units for Coordinates.altitude are kilometers; however, users can specify meters instead, and PydanticPintQuantity will handle the conversion from meters to kilometers.

```python
coord = Coordinates(
    latitude="39.905705 deg",
    longitude="-75.166519 deg",
    altitude="12 meters",
)

print(coord)
#> latitude=<Quantity(39.905705, 'degree')> longitude=<Quantity(-75.166519, 'degree')> altitude=<Quantity(0.012, 'kilometer')>
print(f"{coord!r}")
#> Coordinates(latitude=<Quantity(39.905705, 'degree')>, longitude=<Quantity(-75.166519, 'degree')>, altitude=<Quantity(0.012, 'kilometer')>)
print(coord.model_dump())
#> {'latitude': <Quantity(39.905705, 'degree')>, 'longitude': <Quantity(-75.166519, 'degree')>, 'altitude': <Quantity(0.012, 'kilometer')>}
print(coord.model_dump(mode="json"))
#> {'latitude': '39.905705 degree', 'longitude': '-75.166519 degree', 'altitude': '0.012 kilometer'}
print(f"{coord.model_dump_json()!r}")
#> '{"latitude":"39.905705 degree","longitude":"-75.166519 degree","altitude":"0.012 kilometer"}'
```
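Because validation goes through Pint's unit conversion, a value with incompatible dimensions should be rejected. A sketch, reusing the Coordinates model above and assuming the mismatch surfaces as a pydantic ValidationError:

```python
from pydantic import ValidationError

try:
    Coordinates(
        latitude="39.905705 deg",
        longitude="-75.166519 deg",
        altitude="12 seconds",  # time units cannot convert to kilometers
    )
except ValidationError as err:
    print(err)  # inspect the reported error for the offending field
```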

pypi package. Binary | Source

Latest version: 0.2 Released: 2025-03-30