Package maintenance

pydantic-tes

pydantic-tes .. image:: https://badge.fury.io/py/pydantic-tes.svg :target: https://pypi.python.org/pypi/pydantic-tes/ :alt: pydantic-tes on the Python Package Index (PyPI) A collection of pydantic_ models for the GA4GH Task Execution Service_. In addition to the models, this package contains a lightweight client for TES based on them using requests_ and utilities for working with and testing it against Funnel_ - a TES implementation. This Python project can be installed from PyPI using pip. :: $ python3 -m venv .venv $ . .venv/bin/activate $ pip install pydantic-tes Check out py-tes for an alternative set of Python models and an API client based on the more lightweight attrs package. A short usage sketch of the models follows the changelog below. .. _Funnel: https://ohsu-comp-bio.github.io/funnel/ .. _requests: https://requests.readthedocs.io/en/latest/ .. _GA4GH Task Execution Service: https://github.com/ga4gh/task-execution-schemas .. _pydantic: https://pydantic-docs.helpmanual.io/ .. _py-tes: https://github.com/ohsu-comp-bio/py-tes .. _attrs: https://www.attrs.org/en/stable/ .. :changelog: History .. to_doc 0.2.0 (2025-04-10) Allow creating TES client with extra headers (e.g. for auth) thanks to @BorisYourich. Fixes for running against TESK thanks to @mvdbeek. 0.1.5 (2022-10-06) Messed up 0.1.4 release, retrying. 0.1.4 (2022-10-06) Further hacking around Funnel responses to produce validating responses. 0.1.3 (2022-10-05) Another attempt at publishing types. 0.1.2 (2022-10-05) Add support for Python 3.6. Add py.typed to package. 0.1.1 (2022-09-29) Fixes to project publication scripts and process. 0.1.0 (2022-09-29) Initial version.
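Since the models mirror the GA4GH TES schema, building a task for submission is mostly a matter of instantiating them. Below is a minimal, untested sketch; the import path and model names (pydantic_tes.models, TesTask, TesExecutor) are assumptions based on the TES specification rather than confirmed API, so check the package source for the exact names. ::

    # Assumed import path; model names are expected to mirror the TES schema.
    from pydantic_tes.models import TesExecutor, TesTask

    task = TesTask(
        name="hello-tes",
        executors=[TesExecutor(image="alpine:latest", command=["echo", "hello"])],
    )

    # JSON body for a TES server's task-creation endpoint.
    # pydantic v1-style dump; use model_dump_json() on pydantic v2.
    print(task.json(exclude_none=True))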

pypi package. Binary | Source

Latest version: 0.2.0 Released: 2025-04-10

aas-pydantic

AAS Pydantic A Python package that combines the BaSyx Python SDK for handling Asset Administration Shells (AAS) with Pydantic models for fast and easy modeling and transformation of Industry 4.0 data structures. The package enables: Creating AAS models using Pydantic's intuitive syntax Converting between Pydantic models and BaSyx AAS instances Converting IDTA submodel templates to Pydantic types and vice versa Serialization to JSON/Schema formats for easy integration with other tools Installation You can install the package using pip: bash pip install aas-pydantic Note that the package requires Python 3.10 or higher. Usage The package provides a set of Pydantic models that can be used to create AAS models. The models are based on the BaSyx SDK and are designed to be easily converted to and from AAS instances. In the following example, we create a simple AAS of a Device with one Submodel DeviceConfig. ```python from typing import List, Set, Union from enum import Enum import aas_pydantic from aas_pydantic import AAS, Submodel, SubmodelElementCollection # Define custom enums class StatusEnum(str, Enum): ACTIVE = "active" INACTIVE = "inactive" # Define SubmodelElementCollections class DeviceProperties(SubmodelElementCollection): serial_number: str firmware_version: str status: StatusEnum temperature_sensors: List[str] config_params: Set[str] # Define Submodels class DeviceConfig(Submodel): id_short: str description: str properties: DeviceProperties measurements: List[float] settings: Union[str, int] # Create AAS class class DeviceAAS(AAS): device_info: DeviceConfig ``` We also want to create an instance of this model, which we will use later: python example_device = DeviceAAS( id="device_1", description="Example device", device_info=DeviceConfig( id_short="device_1_config", description="Device 1 Configuration", properties=DeviceProperties( id_short="device_1_properties", serial_number="1234", firmware_version="1.0", status=StatusEnum.ACTIVE, temperature_sensors=["sensor_1", "sensor_2"], config_params={"param_1", "param_2"} ), measurements=[1.0, 2.0, 3.0], settings="default" ) ) Model Conversion The package provides methods for converting between Pydantic models and BaSyx models. This works both for pydantic types / BaSyx templates and for instances. Convert Pydantic model to BaSyx templates First, we show how a submodel template can be created from the modelled type: python submodel_template = aas_pydantic.convert_model_to_submodel_template(DeviceConfig) For inspection of the submodel template, we can serialize it to a file: ```python import json import basyx.aas.adapter.json with open("submodel_template_DeviceConfig.json", "w") as f: json_string = json.dumps( submodel_template, cls=basyx.aas.adapter.json.AASToJsonEncoder, ) f.write(json_string) ``` We can also convert a whole AAS type to a BaSyx object store: python aas_template_objectstore = aas_pydantic.convert_model_to_aas_template(DeviceAAS) We can also serialize the AAS template to a file: python with open("aas_template_DeviceAAS.json", "w") as f: basyx.aas.adapter.json.write_aas_json_file( f, aas_template_objectstore ) Convert Pydantic model instance to BaSyx instance We can also convert an instance of the Pydantic model to a BaSyx instance: python basyx_objectstore = aas_pydantic.convert_model_to_aas(example_device) When we convert a pydantic AAS model to a BaSyx instance, we obtain an object store holding the AAS and its submodels.
We can serialize this object store to a file: python with open("aas_instance_DeviceAAS.json", "w") as f: basyx.aas.adapter.json.write_aas_json_file(f, basyx_objectstore) Only converting a single submodel instance is also possible: python basyx_submodel = aas_pydantic.convert_model_to_submodel(example_device.device_info) Convert BaSyx templates to pydantic types In aas-pydantic you can create pydantic types from BaSyx templates at runtime. This can be useful when you want to create a Pydantic model from an existing AAS data structure. python pydantic_aas_types = aas_pydantic.convert_object_store_to_pydantic_types( aas_template_objectstore ) pydantic_submodel_type = aas_pydantic.convert_submodel_template_to_pydatic_type( submodel_template ) print(pydantic_submodel_type.model_fields) This conversion significantly reduces data complexity, since it relies on flat object structures instead of the AAS meta model. Moreover, since pydantic has many integrations, such as JSON Schema or FastAPI, this transformation can be useful for many use cases. Convert BaSyx instances to Pydantic instances In aas-pydantic you can also convert BaSyx instances to Pydantic instances. This can be useful when you want to work with Pydantic models in your application, but need to convert them to BaSyx instances for communication with other systems. For conversion of the instances, you require the previously generated Pydantic types: python pydantic_submodel = aas_pydantic.convert_submodel_to_model_instance(basyx_submodel, model_type=pydantic_submodel_type) pydantic_aas = aas_pydantic.convert_object_store_to_pydantic_models( basyx_objectstore, pydantic_aas_types ) print(pydantic_aas) Working with IDTA Submodel Templates Previously, we have shown how submodel templates can be loaded. Of course, this is also possible for submodel templates obtained from the IDTA. For example, we can load the submodel template for the Data Model for Asset Location (Version 1.0): ```python import json import aas_pydantic import basyx.aas.adapter.json with open( "IDTA 02045-1-0_Template_DataModelForAssetLocation.json", "r" ) as file: basyx_object_store = basyx.aas.adapter.json.read_aas_json_file(file) pydantic_types = aas_pydantic.convert_object_store_to_pydantic_types(basyx_object_store) with open("pydantic_types.json", "w") as f: json.dump(pydantic_types[0].model_json_schema(), f, indent=2) ``` This example shows how a simple integration between submodel templates and JSON Schema can be achieved. The generated JSON Schema could, e.g., be used in code generation tools for pydantic models to obtain defined pydantic types for the submodel templates. For more integrations, check out aas-middleware, a Python package for industrial data and software integration based on AAS. Conversion Capabilities The package supports conversion of the following Python type annotations: Primitive types (int, float, str, bool) Collections (List, Set, Tuple) Enums and Literals Unions and Optional types Nested Pydantic models Note that dicts are not supported, since a clear distinction between a SubmodelElementCollection and a dict or pydantic type is not possible.
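As a compact illustration of these capabilities, the following untested sketch combines an enum, an optional field, a union and a nested collection in one submodel type and converts it with the convert_model_to_submodel_template function shown earlier (the class and field names are made up for the example):

```python
from enum import Enum
from typing import List, Optional, Union

import aas_pydantic
from aas_pydantic import Submodel, SubmodelElementCollection


class Mode(str, Enum):
    AUTO = "auto"
    MANUAL = "manual"


class Sensor(SubmodelElementCollection):
    name: str
    limits: List[float]


class MachineConfig(Submodel):
    id_short: str
    description: str
    mode: Mode
    comment: Optional[str]
    threshold: Union[int, float]
    sensors: List[Sensor]


# The resulting template keeps attribute names, optionality and union
# information in Concept Descriptions, as described below.
machine_template = aas_pydantic.convert_model_to_submodel_template(MachineConfig)
```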
The following BaSyx Submodel Element Types are supported: - SubmodelElementCollection - SubmodelElementList - ReferenceElement - RelationshipElement - Property - Blob - File During conversion, the following information is preserved: - Identifiers - Descriptions - Semantic Identifiers - Values and Value Types Additional information that cannot be kept easily in the BaSyx model (attribute and class names, optional and union types, enums) is stored in the Concept Descriptions of the BaSyx model. License The package is licensed under the MIT license.

pypi package. Binary

Latest version: 0.1.3 Released: 2025-04-07

pydantic-sdk

pydantic-sdk Coming Soon.

pypi package. Binary

Latest version: 0.0.2 Released: 2023-07-17

pydantic-gql

pydantic-gql A simple GraphQL query builder based on Pydantic models Installation You can install this package with pip. sh $ pip install pydantic-gql Links Usage To use pydantic-gql, you need to define your Pydantic models and then use them to build GraphQL queries. The core classes you'll interact with is the Query class to create queries and the Mutation class to create mutations. (Both queries and mutations are types of "operations".) Queries Defining Pydantic Models First, define your Pydantic models that represent the structure of the data you want to query. Here's an example: ```python from pydantic import BaseModel class Group(BaseModel): id: int name: str class User(BaseModel): id: int name: str groups: list[Group] ``` Building a Query Most GraphQL queries contain a single top-level field. Since this is the most common use case, this library provides Query.from_model() as a convenience method to create a query with one top-level field whose subfields are defined by the Pydantic model. ```python from pydantic_gql import Query query = Query.from_model(User) ``` This will create a query that looks like this: graphql query User{ User { id, name, groups { id, name, }, }, } This method also provides parameters to customise the query, such as the query name, field name, variables (see Using Variables for examples with variables), and arguments. Here's a more complex example: python query = Query.from_model( User, query_name="GetUser", field_name="users", args={"id": 1}, ) This will create a query that looks like this: graphql query GetUser{ users(id: 1) { id, name, groups { id, name, }, }, } Mutations Since both queries and mutations are types of operations, the Mutation class works in the same way as the Query class. Here's an example of how to build a mutation that could create a new user and return their data. ```python from pydantic_gql import Mutation new_user = User(id=1, name="John Doe", groups=[]) mutation = Mutation.from_model(User, "create_user", args=dict(new_user)) ``` This will create a mutation that looks like this: graphql mutation CreateUser { createUser(id: 1, name: "John Doe", groups: []) { id, name, groups { id, name, }, }, } Generating the GraphQL Operation String To get the actual GraphQL query or mutation as a string that you can send to your server, simply convert the Query or Mutation object to a string. python query_string = str(query) You can control the indentation of the resulting string by using format() instead of str(). Valid values for the format specifier are: indent - The default. Indent the resulting string with two spaces. noindent - Do not indent the resulting string. The result will be a single line. A number - Indent the resulting string with the specified number of spaces. A whitespace string - Indent the resulting string with the specified string, e.g. \t. python query_string = format(query, '\t') Using Variables A GraphQL query can define variables at the top and then reference them throughout the rest of the operation. Then when the operation is sent to the server, the variables are passed in a separate dictionary. To define variables for a GraphQL operation, first create a class that inherits from BaseVars and define the variables as attributes with Var[T] as the type annotation. ```python from pydantic_gql import BaseVars, Var class UserVars(BaseVars): age: Var[int] group: Var[str | None] is_admin: Var[bool] = Var(default=False) ``` You can pass the class itself to the .from_model() method to include the variables in the query. 
You can also reference the class attributes in the operation's arguments directly. python query = Query.from_model( User, variables=UserVars, args={"age": UserVars.age, "group": UserVars.group, "isAdmin": UserVars.is_admin}, ) This will create a query that looks like this: graphql query User($age: Int!, $group: String, $is_admin: Boolean = false){ User(age: $age, group: $group, isAdmin: $is_admin) { id, name, groups { id, name, }, }, } When you want to send the query, you can instantiate the variables class, which itself is a Mapping of variable names to values, and pass it to your preferred HTTP client. python variables = UserVars(age=18, group="admin", is_admin=True) httpx.post(..., json={"query": str(query), "variables": dict(variables)}) More Complex Operations Sometimes you may need to build more complex operations than the ones we've seen so far. For example, you may need to include multiple top-level fields, or you may need to provide arguments to some deeply nested fields. In the following examples we'll be using queries, but the same principles apply to mutations. In these cases, you can build the query manually with the Query constructor. The constructor takes the query name followed by any number of GqlField objects, then optionally variables as a keyword argument. GqlFields themselves can also be constructed with their from_model() convenience method or manually with their constructor. Here's an example of a more complex query: ```python from pydantic import BaseModel, Field from pydantic_gql import Query, GqlField, BaseVars, Var class Vars(BaseVars): min_size: Var[int] = Var(default=0) groups_per_user: Var[int | None] class PageInfo(BaseModel): has_next_page: bool = Field(alias="hasNextPage") end_cursor: str | None = Field(alias="endCursor") class GroupEdge(BaseModel): node: Group cursor: str class GroupConnection(BaseModel): edges: list[GroupEdge] page_info: PageInfo = Field(alias="pageInfo") query = Query( "GetUsersAndGroups", GqlField( name="users", args={"minAge": 18}, fields=( GqlField("id"), GqlField("name"), GqlField.from_model(GroupConnection, "groups", args={"first": Vars.groups_per_user}), ), ), GqlField.from_model(Group, "groups", args={"minSize": Vars.min_size}), variables=Vars, ) ``` This will create a query that looks like this: graphql query GetUsersAndGroups($min_size: Int = 0, $groups_per_user: Int){ users(minAge: 18) { id, name, groups(first: $groups_per_user) { edges { node { id, name, }, cursor, }, pageInfo { hasNextPage, endCursor, }, }, }, groups(minSize: $min_size) { id, name, }, } Connections (Pagination) The previous example demonstrates how to build a query that uses pagination. However, since pagination is a common pattern (see the GraphQL Connections Specification), this library provides a Connection class which is generic over the node type. You can use this class to easily build pagination queries. Here's an example of how to use the Connection class: ```python from pydantic_gql.connections import Connection query = Query.from_model( Connection[User], "users", args={"first": 10}, ) ``` This will create a query that looks like this: graphql query User{ users(first: 10) { edges { node { id, name, groups { id, name, }, }, cursor, }, pageInfo { hasNextPage, endCursor, }, }, }
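Putting the pieces together, a typical round trip is to build the query, send it with an HTTP client, and validate the response items back into the original model. The sketch below is untested; the endpoint URL is hypothetical, the User and UserVars definitions come from the examples above, and the standard GraphQL response envelope is assumed:

```python
import httpx

from pydantic_gql import Query

GRAPHQL_URL = "https://example.org/graphql"  # hypothetical endpoint

query = Query.from_model(
    User,
    field_name="users",
    variables=UserVars,
    args={"age": UserVars.age},
)
variables = UserVars(age=18, group=None)

response = httpx.post(
    GRAPHQL_URL,
    json={"query": str(query), "variables": dict(variables)},
)
response.raise_for_status()

# Standard GraphQL envelope: {"data": {"users": [...]}}; model_validate is pydantic v2 API.
users = [User.model_validate(item) for item in response.json()["data"]["users"]]
```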

pypi package. Binary | Source

Latest version: 1.2.2 Released: 2024-10-28

drf-pydantic

Use pydantic with Django REST framework Introduction Performance Installation Usage General Pydantic Validation Updating Field Values Validation Errors Existing Models Nested Models Manual Serializer Configuration Per-Field Configuration Custom Serializer Additional Properties Introduction Pydantic is a Python library used to perform data serialization and validation. Django REST framework is a framework built on top of Django used to write REST APIs. If you develop DRF APIs and rely on pydantic for data validation/(de)serialization, then drf-pydantic is for you :heart_eyes:. [!NOTE] The latest version of drf_pydantic only supports pydantic v2. Support for pydantic v1 is available in the 1.* version. Performance Translation between pydantic models and DRF serializers is done during class creation (e.g., when you first import the model). This means there will be zero runtime impact when using drf_pydantic in your application. [!NOTE] There will be a minor penalty if validate_pydantic is set to True due to pydantic model validation. This is minimal compared to an-already present overhead of DRF itself because pydantic runs its validation in rust while DRF is pure python. Installation shell pip install drf-pydantic Usage General Use drf_pydantic.BaseModel instead of pydantic.BaseModel when creating your models: ```python from drf_pydantic import BaseModel class MyModel(BaseModel): name: str addresses: list[str] ``` MyModel.drf_serializer is equivalent to the following DRF Serializer class: python class MyModelSerializer: name = CharField(allow_null=False, required=True) addresses = ListField( allow_empty=True, allow_null=False, child=CharField(allow_null=False), required=True, ) Whenever you need a DRF serializer, you can get it from the model like this: python my_value = MyModel.drf_serializer(data={"name": "Van", "addresses": ["Gym"]}) my_value.is_valid(raise_exception=True) [!NOTE] Models created using drf_pydantic are fully identical to those created by pydantic. The only change is the addition of the drf_serializer and drf_config attributes. Pydantic Validation By default, the generated serializer only uses DRF's validation; however, pydantic models are often more complex and their numerous validation rules cannot be fully translated to DRF. To enable pydantic validators to run whenever the generated DRF serializer validates its data (e.g., via .is_valid()), set "validate_pydantic": True within the drf_config property of your model: ```python from drf_pydantic import BaseModel class MyModel(BaseModel): name: str addresses: list[str] drf_config = {"validate_pydantic": True} my_serializer = MyModel.drf_serializer(data={"name": "Van", "addresses": []}) my_serializer.is_valid() # this will also validate MyModel ``` With this option enabled, every time you validate data using your DRF serializer, the parent pydantic model is also validated. If it fails, its ValidationError exception will be wrapped within DRF's ValidationError. Per-field and non-field (object-level) errors are wrapped similarly to how DRF handles them. This ensures your complex pydantic validation logic is properly evaluated wherever a DRF serializer is used. [!NOTE] All drf_config values are properly inherited by child classes, just like pydantic's model_config. 
Updating Field Values By default, drf_pydantic updates values in the DRF serializer with those from the validated pydantic model: ```python from drf_pydantic import BaseModel class MyModel(BaseModel): name: str addresses: list[str] @pydantic.field_validator("name") @classmethod def validate_name(cls, v): assert isinstance(v, str) return v.strip().title() drf_config = {"validate_pydantic": True} my_serializer = MyModel.drf_serializer(data={"name": "van herrington", "addresses": []}) my_serializer.is_valid() print(my_serializer.data) # {"name": "Van Herrington", "addresses": []} ``` This is handy when you dynamically modify field values within your pydantic validators. You can disable this behavior by setting "backpopulate_after_validation": False: ```python class MyModel(BaseModel): ... drf_config = {"validate_pydantic": True, "backpopulate_after_validation": False} ``` Validation Errors By default, pydantic's ValidationError is wrapped within DRF's ValidationError. If you want to raise pydantic's ValidationError directly, set "validation_error": "pydantic" in the drf_config property of your model: ```python import pydantic from drf_pydantic import BaseModel class MyModel(BaseModel): name: str addresses: list[str] @pydantic.field_validator("name") @classmethod def validate_name(cls, v): assert isinstance(v, str) if v != "Billy": raise ValueError("Wrong door") return v drf_config = {"validate_pydantic": True, "validation_error": "pydantic"} my_serializer = MyModel.drf_serializer(data={"name": "Van", "addresses": []}) my_serializer.is_valid() # this will raise pydantic.ValidationError ``` [!NOTE] When a model is invalid from both DRF's and pydantic's perspectives and exceptions are enabled (.is_valid(raise_exception=True)), DRF's ValidationError will be raised regardless of the validation_error setting, because DRF validation always runs first. [!CAUTION] Setting validation_error to pydantic has side effects: It may break your views because they expect DRF's ValidationError. Calling .is_valid() will always raise pydantic.ValidationError if the data is invalid, even without setting .is_valid(raise_exception=True). Existing Models If you have an existing code base and want to add the drf_serializer attribute only to some of your models, you can extend your existing pydantic models by adding drf_pydantic.BaseModel as a parent class to the models you want to extend. Your existing pydantic models: ```python from pydantic import BaseModel class Pet(BaseModel): name: str class Dog(Pet): breed: str ``` Update your Dog model and get serializer via the drf_serializer: ```python from drf_pydantic import BaseModel as DRFBaseModel from pydantic import BaseModel class Pet(BaseModel): name: str class Dog(DRFBaseModel, Pet): breed: str Dog.drf_serializer ``` [!IMPORTANT] Inheritance order is important: drf_pydantic.BaseModel must always come before pydantic.BaseModel. Nested Models If you have nested models and want to generate a serializer for only one of them, you don't need to update all models. 
Simply update the model you need, and drf_pydantic will automatically generate serializers for all standard nested pydantic models: ```python from drf_pydantic import BaseModel as DRFBaseModel from pydantic import BaseModel class Apartment(BaseModel): floor: int tenant: str class Building(BaseModel): address: str apartments: list[Apartment] class Block(DRFBaseModel): buildings: list[Building] Block.drf_serializer ``` Manual Serializer Configuration If drf_pydantic doesn't generate the serializer you need, you can configure the DRF serializer fields for each pydantic field manually, or create a custom serializer for the model altogether. [!IMPORTANT] When manually configuring the serializer, you are responsible for setting all properties of the fields (e.g., allow_null, required, default, etc.). drf_pydantic does not perform any introspection for fields that are manually configured or for any fields if a custom serializer is used. Per-Field Configuration ```python from typing import Annotated from drf_pydantic import BaseModel from rest_framework.serializers import IntegerField class Person(BaseModel): name: str age: Annotated[float, IntegerField(min_value=0, max_value=100)] ``` Custom Serializer In the example below, Person will use MyCustomSerializer as its DRF serializer. Employee will have its own serializer generated by drf_pydantic since it doesn't inherit a user-defined drf_serializer attribute. Company will use Person's manually defined serializer for its ceo field. ```python from drf_pydantic import BaseModel, DrfPydanticSerializer from rest_framework.serializers import CharField, IntegerField class MyCustomSerializer(DrfPydanticSerializer): name = CharField(allow_null=False, required=True) age = IntegerField(allow_null=False, required=True) class Person(BaseModel): name: str age: float drf_serializer = MyCustomSerializer class Employee(Person): salary: float class Company(BaseModel): ceo: Person ``` [!IMPORTANT] Added in version v2.6.0 Manual drf_serializer must have base class of DrfPydanticSerializer in order for Pydantic Validation to work properly. You can still use standard Serializer from rest_framework, but automatic pydantic model validation will not work consistently and you will get a warning. Additional Properties Additional field properties are mapped as follows (pydantic -> DRF): description -> help_text title -> label StringConstraints -> min_length and max_length pattern -> Uses the specialized RegexField serializer field max_digits and decimal_places are carried over (used for Decimal types, with the current decimal context precision) ge / gt -> min_value (only for numeric types) le / lt -> max_value (only for numeric types)
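As a rough illustration of this mapping, here is a small sketch; the commented-out serializer field is an approximation of what drf_pydantic generates, not the library's literal output:

```python
from typing import Annotated

from pydantic import Field
from drf_pydantic import BaseModel


class Reading(BaseModel):
    value: Annotated[
        float,
        Field(title="Reading value", description="Measured sensor value", ge=0, le=100),
    ]


# Approximate generated field:
# value = FloatField(label="Reading value", help_text="Measured sensor value",
#                    min_value=0, max_value=100, required=True, allow_null=False)
```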

pypi package. Binary | Source

Latest version: 2.7.1 Released: 2025-03-28

pydantic-csv

pydantic - CSV Pydantic CSV makes working with CSV files easier and much better than working with Dicts. It uses pydantic BaseModels to store data of every row on the CSV file and also uses type annotations which enables proper type checking and validation. Table of Contents Main features Installation Getting started Using the BasemodelCSVReader Error handling Default values Mapping BaseModel fields to columns Supported type annotation User-defined types Using the BasemodelCSVWriter Modifying the CSV header Copyright and License Credits Main features Use pydantic.BaseModel instead of dictionaries to represent the rows in the CSV file. Take advantage of the BaseModel properties type annotation. BasemodelCSVReader uses the type annotation to perform validation on the data of the CSV file. Automatic type conversion. BasemodelCSVReader supports str, int, float, complex, datetime and bool, as well as any type whose constructor accepts a string as its single argument. Helps you troubleshoot issues with the data in the CSV file. BasemodelCSVReader will show exactly, which line of the CSV file contains errors. Extract only the data you need. It will only parse the properties defined in the BaseModel Familiar syntax. The BasemodelCSVReader is used almost the same way as the DictReader in the standard library. It uses BaseModel features that let you define Field properties or Config so the data can be parsed exactly the way you want. Make the code cleaner. No more extra loops to convert data to the correct type, perform validation, set default values, the BasemodelCSVReader will do all this for you. In addition to the BasemodelCSVReader, the library also provides a BasemodelCSVWriter which enables creating a CSV file using a list of instances of a BaseModel. Because sqlmodel uses pydantic.BaseModels too, you can directly fill a database with data from a CSV Installation shell pip install pydantic-csv Getting started Using the BasemodelCSVReader First, add the necessary imports: ```python from pydantic import BaseModel from pydantic_csv import BasemodelCSVReader ``` Assuming that we have a CSV file with the contents below: text firstname,email,age Elsa,elsa@test.com,26 Astor,astor@test.com,44 Edit,edit@test.com,33 Ella,ella@test.com,22 Let's create a BaseModel that will represent a row in the CSV file above: python class User(BaseModel): firstname: str email: str age: int The BaseModel User has 3 properties, firstname and email is of type str and age is of type int. To load and read the contents of the CSV file we do the same thing as if we would be using the DictReader from the csv module in the Python's standard library. After opening the file we create an instance of the BasemodelCSVReader passing two arguments. The first is the file and the second is the BaseModel that we wish to use to represent the data of every row of the CSV file. 
Like so: ```python # using file on disk with open("") as csv: reader = BasemodelCSVReader(csv, User) for row in reader: print(row) # using buffer (has to be a string buffer -> convert beforehand) import io buffer = io.StringIO() buffer.seek(0) # ensure that we read from the beginning reader = BasemodelCSVReader(buffer, User) for row in reader: print(row) ``` If you run this code you should see an output like this: python User(firstname='Elsa', email='elsa@test.com', age=26) User(firstname='Astor', email='astor@test.com', age=44) User(firstname='Edit', email='edit@test.com', age=33) User(firstname='Ella', email='ella@test.com', age=22) The BasemodelCSVReader internally uses the DictReader from the csv module to read the CSV file, which means that you can pass the same arguments that you would pass to the DictReader. The complete argument list is shown below: python BasemodelCSVReader( file_obj: Any, model: Type[BaseModel], *, # Note that you can't provide any value without specifying the parameter name use_alias: bool = True, validate_header: bool = True, fieldnames: Optional[Sequence[str]] = None, restkey: Optional[str] = None, restval: Optional[Any] = None, dialect: str = "excel", **kwargs: Any, ) All keyword arguments supported by DictReader are supported by the BasemodelCSVReader; in addition, use_alias and validate_header change the behaviour of the BasemodelCSVReader as follows: use_alias - The BasemodelCSVReader will search for column names identical to the aliases of the BaseModel Fields (if set, otherwise their names). To avoid this behaviour and use the field names in every case, set use_alias = False when creating an instance of the BasemodelCSVReader, see an example below: python reader = BasemodelCSVReader(csv, User, use_alias=False) validate_header - The BasemodelCSVReader will raise a ValueError if the CSV file contains columns with the same name. This validation is performed to avoid data being overwritten. To skip this validation set validate_header=False when creating an instance of the BasemodelCSVReader, see an example below: python reader = BasemodelCSVReader(csv, User, validate_header=False) Important: If two or more columns with the same name exist, the BaseModel is instantiated with the data from the rightmost column. Error handling One of the advantages of using the BasemodelCSVReader is that it makes it easy to detect when the type of data in the CSV file is not what your application's model is expecting. And, the BasemodelCSVReader shows errors that will help to identify the rows with problems in your CSV file. For example, say we change the contents of the CSV file shown in the Getting started section and modify the age of the user Astor to a string value: text firstname,email,age Elsa,elsa@test.com,26 Astor,astor@test.com,not a number Edit,edit@test.com,33 Ella,ella@test.com,22 Remember that in the BaseModel User the age property is annotated with int. If we run the code again, an exception from the pydantic validation will be raised with the message below: text pydantic_csv.exceptions.CSVValueError: [Error on CSV Line number: 3] E 1 validation error for UserOptional E age E Input should be a valid integer, unable to parse string as an integer [type=int_parsing, input_value='not a number', input_type=str] E For further information visit https://errors.pydantic.dev/2.7/v/int_parsing Note that apart from telling what the error was, the BasemodelCSVReader will also show which line of the CSV file contains the data with errors.
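If you want to handle such rows yourself instead of letting the exception propagate, you can catch the error around the read loop. This is an untested sketch; the exception path pydantic_csv.exceptions.CSVValueError is taken from the traceback above, and the file name is hypothetical:

```python
from pydantic_csv import BasemodelCSVReader
from pydantic_csv.exceptions import CSVValueError

with open("users.csv") as csv_file:  # hypothetical file name
    reader = BasemodelCSVReader(csv_file, User)
    try:
        users = list(reader)
    except CSVValueError as error:
        # The message includes the offending CSV line number.
        print(f"users.csv contains invalid data: {error}")
        users = []
```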
Default values The BasemodelCSVReader also handles properties with default values. Let's modify the BaseModel User and add a default value for the field email: ```python from pydantic import BaseModel class User(BaseModel): firstname: str email: str = 'Not specified' age: int ``` And we modify the CSV file and remove the email for the user Astor: text firstname,email,age Elsa,elsa@test.com,26 Astor,,44 Edit,edit@test.com,33 Ella,ella@test.com,22 If we run the code we should see the output below: text User(firstname='Elsa', email='elsa@test.com', age=26) User(firstname='Astor', email='Not specified', age=44) User(firstname='Edit', email='edit@test.com', age=33) User(firstname='Ella', email='ella@test.com', age=22) Note that now the object for the user Astor has the default value Not specified assigned to the email property. Default values can also be set using pydantic.Field like so: ```python from pydantic import BaseModel, Field class User(BaseModel): firstname: str email: str = Field(default='Not specified') age: int ``` Mapping BaseModel fields to columns The mapping between a BaseModel field and a column in the CSV file will be done automatically if the names match. However, there are situations where the name of the header for a column is different. We can easily tell the BasemodelCSVReader how the mapping should be done using the method map. Assuming that we have a CSV file with the contents below: text First Name,email,age Elsa,elsa@test.com,26 Astor,astor@test.com,44 Edit,edit@test.com,33 Ella,ella@test.com,22 Note that now the column is called First Name and not firstname. We can use the method map like so: python reader = BasemodelCSVReader(csv, User) reader.map('First Name').to('firstname') Now the BasemodelCSVReader will know how to extract the data from the column First Name and add it to the BaseModel property firstname. Supported type annotation At the moment the BasemodelCSVReader supports int, str, float, complex, datetime, and bool. pydantic_csv doesn't parse the date(time)s itself; it relies on the datetime parsing of pydantic. pydantic supports some common formats and Unix timestamps, but if you have a more exotic format you can use a pydantic validator. Assuming that the CSV file has the following contents: text name,email,birthday Edit,edit@test.com,"Sunday, 6. January 2002" This would look like this: ```python from pydantic import BaseModel, field_validator from datetime import datetime class User(BaseModel): name: str email: str birthday: datetime @field_validator("birthday", mode="before") def parse_birthday_date(cls, value): return datetime.strptime(value, "%A, %d. %B %Y").date() ``` User-defined types You can use any type for a field as long as its constructor accepts a string: ```python import re from pydantic import BaseModel class SSN: def __init__(self, val): if re.match(r"\d{9}", val): self.val = f"{val[0:3]}-{val[3:5]}-{val[5:9]}" elif re.match(r"\d{3}-\d{2}-\d{4}", val): self.val = val else: raise ValueError(f"Invalid SSN: {val!r}") class User(BaseModel): name: str ssn: SSN ``` Using the BasemodelCSVWriter Reading a CSV file using the BasemodelCSVReader is great and gives us the type-safety of Pydantic's BaseModels and type annotations; however, there are situations where we would like to use BaseModels for creating CSV files. That's where the BasemodelCSVWriter comes in handy. Using the BasemodelCSVWriter is quite simple.
Given that we have a BaseModel User: ```python from pydantic import BaseModel class User(BaseModel): firstname: str lastname: str age: int ``` And in your program we have a list of users: python users = [ User(firstname="John", lastname="Smith", age=40), User(firstname="Daniel", lastname="Nilsson", age=23), User(firstname="Ella", lastname="Fralla", age=28) ] In order to create a CSV using the BasemodelCSVWriter, import it from pydantic_csv: python from pydantic_csv import BasemodelCSVWriter Initialize it with the required arguments and call the method write: ```python # using file on disk with open("", "w") as csv: writer = BasemodelCSVWriter(csv, users, User) writer.write() # using buffer (has to be a StringBuffer) writer = BasemodelCSVWriter(buffer, users, User) writer.write() buffer.seek(0) # ensure that the next working steps start at the beginning of the "file" # if you need a BytesBuffer just convert it: bytes_buffer: io.BytesIO = io.BytesIO(buffer.read().encode("utf-8")) bytes_buffer.name = buffer.name bytes_buffer.seek(0) # ensure that the next working steps start at the beginning of the "file" ``` That's it! Let's break down the snippet above. First, we open a file called user.csv for writing. After that, an instance of the BasemodelCSVWriter is created. To create a BasemodelCSVWriter we need to pass the file_obj, the list of User instances, and lastly, the type, which in this case is User. The type is required since the writer uses it when trying to figure out the CSV header. By default, it will use the alias of each field if set, otherwise its name as defined in the BaseModel; in the case of the BaseModel User the title of each column will be firstname, lastname and age. See below the CSV created out of a list of User: text firstname,lastname,age John,Smith,40 Daniel,Nilsson,23 Ella,Fralla,28 The BasemodelCSVWriter also takes **fmtparams, which accepts the same parameters as the csv.writer. For more information see: https://docs.python.org/3/library/csv.html#csv-fmt-params Now, there are situations where we don't want to write the CSV header. In this case, the method write of the BasemodelCSVWriter accepts an extra argument, called skip_header. The default value is False and when set to True it will skip the header. Modifying the CSV header As previously mentioned, the BasemodelCSVWriter uses the aliases or names of the fields defined in the BaseModel as the CSV header titles. If you don't want the BasemodelCSVWriter to use the aliases but only the names, you can set use_alias to False. This will look like this: python writer = BasemodelCSVWriter(file_obj, users, User, use_alias=False) However, depending on your use case it may make sense to set custom headers and not use the aliases or names at all. The BasemodelCSVWriter has a map method just for this purpose. Using the User BaseModel with the properties firstname, lastname and age, the snippet below shows how to change firstname to First Name and lastname to Last Name: ```python with open("", "w") as file: writer = BasemodelCSVWriter(file, users, User) # Add mappings for firstname and lastname writer.map("firstname").to("First Name") writer.map("lastname").to("Last Name") writer.write() ``` The CSV output of the snippet above will be: text First Name,Last Name,age John,Smith,40 Daniel,Nilsson,23 Ella,Fralla,28 Copyright and License Copyright (c) 2024 Nathan Richard. Code released under BSD 3-clause license Credits A huge shoutout to Daniel Furtado (github) and his python package 'dataclass-csv' (pypi | github).
Most of the codebase and documentation comes from him, adjusted to use pydantic.BaseModel.

pypi package. Binary

Latest version: 0.1.0 Released: 2024-06-28

pydantic-orm

pydantic orm Asynchronous database ORM using Pydantic. Installation Install using pip install -U pydantic-orm or poetry add pydantic-orm

pypi package. Binary

Latest version: 0.1.1 Released: 2021-07-23

edr-pydantic

OGC Environmental Data Retrieval (EDR) API Pydantic This repository contains the edr-pydantic Python package. It provides Pydantic models for the OGC Environmental Data Retrieval (EDR) API. This can, for example, be used to help develop an EDR API using FastAPI (see the sketch at the end of this section). Install shell pip install edr-pydantic Or to install from source: shell pip install git+https://github.com/KNMI/edr-pydantic.git Usage ```python from edr_pydantic.collections import Collection from edr_pydantic.data_queries import EDRQuery, EDRQueryLink, DataQueries from edr_pydantic.extent import Extent, Spatial from edr_pydantic.link import Link from edr_pydantic.observed_property import ObservedProperty from edr_pydantic.parameter import Parameters, Parameter from edr_pydantic.unit import Unit from edr_pydantic.variables import Variables c = Collection( id="hrly_obs", title="Hourly Site Specific observations", description="Observation data for UK observing sites", extent=Extent( spatial=Spatial( bbox=[ [-15.0, 48.0, 5.0, 62.0] ], crs="WGS84" ) ), links=[ Link( href="https://example.org/uk-hourly-site-specific-observations", rel="service-doc" ) ], data_queries=DataQueries( position=EDRQuery( link=EDRQueryLink( href="https://example.org/edr/collections/hrly_obs/position?coords={coords}", rel="data", variables=Variables( query_type="position", output_formats=[ "CoverageJSON" ] ) ) ) ), parameter_names=Parameters({ "Wind Direction": Parameter( unit=Unit( label="degree true" ), observedProperty=ObservedProperty( id="https://codes.wmo.int/common/quantity-kind/_windDirection", label="Wind Direction" ), dataType="integer" ) }) ) print(c.model_dump_json(indent=2, exclude_none=True, by_alias=True)) ``` Will print json { "id": "hrly_obs", "title": "Hourly Site Specific observations", "description": "Observation data for UK observing sites", "links": [ { "href": "https://example.org/uk-hourly-site-specific-observations", "rel": "service-doc" } ], "extent": { "spatial": { "bbox": [ [ -15.0, 48.0, 5.0, 62.0 ] ], "crs": "WGS84" } }, "data_queries": { "position": { "link": { "href": "https://example.org/edr/collections/hrly_obs/position?coords={coords}", "rel": "data", "variables": { "query_type": "position", "output_formats": [ "CoverageJSON" ] } } } }, "parameter_names": { "Wind Direction": { "type": "Parameter", "data-type": "integer", "unit": { "label": "degree true" }, "observedProperty": { "id": "https://codes.wmo.int/common/quantity-kind/_windDirection", "label": "Wind Direction" } } } } IMPORTANT: The argument by_alias=True to model_dump_json() or model_dump() is required to get the output as shown above. Without by_alias=True the attribute data-type will wrongly be output as dataType. This is due to an issue in the EDR spec and Pydantic. Contributing Make an editable install from within the repository root: shell pip install -e '.[test]' Running tests shell pytest tests/ Linting and typing Linting and typing (mypy) are done using pre-commit hooks. shell pip install pre-commit pre-commit install pre-commit run Related packages CoverageJSON Pydantic geojson-pydantic Real world usage This library is used to build an OGC Environmental Data Retrieval (EDR) API, serving automatic weather station data from The Royal Netherlands Meteorological Institute (KNMI). See the KNMI Data Platform EDR API. TODOs Help is wanted in the following areas to fully implement the EDR spec: * See TODOs in code listing various small inconsistencies in the spec * In various places there could be more validation on content License Apache License, Version 2.0
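As mentioned at the top, these models slot naturally into a FastAPI application. The sketch below is untested; the FastAPI wiring and route path are illustrative, and only the edr_pydantic models come from this package. It serves the Collection c built in the usage example above:

```python
from fastapi import FastAPI
from fastapi.responses import JSONResponse

app = FastAPI()


@app.get("/collections/hrly_obs")
def get_collection() -> JSONResponse:
    # by_alias=True is required so attributes such as "data-type" keep their EDR names.
    return JSONResponse(c.model_dump(mode="json", exclude_none=True, by_alias=True))
```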

pypi package. Binary | Source

Latest version: 0.7.0 Released: 2025-03-05

pydantic-rpc

PydanticRPC PydanticRPC is a Python library that enables you to rapidly expose Pydantic models via gRPC/Connect RPC services without writing any protobuf files. Instead, it automatically generates protobuf files on the fly from the method signatures of your Python objects and the type signatures of your Pydantic models. Below is an example of a simple gRPC service that exposes a PydanticAI agent: ```python import asyncio from openai import AsyncOpenAI from pydantic_ai import Agent from pydantic_ai.models.openai import OpenAIModel from pydantic_rpc import AsyncIOServer, Message # Message is just an alias for Pydantic's BaseModel class. class CityLocation(Message): city: str country: str class Olympics(Message): year: int def prompt(self): return f"Where were the Olympics held in {self.year}?" class OlympicsLocationAgent: def __init__(self): client = AsyncOpenAI( base_url="http://localhost:11434/v1", api_key="ollama_api_key", ) ollama_model = OpenAIModel( model_name="llama3.2", openai_client=client, ) self._agent = Agent(ollama_model) async def ask(self, req: Olympics) -> CityLocation: result = await self._agent.run(req.prompt()) return result.data if __name__ == "__main__": s = AsyncIOServer() loop = asyncio.get_event_loop() loop.run_until_complete(s.run(OlympicsLocationAgent())) ``` And here is an example of a simple Connect RPC service that exposes the same agent as an ASGI application: ```python import asyncio from openai import AsyncOpenAI from pydantic_ai import Agent from pydantic_ai.models.openai import OpenAIModel from pydantic_rpc import ConnecpyASGIApp, Message class CityLocation(Message): city: str country: str class Olympics(Message): year: int def prompt(self): return f"Where were the Olympics held in {self.year}?" class OlympicsLocationAgent: def __init__(self): client = AsyncOpenAI( base_url="http://localhost:11434/v1", api_key="ollama_api_key", ) ollama_model = OpenAIModel( model_name="llama3.2", openai_client=client, ) self._agent = Agent(ollama_model, result_type=CityLocation) async def ask(self, req: Olympics) -> CityLocation: result = await self._agent.run(req.prompt()) return result.data app = ConnecpyASGIApp() app.mount(OlympicsLocationAgent()) ``` Key Features Automatic Protobuf Generation: Automatically creates protobuf files matching the method signatures of your Python objects. Dynamic Code Generation: Generates server and client stubs using grpcio-tools. Pydantic Integration: Uses pydantic for robust type validation and serialization. Protobuf File Export: Exports the generated protobuf files for use in other languages. For gRPC: Health Checking: Built-in support for gRPC health checks using grpc_health.v1. Server Reflection: Built-in support for gRPC server reflection. Asynchronous Support: Easily create asynchronous gRPC services with AsyncIOServer. For gRPC-Web: WSGI/ASGI Support: Create gRPC-Web services that can run as WSGI or ASGI applications powered by Sonora. For Connect-RPC: Connecpy Support: Partially supports Connect-RPC via Connecpy. Pre-generated Protobuf Files and Code: Pre-generate proto files and corresponding code via the CLI. By setting the environment variable (PYDANTIC_RPC_SKIP_GENERATION), you can skip runtime generation.
Installation Install PydanticRPC via pip: bash pip install pydantic-rpc Getting Started Synchronous Service Example ```python from pydantic_rpc import Server, Message class HelloRequest(Message): name: str class HelloReply(Message): message: str class Greeter: # Define methods that accept a request and return a response. def say_hello(self, request: HelloRequest) -> HelloReply: return HelloReply(message=f"Hello, {request.name}!") if __name__ == "__main__": server = Server() server.run(Greeter()) ``` Asynchronous Service Example ```python import asyncio from pydantic_rpc import AsyncIOServer, Message class HelloRequest(Message): name: str class HelloReply(Message): message: str class Greeter: async def say_hello(self, request: HelloRequest) -> HelloReply: return HelloReply(message=f"Hello, {request.name}!") if __name__ == "__main__": server = AsyncIOServer() loop = asyncio.get_event_loop() loop.run_until_complete(server.run(Greeter())) ``` ASGI Application Example ```python from pydantic_rpc import ASGIApp, Message class HelloRequest(Message): name: str class HelloReply(Message): message: str class Greeter: def say_hello(self, request: HelloRequest) -> HelloReply: return HelloReply(message=f"Hello, {request.name}!") async def app(scope, receive, send): """ASGI application. Args: scope (dict): The ASGI scope. receive (callable): The receive function. send (callable): The send function. """ pass # Please note that app is any ASGI application, such as FastAPI or Starlette. app = ASGIApp(app) app.mount(Greeter()) ``` WSGI Application Example ```python from pydantic_rpc import WSGIApp, Message class HelloRequest(Message): name: str class HelloReply(Message): message: str class Greeter: def say_hello(self, request: HelloRequest) -> HelloReply: return HelloReply(message=f"Hello, {request.name}!") def app(environ, start_response): """WSGI application. Args: environ (dict): The WSGI environment. start_response (callable): The start_response function. """ pass # Please note that app is any WSGI application, such as Flask or Django. app = WSGIApp(app) app.mount(Greeter()) ``` Connecpy (Connect-RPC) Example PydanticRPC also partially supports Connect-RPC via connecpy. Check out "greeting_connecpy.py" for an example: bash uv run greeting_connecpy.py This will launch a Connecpy-based ASGI application that uses the same Pydantic models to serve Connect-RPC requests. [!NOTE] Please install protoc-gen-connecpy to run the Connecpy example. Install Go. Please follow the instructions described in https://go.dev/doc/install. Install protoc-gen-connecpy: bash go install github.com/connecpy/protoc-gen-connecpy@latest Skipping Protobuf Generation By default, PydanticRPC generates .proto files and code at runtime. If you wish to skip the code-generation step (for example, in a production environment), set the environment variable below: bash export PYDANTIC_RPC_SKIP_GENERATION=true When this variable is set to "true", PydanticRPC will load existing pre-generated modules rather than generating them on the fly. Advanced Features Response Streaming PydanticRPC supports streaming responses only for asynchronous gRPC and gRPC-Web services. If a service class method's return type is typing.AsyncIterator[T], the method is considered a streaming method.
Please see the sample code below: ```python import asyncio from typing import Annotated, AsyncIterator from openai import AsyncOpenAI from pydantic import Field from pydantic_ai import Agent from pydantic_ai.models.openai import OpenAIModel from pydantic_rpc import AsyncIOServer, Message # Message is just a pydantic BaseModel alias class CityLocation(Message): city: Annotated[str, Field(description="The city where the Olympics were held")] country: Annotated[ str, Field(description="The country where the Olympics were held") ] class OlympicsQuery(Message): year: Annotated[int, Field(description="The year of the Olympics", ge=1896)] def prompt(self): return f"Where were the Olympics held in {self.year}?" class OlympicsDurationQuery(Message): start: Annotated[int, Field(description="The start year of the Olympics", ge=1896)] end: Annotated[int, Field(description="The end year of the Olympics", ge=1896)] def prompt(self): return f"From {self.start} to {self.end}, how many Olympics were held? Please provide the list of countries and cities." class StreamingResult(Message): answer: Annotated[str, Field(description="The answer to the query")] class OlympicsAgent: def __init__(self): client = AsyncOpenAI( base_url='http://localhost:11434/v1', api_key='ollama_api_key', ) ollama_model = OpenAIModel( model_name='llama3.2', openai_client=client, ) self._agent = Agent(ollama_model) async def ask(self, req: OlympicsQuery) -> CityLocation: result = await self._agent.run(req.prompt(), result_type=CityLocation) return result.data async def ask_stream( self, req: OlympicsDurationQuery ) -> AsyncIterator[StreamingResult]: async with self._agent.run_stream(req.prompt(), result_type=str) as result: async for data in result.stream_text(delta=True): yield StreamingResult(answer=data) if __name__ == "__main__": s = AsyncIOServer() loop = asyncio.get_event_loop() loop.run_until_complete(s.run(OlympicsAgent())) ``` In the example above, the ask_stream method returns an AsyncIterator[StreamingResult], so it is treated as a streaming method. The StreamingResult class is a Pydantic model that defines the response type of the streaming method. You can use any Pydantic model as the response type. Now, you can call the ask_stream method of the server described above using your preferred gRPC client tool. The example below uses buf curl. ```console % buf curl --data '{"start": 1980, "end": 2024}' -v http://localhost:50051/olympicsagent.v1.OlympicsAgent/AskStream --protocol grpc --http2-prior-knowledge buf: * Using server reflection to resolve "olympicsagent.v1.OlympicsAgent" buf: * Dialing (tcp) localhost:50051... buf: * Connected to [::1]:50051 buf: > (#1) POST /grpc.reflection.v1.ServerReflection/ServerReflectionInfo buf: > (#1) Accept-Encoding: identity buf: > (#1) Content-Type: application/grpc+proto buf: > (#1) Grpc-Accept-Encoding: gzip buf: > (#1) Grpc-Timeout: 119997m buf: > (#1) Te: trailers buf: > (#1) User-Agent: grpc-go-connect/1.12.0 (go1.21.4) buf/1.28.1 buf: > (#1) buf: } (#1) [5 bytes data] buf: } (#1) [32 bytes data] buf: < (#1) HTTP/2.0 200 OK buf: < (#1) Content-Type: application/grpc buf: < (#1) Grpc-Message: Method not found!
buf: < (#1) Grpc-Status: 12 buf: < (#1) buf: * (#1) Call complete buf: > (#2) POST /grpc.reflection.v1alpha.ServerReflection/ServerReflectionInfo buf: > (#2) Accept-Encoding: identity buf: > (#2) Content-Type: application/grpc+proto buf: > (#2) Grpc-Accept-Encoding: gzip buf: > (#2) Grpc-Timeout: 119967m buf: > (#2) Te: trailers buf: > (#2) User-Agent: grpc-go-connect/1.12.0 (go1.21.4) buf/1.28.1 buf: > (#2) buf: } (#2) [5 bytes data] buf: } (#2) [32 bytes data] buf: < (#2) HTTP/2.0 200 OK buf: < (#2) Content-Type: application/grpc buf: < (#2) Grpc-Accept-Encoding: identity, deflate, gzip buf: < (#2) buf: { (#2) [5 bytes data] buf: { (#2) [434 bytes data] buf: * Server reflection has resolved file "olympicsagent.proto" buf: * Invoking RPC olympicsagent.v1.OlympicsAgent.AskStream buf: > (#3) POST /olympicsagent.v1.OlympicsAgent/AskStream buf: > (#3) Accept-Encoding: identity buf: > (#3) Content-Type: application/grpc+proto buf: > (#3) Grpc-Accept-Encoding: gzip buf: > (#3) Grpc-Timeout: 119947m buf: > (#3) Te: trailers buf: > (#3) User-Agent: grpc-go-connect/1.12.0 (go1.21.4) buf/1.28.1 buf: > (#3) buf: } (#3) [5 bytes data] buf: } (#3) [6 bytes data] buf: * (#3) Finished upload buf: < (#3) HTTP/2.0 200 OK buf: < (#3) Content-Type: application/grpc buf: < (#3) Grpc-Accept-Encoding: identity, deflate, gzip buf: < (#3) buf: { (#3) [5 bytes data] buf: { (#3) [25 bytes data] { "answer": "Here's a list of Summer" } buf: { (#3) [5 bytes data] buf: { (#3) [31 bytes data] { "answer": " and Winter Olympics from 198" } buf: { (#3) [5 bytes data] buf: { (#3) [29 bytes data] { "answer": "0 to 2024:\n\nSummer Olympics" } buf: { (#3) [5 bytes data] buf: { (#3) [20 bytes data] { "answer": ":\n1. 1980 - Moscow" } buf: { (#3) [5 bytes data] buf: { (#3) [20 bytes data] { "answer": ", Soviet Union\n2. " } buf: { (#3) [5 bytes data] buf: { (#3) [32 bytes data] { "answer": "1984 - Los Angeles, California" } buf: { (#3) [5 bytes data] buf: { (#3) [15 bytes data] { "answer": ", USA\n3. 1988" } buf: { (#3) [5 bytes data] buf: { (#3) [26 bytes data] { "answer": " - Seoul, South Korea\n4." } buf: { (#3) [5 bytes data] buf: { (#3) [27 bytes data] { "answer": " 1992 - Barcelona, Spain\n" } buf: { (#3) [5 bytes data] buf: { (#3) [20 bytes data] { "answer": "5. 1996 - Atlanta," } buf: { (#3) [5 bytes data] buf: { (#3) [22 bytes data] { "answer": " Georgia, USA\n6. 200" } buf: { (#3) [5 bytes data] buf: { (#3) [26 bytes data] { "answer": "0 - Sydney, Australia\n7." } buf: { (#3) [5 bytes data] buf: { (#3) [25 bytes data] { "answer": " 2004 - Athens, Greece\n" } buf: { (#3) [5 bytes data] buf: { (#3) [20 bytes data] { "answer": "8. 2008 - Beijing," } buf: { (#3) [5 bytes data] buf: { (#3) [18 bytes data] { "answer": " China\n9. 2012 -" } buf: { (#3) [5 bytes data] buf: { (#3) [29 bytes data] { "answer": " London, United Kingdom\n10." } buf: { (#3) [5 bytes data] buf: { (#3) [24 bytes data] { "answer": " 2016 - Rio de Janeiro" } buf: { (#3) [5 bytes data] buf: { (#3) [18 bytes data] { "answer": ", Brazil\n11. 202" } buf: { (#3) [5 bytes data] buf: { (#3) [24 bytes data] { "answer": "0 - Tokyo, Japan (held" } buf: { (#3) [5 bytes data] buf: { (#3) [21 bytes data] { "answer": " in 2021 due to the" } buf: { (#3) [5 bytes data] buf: { (#3) [26 bytes data] { "answer": " COVID-19 pandemic)\n12. 
" } buf: { (#3) [5 bytes data] buf: { (#3) [28 bytes data] { "answer": "2024 - Paris, France\n\nNote" } buf: { (#3) [5 bytes data] buf: { (#3) [41 bytes data] { "answer": ": The Olympics were held without a host" } buf: { (#3) [5 bytes data] buf: { (#3) [26 bytes data] { "answer": " city for one year (2022" } buf: { (#3) [5 bytes data] buf: { (#3) [42 bytes data] { "answer": ", due to the Russian invasion of Ukraine" } buf: { (#3) [5 bytes data] buf: { (#3) [29 bytes data] { "answer": ").\n\nWinter Olympics:\n1. 198" } buf: { (#3) [5 bytes data] buf: { (#3) [27 bytes data] { "answer": "0 - Lake Placid, New York" } buf: { (#3) [5 bytes data] buf: { (#3) [15 bytes data] { "answer": ", USA\n2. 1984" } buf: { (#3) [5 bytes data] buf: { (#3) [27 bytes data] { "answer": " - Sarajevo, Yugoslavia (" } buf: { (#3) [5 bytes data] buf: { (#3) [30 bytes data] { "answer": "now Bosnia and Herzegovina)\n" } buf: { (#3) [5 bytes data] buf: { (#3) [20 bytes data] { "answer": "3. 1988 - Calgary," } buf: { (#3) [5 bytes data] buf: { (#3) [25 bytes data] { "answer": " Alberta, Canada\n4. 199" } buf: { (#3) [5 bytes data] buf: { (#3) [26 bytes data] { "answer": "2 - Albertville, France\n" } buf: { (#3) [5 bytes data] buf: { (#3) [13 bytes data] { "answer": "5. 1994 - L" } buf: { (#3) [5 bytes data] buf: { (#3) [24 bytes data] { "answer": "illehammer, Norway\n6. " } buf: { (#3) [5 bytes data] buf: { (#3) [23 bytes data] { "answer": "1998 - Nagano, Japan\n" } buf: { (#3) [5 bytes data] buf: { (#3) [16 bytes data] { "answer": "7. 2002 - Salt" } buf: { (#3) [5 bytes data] buf: { (#3) [24 bytes data] { "answer": " Lake City, Utah, USA\n" } buf: { (#3) [5 bytes data] buf: { (#3) [18 bytes data] { "answer": "8. 2006 - Torino" } buf: { (#3) [5 bytes data] buf: { (#3) [17 bytes data] { "answer": ", Italy\n9. 2010" } buf: { (#3) [5 bytes data] buf: { (#3) [40 bytes data] { "answer": " - Vancouver, British Columbia, Canada" } buf: { (#3) [5 bytes data] buf: { (#3) [13 bytes data] { "answer": "\n10. 2014 -" } buf: { (#3) [5 bytes data] buf: { (#3) [20 bytes data] { "answer": " Sochi, Russia\n11." } buf: { (#3) [5 bytes data] buf: { (#3) [16 bytes data] { "answer": " 2018 - Pyeong" } buf: { (#3) [5 bytes data] buf: { (#3) [24 bytes data] { "answer": "chang, South Korea\n12." 
} buf: < (#3) buf: < (#3) Grpc-Message: buf: < (#3) Grpc-Status: 0 buf: * (#3) Call complete buf: < (#2) buf: < (#2) Grpc-Message: buf: < (#2) Grpc-Status: 0 buf: * (#2) Call complete % ``` Multiple Services with Custom Interceptors PydanticRPC supports defining and running multiple services in a single server: ```python from datetime import datetime import grpc from grpc import ServicerContext from pydantic_rpc import Server, Message class FooRequest(Message): name: str age: int d: dict[str, str] class FooResponse(Message): name: str age: int d: dict[str, str] class BarRequest(Message): names: list[str] class BarResponse(Message): names: list[str] class FooService: def foo(self, request: FooRequest) -> FooResponse: return FooResponse(name=request.name, age=request.age, d=request.d) class MyMessage(Message): name: str age: int o: int | datetime class Request(Message): name: str age: int d: dict[str, str] m: MyMessage class Response(Message): name: str age: int d: dict[str, str] m: MyMessage | str class BarService: def bar(self, req: BarRequest, ctx: ServicerContext) -> BarResponse: return BarResponse(names=req.names) class CustomInterceptor(grpc.ServerInterceptor): def intercept_service(self, continuation, handler_call_details): # do something print(handler_call_details.method) return continuation(handler_call_details) async def app(scope, receive, send): pass if __name__ == "__main__": s = Server(10, CustomInterceptor()) s.run( FooService(), BarService(), ) ``` [TODO] Custom Health Check TODO Protobuf file and code (Python files) generation using CLI You can generate protobuf files and code for a given module and a specified class using the pydantic-rpc CLI command: bash pydantic-rpc a_module.py aClassName Using this generated proto file and tools such as protoc, buf and BSR, you could generate code for any desired language other than Python. Data Type Mapping
| Python Type | Protobuf Type |
|--------------------------------|---------------------------|
| str | string |
| bytes | bytes |
| bool | bool |
| int | int32 |
| float | float, double |
| list[T], tuple[T] | repeated T |
| dict[K, V] | map |
| datetime.datetime | google.protobuf.Timestamp |
| datetime.timedelta | google.protobuf.Duration |
| typing.Union[A, B] | oneof A, B |
| subclass of enum.Enum | enum |
| subclass of pydantic.BaseModel | message |
A small worked example of this mapping follows at the end of this section. TODO [ ] Streaming Support [x] unary-stream [ ] stream-unary [ ] stream-stream [ ] Betterproto Support [ ] Sonora-connect Support [ ] Custom Health Check Support [ ] Add more examples [ ] Add tests License This project is licensed under the MIT License. See the LICENSE file for details.
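As the promised illustration of the mapping table, the following untested sketch defines a message and service that exercise several of the listed Python types; the names are made up for the example, and the protobuf types in the comments follow the table above:

```python
import enum
from datetime import datetime

from pydantic_rpc import Server, Message


class Priority(enum.Enum):
    LOW = 0
    HIGH = 1


class JobRequest(Message):
    name: str               # -> string
    priority: Priority      # -> enum
    labels: dict[str, str]  # -> map<string, string>
    deadline: datetime      # -> google.protobuf.Timestamp
    payload: bytes | str    # -> oneof


class JobReply(Message):
    accepted: bool          # -> bool


class JobService:
    def submit(self, request: JobRequest) -> JobReply:
        return JobReply(accepted=True)


if __name__ == "__main__":
    Server().run(JobService())
```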

pypi package. Binary

Latest version: 0.6.1 Released: 2025-02-22

pydantic-ast

Pydantic AST Pydantic models covering Python AST types. Installation py pip install pydantic-ast Usage Use it as a drop-in replacement for ast.parse with a more readable representation. ```py import ast import pydantic_ast source = "x = 1" ast.parse(source) pydantic_ast.parse(source) pydantic-ast: body=[Assign(targets=[Name(id='x', ctx=Store())], value=Constant(value=1), type_comment=None)] type_ignores=[] ``` Use it on the command line to quickly get an AST of a Python program or a section of one: sh echo '"Hello world"' | pydantic-ast ⇣ body=[Expr(value=Constant(value='Hello world'))] type_ignores=[] Use it on ASTs you got from elsewhere to make them readable, or to inspect parts of them more easily. The AST_to_pydantic class is an ast.NodeTransformer that converts nodes in the AST to pydantic_ast model types when the tree nodes are visited. ```py import ast from pydantic_ast import AST_to_pydantic source = "123 + 345 == expected" my_mystery_ast = ast.parse(source) ast_model = AST_to_pydantic().visit(my_mystery_ast) ast_model body=[Expr(value=Compare(left=BinOp(left=Constant(value=123), op=Add(), right=Constant(value=345)), ops=[Eq()], comparators=[Name(id='expected', ctx=Load())]))] type_ignores=[] ast_model.body [Expr(value=Compare(left=BinOp(left=Constant(value=123), op=Add(), right=Constant(value=345)), ops=[Eq()], comparators=[Name(id='expected', ctx=Load())]))] ``` It's simply much easier to drill down into a tree when you can see the fields in a repr. ```py ast_model.body[0].value Compare(left=BinOp(left=Constant(value=123), op=Add(), right=Constant(value=345)), ops=[Eq()], comparators=[Name(id='expected', ctx=Load())]) ast_model.body[0].value.left BinOp(left=Constant(value=123), op=Add(), right=Constant(value=345)) ast_model.body[0].value.left.left Constant(value=123) ast_model.body[0].value.left.left.value 123 ``` Development To set up pre-commit hooks (to keep the CI bot happy) run pre-commit install-hooks so all git commits trigger the pre-commit checks; these run black, flake8, autopep8, pyupgrade, etc. I use Conventional Commits. To set up a dev env, I first create a new conda environment and use it in PDM with which python > .pdm-python. To use a virtualenv environment instead of conda, skip that. Run pdm install and a .venv will be created if no Python binary path is found in .pdm-python. To run tests, run pdm run python -m pytest and the PDM environment will be used to run the test suite.

pypi package. Binary

Latest version: 0.2.0 Released: 2023-09-19