============
pydantic-aws
============

.. image:: https://img.shields.io/pypi/v/pydantic_aws.svg
        :target: https://pypi.python.org/pypi/pydantic_aws

.. image:: https://img.shields.io/travis/aaronlelevier/pydantic_aws.svg
        :target: https://travis-ci.com/aaronlelevier/pydantic_aws

.. image:: https://readthedocs.org/projects/pydantic-aws/badge/?version=latest
        :target: https://pydantic-aws.readthedocs.io/en/latest/?version=latest
        :alt: Documentation Status

.. image:: https://pyup.io/repos/github/aaronlelevier/pydantic_aws/shield.svg
        :target: https://pyup.io/repos/github/aaronlelevier/pydantic_aws/
        :alt: Updates

Pydantic data validation for AWS models

* Free software: MIT license
* Documentation: https://pydantic-aws.readthedocs.io.

Features
--------

* TODO

Credits
-------

This package was created with Cookiecutter_ and the `audreyr/cookiecutter-pypackage`_ project template.

.. _Cookiecutter: https://github.com/audreyr/cookiecutter
.. _`audreyr/cookiecutter-pypackage`: https://github.com/audreyr/cookiecutter-pypackage
Latest version: 0.1.0 Released: 2024-08-24
pydantic-orm

Asynchronous database ORM using Pydantic.

Installation

Install using `pip install -U pydantic-orm` or `poetry add pydantic-orm`.
pypi package. Binary
Latest version: 0.1.1 Released: 2021-07-23
jwt-pydantic

JWT claim sets are becoming more complex and harder to manage, and writing validators for these claims checks is time-consuming. This package uses the power of Pydantic models to make life a bit easier. We have also included a Starlette middleware, which can easily be used in FastAPI, as shown here.

Example

Let's say our JWT token has the claims set below:

```python
claims = {
    "firstname": "David",
    "surname": "Bowie",
    "best_album": "Hunky Dory",
}
```

We can use jwt-pydantic to simplify the generation and verification of such tokens. First we declare the Pydantic model by subclassing JWTPydantic:

```python
from jwt_pydantic import JWTPydantic

class MyJWT(JWTPydantic):
    firstname: str
    surname: str
    best_album: str
```

To generate a new JWT token using the claims above, we do the following:

```python
token = MyJWT.new_token(claims=claims, key="SECRET_KEY")
```

We can then verify this token easily as follows:

```python
MyJWT.verify_token(token, key="SECRET_KEY")
```

We can also return the decoded JWT token as our Pydantic model, to be used elsewhere:

```python
decoded_jwt = MyJWT(token, key="SECRET_KEY")
print(decoded_jwt.firstname)  # David
```

FastAPI Middleware

It is also easy to declare a new JWTPydantic model and use it in middleware, as shown below.
```python
# main.py
from fastapi import FastAPI
from jwt_pydantic import JWTPydantic, JWTPydanticMiddleware

SECRET_KEY = "mykey"

class MyJWT(JWTPydantic):
    foo: int

app = FastAPI()
app.add_middleware(
    JWTPydanticMiddleware,
    header_name="jwt",
    jwt_pydantic_model=MyJWT,
    jwt_key=SECRET_KEY,
)

@app.get("/")
def homepage():
    return "Hello world"
```

We can run this code easily using uvicorn (`uvicorn main:app --reload`), and then, using Python in a different shell, we can test this to show it in action:

```python
import requests

requests.get(
    'http://127.0.0.1:8000/',
    headers={'jwt': MyJWT.new_token({'foo': 1}, 'mykey')},
)  # b'Hello world'
```

If we want to change the response when the JWT token is bad, we can override the bad_response method in JWTPydanticMiddleware, such as below:

```python
class MyMiddleware(JWTPydanticMiddleware):
    def bad_response(self, token_error: str) -> JSONResponse:
        """Changing standard response to be a JSONResponse"""
        return JSONResponse({"bad_token": token_error}, status_code=403)
```

python-jose keyword arguments

JWTPydantic uses python-jose to manage the JWT tokens. The extra features provided by that package can easily be used through the keyword argument jose_opts. For instance, we can add the 'at_hash' claim to our JWT token by specifying the keyword argument access_token:

```python
MyJWT.new_token(
    claims,
    SECRET_KEY,
    jose_opts={"access_token": "1234"},
)
```
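For context on what python-jose does under the hood: an HS256 JWT like the ones above is just three unpadded base64url segments (header, claims, signature). Here is a stdlib-only sketch of that signing step — illustrative only, not jwt-pydantic's or python-jose's implementation:

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> bytes:
    # JWTs use unpadded base64url encoding.
    return base64.urlsafe_b64encode(data).rstrip(b"=")

def sign_hs256(claims: dict, key: str) -> str:
    # header.payload is the signing input; the HMAC-SHA256 of it is the signature.
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    signing_input = header + b"." + payload
    sig = b64url(hmac.new(key.encode(), signing_input, hashlib.sha256).digest())
    return (signing_input + b"." + sig).decode()

token = sign_hs256({"firstname": "David"}, "SECRET_KEY")
print(token.count("."))  # 2: header.payload.signature
```

Verification is the same computation run in reverse: recompute the HMAC over the first two segments and compare it to the third.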
Latest version: 0.0.7 Released: 2023-07-19
OGC Environmental Data Retrieval (EDR) API Pydantic

This repository contains the edr-pydantic Python package. It provides Pydantic models for the OGC Environmental Data Retrieval (EDR) API. This can, for example, be used to help develop an EDR API using FastAPI.

Install

```shell
pip install edr-pydantic
```

Or to install from source:

```shell
pip install git+https://github.com/KNMI/edr-pydantic.git
```

Usage

```python
from edr_pydantic.collections import Collection
from edr_pydantic.data_queries import EDRQuery, EDRQueryLink, DataQueries
from edr_pydantic.extent import Extent, Spatial
from edr_pydantic.link import Link
from edr_pydantic.observed_property import ObservedProperty
from edr_pydantic.parameter import Parameters, Parameter
from edr_pydantic.unit import Unit
from edr_pydantic.variables import Variables

c = Collection(
    id="hrly_obs",
    title="Hourly Site Specific observations",
    description="Observation data for UK observing sites",
    extent=Extent(
        spatial=Spatial(
            bbox=[[-15.0, 48.0, 5.0, 62.0]],
            crs="WGS84"
        )
    ),
    links=[
        Link(
            href="https://example.org/uk-hourly-site-specific-observations",
            rel="service-doc"
        )
    ],
    data_queries=DataQueries(
        position=EDRQuery(
            link=EDRQueryLink(
                href="https://example.org/edr/collections/hrly_obs/position?coords={coords}",
                rel="data",
                variables=Variables(
                    query_type="position",
                    output_formats=["CoverageJSON"]
                )
            )
        )
    ),
    parameter_names=Parameters({
        "Wind Direction": Parameter(
            unit=Unit(label="degree true"),
            observedProperty=ObservedProperty(
                id="https://codes.wmo.int/common/quantity-kind/_windDirection",
                label="Wind Direction"
            ),
            dataType="integer"
        )
    })
)

print(c.model_dump_json(indent=2, exclude_none=True, by_alias=True))
```

Will print:

```json
{
  "id": "hrly_obs",
  "title": "Hourly Site Specific observations",
  "description": "Observation data for UK observing sites",
  "links": [
    {
      "href": "https://example.org/uk-hourly-site-specific-observations",
      "rel": "service-doc"
    }
  ],
  "extent": {
    "spatial": {
      "bbox": [
        [-15.0, 48.0, 5.0, 62.0]
      ],
      "crs": "WGS84"
    }
  },
  "data_queries": {
    "position": {
      "link": {
        "href": "https://example.org/edr/collections/hrly_obs/position?coords={coords}",
        "rel": "data",
        "variables": {
          "query_type": "position",
          "output_formats": [
            "CoverageJSON"
          ]
        }
      }
    }
  },
  "parameter_names": {
    "Wind Direction": {
      "type": "Parameter",
      "data-type": "integer",
      "unit": {
        "label": "degree true"
      },
      "observedProperty": {
        "id": "https://codes.wmo.int/common/quantity-kind/_windDirection",
        "label": "Wind Direction"
      }
    }
  }
}
```

IMPORTANT: The argument by_alias=True to model_dump_json() or model_dump() is required to get the output as shown above. Without by_alias=True the attribute data-type will be wrongly output as dataType. This is due to an issue between the EDR spec and Pydantic.

Contributing

Make an editable install from within the repository root:

```shell
pip install -e '.[test]'
```

Running tests:

```shell
pytest tests/
```

Linting and typing

Linting and typing (mypy) are done using pre-commit hooks:

```shell
pip install pre-commit
pre-commit install
pre-commit run
```

Related packages

- CoverageJSON Pydantic
- geojson-pydantic

Real world usage

This library is used to build an OGC Environmental Data Retrieval (EDR) API, serving automatic weather station data from The Royal Netherlands Meteorological Institute (KNMI). See the KNMI Data Platform EDR API.

TODOs

Help is wanted in the following areas to fully implement the EDR spec:

- See TODOs in code listing various small inconsistencies in the spec
- In various places there could be more validation on content

License

Apache License, Version 2.0
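The by_alias behaviour here is Pydantic's alias machinery at work: the model attribute has a Python-legal name (dataType), while the EDR spec spells the JSON key "data-type". The idea can be illustrated with a stdlib-only sketch — the dump helper and alias metadata below are hypothetical, not edr-pydantic or Pydantic API:

```python
from dataclasses import dataclass, field, fields

@dataclass
class Parameter:
    # "data-type" is not a valid Python identifier, hence the alias metadata.
    data_type: str = field(metadata={"alias": "data-type"})

def dump(obj, by_alias: bool) -> dict:
    # Emit either the alias (the spec's spelling) or the attribute name per field.
    return {
        (f.metadata.get("alias", f.name) if by_alias else f.name): getattr(obj, f.name)
        for f in fields(obj)
    }

p = Parameter(data_type="integer")
print(dump(p, by_alias=True))   # {'data-type': 'integer'}
print(dump(p, by_alias=False))  # {'data_type': 'integer'}
```

The same switch is what model_dump(by_alias=True) flips in Pydantic, which is why omitting it silently changes the key spelling.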
Latest version: 0.7.0 Released: 2025-03-05
PydanticRPC

PydanticRPC is a Python library that enables you to rapidly expose Pydantic models via gRPC/Connect RPC services without writing any protobuf files. Instead, it automatically generates protobuf files on the fly from the method signatures of your Python objects and the type signatures of your Pydantic models.

Below is an example of a simple gRPC service that exposes a PydanticAI agent:

```python
import asyncio

from openai import AsyncOpenAI
from pydantic_ai import Agent
from pydantic_ai.models.openai import OpenAIModel
from pydantic_rpc import AsyncIOServer, Message

# Message is just an alias for Pydantic's BaseModel class.
class CityLocation(Message):
    city: str
    country: str

class Olympics(Message):
    year: int

    def prompt(self):
        return f"Where were the Olympics held in {self.year}?"

class OlympicsLocationAgent:
    def __init__(self):
        client = AsyncOpenAI(
            base_url="http://localhost:11434/v1",
            api_key="ollama_api_key",
        )
        ollama_model = OpenAIModel(
            model_name="llama3.2",
            openai_client=client,
        )
        self._agent = Agent(ollama_model)

    async def ask(self, req: Olympics) -> CityLocation:
        result = await self._agent.run(req.prompt())
        return result.data

if __name__ == "__main__":
    s = AsyncIOServer()
    loop = asyncio.get_event_loop()
    loop.run_until_complete(s.run(OlympicsLocationAgent()))
```

And here is an example of a simple Connect RPC service that exposes the same agent as an ASGI application:

```python
import asyncio

from openai import AsyncOpenAI
from pydantic_ai import Agent
from pydantic_ai.models.openai import OpenAIModel
from pydantic_rpc import ConnecpyASGIApp, Message

class CityLocation(Message):
    city: str
    country: str

class Olympics(Message):
    year: int

    def prompt(self):
        return f"Where were the Olympics held in {self.year}?"

class OlympicsLocationAgent:
    def __init__(self):
        client = AsyncOpenAI(
            base_url="http://localhost:11434/v1",
            api_key="ollama_api_key",
        )
        ollama_model = OpenAIModel(
            model_name="llama3.2",
            openai_client=client,
        )
        self._agent = Agent(ollama_model, result_type=CityLocation)

    async def ask(self, req: Olympics) -> CityLocation:
        result = await self._agent.run(req.prompt())
        return result.data

app = ConnecpyASGIApp()
app.mount(OlympicsLocationAgent())
```

Key Features

- Automatic Protobuf Generation: Automatically creates protobuf files matching the method signatures of your Python objects.
- Dynamic Code Generation: Generates server and client stubs using grpcio-tools.
- Pydantic Integration: Uses pydantic for robust type validation and serialization.
- Protobuf File Export: Exports the generated protobuf files for use in other languages.

For gRPC:

- Health Checking: Built-in support for gRPC health checks using grpc_health.v1.
- Server Reflection: Built-in support for gRPC server reflection.
- Asynchronous Support: Easily create asynchronous gRPC services with AsyncIOServer.

For gRPC-Web:

- WSGI/ASGI Support: Create gRPC-Web services that can run as WSGI or ASGI applications powered by Sonora.

For Connect-RPC:

- Connecpy Support: Partially supports Connect-RPC via Connecpy.
- Pre-generated Protobuf Files and Code: Pre-generate proto files and corresponding code via the CLI. By setting the environment variable PYDANTIC_RPC_SKIP_GENERATION, you can skip runtime generation.

Installation

Install PydanticRPC via pip:

```bash
pip install pydantic-rpc
```

Getting Started

Synchronous Service Example

```python
from pydantic_rpc import Server, Message

class HelloRequest(Message):
    name: str

class HelloReply(Message):
    message: str

class Greeter:
    # Define methods that accept a request and return a response.
    def say_hello(self, request: HelloRequest) -> HelloReply:
        return HelloReply(message=f"Hello, {request.name}!")

if __name__ == "__main__":
    server = Server()
    server.run(Greeter())
```

Asynchronous Service Example

```python
import asyncio

from pydantic_rpc import AsyncIOServer, Message

class HelloRequest(Message):
    name: str

class HelloReply(Message):
    message: str

class Greeter:
    async def say_hello(self, request: HelloRequest) -> HelloReply:
        return HelloReply(message=f"Hello, {request.name}!")

if __name__ == "__main__":
    server = AsyncIOServer()
    loop = asyncio.get_event_loop()
    loop.run_until_complete(server.run(Greeter()))
```

ASGI Application Example

```python
from pydantic_rpc import ASGIApp, Message

class HelloRequest(Message):
    name: str

class HelloReply(Message):
    message: str

class Greeter:
    def say_hello(self, request: HelloRequest) -> HelloReply:
        return HelloReply(message=f"Hello, {request.name}!")

async def app(scope, receive, send):
    """ASGI application.

    Args:
        scope (dict): The ASGI scope.
        receive (callable): The receive function.
        send (callable): The send function.
    """
    pass

# Please note that app can be any ASGI application, such as FastAPI or Starlette.
app = ASGIApp(app)
app.mount(Greeter())
```

WSGI Application Example

```python
from pydantic_rpc import WSGIApp, Message

class HelloRequest(Message):
    name: str

class HelloReply(Message):
    message: str

class Greeter:
    def say_hello(self, request: HelloRequest) -> HelloReply:
        return HelloReply(message=f"Hello, {request.name}!")

def app(environ, start_response):
    """WSGI application.

    Args:
        environ (dict): The WSGI environment.
        start_response (callable): The start_response function.
    """
    pass

# Please note that app can be any WSGI application, such as Flask or Django.
app = WSGIApp(app)
app.mount(Greeter())
```

Connecpy (Connect-RPC) Example

PydanticRPC also partially supports Connect-RPC via connecpy.
Check out greeting_connecpy.py for an example:

```bash
uv run greeting_connecpy.py
```

This will launch a Connecpy-based ASGI application that uses the same Pydantic models to serve Connect-RPC requests.

[!NOTE]
Please install protoc-gen-connecpy to run the Connecpy example:

1. Install Go, following the instructions at https://go.dev/doc/install.
2. Install protoc-gen-connecpy:

```bash
go install github.com/connecpy/protoc-gen-connecpy@latest
```

Skipping Protobuf Generation

By default, PydanticRPC generates .proto files and code at runtime. If you wish to skip the code-generation step (for example, in a production environment), set the environment variable below:

```bash
export PYDANTIC_RPC_SKIP_GENERATION=true
```

When this variable is set to "true", PydanticRPC will load existing pre-generated modules rather than generating them on the fly.

Advanced Features

Response Streaming

PydanticRPC supports streaming responses only for asynchronous gRPC and gRPC-Web services. If a service class method's return type is typing.AsyncIterator[T], the method is considered a streaming method. Please see the sample code below:

```python
import asyncio
from typing import Annotated, AsyncIterator

from openai import AsyncOpenAI
from pydantic import Field
from pydantic_ai import Agent
from pydantic_ai.models.openai import OpenAIModel
from pydantic_rpc import AsyncIOServer, Message

# Message is just a pydantic BaseModel alias
class CityLocation(Message):
    city: Annotated[str, Field(description="The city where the Olympics were held")]
    country: Annotated[
        str, Field(description="The country where the Olympics were held")
    ]

class OlympicsQuery(Message):
    year: Annotated[int, Field(description="The year of the Olympics", ge=1896)]

    def prompt(self):
        return f"Where were the Olympics held in {self.year}?"

class OlympicsDurationQuery(Message):
    start: Annotated[int, Field(description="The start year of the Olympics", ge=1896)]
    end: Annotated[int, Field(description="The end year of the Olympics", ge=1896)]

    def prompt(self):
        return f"From {self.start} to {self.end}, how many Olympics were held? Please provide the list of countries and cities."

class StreamingResult(Message):
    answer: Annotated[str, Field(description="The answer to the query")]

class OlympicsAgent:
    def __init__(self):
        client = AsyncOpenAI(
            base_url='http://localhost:11434/v1',
            api_key='ollama_api_key',
        )
        ollama_model = OpenAIModel(
            model_name='llama3.2',
            openai_client=client,
        )
        self._agent = Agent(ollama_model)

    async def ask(self, req: OlympicsQuery) -> CityLocation:
        result = await self._agent.run(req.prompt(), result_type=CityLocation)
        return result.data

    async def ask_stream(
        self, req: OlympicsDurationQuery
    ) -> AsyncIterator[StreamingResult]:
        async with self._agent.run_stream(req.prompt(), result_type=str) as result:
            async for data in result.stream_text(delta=True):
                yield StreamingResult(answer=data)

if __name__ == "__main__":
    s = AsyncIOServer()
    loop = asyncio.get_event_loop()
    loop.run_until_complete(s.run(OlympicsAgent()))
```

In the example above, the ask_stream method returns an AsyncIterator[StreamingResult] object, which is considered a streaming method. The StreamingResult class is a Pydantic model that defines the response type of the streaming method. You can use any Pydantic model as the response type.

Now, you can call the ask_stream method of the server described above using your preferred gRPC client tool. The example below uses buf curl.

```console
% buf curl --data '{"start": 1980, "end": 2024}' -v http://localhost:50051/olympicsagent.v1.OlympicsAgent/AskStream --protocol grpc --http2-prior-knowledge
buf: * Using server reflection to resolve "olympicsagent.v1.OlympicsAgent"
buf: * Dialing (tcp) localhost:50051...
buf: * Connected to [::1]:50051 buf: > (#1) POST /grpc.reflection.v1.ServerReflection/ServerReflectionInfo buf: > (#1) Accept-Encoding: identity buf: > (#1) Content-Type: application/grpc+proto buf: > (#1) Grpc-Accept-Encoding: gzip buf: > (#1) Grpc-Timeout: 119997m buf: > (#1) Te: trailers buf: > (#1) User-Agent: grpc-go-connect/1.12.0 (go1.21.4) buf/1.28.1 buf: > (#1) buf: } (#1) [5 bytes data] buf: } (#1) [32 bytes data] buf: < (#1) HTTP/2.0 200 OK buf: < (#1) Content-Type: application/grpc buf: < (#1) Grpc-Message: Method not found! buf: < (#1) Grpc-Status: 12 buf: < (#1) buf: * (#1) Call complete buf: > (#2) POST /grpc.reflection.v1alpha.ServerReflection/ServerReflectionInfo buf: > (#2) Accept-Encoding: identity buf: > (#2) Content-Type: application/grpc+proto buf: > (#2) Grpc-Accept-Encoding: gzip buf: > (#2) Grpc-Timeout: 119967m buf: > (#2) Te: trailers buf: > (#2) User-Agent: grpc-go-connect/1.12.0 (go1.21.4) buf/1.28.1 buf: > (#2) buf: } (#2) [5 bytes data] buf: } (#2) [32 bytes data] buf: < (#2) HTTP/2.0 200 OK buf: < (#2) Content-Type: application/grpc buf: < (#2) Grpc-Accept-Encoding: identity, deflate, gzip buf: < (#2) buf: { (#2) [5 bytes data] buf: { (#2) [434 bytes data] buf: * Server reflection has resolved file "olympicsagent.proto" buf: * Invoking RPC olympicsagent.v1.OlympicsAgent.AskStream buf: > (#3) POST /olympicsagent.v1.OlympicsAgent/AskStream buf: > (#3) Accept-Encoding: identity buf: > (#3) Content-Type: application/grpc+proto buf: > (#3) Grpc-Accept-Encoding: gzip buf: > (#3) Grpc-Timeout: 119947m buf: > (#3) Te: trailers buf: > (#3) User-Agent: grpc-go-connect/1.12.0 (go1.21.4) buf/1.28.1 buf: > (#3) buf: } (#3) [5 bytes data] buf: } (#3) [6 bytes data] buf: * (#3) Finished upload buf: < (#3) HTTP/2.0 200 OK buf: < (#3) Content-Type: application/grpc buf: < (#3) Grpc-Accept-Encoding: identity, deflate, gzip buf: < (#3) buf: { (#3) [5 bytes data] buf: { (#3) [25 bytes data] { "answer": "Here's a list of Summer" } buf: { (#3) [5 bytes 
data] buf: { (#3) [31 bytes data] { "answer": " and Winter Olympics from 198" } buf: { (#3) [5 bytes data] buf: { (#3) [29 bytes data] { "answer": "0 to 2024:\n\nSummer Olympics" } buf: { (#3) [5 bytes data] buf: { (#3) [20 bytes data] { "answer": ":\n1. 1980 - Moscow" } buf: { (#3) [5 bytes data] buf: { (#3) [20 bytes data] { "answer": ", Soviet Union\n2. " } buf: { (#3) [5 bytes data] buf: { (#3) [32 bytes data] { "answer": "1984 - Los Angeles, California" } buf: { (#3) [5 bytes data] buf: { (#3) [15 bytes data] { "answer": ", USA\n3. 1988" } buf: { (#3) [5 bytes data] buf: { (#3) [26 bytes data] { "answer": " - Seoul, South Korea\n4." } buf: { (#3) [5 bytes data] buf: { (#3) [27 bytes data] { "answer": " 1992 - Barcelona, Spain\n" } buf: { (#3) [5 bytes data] buf: { (#3) [20 bytes data] { "answer": "5. 1996 - Atlanta," } buf: { (#3) [5 bytes data] buf: { (#3) [22 bytes data] { "answer": " Georgia, USA\n6. 200" } buf: { (#3) [5 bytes data] buf: { (#3) [26 bytes data] { "answer": "0 - Sydney, Australia\n7." } buf: { (#3) [5 bytes data] buf: { (#3) [25 bytes data] { "answer": " 2004 - Athens, Greece\n" } buf: { (#3) [5 bytes data] buf: { (#3) [20 bytes data] { "answer": "8. 2008 - Beijing," } buf: { (#3) [5 bytes data] buf: { (#3) [18 bytes data] { "answer": " China\n9. 2012 -" } buf: { (#3) [5 bytes data] buf: { (#3) [29 bytes data] { "answer": " London, United Kingdom\n10." } buf: { (#3) [5 bytes data] buf: { (#3) [24 bytes data] { "answer": " 2016 - Rio de Janeiro" } buf: { (#3) [5 bytes data] buf: { (#3) [18 bytes data] { "answer": ", Brazil\n11. 202" } buf: { (#3) [5 bytes data] buf: { (#3) [24 bytes data] { "answer": "0 - Tokyo, Japan (held" } buf: { (#3) [5 bytes data] buf: { (#3) [21 bytes data] { "answer": " in 2021 due to the" } buf: { (#3) [5 bytes data] buf: { (#3) [26 bytes data] { "answer": " COVID-19 pandemic)\n12. 
" } buf: { (#3) [5 bytes data] buf: { (#3) [28 bytes data] { "answer": "2024 - Paris, France\n\nNote" } buf: { (#3) [5 bytes data] buf: { (#3) [41 bytes data] { "answer": ": The Olympics were held without a host" } buf: { (#3) [5 bytes data] buf: { (#3) [26 bytes data] { "answer": " city for one year (2022" } buf: { (#3) [5 bytes data] buf: { (#3) [42 bytes data] { "answer": ", due to the Russian invasion of Ukraine" } buf: { (#3) [5 bytes data] buf: { (#3) [29 bytes data] { "answer": ").\n\nWinter Olympics:\n1. 198" } buf: { (#3) [5 bytes data] buf: { (#3) [27 bytes data] { "answer": "0 - Lake Placid, New York" } buf: { (#3) [5 bytes data] buf: { (#3) [15 bytes data] { "answer": ", USA\n2. 1984" } buf: { (#3) [5 bytes data] buf: { (#3) [27 bytes data] { "answer": " - Sarajevo, Yugoslavia (" } buf: { (#3) [5 bytes data] buf: { (#3) [30 bytes data] { "answer": "now Bosnia and Herzegovina)\n" } buf: { (#3) [5 bytes data] buf: { (#3) [20 bytes data] { "answer": "3. 1988 - Calgary," } buf: { (#3) [5 bytes data] buf: { (#3) [25 bytes data] { "answer": " Alberta, Canada\n4. 199" } buf: { (#3) [5 bytes data] buf: { (#3) [26 bytes data] { "answer": "2 - Albertville, France\n" } buf: { (#3) [5 bytes data] buf: { (#3) [13 bytes data] { "answer": "5. 1994 - L" } buf: { (#3) [5 bytes data] buf: { (#3) [24 bytes data] { "answer": "illehammer, Norway\n6. " } buf: { (#3) [5 bytes data] buf: { (#3) [23 bytes data] { "answer": "1998 - Nagano, Japan\n" } buf: { (#3) [5 bytes data] buf: { (#3) [16 bytes data] { "answer": "7. 2002 - Salt" } buf: { (#3) [5 bytes data] buf: { (#3) [24 bytes data] { "answer": " Lake City, Utah, USA\n" } buf: { (#3) [5 bytes data] buf: { (#3) [18 bytes data] { "answer": "8. 2006 - Torino" } buf: { (#3) [5 bytes data] buf: { (#3) [17 bytes data] { "answer": ", Italy\n9. 
2010" } buf: { (#3) [5 bytes data] buf: { (#3) [40 bytes data] { "answer": " - Vancouver, British Columbia, Canada" } buf: { (#3) [5 bytes data] buf: { (#3) [13 bytes data] { "answer": "\n10. 2014 -" } buf: { (#3) [5 bytes data] buf: { (#3) [20 bytes data] { "answer": " Sochi, Russia\n11." } buf: { (#3) [5 bytes data] buf: { (#3) [16 bytes data] { "answer": " 2018 - Pyeong" } buf: { (#3) [5 bytes data] buf: { (#3) [24 bytes data] { "answer": "chang, South Korea\n12." } buf: < (#3) buf: < (#3) Grpc-Message: buf: < (#3) Grpc-Status: 0 buf: * (#3) Call complete buf: < (#2) buf: < (#2) Grpc-Message: buf: < (#2) Grpc-Status: 0 buf: * (#2) Call complete
%
```

Multiple Services with Custom Interceptors

PydanticRPC supports defining and running multiple services in a single server:

```python
from datetime import datetime

import grpc
from grpc import ServicerContext
from pydantic_rpc import Server, Message

class FooRequest(Message):
    name: str
    age: int
    d: dict[str, str]

class FooResponse(Message):
    name: str
    age: int
    d: dict[str, str]

class BarRequest(Message):
    names: list[str]

class BarResponse(Message):
    names: list[str]

class FooService:
    def foo(self, request: FooRequest) -> FooResponse:
        return FooResponse(name=request.name, age=request.age, d=request.d)

class MyMessage(Message):
    name: str
    age: int
    o: int | datetime

class Request(Message):
    name: str
    age: int
    d: dict[str, str]
    m: MyMessage

class Response(Message):
    name: str
    age: int
    d: dict[str, str]
    m: MyMessage | str

class BarService:
    def bar(self, req: BarRequest, ctx: ServicerContext) -> BarResponse:
        return BarResponse(names=req.names)

class CustomInterceptor(grpc.ServerInterceptor):
    def intercept_service(self, continuation, handler_call_details):
        # do something
        print(handler_call_details.method)
        return continuation(handler_call_details)

async def app(scope, receive, send):
    pass

if __name__ == "__main__":
    s = Server(10, CustomInterceptor())
    s.run(
        FooService(),
        BarService(),
    )
```

[TODO] Custom Health Check

TODO

Protobuf file and code (Python files) generation using CLI

You can generate protobuf files and code for a given module and a specified class using the pydantic-rpc CLI command:

```bash
pydantic-rpc a_module.py aClassName
```

Using this generated proto file and tools such as protoc, buf and BSR, you can generate code for any desired language other than Python.

Data Type Mapping

| Python Type                    | Protobuf Type             |
|--------------------------------|---------------------------|
| str                            | string                    |
| bytes                          | bytes                     |
| bool                           | bool                      |
| int                            | int32                     |
| float                          | float, double             |
| list[T], tuple[T]              | repeated T                |
| dict[K, V]                     | map                       |
| datetime.datetime              | google.protobuf.Timestamp |
| datetime.timedelta             | google.protobuf.Duration  |
| typing.Union[A, B]             | oneof A, B                |
| subclass of enum.Enum          | enum                      |
| subclass of pydantic.BaseModel | message                   |

TODO

- [ ] Streaming Support
  - [x] unary-stream
  - [ ] stream-unary
  - [ ] stream-stream
- [ ] Betterproto Support
- [ ] Sonora-connect Support
- [ ] Custom Health Check Support
- [ ] Add more examples
- [ ] Add tests

License

This project is licensed under the MIT License. See the LICENSE file for details.
Latest version: 0.6.1 Released: 2025-02-22
pydantic-tes
============

.. image:: https://badge.fury.io/py/pydantic-tes.svg
   :target: https://pypi.python.org/pypi/pydantic-tes/
   :alt: pydantic-tes on the Python Package Index (PyPI)

A collection of pydantic_ models for the `GA4GH Task Execution Service`_. In addition to the models, this package contains a lightweight client for TES based on them using requests_, and utilities for working with and testing it against Funnel_ - a TES implementation.

This Python project can be installed from PyPI using pip. ::

    $ python3 -m venv .venv
    $ . .venv/bin/activate
    $ pip install pydantic-tes

Check out py-tes_ for an alternative set of Python models and an API client based on the more lightweight attrs_ package.

.. _Funnel: https://ohsu-comp-bio.github.io/funnel/
.. _requests: https://requests.readthedocs.io/en/latest/
.. _GA4GH Task Execution Service: https://github.com/ga4gh/task-execution-schemas
.. _pydantic: https://pydantic-docs.helpmanual.io/
.. _py-tes: https://github.com/ohsu-comp-bio/py-tes
.. _attrs: https://www.attrs.org/en/stable/

History
-------

0.2.0 (2025-04-10)

* Allow creating a TES client with extra headers (e.g. for auth), thanks to @BorisYourich.
* Fixes for running against TESK, thanks to @mvdbeek.

0.1.5 (2022-10-06)

* Messed up the 0.1.4 release, retrying.

0.1.4 (2022-10-06)

* Further hacking around Funnel responses to produce validating responses.

0.1.3 (2022-10-05)

* Another attempt at publishing types.

0.1.2 (2022-10-05)

* Add support for Python 3.6. Add py.typed to package.

0.1.1 (2022-09-29)

* Fixes to project publication scripts and process.

0.1.0 (2022-09-29)

* Initial version.
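As orientation for what the models above describe, a minimal TES task document is JSON with a list of executors, each running an image with a command. The stdlib-only sketch below builds such a payload and prepares (but does not send) a POST request; the endpoint path and port are assumptions for a local Funnel dev server and may differ per TES version and deployment:

```python
import json
import urllib.request

# Minimal task following the GA4GH TES schema that pydantic-tes models cover.
task = {
    "name": "hello-world",
    "executors": [
        {
            "image": "alpine",
            "command": ["echo", "hello world"],
        }
    ],
}

body = json.dumps(task).encode()
# Hypothetical local endpoint; adjust host, port, and path for your TES server.
req = urllib.request.Request(
    "http://localhost:8000/v1/tasks",
    data=body,
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req)  # uncomment with a running TES server
print(req.get_method(), req.full_url)
```

The pydantic-tes client wraps exactly this kind of exchange, with the request and response bodies validated as Pydantic models instead of raw dicts.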
Latest version: 0.2.0 Released: 2025-04-10
Pydantic AST

Pydantic models covering Python AST types.

Installation

```sh
pip install pydantic-ast
```

Usage

Use it as a drop-in replacement for ast.parse with a more readable representation:

```py
import ast
import pydantic_ast

source = "x = 1"
ast.parse(source)
pydantic_ast.parse(source)
# pydantic-ast: body=[Assign(targets=[Name(id='x', ctx=Store())], value=Constant(value=1), type_comment=None)] type_ignores=[]
```

Use it on the command line to quickly get an AST of a Python program or a section of one:

```sh
echo '"Hello world"' | pydantic-ast
# body=[Expr(value=Constant(value='Hello world'))] type_ignores=[]
```

Use it on ASTs you got from elsewhere to make them readable, or to inspect parts of them more easily. The AST_to_pydantic class is an ast.NodeTransformer that converts nodes in the AST to pydantic_ast model types as the tree nodes are visited.

```py
from pydantic_ast import AST_to_pydantic

source = "123 + 345 == expected"
my_mystery_ast = ast.parse(source)
ast_model = AST_to_pydantic().visit(my_mystery_ast)
ast_model
# body=[Expr(value=Compare(left=BinOp(left=Constant(value=123), op=Add(), right=Constant(value=345)), ops=[Eq()], comparators=[Name(id='expected', ctx=Load())]))] type_ignores=[]
ast_model.body
# [Expr(value=Compare(left=BinOp(left=Constant(value=123), op=Add(), right=Constant(value=345)), ops=[Eq()], comparators=[Name(id='expected', ctx=Load())]))]
```

It's simply much easier to drill down into a tree when you can see the fields in a repr:

```py
ast_model.body[0].value
# Compare(left=BinOp(left=Constant(value=123), op=Add(), right=Constant(value=345)), ops=[Eq()], comparators=[Name(id='expected', ctx=Load())])
ast_model.body[0].value.left
# BinOp(left=Constant(value=123), op=Add(), right=Constant(value=345))
ast_model.body[0].value.left.left
# Constant(value=123)
ast_model.body[0].value.left.left.value
# 123
```

Development

To set up pre-commit hooks (to keep the CI bot happy), run `pre-commit install-hooks` so all git commits trigger the pre-commit checks.
I use Conventional Commits. The hooks run black, flake8, autopep8, pyupgrade, etc.

To set up a dev env, I first create a new conda environment and point PDM at it with `which python > .pdm-python`. To use a virtualenv environment instead of conda, skip that step. Run `pdm install`; a `.venv` will be created if no Python binary path is found in `.pdm-python`.

To run tests, run `pdm run python -m pytest`; the PDM environment will be used to run the test suite.
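For comparison, the standard library alone only gives the readable form via an explicit ast.dump call each time — which is the friction pydantic-ast's repr removes:

```python
import ast

source = "x = 1"
tree = ast.parse(source)

# The default repr is opaque: something like <ast.Module object at 0x...>.
print(repr(tree))

# ast.dump produces the readable rendering that pydantic-ast shows by default.
print(ast.dump(tree))
```

With pydantic-ast the second form is simply what you see when you echo the object, and each node is a Pydantic model you can drill into attribute by attribute.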
Latest version: 0.2.0 Released: 2023-09-19
pydantic-sdk Coming Soon.
Latest version: 0.0.2 Released: 2023-07-17
Pydantic CSV

Pydantic CSV makes working with CSV files easier and much better than working with dicts. It uses pydantic BaseModels to store the data of every row of the CSV file, and type annotations enable proper type checking and validation.

Table of Contents

- Main features
- Installation
- Getting started
  - Using the BasemodelCSVReader
  - Error handling
  - Default values
  - Mapping BaseModel fields to columns
  - Supported type annotation
  - User-defined types
  - Using the BasemodelCSVWriter
  - Modifying the CSV header
- Copyright and License
- Credits

Main features

- Use pydantic.BaseModel instead of dictionaries to represent the rows in the CSV file.
- Take advantage of the BaseModel properties' type annotations. BasemodelCSVReader uses the type annotations to perform validation on the data of the CSV file.
- Automatic type conversion. BasemodelCSVReader supports str, int, float, complex, datetime and bool, as well as any type whose constructor accepts a string as its single argument.
- Helps you troubleshoot issues with the data in the CSV file. BasemodelCSVReader will show exactly which line of the CSV file contains errors.
- Extract only the data you need. It will only parse the properties defined in the BaseModel.
- Familiar syntax. The BasemodelCSVReader is used almost the same way as the DictReader in the standard library.
- It uses BaseModel features that let you define Field properties or Config so the data can be parsed exactly the way you want.
- Makes the code cleaner. No more extra loops to convert data to the correct type, perform validation, or set default values; the BasemodelCSVReader will do all this for you.
- In addition to the BasemodelCSVReader, the library also provides a BasemodelCSVWriter, which enables creating a CSV file from a list of instances of a BaseModel.
Because sqlmodel uses pydantic.BaseModels too, you can directly fill a database with data from a CSV.

Installation

```shell
pip install pydantic-csv
```

Getting started

Using the BasemodelCSVReader

First, add the necessary imports:

```python
from pydantic import BaseModel

from pydantic_csv import BasemodelCSVReader
```

Assuming that we have a CSV file with the contents below:

```text
firstname,email,age
Elsa,elsa@test.com,26
Astor,astor@test.com,44
Edit,edit@test.com,33
Ella,ella@test.com,22
```

Let's create a BaseModel that will represent a row in the CSV file above:

```python
class User(BaseModel):
    firstname: str
    email: str
    age: int
```

The BaseModel User has 3 properties: firstname and email are of type str, and age is of type int.

To load and read the contents of the CSV file we do the same thing as if we were using the DictReader from the csv module in Python's standard library. After opening the file we create an instance of the BasemodelCSVReader, passing two arguments. The first is the file and the second is the BaseModel that we wish to use to represent the data of every row of the CSV file. Like so:

```python
# using a file on disk
with open("") as csv:
    reader = BasemodelCSVReader(csv, User)
    for row in reader:
        print(row)

# using a buffer (has to be a string buffer -> convert beforehand)
buffer = io.StringIO()
buffer.seek(0)  # ensure that we read from the beginning
reader = BasemodelCSVReader(buffer, User)
for row in reader:
    print(row)
```

If you run this code you should see an output like this:

```python
User(firstname='Elsa', email='elsa@test.com', age=26)
User(firstname='Astor', email='astor@test.com', age=44)
User(firstname='Edit', email='edit@test.com', age=33)
User(firstname='Ella', email='ella@test.com', age=22)
```

The BasemodelCSVReader internally uses the DictReader from the csv module to read the CSV file, which means that you can pass the same arguments that you would pass to the DictReader.
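To see what the reader automates, here is a rough stdlib-only equivalent using DictReader with manual type conversion and line tracking — a sketch of the boilerplate the package removes, not the package's own code:

```python
import csv
import io

data = """firstname,email,age
Elsa,elsa@test.com,26
Astor,astor@test.com,44
"""

rows = []
# start=2: line 1 of the file is the header, so the first data row is line 2.
for lineno, row in enumerate(csv.DictReader(io.StringIO(data)), start=2):
    try:
        # Manual conversion and validation that BasemodelCSVReader does for you.
        rows.append({
            "firstname": row["firstname"],
            "email": row["email"],
            "age": int(row["age"]),
        })
    except ValueError as exc:
        raise ValueError(f"Error on CSV line number: {lineno}") from exc

print(rows[0])  # {'firstname': 'Elsa', 'email': 'elsa@test.com', 'age': 26}
```

With BasemodelCSVReader the conversion, validation, and the line-number bookkeeping in the except clause all come for free from the BaseModel.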
The complete argument list is shown below:

```python
BasemodelCSVReader(
    file_obj: Any,
    model: Type[BaseModel],
    *,  # Note that you can't provide any value without specifying the parameter name
    use_alias: bool = True,
    validate_header: bool = True,
    fieldnames: Optional[Sequence[str]] = None,
    restkey: Optional[str] = None,
    restval: Optional[Any] = None,
    dialect: str = "excel",
    **kwargs: Any,
)
```

All keyword arguments supported by `DictReader` are supported by the `BasemodelCSVReader`, with the addition of `use_alias` and `validate_header`. These change the behaviour of the `BasemodelCSVReader` as follows:

- `use_alias` - The `BasemodelCSVReader` will search for column names identical to the aliases of the BaseModel fields (if set; otherwise their names). To avoid this behaviour and use the field names in every case, set `use_alias=False` when creating an instance of the `BasemodelCSVReader`:

  ```python
  reader = BasemodelCSVReader(csv, User, use_alias=False)
  ```

- `validate_header` - The `BasemodelCSVReader` will raise a `ValueError` if the CSV file contains columns with the same name. This validation is performed to avoid data being overwritten. To skip it, set `validate_header=False` when creating an instance of the `BasemodelCSVReader`:

  ```python
  reader = BasemodelCSVReader(csv, User, validate_header=False)
  ```

  Important: if two or more columns with the same name exist, the reader tries to instantiate the BaseModel with the data from the right-most column.

## Error handling

One of the advantages of using the `BasemodelCSVReader` is that it makes it easy to detect when the type of data in the CSV file is not what your application's model expects. The `BasemodelCSVReader` shows errors that help you identify the rows with problems in your CSV file.
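The line-number bookkeeping behind those error messages can be sketched with plain stdlib `csv` (an illustration of the behaviour, not the library's actual code):

```python
import csv
import io

CSV_DATA = "firstname,email,age\nElsa,elsa@test.com,26\nAstor,astor@test.com,test\n"

def read_ages(fileobj):
    # Line 1 of the file is the header, so data rows start at line 2.
    for line_number, row in enumerate(csv.DictReader(fileobj), start=2):
        try:
            yield int(row["age"])
        except ValueError as exc:
            raise ValueError(f"[Error on CSV Line number: {line_number}] {exc}") from exc

ages = read_ages(io.StringIO(CSV_DATA))
print(next(ages))  # 26
try:
    next(ages)
except ValueError as exc:
    print(exc)  # [Error on CSV Line number: 3] invalid literal for int() ...
```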
For example, say we change the contents of the CSV file shown in the Getting started section and modify the age of the user Astor, changing it to a string value:

```text
firstname,email,age
Elsa,elsa@test.com,26
Astor,astor@test.com,test
Edit,edit@test.com,33
Ella,ella@test.com,22
```

Remember that in the BaseModel `User` the `age` property is annotated with `int`. If we run the code again, an exception from the pydantic validation will be raised with a message like the one below:

```text
pydantic_csv.exceptions.CSVValueError: [Error on CSV Line number: 3]
1 validation error for User
age
  Input should be a valid integer, unable to parse string as an integer [type=int_parsing, input_value='test', input_type=str]
    For further information visit https://errors.pydantic.dev/2.7/v/int_parsing
```

Note that apart from telling what the error was, the `BasemodelCSVReader` also shows which line of the CSV file contains the data with errors.

## Default values

The `BasemodelCSVReader` also handles properties with default values. Let's modify the BaseModel `User` and add a default value for the field `email`:

```python
from pydantic import BaseModel


class User(BaseModel):
    firstname: str
    email: str = 'Not specified'
    age: int
```

And we modify the CSV file and remove the email for the user Astor:

```text
firstname,email,age
Elsa,elsa@test.com,26
Astor,,44
Edit,edit@test.com,33
Ella,ella@test.com,22
```

If we run the code we should see the output below:

```text
User(firstname='Elsa', email='elsa@test.com', age=26)
User(firstname='Astor', email='Not specified', age=44)
User(firstname='Edit', email='edit@test.com', age=33)
User(firstname='Ella', email='ella@test.com', age=22)
```

Note that the object for the user Astor now has the default value `Not specified` assigned to the `email` property.
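Expressed with plain stdlib `csv`, the effect for the empty cell amounts to something like this (a sketch of the observable behaviour, not the library's actual implementation):

```python
import csv
import io

CSV_DATA = "firstname,email,age\nAstor,,44\n"

def read_users(fileobj):
    for row in csv.DictReader(fileobj):
        # An empty cell falls back to the model's default, just as the output
        # above shows 'Not specified' being applied for Astor's email.
        if row["email"] == "":
            row["email"] = "Not specified"
        row["age"] = int(row["age"])
        yield row

print(next(read_users(io.StringIO(CSV_DATA))))
# {'firstname': 'Astor', 'email': 'Not specified', 'age': 44}
```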
Default values can also be set using `pydantic.Field`, like so:

```python
from pydantic import BaseModel, Field


class User(BaseModel):
    firstname: str
    email: str = Field(default='Not specified')
    age: int
```

## Mapping BaseModel fields to columns

The mapping between a BaseModel field and a column in the CSV file is done automatically when the names match. However, there are situations where the header name of a column differs. We can easily tell the `BasemodelCSVReader` how the mapping should be done using the method `map`. Assuming that we have a CSV file with the contents below:

```text
First Name,email,age
Elsa,elsa@test.com,26
Astor,astor@test.com,44
Edit,edit@test.com,33
Ella,ella@test.com,22
```

Note that the column is now called `First Name` and not `firstname`. We can use the method `map`, like so:

```python
reader = BasemodelCSVReader(csv, User)
reader.map('First Name').to('firstname')
```

Now the `BasemodelCSVReader` knows how to extract the data from the column `First Name` and add it to the BaseModel property `firstname`.

## Supported type annotations

At the moment the `BasemodelCSVReader` supports `int`, `str`, `float`, `complex`, `datetime`, and `bool`. pydantic_csv doesn't parse the date(time)s itself; it relies on pydantic's datetime parsing, which supports the common formats and unix timestamps. If you have a more exotic format you can use a pydantic validator. Assuming that the CSV file has the following contents:

```text
name,email,birthday
Edit,edit@test.com,"Sunday, 6. January 2002"
```

This would look like this:

```python
from datetime import datetime

from pydantic import BaseModel, field_validator


class User(BaseModel):
    name: str
    email: str
    birthday: datetime

    @field_validator("birthday", mode="before")
    def parse_birthday_date(cls, value):
        return datetime.strptime(value, "%A, %d. %B %Y")
```

## User-defined types

You can use any type for a field as long as its constructor accepts a string:

```python
import re

from pydantic import BaseModel


class SSN:
    def __init__(self, val):
        if re.match(r"\d{9}$", val):
            self.val = f"{val[0:3]}-{val[3:5]}-{val[5:9]}"
        elif re.match(r"\d{3}-\d{2}-\d{4}$", val):
            self.val = val
        else:
            raise ValueError(f"Invalid SSN: {val!r}")


class User(BaseModel):
    name: str
    ssn: SSN
```

## Using the BasemodelCSVWriter

Reading a CSV file using the `BasemodelCSVReader` is great and gives us the type safety of pydantic's BaseModels and type annotations. However, there are situations where we would like to use BaseModels for creating CSV files. That's where the `BasemodelCSVWriter` comes in handy.

Using the `BasemodelCSVWriter` is quite simple. Given that we have a BaseModel `User`:

```python
from pydantic import BaseModel


class User(BaseModel):
    firstname: str
    lastname: str
    age: int
```

And in our program we have a list of users:

```python
users = [
    User(firstname="John", lastname="Smith", age=40),
    User(firstname="Daniel", lastname="Nilsson", age=23),
    User(firstname="Ella", lastname="Fralla", age=28),
]
```

In order to create a CSV using the `BasemodelCSVWriter`, import it from `pydantic_csv`:

```python
import io

from pydantic_csv import BasemodelCSVWriter
```

Initialize it with the required arguments and call the method `write`:

```python
# using a file on disk
with open("user.csv", "w") as csv:
    writer = BasemodelCSVWriter(csv, users, User)
    writer.write()

# using a buffer (has to be a StringIO buffer)
buffer = io.StringIO()
writer = BasemodelCSVWriter(buffer, users, User)
writer.write()
buffer.seek(0)  # ensure that the next steps start at the beginning of the "file"

# if you need a BytesIO buffer, just convert it:
bytes_buffer: io.BytesIO = io.BytesIO(buffer.read().encode("utf-8"))
bytes_buffer.name = buffer.name
bytes_buffer.seek(0)  # ensure that the next steps start at the beginning of the "file"
```

That's it! Let's break down the snippet above. First, we open a file called `user.csv` for writing.
After that, an instance of the `BasemodelCSVWriter` is created. To create a `BasemodelCSVWriter` we need to pass the `file_obj`, the list of `User` instances, and lastly the type, which in this case is `User`. The type is required since the writer uses it to figure out the CSV header. By default it uses the alias of each field, falling back to its name as defined in the BaseModel; in the case of the BaseModel `User`, the title of each column will be `firstname`, `lastname` and `age`. See below the CSV created from the list of `User` instances:

```text
firstname,lastname,age
John,Smith,40
Daniel,Nilsson,23
Ella,Fralla,28
```

The `BasemodelCSVWriter` also takes `**fmtparams`, which accepts the same parameters as `csv.writer`. For more information see: https://docs.python.org/3/library/csv.html#csv-fmt-params

Now, there are situations where we don't want to write the CSV header. In this case, the method `write` of the `BasemodelCSVWriter` accepts an extra argument called `skip_header`. Its default value is `False`; when set to `True` it will skip the header.

## Modifying the CSV header

As previously mentioned, the `BasemodelCSVWriter` uses the aliases or names of the fields defined in the BaseModel as the CSV header titles. If you don't want the `BasemodelCSVWriter` to use the aliases but only the names, you can set `use_alias` to `False`:

```python
writer = BasemodelCSVWriter(file_obj, users, User, use_alias=False)
```

However, depending on your use case, it may make sense to set custom headers and not use the aliases or names at all. The `BasemodelCSVWriter` has a `map` method just for this purpose. Consider the `User` BaseModel with the properties `firstname`, `lastname` and `age`.
The snippet below shows how to change `firstname` to `First Name` and `lastname` to `Last Name`:

```python
with open("<filename>", "w") as file:
    writer = BasemodelCSVWriter(file, users, User)

    # Add mappings for firstname and lastname
    writer.map("firstname").to("First Name")
    writer.map("lastname").to("Last Name")

    writer.write()
```

The CSV output of the snippet above will be:

```text
First Name,Last Name,age
John,Smith,40
Daniel,Nilsson,23
Ella,Fralla,28
```

## Copyright and License

Copyright (c) 2024 Nathan Richard. Code released under the BSD 3-clause license.

## Credits

A huge shoutout to Daniel Furtado (github) and his python package 'dataclass-csv' (pypi | github). Most of the codebase and documentation is from him, adjusted for using `pydantic.BaseModel`.
Latest version: 0.1.0 Released: 2024-06-28