diff --git a/index.html b/index.html index f553acf..fc02e9d 100644 --- a/index.html +++ b/index.html @@ -45,4 +45,4 @@ assert len(all_users) == 50 session.close() -

We have successfully seeded 50 users into the User table in app.db.

I know at this point you want to know more, so let's dive deeper into the documentation and get started.


Last update: January 27, 2024
\ No newline at end of file +

We have successfully seeded 50 users into the User table in app.db.
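As a quick sanity check, you can open another session and look one of the seeded users up. This is a minimal sketch that reuses the `db_service`, the `User` model, and the `username-{i+1}` / `user{i+1}doe@example.com` naming scheme from the snippet above; nothing else is assumed.

```python
from ellar_sql import model

# Open a fresh session from the same EllarSQLService used for seeding
session = db_service.session_factory()

# The seeding loop above created usernames "username-1" ... "username-50"
user = session.execute(
    model.select(User).where(User.username == "username-1")
).scalar_one()

assert user.email == "user1doe@example.com"
print(user.dict())  # EllarSQL models expose a dict() helper, as used in the snippet above

session.close()
```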

You can find the source code for this example here.

I know at this point you want to know more, so let's dive deeper into the documentation and get started.


Last update: January 27, 2024
\ No newline at end of file diff --git a/models/index.html b/models/index.html index a5827b2..4f46a1d 100644 --- a/models/index.html +++ b/models/index.html @@ -83,4 +83,4 @@ def exception_404_handler(cls, ctx: IExecutionContext, exc: Exception) -> Response: return JSONResponse(dict(detail="Resource not found."), status_code=404)

In the provided code snippet:

With these configurations, the application is now ready for testing.

ellar runserver --reload
-
Additionally, please remember to uncomment the configurations for the OpenAPIModule in the server.py file to enable visualization and interaction with the /users endpoint.

Once done, you can access the OpenAPI documentation at http://127.0.0.1:8000/docs.


Last update: January 23, 2024
\ No newline at end of file + Additionally, please remember to uncomment the configurations for the OpenAPIModule in the server.py file to enable visualization and interaction with the /users endpoint.

Once done, you can access the OpenAPI documentation at http://127.0.0.1:8000/docs.
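With the server running, you can also exercise the user-creation endpoint from the command line. The sketch below assumes the create route is exposed at `/users` and accepts a JSON body with `username` and `email` fields, as this guide implies; the exact path depends on how the controller prefix resolves, so confirm it on the OpenAPI page at `/docs` before relying on it.

```bash
# Create a user (adjust the path if your controller prefix differs)
curl -X POST "http://127.0.0.1:8000/users" \
  -H "Content-Type: application/json" \
  -d '{"username": "jane", "email": "jane@example.com"}'
```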

You can find the source code for this project here.


Last update: January 27, 2024
\ No newline at end of file diff --git a/search/search_index.json b/search/search_index.json index ebd7a3d..b71c175 100644 --- a/search/search_index.json +++ b/search/search_index.json @@ -1 +1 @@ -{"config":{"indexing":"full","lang":["en"],"min_search_length":3,"prebuild_index":false,"separator":"[\\s\\-,:!=\\[\\]()\"`/]+|\\.(?!\\d)|&[lg]t;|(?!\\b)(?=[A-Z][a-z])"},"docs":[{"location":"","text":".md-content .md-typeset h1 { display: none; } EllarSQL is an SQL database Ellar Module. EllarSQL is an SQL database module, leveraging the robust capabilities of SQLAlchemy to seamlessly interact with SQL databases through Python code and objects. EllarSQL is meticulously designed to streamline the integration of SQLAlchemy within your Ellar application. It introduces discerning usage patterns around pivotal objects such as model , session , and engine , ensuring an efficient and coherent workflow. Notably, EllarSQL refrains from altering the fundamental workings or usage of SQLAlchemy. This documentation is focused on the meticulous setup of EllarSQL. For an in-depth exploration of SQLAlchemy, we recommend referring to the comprehensive SQLAlchemy documentation . Feature Highlights \u00b6 EllarSQL comes packed with a set of awesome features designed: Migration : Enjoy an async-first migration solution that seamlessly handles both single and multiple database setups and for both async and sync database engines configuration. Single/Multiple Database : EllarSQL provides an intuitive setup for models with different databases, allowing you to manage your data across various sources effortlessly. Pagination : EllarSQL introduces SQLAlchemy Paginator for API/Templated routes, along with support for other fantastic SQLAlchemy pagination tools. Unlimited Compatibility : EllarSQL plays nice with the entire SQLAlchemy ecosystem. Whether you're using third-party tools or exploring the vast SQLAlchemy landscape, EllarSQL seamlessly integrates with your preferred tooling. Requirements \u00b6 EllarSQL core dependencies includes: Python >= 3.8 Ellar >= 0.6.7 SQLAlchemy >= 2.0.16 Alembic >= 1.10.0 Installation \u00b6 pip install ellar-sql Quick Example \u00b6 Let's create a simple User model. from ellar_sql import model class User ( model . Model ): id : model . Mapped [ int ] = model . mapped_column ( model . Integer , primary_key = True ) username : model . Mapped [ str ] = model . mapped_column ( model . String , unique = True , nullable = False ) email : model . Mapped [ str ] = model . mapped_column ( model . String ) Let's create app.db with User table in it. For that we need to set up EllarSQLService as shown below: from ellar_sql import EllarSQLService db_service = EllarSQLService ( databases = 'sqlite:///app.db' , echo = True , ) db_service . create_all () If you check your execution directory, you will see sqlite directory with app.db . Let's populate our User table. To do, we need a session, which is available at db_service.session_factory from ellar_sql import EllarSQLService , model class User ( model . Model ): id : model . Mapped [ int ] = model . mapped_column ( model . Integer , primary_key = True ) username : model . Mapped [ str ] = model . mapped_column ( model . String , unique = True , nullable = False ) email : model . Mapped [ str ] = model . mapped_column ( model . String ) db_service = EllarSQLService ( databases = 'sqlite:///app.db' , echo = True , ) db_service . create_all () session = db_service . session_factory () for i in range ( 50 ): session . 
add ( User ( username = f 'username- { i + 1 } ' , email = f 'user { i + 1 } doe@example.com' )) session . commit () rows = session . execute ( model . select ( User )) . scalars () all_users = [ row . dict () for row in rows ] assert len ( all_users ) == 50 session . close () We have successfully seed 50 users to User table in app.db . I know at this point you want to know more, so let's dive deep into the documents and get started .","title":"Index"},{"location":"#feature-highlights","text":"EllarSQL comes packed with a set of awesome features designed: Migration : Enjoy an async-first migration solution that seamlessly handles both single and multiple database setups and for both async and sync database engines configuration. Single/Multiple Database : EllarSQL provides an intuitive setup for models with different databases, allowing you to manage your data across various sources effortlessly. Pagination : EllarSQL introduces SQLAlchemy Paginator for API/Templated routes, along with support for other fantastic SQLAlchemy pagination tools. Unlimited Compatibility : EllarSQL plays nice with the entire SQLAlchemy ecosystem. Whether you're using third-party tools or exploring the vast SQLAlchemy landscape, EllarSQL seamlessly integrates with your preferred tooling.","title":"Feature Highlights"},{"location":"#requirements","text":"EllarSQL core dependencies includes: Python >= 3.8 Ellar >= 0.6.7 SQLAlchemy >= 2.0.16 Alembic >= 1.10.0","title":"Requirements"},{"location":"#installation","text":"pip install ellar-sql","title":"Installation"},{"location":"#quick-example","text":"Let's create a simple User model. from ellar_sql import model class User ( model . Model ): id : model . Mapped [ int ] = model . mapped_column ( model . Integer , primary_key = True ) username : model . Mapped [ str ] = model . mapped_column ( model . String , unique = True , nullable = False ) email : model . Mapped [ str ] = model . mapped_column ( model . String ) Let's create app.db with User table in it. For that we need to set up EllarSQLService as shown below: from ellar_sql import EllarSQLService db_service = EllarSQLService ( databases = 'sqlite:///app.db' , echo = True , ) db_service . create_all () If you check your execution directory, you will see sqlite directory with app.db . Let's populate our User table. To do, we need a session, which is available at db_service.session_factory from ellar_sql import EllarSQLService , model class User ( model . Model ): id : model . Mapped [ int ] = model . mapped_column ( model . Integer , primary_key = True ) username : model . Mapped [ str ] = model . mapped_column ( model . String , unique = True , nullable = False ) email : model . Mapped [ str ] = model . mapped_column ( model . String ) db_service = EllarSQLService ( databases = 'sqlite:///app.db' , echo = True , ) db_service . create_all () session = db_service . session_factory () for i in range ( 50 ): session . add ( User ( username = f 'username- { i + 1 } ' , email = f 'user { i + 1 } doe@example.com' )) session . commit () rows = session . execute ( model . select ( User )) . scalars () all_users = [ row . dict () for row in rows ] assert len ( all_users ) == 50 session . close () We have successfully seed 50 users to User table in app.db . 
I know at this point you want to know more, so let's dive deep into the documents and get started .","title":"Quick Example"},{"location":"advance/","text":"","title":"Index"},{"location":"migrations/","text":"Migrations \u00b6 EllarSQL also extends Alembic package to add migration functionality and make database operations accessible through EllarCLI commandline interface. EllarSQL with Alembic does not override Alembic action rather provide Alembic all the configs/information it needs to for a proper migration/database operations. Its also still possible to use Alembic outside EllarSQL setup when necessary. This section is inspired by Flask Migrate Quick Example \u00b6 We assume you have set up EllarSQLModule in your application, and you have specified migration_options . Create a simple User model as shown below: from ellar_sql import model class User ( model . Model ): id = model . Column ( model . Integer , primary_key = True ) name = model . Column ( model . String ( 128 )) Initialize migration template \u00b6 With the Model setup, run the command below # Initialize the database ellar db init Executing this command will incorporate a migrations folder into your application structure. Ensure that the contents of this folder are included in version control alongside your other source files. Following the initialization, you can generate an initial migration using the command: # Generate the initial migration ellar db migrate -m \"Initial migration.\" Few things to do after generating a migration file: Review and edit the migration script Alembic may not detect certain changes automatically, such as table and column name modifications or unnamed constraints. Refer to the Alembic autogenerate documentation for a comprehensive list of limitations. Add the finalized migration script to version control Ensure that the edited script is committed along with your source code changes Apply the changes described in the migration script to your database ellar db upgrade Whenever there are changes to the database models, it's necessary to repeat the migrate and upgrade commands. For synchronizing the database on another system, simply refresh the migrations folder from the source control repository and execute the upgrade command. This ensures that the database structure aligns with the latest changes in the models. Multiple Database Migration \u00b6 If your application utilizes multiple databases, a distinct Alembic template for migration is required. To enable this, include -m or --multi with the db init command, as demonstrated below: ellar db init --multi Command Reference \u00b6 All Alembic commands are expose to Ellar CLI under db group after a successful EllarSQLModule setup. To see all the commands that are available run this command: ellar db --help # output Usage: ellar db [ OPTIONS ] COMMAND [ ARGS ] ... - Perform Alembic Database Commands - Options: --help Show this message and exit. Commands: branches - Show current branch points check Check if there are any new operations to migrate current - Display the current revision for each database. downgrade - Revert to a previous version edit - Edit a revision file heads - Show current available heads in the script directory history - List changeset scripts in chronological order. init Creates a new migration repository. merge - Merge two revisions together, creating a new revision file migrate - Autogenerate a new revision file ( Alias for 'revision... revision - Create a new revision file. 
show - Show the revision denoted by the given symbol. stamp - ' stamp ' the revision table with the given revision; don' t... upgrade - Upgrade to a later version ellar db --help Shows a list of available commands. ellar db revision [--message MESSAGE] [--autogenerate] [--sql] [--head HEAD] [--splice] [--branch-label BRANCH_LABEL] [--version-path VERSION_PATH] [--rev-id REV_ID] Creates an empty revision script. The script needs to be edited manually with the upgrade and downgrade changes. See Alembic\u2019s documentation for instructions on how to write migration scripts. An optional migration message can be included. ellar db migrate [--message MESSAGE] [--sql] [--head HEAD] [--splice] [--branch-label BRANCH_LABEL] [--version-path VERSION_PATH] [--rev-id REV_ID] Equivalent to revision --autogenerate. The migration script is populated with changes detected automatically. The generated script should be reviewed and edited as not all types of changes can be detected automatically. This command does not make any changes to the database, just creates the revision script. ellar db check Checks that a migrate command would not generate any changes. If pending changes are detected, the command exits with a non-zero status code. ellar db edit Edit a revision script using $EDITOR. ellar db upgrade [--sql] [--tag TAG] Upgrades the database. If revision isn\u2019t given, then \"head\" is assumed. ellar db downgrade [--sql] [--tag TAG] Downgrades the database. If revision isn\u2019t given, then -1 is assumed. ellar db stamp [--sql] [--tag TAG] Sets the revision in the database to the one given as an argument, without performing any migrations. ellar db current [--verbose] Shows the current revision of the database. ellar db history [--rev-range REV_RANGE] [--verbose] Shows the list of migrations. If a range isn\u2019t given, then the entire history is shown. ellar db show Show the revision denoted by the given symbol. ellar db merge [--message MESSAGE] [--branch-label BRANCH_LABEL] [--rev-id REV_ID] Merge two revisions together. Create a new revision file. ellar db heads [--verbose] [--resolve-dependencies] Show current available heads in the revision script directory. ellar db branches [--verbose] Show current branch points.","title":"index"},{"location":"migrations/#migrations","text":"EllarSQL also extends Alembic package to add migration functionality and make database operations accessible through EllarCLI commandline interface. EllarSQL with Alembic does not override Alembic action rather provide Alembic all the configs/information it needs to for a proper migration/database operations. Its also still possible to use Alembic outside EllarSQL setup when necessary. This section is inspired by Flask Migrate","title":"Migrations"},{"location":"migrations/#quick-example","text":"We assume you have set up EllarSQLModule in your application, and you have specified migration_options . Create a simple User model as shown below: from ellar_sql import model class User ( model . Model ): id = model . Column ( model . Integer , primary_key = True ) name = model . Column ( model . String ( 128 ))","title":"Quick Example"},{"location":"migrations/#initialize-migration-template","text":"With the Model setup, run the command below # Initialize the database ellar db init Executing this command will incorporate a migrations folder into your application structure. Ensure that the contents of this folder are included in version control alongside your other source files. 
Following the initialization, you can generate an initial migration using the command: # Generate the initial migration ellar db migrate -m \"Initial migration.\" Few things to do after generating a migration file: Review and edit the migration script Alembic may not detect certain changes automatically, such as table and column name modifications or unnamed constraints. Refer to the Alembic autogenerate documentation for a comprehensive list of limitations. Add the finalized migration script to version control Ensure that the edited script is committed along with your source code changes Apply the changes described in the migration script to your database ellar db upgrade Whenever there are changes to the database models, it's necessary to repeat the migrate and upgrade commands. For synchronizing the database on another system, simply refresh the migrations folder from the source control repository and execute the upgrade command. This ensures that the database structure aligns with the latest changes in the models.","title":"Initialize migration template"},{"location":"migrations/#multiple-database-migration","text":"If your application utilizes multiple databases, a distinct Alembic template for migration is required. To enable this, include -m or --multi with the db init command, as demonstrated below: ellar db init --multi","title":"Multiple Database Migration"},{"location":"migrations/#command-reference","text":"All Alembic commands are expose to Ellar CLI under db group after a successful EllarSQLModule setup. To see all the commands that are available run this command: ellar db --help # output Usage: ellar db [ OPTIONS ] COMMAND [ ARGS ] ... - Perform Alembic Database Commands - Options: --help Show this message and exit. Commands: branches - Show current branch points check Check if there are any new operations to migrate current - Display the current revision for each database. downgrade - Revert to a previous version edit - Edit a revision file heads - Show current available heads in the script directory history - List changeset scripts in chronological order. init Creates a new migration repository. merge - Merge two revisions together, creating a new revision file migrate - Autogenerate a new revision file ( Alias for 'revision... revision - Create a new revision file. show - Show the revision denoted by the given symbol. stamp - ' stamp ' the revision table with the given revision; don' t... upgrade - Upgrade to a later version ellar db --help Shows a list of available commands. ellar db revision [--message MESSAGE] [--autogenerate] [--sql] [--head HEAD] [--splice] [--branch-label BRANCH_LABEL] [--version-path VERSION_PATH] [--rev-id REV_ID] Creates an empty revision script. The script needs to be edited manually with the upgrade and downgrade changes. See Alembic\u2019s documentation for instructions on how to write migration scripts. An optional migration message can be included. ellar db migrate [--message MESSAGE] [--sql] [--head HEAD] [--splice] [--branch-label BRANCH_LABEL] [--version-path VERSION_PATH] [--rev-id REV_ID] Equivalent to revision --autogenerate. The migration script is populated with changes detected automatically. The generated script should be reviewed and edited as not all types of changes can be detected automatically. This command does not make any changes to the database, just creates the revision script. ellar db check Checks that a migrate command would not generate any changes. 
If pending changes are detected, the command exits with a non-zero status code. ellar db edit Edit a revision script using $EDITOR. ellar db upgrade [--sql] [--tag TAG] Upgrades the database. If revision isn\u2019t given, then \"head\" is assumed. ellar db downgrade [--sql] [--tag TAG] Downgrades the database. If revision isn\u2019t given, then -1 is assumed. ellar db stamp [--sql] [--tag TAG] Sets the revision in the database to the one given as an argument, without performing any migrations. ellar db current [--verbose] Shows the current revision of the database. ellar db history [--rev-range REV_RANGE] [--verbose] Shows the list of migrations. If a range isn\u2019t given, then the entire history is shown. ellar db show Show the revision denoted by the given symbol. ellar db merge [--message MESSAGE] [--branch-label BRANCH_LABEL] [--rev-id REV_ID] Merge two revisions together. Create a new revision file. ellar db heads [--verbose] [--resolve-dependencies] Show current available heads in the revision script directory. ellar db branches [--verbose] Show current branch points.","title":"Command Reference"},{"location":"migrations/env/","text":"Alembic Env \u00b6 In the generated migration template, EllarSQL adopts an async-first approach for handling migration file generation. This approach simplifies the execution of migrations for both Session , Engine , AsyncSession , and AsyncEngine , but it also introduces a certain level of complexity. from logging.config import fileConfig from alembic import context from ellar.app import current_injector from ellar.threading import execute_coroutine_with_sync_worker from ellar_sql.migrations import SingleDatabaseAlembicEnvMigration from ellar_sql.services import EllarSQLService # this is the Alembic Config object, which provides # access to the values within the .ini file in use. config = context . config # Interpret the config file for Python logging. # This line sets up loggers basically. fileConfig ( config . config_file_name ) # type:ignore[arg-type] # logger = logging.getLogger(\"alembic.env\") # other values from the config, defined by the needs of env.py, # can be acquired: # my_important_option = config.get_main_option(\"my_important_option\") # ... etc. async def main () -> None : db_service : EllarSQLService = current_injector . get ( EllarSQLService ) # initialize migration class alembic_env_migration = SingleDatabaseAlembicEnvMigration ( db_service ) if context . is_offline_mode (): alembic_env_migration . run_migrations_offline ( context ) # type:ignore[arg-type] else : await alembic_env_migration . run_migrations_online ( context ) # type:ignore[arg-type] execute_coroutine_with_sync_worker ( main ()) The EllarSQL migration package provides two main migration classes: SingleDatabaseAlembicEnvMigration : Manages migrations for a single database configuration, catering to both Engine and AsyncEngine . MultipleDatabaseAlembicEnvMigration : Manages migrations for multiple database configurations, covering both Engine and AsyncEngine . Customizing the Env file \u00b6 To customize or edit the Env file, it is recommended to inherit from either SingleDatabaseAlembicEnvMigration or MultipleDatabaseAlembicEnvMigration based on your specific configuration. Make the necessary changes within the inherited class. If you prefer to write something from scratch, then the abstract class AlembicEnvMigrationBase is the starting point. 
This class includes three abstract methods and expects a EllarSQLService during initialization, as demonstrated below: class AlembicEnvMigrationBase : def __init__ ( self , db_service : EllarSQLService ) -> None : self . db_service = db_service self . use_two_phase = db_service . migration_options . use_two_phase @abstractmethod def default_process_revision_directives ( self , context : \"MigrationContext\" , revision : RevisionArgs , directives : t . List [ \"MigrationScript\" ], ) -> t . Any : pass @abstractmethod def run_migrations_offline ( self , context : \"EnvironmentContext\" ) -> None : pass @abstractmethod async def run_migrations_online ( self , context : \"EnvironmentContext\" ) -> None : pass The run_migrations_online and run_migrations_offline are all similar to the same function from Alembic env.py template. The default_process_revision_directives is a callback is used to prevent an auto-migration from being generated when there are no changes to the schema described in details here Example \u00b6 import logging from logging.config import fileConfig from alembic import context from ellar_sql.migrations import AlembicEnvMigrationBase from ellar_sql.model.database_binds import get_metadata from ellar.app import current_injector from ellar.threading import execute_coroutine_with_sync_worker from ellar_sql.services import EllarSQLService # This is the Alembic Config object, which provides # access to the values within the .ini file in use. config = context . config logger = logging . getLogger ( \"alembic.env\" ) # Interpret the config file for Python logging. # This line sets up loggers essentially. fileConfig ( config . config_file_name ) # type:ignore[arg-type] class MyCustomMigrationEnv ( AlembicEnvMigrationBase ): def default_process_revision_directives ( self , context , revision , directives , ) -> None : if getattr ( context . config . cmd_opts , \"autogenerate\" , False ): script = directives [ 0 ] if script . upgrade_ops . is_empty (): directives [:] = [] logger . info ( \"No changes in schema detected.\" ) def run_migrations_offline ( self , context : \"EnvironmentContext\" ) -> None : \"\"\"Run migrations in 'offline' mode. This configures the context with just a URL and not an Engine, though an Engine is acceptable here as well. By skipping the Engine creation we don't even need a DBAPI to be available. Calls to context.execute() here emit the given string to the script output. \"\"\" pass async def run_migrations_online ( self , context : \"EnvironmentContext\" ) -> None : \"\"\"Run migrations in 'online' mode. In this scenario, we need to create an Engine and associate a connection with the context. \"\"\" key , engine = self . db_service . engines . popitem () metadata = get_metadata ( key , certain = True ) . metadata conf_args = {} conf_args . setdefault ( \"process_revision_directives\" , self . default_process_revision_directives ) with engine . connect () as connection : context . configure ( connection = connection , target_metadata = metadata , ** conf_args ) with context . begin_transaction (): context . run_migrations () async def main () -> None : db_service : EllarSQLService = current_injector . get ( EllarSQLService ) # initialize migration class alembic_env_migration = MyCustomMigrationEnv ( db_service ) if context . is_offline_mode (): alembic_env_migration . run_migrations_offline ( context ) else : await alembic_env_migration . 
run_migrations_online ( context ) execute_coroutine_with_sync_worker ( main ()) This migration environment class, MyCustomMigrationEnv , inherits from AlembicEnvMigrationBase and provides the necessary methods for offline and online migrations. It utilizes the EllarSQLService to obtain the database engines and metadata for the migration process. The main function initializes and executes the migration class, with specific handling for offline and online modes.","title":"customizing env"},{"location":"migrations/env/#alembic-env","text":"In the generated migration template, EllarSQL adopts an async-first approach for handling migration file generation. This approach simplifies the execution of migrations for both Session , Engine , AsyncSession , and AsyncEngine , but it also introduces a certain level of complexity. from logging.config import fileConfig from alembic import context from ellar.app import current_injector from ellar.threading import execute_coroutine_with_sync_worker from ellar_sql.migrations import SingleDatabaseAlembicEnvMigration from ellar_sql.services import EllarSQLService # this is the Alembic Config object, which provides # access to the values within the .ini file in use. config = context . config # Interpret the config file for Python logging. # This line sets up loggers basically. fileConfig ( config . config_file_name ) # type:ignore[arg-type] # logger = logging.getLogger(\"alembic.env\") # other values from the config, defined by the needs of env.py, # can be acquired: # my_important_option = config.get_main_option(\"my_important_option\") # ... etc. async def main () -> None : db_service : EllarSQLService = current_injector . get ( EllarSQLService ) # initialize migration class alembic_env_migration = SingleDatabaseAlembicEnvMigration ( db_service ) if context . is_offline_mode (): alembic_env_migration . run_migrations_offline ( context ) # type:ignore[arg-type] else : await alembic_env_migration . run_migrations_online ( context ) # type:ignore[arg-type] execute_coroutine_with_sync_worker ( main ()) The EllarSQL migration package provides two main migration classes: SingleDatabaseAlembicEnvMigration : Manages migrations for a single database configuration, catering to both Engine and AsyncEngine . MultipleDatabaseAlembicEnvMigration : Manages migrations for multiple database configurations, covering both Engine and AsyncEngine .","title":"Alembic Env"},{"location":"migrations/env/#customizing-the-env-file","text":"To customize or edit the Env file, it is recommended to inherit from either SingleDatabaseAlembicEnvMigration or MultipleDatabaseAlembicEnvMigration based on your specific configuration. Make the necessary changes within the inherited class. If you prefer to write something from scratch, then the abstract class AlembicEnvMigrationBase is the starting point. This class includes three abstract methods and expects a EllarSQLService during initialization, as demonstrated below: class AlembicEnvMigrationBase : def __init__ ( self , db_service : EllarSQLService ) -> None : self . db_service = db_service self . use_two_phase = db_service . migration_options . use_two_phase @abstractmethod def default_process_revision_directives ( self , context : \"MigrationContext\" , revision : RevisionArgs , directives : t . List [ \"MigrationScript\" ], ) -> t . 
Any : pass @abstractmethod def run_migrations_offline ( self , context : \"EnvironmentContext\" ) -> None : pass @abstractmethod async def run_migrations_online ( self , context : \"EnvironmentContext\" ) -> None : pass The run_migrations_online and run_migrations_offline are all similar to the same function from Alembic env.py template. The default_process_revision_directives is a callback is used to prevent an auto-migration from being generated when there are no changes to the schema described in details here","title":"Customizing the Env file"},{"location":"migrations/env/#example","text":"import logging from logging.config import fileConfig from alembic import context from ellar_sql.migrations import AlembicEnvMigrationBase from ellar_sql.model.database_binds import get_metadata from ellar.app import current_injector from ellar.threading import execute_coroutine_with_sync_worker from ellar_sql.services import EllarSQLService # This is the Alembic Config object, which provides # access to the values within the .ini file in use. config = context . config logger = logging . getLogger ( \"alembic.env\" ) # Interpret the config file for Python logging. # This line sets up loggers essentially. fileConfig ( config . config_file_name ) # type:ignore[arg-type] class MyCustomMigrationEnv ( AlembicEnvMigrationBase ): def default_process_revision_directives ( self , context , revision , directives , ) -> None : if getattr ( context . config . cmd_opts , \"autogenerate\" , False ): script = directives [ 0 ] if script . upgrade_ops . is_empty (): directives [:] = [] logger . info ( \"No changes in schema detected.\" ) def run_migrations_offline ( self , context : \"EnvironmentContext\" ) -> None : \"\"\"Run migrations in 'offline' mode. This configures the context with just a URL and not an Engine, though an Engine is acceptable here as well. By skipping the Engine creation we don't even need a DBAPI to be available. Calls to context.execute() here emit the given string to the script output. \"\"\" pass async def run_migrations_online ( self , context : \"EnvironmentContext\" ) -> None : \"\"\"Run migrations in 'online' mode. In this scenario, we need to create an Engine and associate a connection with the context. \"\"\" key , engine = self . db_service . engines . popitem () metadata = get_metadata ( key , certain = True ) . metadata conf_args = {} conf_args . setdefault ( \"process_revision_directives\" , self . default_process_revision_directives ) with engine . connect () as connection : context . configure ( connection = connection , target_metadata = metadata , ** conf_args ) with context . begin_transaction (): context . run_migrations () async def main () -> None : db_service : EllarSQLService = current_injector . get ( EllarSQLService ) # initialize migration class alembic_env_migration = MyCustomMigrationEnv ( db_service ) if context . is_offline_mode (): alembic_env_migration . run_migrations_offline ( context ) else : await alembic_env_migration . run_migrations_online ( context ) execute_coroutine_with_sync_worker ( main ()) This migration environment class, MyCustomMigrationEnv , inherits from AlembicEnvMigrationBase and provides the necessary methods for offline and online migrations. It utilizes the EllarSQLService to obtain the database engines and metadata for the migration process. 
The main function initializes and executes the migration class, with specific handling for offline and online modes.","title":"Example"},{"location":"models/","text":"Quick Start \u00b6 In this segment, we will walk through the process of configuring EllarSQL within your Ellar application, ensuring that all essential services are registered, configurations are set, and everything is prepared for immediate use. Before we delve into the setup instructions, it is assumed that you possess a comprehensive understanding of how Ellar Modules operate. Installation \u00b6 Let us install all the required packages, assuming that your Python environment has been properly configured: For Existing Project: \u00b6 pip install ellar-sql For New Project : \u00b6 pip install ellar ellar-cli ellar-sql After a successful package installation, we need to scaffold a new project using ellar cli tool ellar new db-learning This will scaffold db-learning project with necessary file structure shown below. path/to/db-learning/ \u251c\u2500 db_learning/ \u2502 \u251c\u2500 apps/ \u2502 \u2502 \u251c\u2500 __init__.py \u2502 \u251c\u2500 core/ \u2502 \u251c\u2500 config.py \u2502 \u251c\u2500 domain \u2502 \u251c\u2500 root_module.py \u2502 \u251c\u2500 server.py \u2502 \u251c\u2500 __init__.py \u251c\u2500 tests/ \u2502 \u251c\u2500 __init__.py \u251c\u2500 pyproject.toml \u251c\u2500 README.md Next, in db_learning/ directory, we need to create a models.py . It will hold all our SQLAlchemy ORM Models for now. Creating a Model \u00b6 In models.py , we use ellar_sql.model.Model to create our SQLAlchemy ORM Models. db_learning/model.py from ellar_sql import model class User ( model . Model ): id : model . Mapped [ int ] = model . mapped_column ( model . Integer , primary_key = True ) username : model . Mapped [ str ] = model . mapped_column ( model . String , unique = True , nullable = False ) email : model . Mapped [ str ] = model . mapped_column ( model . String ) Info ellar_sql.model also exposes sqlalchemy , sqlalchemy.orm and sqlalchemy.event imports just for ease of import reference Create A UserController \u00b6 Let's create a controller that exposes our user data. db_learning/controller.py import ellar.common as ecm from ellar.pydantic import EmailStr from ellar_sql import model , get_or_404 from .models import User @ecm . Controller class UsersController ( ecm . ControllerBase ): @ecm . post ( \"/users\" ) def create_user ( self , username : ecm . Body [ str ], email : ecm . Body [ EmailStr ], session : ecm . Inject [ model . Session ]): user = User ( username = username , email = email ) session . add ( user ) session . commit () session . refresh ( user ) return user . dict () @ecm . get ( \"/users/{user_id:int}\" ) def user_by_id ( self , user_id : int ): user = get_or_404 ( User , user_id ) return user . dict () @ecm . get ( \"/\" ) async def user_list ( self , session : ecm . Inject [ model . Session ]): stmt = model . select ( User ) rows = session . execute ( stmt . offset ( 0 ) . limit ( 100 )) . scalars () return [ row . dict () for row in rows ] @ecm . get ( \"/{user_id:int}\" ) async def user_delete ( self , user_id : int , session : ecm . Inject [ model . Session ]): user = get_or_404 ( User , user_id ) session . delete ( user ) return { 'detail' : f 'User id= { user_id } Deleted successfully' } EllarSQLModule Setup \u00b6 In the root_module.py file, two main tasks need to be performed: Register the UsersController to make the /users endpoint available when starting the application. 
Configure the EllarSQLModule , which will set up and register essential services such as EllarSQLService , Session , and Engine . db_learning/root_module.py from ellar.common import Module , exception_handler , IExecutionContext , JSONResponse , Response , IApplicationStartup from ellar.app import App from ellar.core import ModuleBase from ellar_sql import EllarSQLModule , EllarSQLService from .controller import UsersController @Module ( modules = [ EllarSQLModule . setup ( databases = { 'default' : { 'url' : 'sqlite:///app.db' , 'echo' : True } }, migration_options = { 'directory' : 'migrations' } )], controllers = [ UsersController ] ) class ApplicationModule ( ModuleBase , IApplicationStartup ): async def on_startup ( self , app : App ) -> None : db_service = app . injector . get ( EllarSQLService ) db_service . create_all () @exception_handler ( 404 ) def exception_404_handler ( cls , ctx : IExecutionContext , exc : Exception ) -> Response : return JSONResponse ( dict ( detail = \"Resource not found.\" ), status_code = 404 ) In the provided code snippet: We registered UserController and EllarSQLModule with specific configurations for the database and migration options. For more details on EllarSQLModule configurations . In the on_startup method, we obtained the EllarSQLService from the Ellar Dependency Injection container using EllarSQLModule . Subsequently, we invoked the create_all() method to generate the necessary SQLAlchemy tables. With these configurations, the application is now ready for testing. ellar runserver --reload Additionally, please remember to uncomment the configurations for the OpenAPIModule in the server.py file to enable visualization and interaction with the /users endpoint. Once done, you can access the OpenAPI documentation at http://127.0.0.1:8000/docs .","title":"Get Started"},{"location":"models/#quick-start","text":"In this segment, we will walk through the process of configuring EllarSQL within your Ellar application, ensuring that all essential services are registered, configurations are set, and everything is prepared for immediate use. Before we delve into the setup instructions, it is assumed that you possess a comprehensive understanding of how Ellar Modules operate.","title":"Quick Start"},{"location":"models/#installation","text":"Let us install all the required packages, assuming that your Python environment has been properly configured:","title":"Installation"},{"location":"models/#for-existing-project","text":"pip install ellar-sql","title":"For Existing Project:"},{"location":"models/#for-new-project","text":"pip install ellar ellar-cli ellar-sql After a successful package installation, we need to scaffold a new project using ellar cli tool ellar new db-learning This will scaffold db-learning project with necessary file structure shown below. path/to/db-learning/ \u251c\u2500 db_learning/ \u2502 \u251c\u2500 apps/ \u2502 \u2502 \u251c\u2500 __init__.py \u2502 \u251c\u2500 core/ \u2502 \u251c\u2500 config.py \u2502 \u251c\u2500 domain \u2502 \u251c\u2500 root_module.py \u2502 \u251c\u2500 server.py \u2502 \u251c\u2500 __init__.py \u251c\u2500 tests/ \u2502 \u251c\u2500 __init__.py \u251c\u2500 pyproject.toml \u251c\u2500 README.md Next, in db_learning/ directory, we need to create a models.py . It will hold all our SQLAlchemy ORM Models for now.","title":"For New Project:"},{"location":"models/#creating-a-model","text":"In models.py , we use ellar_sql.model.Model to create our SQLAlchemy ORM Models. 
db_learning/model.py from ellar_sql import model class User ( model . Model ): id : model . Mapped [ int ] = model . mapped_column ( model . Integer , primary_key = True ) username : model . Mapped [ str ] = model . mapped_column ( model . String , unique = True , nullable = False ) email : model . Mapped [ str ] = model . mapped_column ( model . String ) Info ellar_sql.model also exposes sqlalchemy , sqlalchemy.orm and sqlalchemy.event imports just for ease of import reference","title":"Creating a Model"},{"location":"models/#create-a-usercontroller","text":"Let's create a controller that exposes our user data. db_learning/controller.py import ellar.common as ecm from ellar.pydantic import EmailStr from ellar_sql import model , get_or_404 from .models import User @ecm . Controller class UsersController ( ecm . ControllerBase ): @ecm . post ( \"/users\" ) def create_user ( self , username : ecm . Body [ str ], email : ecm . Body [ EmailStr ], session : ecm . Inject [ model . Session ]): user = User ( username = username , email = email ) session . add ( user ) session . commit () session . refresh ( user ) return user . dict () @ecm . get ( \"/users/{user_id:int}\" ) def user_by_id ( self , user_id : int ): user = get_or_404 ( User , user_id ) return user . dict () @ecm . get ( \"/\" ) async def user_list ( self , session : ecm . Inject [ model . Session ]): stmt = model . select ( User ) rows = session . execute ( stmt . offset ( 0 ) . limit ( 100 )) . scalars () return [ row . dict () for row in rows ] @ecm . get ( \"/{user_id:int}\" ) async def user_delete ( self , user_id : int , session : ecm . Inject [ model . Session ]): user = get_or_404 ( User , user_id ) session . delete ( user ) return { 'detail' : f 'User id= { user_id } Deleted successfully' }","title":"Create A UserController"},{"location":"models/#ellarsqlmodule-setup","text":"In the root_module.py file, two main tasks need to be performed: Register the UsersController to make the /users endpoint available when starting the application. Configure the EllarSQLModule , which will set up and register essential services such as EllarSQLService , Session , and Engine . db_learning/root_module.py from ellar.common import Module , exception_handler , IExecutionContext , JSONResponse , Response , IApplicationStartup from ellar.app import App from ellar.core import ModuleBase from ellar_sql import EllarSQLModule , EllarSQLService from .controller import UsersController @Module ( modules = [ EllarSQLModule . setup ( databases = { 'default' : { 'url' : 'sqlite:///app.db' , 'echo' : True } }, migration_options = { 'directory' : 'migrations' } )], controllers = [ UsersController ] ) class ApplicationModule ( ModuleBase , IApplicationStartup ): async def on_startup ( self , app : App ) -> None : db_service = app . injector . get ( EllarSQLService ) db_service . create_all () @exception_handler ( 404 ) def exception_404_handler ( cls , ctx : IExecutionContext , exc : Exception ) -> Response : return JSONResponse ( dict ( detail = \"Resource not found.\" ), status_code = 404 ) In the provided code snippet: We registered UserController and EllarSQLModule with specific configurations for the database and migration options. For more details on EllarSQLModule configurations . In the on_startup method, we obtained the EllarSQLService from the Ellar Dependency Injection container using EllarSQLModule . Subsequently, we invoked the create_all() method to generate the necessary SQLAlchemy tables. 
With these configurations, the application is now ready for testing. ellar runserver --reload Additionally, please remember to uncomment the configurations for the OpenAPIModule in the server.py file to enable visualization and interaction with the /users endpoint. Once done, you can access the OpenAPI documentation at http://127.0.0.1:8000/docs .","title":"EllarSQLModule Setup"},{"location":"models/configuration/","text":"EllarSQLModule Config \u00b6 EllarSQLModule is an Ellar Dynamic Module that offers two ways of configuration: EllarSQLModule.register_setup() : This method registers a ModuleSetup that depends on the application config. EllarSQLModule.setup() : This method immediately sets up the module with the provided options. While we've explored many examples using EllarSQLModule.setup() , this section will focus on the usage of EllarSQLModule.register_setup() . Before delving into that, let's first explore the setup options available for EllarSQLModule . EllarSQLModule Configuration Parameters \u00b6 databases : typing.Union[str, typing.Dict[str, typing.Any]] : This field describes the options for your database engine, utilized by SQLAlchemy Engine , Metadata , and Sessions . There are three methods for setting these options, as illustrated below: ## CASE 1 databases = \"sqlite//:memory:\" # This will result to # databases = { # 'default': { # 'url': 'sqlite//:memory:' # } # } ## CASE 2 databases = { 'default' : \"sqlite//:memory:\" , 'db2' : \"sqlite//:memory:\" , } # This will result to # databases = { # 'default': { # 'url': 'sqlite//:memory:' # }, # 'db2': { # 'url': 'sqlite//:memory:' # }, # } ## CASE 3 - With Extra Engine Options databases = { 'default' : { \"url\" : \"sqlite//:memory:\" , \"echo\" : True , \"connect_args\" : { \"check_same_thread\" : False } } } migration_options : typing.Union[typing.Dict[str, typing.Any], MigrationOption] : The migration options can be specified either in a dictionary object or as a MigrationOption schema instance. These configurations are essential for defining the necessary settings for database migrations. The available options include: directory = migrations :directory to save alembic migration templates/env and migration versions use_two_phase = True : bool value that indicates use of two in migration SQLAlchemy session context_configure = {compare_type: True, render_as_batch: True, include_object: callable} : key-value pair that will be passed to EnvironmentContext.configure . Default context_configure set by EllarSQL: compare_type=True : This option configures the automatic migration generation subsystem to detect column type changes. render_as_batch=True : This option generates migration scripts using batch mode , an operational mode that works around limitations of many ALTER commands in the SQLite database by implementing a \u201cmove and copy\u201d workflow. include_object : Skips model from auto gen when it's defined in table args eg: __table_args__ = {\"info\": {\"skip_autogen\": True}} session_options : t.Optional[t.Dict[str, t.Any]] : A default key-value pair pass to SQLAlchemy.Session() when creating a session. engine_options : t.Optional[t.Dict[str, t.Any]] : A default key-value pair to pass to every database configuration engine configuration for SQLAlchemy.create_engine() . This overriden by configurations provided in databases parameters models : t.Optional[t.List[str]] : list of python modules that defines model.Model models. 
By providing this, EllarSQL ensures models are discovered before Alembic CLI migration actions or any other database interactions with SQLAlchemy. echo : bool : The default value for echo and echo_pool for every engine. This is useful to quickly debug the connections and queries issued from SQLAlchemy. root_path : t.Optional[str] : The root_path for sqlite databases and migration base directory. Defaults to the execution path of EllarSQLModule Connection URL Format \u00b6 Refer to SQLAlchemy\u2019s documentation on Engine Configuration for a comprehensive overview of syntax, dialects, and available options. The standard format for a basic database connection URL is as follows: Username, password, host, and port are optional parameters based on the database type and specific configuration. dialect://username:password@host:port/database Here are some example connection strings: # SQLite, relative to Flask instance path sqlite:///project.db # PostgreSQL postgresql://scott:tiger@localhost/project # MySQL / MariaDB mysql://scott:tiger@localhost/project Default Driver Options \u00b6 To enhance usability for web applications, default options have been configured for SQLite and MySQL engines. For SQLite, relative file paths are now relative to the root_path option rather than the current working directory. Additionally, in-memory databases utilize a static pool and check_same_thread to ensure seamless operation across multiple requests. For MySQL (and MariaDB ) servers, a default idle connection timeout of 8 hours has been set. This configuration helps avoid errors, such as 2013: Lost connection to MySQL server during query. To preemptively recreate connections before hitting this timeout, a default pool_recycle value of 2 hours ( 7200 seconds) is applied. Timeout \u00b6 Certain databases, including MySQL and MariaDB , might be set to close inactive connections after a certain duration, which can lead to errors like 2013: Lost connection to MySQL server during query. While this behavior is configured by default in MySQL and MariaDB, it could also be implemented by other database services. If you encounter such errors, consider adjusting the pool_recycle option in the engine settings to a value less than the database's timeout. Alternatively, you can explore setting pool_pre_ping if you anticipate frequent closure of connections, especially in scenarios like running the database in a container that may undergo periodic restarts. For more in-depth information on dealing with disconnects , refer to SQLAlchemy's documentation on handling connection issues. EllarSQLModule RegisterSetup \u00b6 As mentioned earlier, EllarSQLModule can be configured from the application through EllarSQLModule.register_setup . This process registers a ModuleSetup factory that depends on the Application Config object. The factory retrieves the ELLAR_SQL attribute from the config and validates the data before passing it to EllarSQLModule for setup. It's essential to note that ELLAR_SQL will be a dictionary object with the configuration parameters mentioned above as keys. Here's a quick example: db_learning/root_module.py from ellar.common import Module , exception_handler , IExecutionContext , JSONResponse , Response , IApplicationStartup from ellar.app import App from ellar.core import ModuleBase from ellar_sql import EllarSQLModule , EllarSQLService from .controller import UsersController @Module ( modules = [ EllarSQLModule . 
register_setup ()], controllers = [ UsersController ] ) class ApplicationModule ( ModuleBase , IApplicationStartup ): async def on_startup ( self , app : App ) -> None : db_service = app . injector . get ( EllarSQLService ) db_service . create_all () @exception_handler ( 404 ) def exception_404_handler ( cls , ctx : IExecutionContext , exc : Exception ) -> Response : return JSONResponse ( dict ( detail = \"Resource not found.\" ), status_code = 404 ) Let's update config.py . import typing as t ... class DevelopmentConfig ( BaseConfig ): DEBUG : bool = True ELLAR_SQL : t . Dict [ str , t . Any ] = { 'databases' : { 'default' : 'sqlite:///app.db' , }, 'echo' : True , 'migration_options' : { 'directory' : 'migrations' # root directory will be determined based on where the module is instantiated. }, 'models' : [] } The registered ModuleSetup factory reads the ELLAR_SQL value and configures the EllarSQLModule appropriately. This approach is particularly useful when dealing with multiple environments. It allows for seamless modification of the ELLAR_SQL values in various environments such as Continuous Integration (CI), Development, Staging, or Production. You can easily change the settings for each environment and export the configurations as a string to be imported into ELLAR_CONFIG_MODULE .","title":"Configuration"},{"location":"models/configuration/#ellarsqlmodule-config","text":"EllarSQLModule is an Ellar Dynamic Module that offers two ways of configuration: EllarSQLModule.register_setup() : This method registers a ModuleSetup that depends on the application config. EllarSQLModule.setup() : This method immediately sets up the module with the provided options. While we've explored many examples using EllarSQLModule.setup() , this section will focus on the usage of EllarSQLModule.register_setup() . Before delving into that, let's first explore the setup options available for EllarSQLModule .","title":"EllarSQLModule Config"},{"location":"models/configuration/#ellarsqlmodule-configuration-parameters","text":"databases : typing.Union[str, typing.Dict[str, typing.Any]] : This field describes the options for your database engine, utilized by SQLAlchemy Engine , Metadata , and Sessions . There are three methods for setting these options, as illustrated below: ## CASE 1 databases = \"sqlite//:memory:\" # This will result to # databases = { # 'default': { # 'url': 'sqlite//:memory:' # } # } ## CASE 2 databases = { 'default' : \"sqlite//:memory:\" , 'db2' : \"sqlite//:memory:\" , } # This will result to # databases = { # 'default': { # 'url': 'sqlite//:memory:' # }, # 'db2': { # 'url': 'sqlite//:memory:' # }, # } ## CASE 3 - With Extra Engine Options databases = { 'default' : { \"url\" : \"sqlite//:memory:\" , \"echo\" : True , \"connect_args\" : { \"check_same_thread\" : False } } } migration_options : typing.Union[typing.Dict[str, typing.Any], MigrationOption] : The migration options can be specified either in a dictionary object or as a MigrationOption schema instance. These configurations are essential for defining the necessary settings for database migrations. The available options include: directory = migrations :directory to save alembic migration templates/env and migration versions use_two_phase = True : bool value that indicates use of two in migration SQLAlchemy session context_configure = {compare_type: True, render_as_batch: True, include_object: callable} : key-value pair that will be passed to EnvironmentContext.configure . 
Default context_configure set by EllarSQL: compare_type=True : This option configures the automatic migration generation subsystem to detect column type changes. render_as_batch=True : This option generates migration scripts using batch mode , an operational mode that works around limitations of many ALTER commands in the SQLite database by implementing a \u201cmove and copy\u201d workflow. include_object : Skips model from auto gen when it's defined in table args eg: __table_args__ = {\"info\": {\"skip_autogen\": True}} session_options : t.Optional[t.Dict[str, t.Any]] : A default key-value pair pass to SQLAlchemy.Session() when creating a session. engine_options : t.Optional[t.Dict[str, t.Any]] : A default key-value pair to pass to every database configuration engine configuration for SQLAlchemy.create_engine() . This overriden by configurations provided in databases parameters models : t.Optional[t.List[str]] : list of python modules that defines model.Model models. By providing this, EllarSQL ensures models are discovered before Alembic CLI migration actions or any other database interactions with SQLAlchemy. echo : bool : The default value for echo and echo_pool for every engine. This is useful to quickly debug the connections and queries issued from SQLAlchemy. root_path : t.Optional[str] : The root_path for sqlite databases and migration base directory. Defaults to the execution path of EllarSQLModule","title":"EllarSQLModule Configuration Parameters"},{"location":"models/configuration/#connection-url-format","text":"Refer to SQLAlchemy\u2019s documentation on Engine Configuration for a comprehensive overview of syntax, dialects, and available options. The standard format for a basic database connection URL is as follows: Username, password, host, and port are optional parameters based on the database type and specific configuration. dialect://username:password@host:port/database Here are some example connection strings: # SQLite, relative to Flask instance path sqlite:///project.db # PostgreSQL postgresql://scott:tiger@localhost/project # MySQL / MariaDB mysql://scott:tiger@localhost/project","title":"Connection URL Format"},{"location":"models/configuration/#default-driver-options","text":"To enhance usability for web applications, default options have been configured for SQLite and MySQL engines. For SQLite, relative file paths are now relative to the root_path option rather than the current working directory. Additionally, in-memory databases utilize a static pool and check_same_thread to ensure seamless operation across multiple requests. For MySQL (and MariaDB ) servers, a default idle connection timeout of 8 hours has been set. This configuration helps avoid errors, such as 2013: Lost connection to MySQL server during query. To preemptively recreate connections before hitting this timeout, a default pool_recycle value of 2 hours ( 7200 seconds) is applied.","title":"Default Driver Options"},{"location":"models/configuration/#timeout","text":"Certain databases, including MySQL and MariaDB , might be set to close inactive connections after a certain duration, which can lead to errors like 2013: Lost connection to MySQL server during query. While this behavior is configured by default in MySQL and MariaDB, it could also be implemented by other database services. If you encounter such errors, consider adjusting the pool_recycle option in the engine settings to a value less than the database's timeout. 
Alternatively, you can explore setting pool_pre_ping if you anticipate frequent closure of connections, especially in scenarios like running the database in a container that may undergo periodic restarts. For more in-depth information on dealing with disconnects , refer to SQLAlchemy's documentation on handling connection issues.","title":"Timeout"},{"location":"models/configuration/#ellarsqlmodule-registersetup","text":"As mentioned earlier, EllarSQLModule can be configured from the application through EllarSQLModule.register_setup . This process registers a ModuleSetup factory that depends on the Application Config object. The factory retrieves the ELLAR_SQL attribute from the config and validates the data before passing it to EllarSQLModule for setup. It's essential to note that ELLAR_SQL will be a dictionary object with the configuration parameters mentioned above as keys. Here's a quick example: db_learning/root_module.py from ellar.common import Module , exception_handler , IExecutionContext , JSONResponse , Response , IApplicationStartup from ellar.app import App from ellar.core import ModuleBase from ellar_sql import EllarSQLModule , EllarSQLService from .controller import UsersController @Module ( modules = [ EllarSQLModule . register_setup ()], controllers = [ UsersController ] ) class ApplicationModule ( ModuleBase , IApplicationStartup ): async def on_startup ( self , app : App ) -> None : db_service = app . injector . get ( EllarSQLService ) db_service . create_all () @exception_handler ( 404 ) def exception_404_handler ( cls , ctx : IExecutionContext , exc : Exception ) -> Response : return JSONResponse ( dict ( detail = \"Resource not found.\" ), status_code = 404 ) Let's update config.py . import typing as t ... class DevelopmentConfig ( BaseConfig ): DEBUG : bool = True ELLAR_SQL : t . Dict [ str , t . Any ] = { 'databases' : { 'default' : 'sqlite:///app.db' , }, 'echo' : True , 'migration_options' : { 'directory' : 'migrations' # root directory will be determined based on where the module is instantiated. }, 'models' : [] } The registered ModuleSetup factory reads the ELLAR_SQL value and configures the EllarSQLModule appropriately. This approach is particularly useful when dealing with multiple environments. It allows for seamless modification of the ELLAR_SQL values in various environments such as Continuous Integration (CI), Development, Staging, or Production. You can easily change the settings for each environment and export the configurations as a string to be imported into ELLAR_CONFIG_MODULE .","title":"EllarSQLModule RegisterSetup"},{"location":"models/extra-fields/","text":"Extra Column Types \u00b6 EllarSQL comes with extra column type descriptors that will come in handy in your project. They include: GUID IPAddress GUID Column \u00b6 GUID, a Globally Unique Identifier, is a 128-bit value that can be used as a unique identifier in a table. For applications that require a GUID-type primary key, this can be a useful resource. It uses the UUID type in Postgres and CHAR(32) in other SQL databases. import uuid from ellar_sql import model class Guid ( model . Model ): id : model . Mapped [ uuid . UUID ] = model . mapped_column ( \"id\" , model . GUID (), nullable = False , unique = True , primary_key = True , default = uuid . uuid4 , ) IPAddress Column \u00b6 GenericIP column type validates and converts column value to ipaddress.IPv4Address or ipaddress.IPv6Address . It uses the INET type in Postgres and CHAR(45) in other SQL databases.
import typing as t import ipaddress from ellar_sql import model class IPAddress ( model . Model ): id = model . Column ( model . Integer , primary_key = True ) ip : model . Mapped [ t . Union [ ipaddress . IPv4Address , ipaddress . IPv6Address ]] = model . Column ( model . GenericIP )","title":"Extra Fields"},{"location":"models/extra-fields/#extra-column-types","text":"EllarSQL comes with extra column type descriptor that will come in handy in your project. They include GUID IPAddress","title":"Extra Column Types"},{"location":"models/extra-fields/#guid-column","text":"GUID, Global Unique Identifier of 128-bit text string can be used as a unique identifier in a table. For applications that require a GUID type of primary, this can be a use resource. It uses UUID type in Postgres and CHAR(32) in other SQL databases. import uuid from ellar_sql import model class Guid ( model . Model ): id : model . Mapped [ uuid . uuid4 ] = model . mapped_column ( \"id\" , model . GUID (), nullable = False , unique = True , primary_key = True , default = uuid . uuid4 , )","title":"GUID Column"},{"location":"models/extra-fields/#ipaddress-column","text":"GenericIP column type validates and converts column value to ipaddress.IPv4Address or ipaddress.IPv6Address . It uses INET type in Postgres and CHAR(45) in other SQL databases. import typing as t import ipaddress from ellar_sql import model class IPAddress ( model . Model ): id = model . Column ( model . Integer , primary_key = True ) ip : model . Mapped [ t . Union [ ipaddress . IPv4Address , ipaddress . IPv6Address ]] = model . Column ( model . GenericIP )","title":"IPAddress Column"},{"location":"models/models/","text":"Models and Tables \u00b6 The ellar_sql.model.Model class acts as a factory for creating SQLAlchemy models, and associating the generated models with the corresponding Metadata through their designated __database__ key. This class can be configured through the __base_config__ attribute, allowing you to specify how your SQLAlchemy model should be created. The __base_config__ attribute can be of type ModelBaseConfig , which is a dataclass, or a dictionary with keys that match the attributes of ModelBaseConfig . Attributes of ModelBaseConfig : as_base : Indicates whether the class should be treated as a Base class for other model definitions, similar to creating a Base from a DeclarativeBase or DeclarativeBaseNoMeta class. (Default: False) use_base : Specifies the base classes that will be used to create the SQLAlchemy model. (Default: []) Creating a Base Class \u00b6 Model treats each model as a standalone entity. Each instance of model.Model creates a distinct declarative base for itself, using the __database__ key as a reference to determine its associated Metadata . Consequently, models sharing the same __database__ key will utilize the same Metadata object. Let's explore how we can create a Base model using Model , similar to the approach in traditional SQLAlchemy . from ellar_sql import model , ModelBaseConfig class Base ( model . Model ): __base_config__ = ModelBaseConfig ( as_base = True , use_bases = [ model . DeclarativeBase ]) assert issubclass ( Base , model . DeclarativeBase ) If you are interested in SQLAlchemy\u2019s native support for data classes , then you can add MappedAsDataclass to use_bases as shown below: from ellar_sql import model , ModelBaseConfig class Base ( model . Model ): __base_config__ = ModelBaseConfig ( as_base = True , use_bases = [ model . DeclarativeBase , model . 
MappedAsDataclass ]) assert issubclass ( Base , model . MappedAsDataclass ) In the examples above, Base classes are created, all subclassed from the use_bases provided, and with the as_base option, the factory creates the Base class as a Base . Create base with MetaData \u00b6 You can also configure the SQLAlchemy object with a custom MetaData object. For instance, you can define a specific naming convention for constraints, ensuring consistency and predictability in constraint names. This can be particularly beneficial during migrations, as detailed by Alembic . For example: from ellar_sql import model , ModelBaseConfig class Base ( model . Model ): __base_config__ = ModelBaseConfig ( as_base = True , use_bases = [ model . DeclarativeBase ]) metadata = model . MetaData ( naming_convention = { \"ix\" : 'ix_ %(column_0_label)s ' , \"uq\" : \"uq_ %(table_name)s _ %(column_0_name)s \" , \"ck\" : \"ck_ %(table_name)s _ %(constraint_name)s \" , \"fk\" : \"fk_ %(table_name)s _ %(column_0_name)s _ %(referred_table_name)s \" , \"pk\" : \"pk_ %(table_name)s \" }) Abstract Models and Mixins \u00b6 If the desired behavior is only applicable to specific models rather than all models, you can use an abstract model base class to customize only those models. For example, if certain models need to track their creation or update timestamps , t his approach allows for targeted customization. from datetime import datetime , timezone from ellar_sql import model from sqlalchemy.orm import Mapped , mapped_column class TimestampModel ( model . Model ): __abstract__ = True created : Mapped [ datetime ] = mapped_column ( default = lambda : datetime . now ( timezone . utc )) updated : Mapped [ datetime ] = mapped_column ( default = lambda : datetime . now ( timezone . utc ), onupdate = lambda : datetime . now ( timezone . utc )) class BookAuthor ( model . Model ): id : Mapped [ int ] = mapped_column ( primary_key = True ) name : Mapped [ str ] = mapped_column ( unique = True ) class Book ( TimestampModel ): id : Mapped [ int ] = mapped_column ( primary_key = True ) title : Mapped [ str ] This can also be done with a mixin class, inherited separately. from datetime import datetime , timezone from ellar_sql import model from sqlalchemy.orm import Mapped , mapped_column class TimestampModel : created : Mapped [ datetime ] = mapped_column ( default = lambda : datetime . now ( timezone . utc )) updated : Mapped [ datetime ] = mapped_column ( default = lambda : datetime . now ( timezone . utc ), onupdate = lambda : datetime . now ( timezone . utc )) class Book ( model . Model , TimestampModel ): id : Mapped [ int ] = mapped_column ( primary_key = True ) title : Mapped [ str ] Defining Models \u00b6 Unlike plain SQLAlchemy, EllarSQL models will automatically generate a table name if the __tablename__ attribute is not set, provided a primary key column is defined. from ellar_sql import model class User ( model . Model ): id : model . Mapped [ int ] = model . mapped_column ( primary_key = True ) username : model . Mapped [ str ] = model . mapped_column ( unique = True ) email : model . Mapped [ str ] class UserAddress ( model . Model ): __tablename__ = 'user-address' id : model . Mapped [ int ] = model . mapped_column ( primary_key = True ) address : model . Mapped [ str ] = model . mapped_column ( unique = True ) assert User . __tablename__ == 'user' assert UserAddress . __tablename__ == 'user-address' For a comprehensive guide on defining model classes declaratively, refer to SQLAlchemy\u2019s declarative documentation . 
This resource provides detailed information and insights into the declarative approach for defining model classes. Defining Tables \u00b6 The table class is designed to receive a table name, followed by columns and other table components such as constraints. EllarSQL enhances the functionality of the SQLAlchemy Table by facilitating the selection of Metadata based on the __database__ argument. Directly creating a table proves particularly valuable when establishing many-to-many relationships. In such cases, the association table doesn't need its dedicated model class; rather, it can be conveniently accessed through the relevant relationship attributes on the associated models. from ellar_sql import model author_book_m2m = model . Table ( \"author_book\" , model . Column ( \"book_author_id\" , model . ForeignKey ( BookAuthor . id ), primary_key = True ), model . Column ( \"book_id\" , model . ForeignKey ( Book . id ), primary_key = True ), ) Quick Tutorial \u00b6 In this section, we'll delve into straightforward CRUD operations using the ORM objects. However, if you're not well-acquainted with SQLAlchemy, feel free to explore their tutorial on ORM for a more comprehensive understanding. Having understood Model usage, let's create a User model: from ellar_sql import model class User ( model . Model ): id : model . Mapped [ int ] = model . mapped_column ( primary_key = True ) username : model . Mapped [ str ] = model . mapped_column ( unique = True ) full_name : model . Mapped [ str ] = model . mapped_column ( model . String ) We have created a User model, but its table does not yet exist in the database. Let's fix that: from ellar.app import current_injector from ellar_sql import EllarSQLService db_service = current_injector . get ( EllarSQLService ) db_service . create_all () Insert \u00b6 To insert data, you need a session: import ellar.common as ecm from .model import User @ecm . post ( '/create' ) def create_user (): session = User . get_db_session () squidward = User ( username = \"squidward\" , full_name = \"Squidward Tentacles\" ) session . add ( squidward ) session . commit () return squidward . dict ( exclude = { 'id' }) In the above illustration, the squidward data was converted to a dictionary object by calling .dict() and excluding the id , as shown in the snippet above. It's important to note that this functionality has not been extended to relationship objects on an SQLAlchemy ORM object . Update \u00b6 To update, make changes to the ORM object and commit. import ellar.common as ecm from .model import User @ecm . put ( '/update' ) def update_user (): session = User . get_db_session () squidward = session . get ( User , 1 ) squidward . full_name = 'EllarSQL' session . commit () return squidward . dict () Delete \u00b6 To delete, pass the ORM object to session.delete() . import ellar.common as ecm from .model import User @ecm . delete ( '/delete' ) def delete_user (): session = User . get_db_session () squidward = session . get ( User , 1 ) session . delete ( squidward ) session . commit () return '' After modifying data, you must call session.commit() to commit the changes to the database. Otherwise, changes may not be persisted to the database.
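Reading data follows the same pattern: build a select statement and execute it with the session. A minimal sketch (the /list route and its response shape are illustrative and not part of the tutorial):

import ellar.common as ecm
from ellar_sql import model
from .model import User

@ecm.get('/list')
def list_users():
    session = User.get_db_session()
    # run a SELECT through the model's session and serialize each row
    users = session.scalars(model.select(User))
    return [user.dict() for user in users]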
View Utilities \u00b6 EllarSQL provides some utility query functions to check for missing entities and raise a 404 Not Found error if nothing is found. get_or_404 : It will raise a 404 error if the row with the given id does not exist; otherwise, it will return the corresponding instance. first_or_404 : It will raise a 404 error if the query does not return any results; otherwise, it will return the first result. one_or_404() : It will raise a 404 error if the query does not return exactly one result; otherwise, it will return the result. import ellar.common as ecm from ellar_sql import get_or_404 , one_or_404 , model @ecm . get ( \"/user-by-id/{user_id:int}\" ) def user_by_id ( user_id : int ): user = get_or_404 ( User , user_id ) return user . dict () @ecm . get ( \"/user-by-name/{name:str}\" ) def user_by_username ( name : str ): user = one_or_404 ( model . select ( User ) . filter_by ( username = name ), error_message = f \"No user named ' { name } '.\" ) return user . dict () Accessing Metadata and Engines \u00b6 In the process of EllarSQLModule setup, three services are registered to the Ellar IoC container: EllarSQLService : which manages models, metadata, engines and sessions Engine : SQLAlchemy Engine of the default database configuration Session : SQLAlchemy Session of the default database configuration Although you can get the engine and session through EllarSQLService , the Engine and Session are also registered for ease of access. import sqlalchemy as sa import sqlalchemy.orm as sa_orm from ellar.app import current_injector from ellar_sql import EllarSQLService db_service = current_injector . get ( EllarSQLService ) assert isinstance ( db_service . engine , sa . Engine ) assert isinstance ( db_service . session_factory (), sa_orm . Session ) Important Constraints \u00b6 EllarSQLModule databases options for SQLAlchemy.ext.asyncio.AsyncEngine will register SQLAlchemy.ext.asyncio.AsyncEngine and SQLAlchemy.ext.asyncio.AsyncSession EllarSQLModule databases options for SQLAlchemy.Engine will register SQLAlchemy.Engine and SQLAlchemy.orm.Session . EllarSQL.get_all_metadata() retrieves all configured Metadata objects EllarSQL.get_metadata() retrieves metadata by __database__ key, or the default if no parameter is passed. import sqlalchemy as sa import sqlalchemy.orm as sa_orm from ellar.app import current_injector # get engine from DI default_engine = current_injector . get ( sa . Engine ) # get session from DI session = current_injector . get ( sa_orm . Session ) assert isinstance ( default_engine , sa . Engine ) assert isinstance ( session , sa_orm . Session ) For Async Database options: from sqlalchemy.ext.asyncio import AsyncSession , AsyncEngine from ellar.app import current_injector # get engine from DI default_engine = current_injector . get ( AsyncEngine ) # get session from DI session = current_injector . get ( AsyncSession ) assert isinstance ( default_engine , AsyncEngine ) assert isinstance ( session , AsyncSession )","title":"index"},{"location":"models/models/#models-and-tables","text":"The ellar_sql.model.Model class acts as a factory for creating SQLAlchemy models and associating the generated models with the corresponding Metadata through their designated __database__ key. This class can be configured through the __base_config__ attribute, allowing you to specify how your SQLAlchemy model should be created. The __base_config__ attribute can be of type ModelBaseConfig , which is a dataclass, or a dictionary with keys that match the attributes of ModelBaseConfig . Attributes of ModelBaseConfig : as_base : Indicates whether the class should be treated as a Base class for other model definitions, similar to creating a Base from a DeclarativeBase or DeclarativeBaseNoMeta class. (Default: False) use_bases : Specifies the base classes that will be used to create the SQLAlchemy model.
(Default: [])","title":"Models and Tables"},{"location":"models/models/#creating-a-base-class","text":"Model treats each model as a standalone entity. Each instance of model.Model creates a distinct declarative base for itself, using the __database__ key as a reference to determine its associated Metadata . Consequently, models sharing the same __database__ key will utilize the same Metadata object. Let's explore how we can create a Base model using Model , similar to the approach in traditional SQLAlchemy . from ellar_sql import model , ModelBaseConfig class Base ( model . Model ): __base_config__ = ModelBaseConfig ( as_base = True , use_bases = [ model . DeclarativeBase ]) assert issubclass ( Base , model . DeclarativeBase ) If you are interested in SQLAlchemy\u2019s native support for data classes , then you can add MappedAsDataclass to use_bases as shown below: from ellar_sql import model , ModelBaseConfig class Base ( model . Model ): __base_config__ = ModelBaseConfig ( as_base = True , use_bases = [ model . DeclarativeBase , model . MappedAsDataclass ]) assert issubclass ( Base , model . MappedAsDataclass ) In the examples above, Base classes are created, all subclassed from the use_bases provided, and with the as_base option, the factory creates the Base class as a Base .","title":"Creating a Base Class"},{"location":"models/models/#create-base-with-metadata","text":"You can also configure the SQLAlchemy object with a custom MetaData object. For instance, you can define a specific naming convention for constraints, ensuring consistency and predictability in constraint names. This can be particularly beneficial during migrations, as detailed by Alembic . For example: from ellar_sql import model , ModelBaseConfig class Base ( model . Model ): __base_config__ = ModelBaseConfig ( as_base = True , use_bases = [ model . DeclarativeBase ]) metadata = model . MetaData ( naming_convention = { \"ix\" : 'ix_ %(column_0_label)s ' , \"uq\" : \"uq_ %(table_name)s _ %(column_0_name)s \" , \"ck\" : \"ck_ %(table_name)s _ %(constraint_name)s \" , \"fk\" : \"fk_ %(table_name)s _ %(column_0_name)s _ %(referred_table_name)s \" , \"pk\" : \"pk_ %(table_name)s \" })","title":"Create base with MetaData"},{"location":"models/models/#abstract-models-and-mixins","text":"If the desired behavior is only applicable to specific models rather than all models, you can use an abstract model base class to customize only those models. For example, if certain models need to track their creation or update timestamps , t his approach allows for targeted customization. from datetime import datetime , timezone from ellar_sql import model from sqlalchemy.orm import Mapped , mapped_column class TimestampModel ( model . Model ): __abstract__ = True created : Mapped [ datetime ] = mapped_column ( default = lambda : datetime . now ( timezone . utc )) updated : Mapped [ datetime ] = mapped_column ( default = lambda : datetime . now ( timezone . utc ), onupdate = lambda : datetime . now ( timezone . utc )) class BookAuthor ( model . Model ): id : Mapped [ int ] = mapped_column ( primary_key = True ) name : Mapped [ str ] = mapped_column ( unique = True ) class Book ( TimestampModel ): id : Mapped [ int ] = mapped_column ( primary_key = True ) title : Mapped [ str ] This can also be done with a mixin class, inherited separately. 
from datetime import datetime , timezone from ellar_sql import model from sqlalchemy.orm import Mapped , mapped_column class TimestampModel : created : Mapped [ datetime ] = mapped_column ( default = lambda : datetime . now ( timezone . utc )) updated : Mapped [ datetime ] = mapped_column ( default = lambda : datetime . now ( timezone . utc ), onupdate = lambda : datetime . now ( timezone . utc )) class Book ( model . Model , TimestampModel ): id : Mapped [ int ] = mapped_column ( primary_key = True ) title : Mapped [ str ]","title":"Abstract Models and Mixins"},{"location":"models/models/#defining-models","text":"Unlike plain SQLAlchemy, EllarSQL models will automatically generate a table name if the __tablename__ attribute is not set, provided a primary key column is defined. from ellar_sql import model class User ( model . Model ): id : model . Mapped [ int ] = model . mapped_column ( primary_key = True ) username : model . Mapped [ str ] = model . mapped_column ( unique = True ) email : model . Mapped [ str ] class UserAddress ( model . Model ): __tablename__ = 'user-address' id : model . Mapped [ int ] = model . mapped_column ( primary_key = True ) address : model . Mapped [ str ] = model . mapped_column ( unique = True ) assert User . __tablename__ == 'user' assert UserAddress . __tablename__ == 'user-address' For a comprehensive guide on defining model classes declaratively, refer to SQLAlchemy\u2019s declarative documentation . This resource provides detailed information and insights into the declarative approach for defining model classes.","title":"Defining Models"},{"location":"models/models/#defining-tables","text":"The table class is designed to receive a table name, followed by columns and other table components such as constraints. EllarSQL enhances the functionality of the SQLAlchemy Table by facilitating the selection of Metadata based on the __database__ argument. Directly creating a table proves particularly valuable when establishing many-to-many relationships. In such cases, the association table doesn't need its dedicated model class; rather, it can be conveniently accessed through the relevant relationship attributes on the associated models. from ellar_sql import model author_book_m2m = model . Table ( \"author_book\" , model . Column ( \"book_author_id\" , model . ForeignKey ( BookAuthor . id ), primary_key = True ), model . Column ( \"book_id\" , model . ForeignKey ( Book . id ), primary_key = True ), )","title":"Defining Tables"},{"location":"models/models/#quick-tutorial","text":"In this section, we'll delve into straightforward CRUD operations using the ORM objects. However, if you're not well-acquainted with SQLAlchemy, feel free to explore their tutorial on ORM for a more comprehensive understanding. Having understood, Model usage. Let's create a User model from ellar_sql import model class User ( model . Model ): id : model . Mapped [ int ] = model . mapped_column ( primary_key = True ) username : model . Mapped [ str ] = model . mapped_column ( unique = True ) full_name : model . Mapped [ str ] = model . mapped_column ( model . String ) We have created a User model but the data does not exist. Let's fix that from ellar.app import current_injector from ellar_sql import EllarSQLService db_service = current_injector . get ( EllarSQLService ) db_service . create_all ()","title":"Quick Tutorial"},{"location":"models/models/#insert","text":"To insert a data, you need a session import ellar.common as ecm from .model import User @ecm . 
post ( '/create' ) def create_user (): session = User . get_db_session () squidward = User ( name = \"squidward\" , fullname = \"Squidward Tentacles\" ) session . add ( squidward ) session . commit () return squidward . dict ( exclude = { 'id' }) In the above illustration, squidward data was converted to dictionary object by calling .dict() and excluding the id as shown below. It's important to note this functionality has not been extended to a relationship objects in an SQLAlchemy ORM object .","title":"Insert"},{"location":"models/models/#update","text":"To update, make changes to the ORM object and commit. import ellar.common as ecm from .model import User @ecm . put ( '/update' ) def update_user (): session = User . get_db_session () squidward = session . get ( User , 1 ) squidward . fullname = 'EllarSQL' session . commit () return squidward . dict ()","title":"Update"},{"location":"models/models/#delete","text":"To delete, pass the ORM object to session.delete() . import ellar.common as ecm from .model import User @ecm . delete ( '/delete' ) def delete_user (): session = User . get_db_session () squidward = session . get ( User , 1 ) session . delete ( squidward ) session . commit () return '' After modifying data, you must call session.commit() to commit the changes to the database. Otherwise, changes may not be persisted to the database.","title":"Delete"},{"location":"models/models/#view-utilities","text":"EllarSQL provides some utility query functions to check missing entities and raise 404 Not found if not found. get_or_404 : It will raise a 404 error if the row with the given id does not exist; otherwise, it will return the corresponding instance. first_or_404 : It will raise a 404 error if the query does not return any results; otherwise, it will return the first result. one_or_404 (): It will raise a 404 error if the query does not return exactly one result; otherwise, it will return the result. import ellar.common as ecm from ellar_sql import get_or_404 , one_or_404 , model @ecm . get ( \"/user-by-id/{user_id:int}\" ) def user_by_id ( user_id : int ): user = get_or_404 ( User , user_id ) return user . dict () @ecm . get ( \"/user-by-name/{name:str}\" ) def user_by_username ( name : str ): user = one_or_404 ( model . select ( User ) . filter_by ( name = name ), error_message = f \"No user named ' { name } '.\" ) return user . dict ()","title":"View Utilities"},{"location":"models/models/#accessing-metadata-and-engines","text":"In the process of EllarSQLModule setup, three services are registered to the Ellar IoC container. EllarSQLService : Which manages models, metadata, engines and sessions Engine : SQLAlchemy Engine of the default database configuration Session SQLAlchemy Session of the default database configuration Although with EllarSQLService you can get the engine and session . It's there for easy of access. import sqlalchemy as sa import sqlalchemy.orm as sa_orm from ellar.app import current_injector from ellar_sql import EllarSQLService db_service = current_injector . get ( EllarSQLService ) assert isinstance ( db_service . engine , sa . Engine ) assert isinstance ( db_service . session_factory (), sa_orm . 
Session )","title":"Accessing Metadata and Engines"},{"location":"models/models/#important-constraints","text":"EllarSQLModule databases options for SQLAlchemy.ext.asyncio.AsyncEngine will register SQLAlchemy.ext.asyncio.AsyncEngine and SQLAlchemy.ext.asyncio.AsyncSession EllarSQLModule databases options for SQLAlchemy.Engine will register SQLAlchemy.Engine and SQLAlchemy.orm.Session . EllarSQL.get_all_metadata() retrieves all configured metadatas EllarSQL.get_metadata() retrieves metadata by __database__ key or default is no parameter is passed. import sqlalchemy as sa import sqlalchemy.orm as sa_orm from ellar.app import current_injector # get engine from DI default_engine = current_injector . get ( sa . Engine ) # get session from DI session = current_injector . get ( sa_orm . Session ) assert isinstance ( default_engine , sa . Engine ) assert isinstance ( session , sa_orm . Session ) For Async Database options from sqlalchemy.ext.asyncio import AsyncSession , AsyncEngine from ellar.app import current_injector # get engine from DI default_engine = current_injector . get ( AsyncEngine ) # get session from DI session = current_injector . get ( AsyncSession ) assert isinstance ( default_engine , AsyncEngine ) assert isinstance ( session , AsyncSession )","title":"Important Constraints"},{"location":"multiple/","text":"Multiple Databases \u00b6 SQLAlchemy has the capability to establish connections with multiple databases simultaneously, referring to these connections as \"binds.\" EllarSQL simplifies the management of binds by associating each engine with a short string identifier, __database__ . Subsequently, each model and table is linked to a __database__ , and during a query, the session selects the appropriate engine based on the __database__ of the entity being queried. In the absence of a specified __database__ , the default engine is employed. Configuring Multiple Databases \u00b6 In EllarSQL, database configuration begins with the setup of the default database, followed by additional databases, as exemplified in the EllarSQLModule configurations: from ellar_sql import EllarSQLModule EllarSQLModule . setup ( databases = { \"default\" : \"postgresql:///main\" , \"meta\" : \"sqlite:////path/to/meta.db\" , \"auth\" : { \"url\" : \"mysql://localhost/users\" , \"pool_recycle\" : 3600 , }, }, migration_options = { 'directory' : 'migrations' } ) Defining Models and Tables with Different Databases \u00b6 EllarSQL creates Metadata and an Engine for each configured database. Models and tables associated with a specific __database__ key are registered with the corresponding Metadata . During a session query, the session employs the related Engine . To designate the database for a model, set the __database__ class attribute. Not specifying a __database__ key is equivalent to setting it to default : In Models \u00b6 from ellar_sql import model class User ( model . Model ): __database__ = \"auth\" id = model . Column ( model . Integer , primary_key = True ) Models inheriting from an already existing model will share the same database key unless they are overriden. Info Its importance to not that model.Model has __database__ value equals default In Tables \u00b6 To specify the database for a table, utilize the __database__ keyword argument: from ellar_sql import model user_table = model . Table ( \"user\" , model . Column ( \"id\" , model . 
In Tables \u00b6 To specify the database for a table, utilize the __database__ keyword argument: from ellar_sql import model user_table = model . Table ( \"user\" , model . Column ( \"id\" , model . Integer , primary_key = True ), __database__ = \"auth\" , ) Info Ultimately, the session references the database key associated with the metadata or table, an association established during creation. Consequently, changing the database key after creating a model or table has no effect . Creating and Dropping Tables \u00b6 The create_all() and drop_all() methods are part of the EllarSQLService . They accept optional database arguments to target specific databases. # Create tables for all binds from ellar.app import current_injector from ellar_sql import EllarSQLService db_service = current_injector . get ( EllarSQLService ) # Create tables for all configured databases db_service . create_all () # Create tables for the 'default' and \"auth\" databases db_service . create_all ( 'default' , \"auth\" ) # Create tables for the \"meta\" database db_service . create_all ( \"meta\" ) # Drop tables for the 'default' database db_service . drop_all ( 'default' )","title":"Multiple Database"},{"location":"multiple/#multiple-databases","text":"SQLAlchemy has the capability to establish connections with multiple databases simultaneously, referring to these connections as \"binds.\" EllarSQL simplifies the management of binds by associating each engine with a short string identifier, __database__ . Subsequently, each model and table is linked to a __database__ , and during a query, the session selects the appropriate engine based on the __database__ of the entity being queried. In the absence of a specified __database__ , the default engine is employed.","title":"Multiple Databases"},{"location":"multiple/#configuring-multiple-databases","text":"In EllarSQL, database configuration begins with the setup of the default database, followed by additional databases, as exemplified in the EllarSQLModule configurations: from ellar_sql import EllarSQLModule EllarSQLModule . setup ( databases = { \"default\" : \"postgresql:///main\" , \"meta\" : \"sqlite:////path/to/meta.db\" , \"auth\" : { \"url\" : \"mysql://localhost/users\" , \"pool_recycle\" : 3600 , }, }, migration_options = { 'directory' : 'migrations' } )","title":"Configuring Multiple Databases"},{"location":"multiple/#defining-models-and-tables-with-different-databases","text":"EllarSQL creates Metadata and an Engine for each configured database. Models and tables associated with a specific __database__ key are registered with the corresponding Metadata . During a session query, the session employs the related Engine . To designate the database for a model, set the __database__ class attribute. Not specifying a __database__ key is equivalent to setting it to default :","title":"Defining Models and Tables with Different Databases"},{"location":"multiple/#in-models","text":"from ellar_sql import model class User ( model . Model ): __database__ = \"auth\" id = model . Column ( model . Integer , primary_key = True ) Models inheriting from an already existing model will share the same database key unless it is overridden. Info It's important to note that model.Model has a __database__ value equal to default","title":"In Models"},{"location":"multiple/#in-tables","text":"To specify the database for a table, utilize the __database__ keyword argument: from ellar_sql import model user_table = model . Table ( \"user\" , model . Column ( \"id\" , model . Integer , primary_key = True ), __database__ = \"auth\" , ) Info Ultimately, the session references the database key associated with the metadata or table, an association established during creation.
Consequently, changing the database key after creating a model or table has no effect .","title":"In Tables"},{"location":"multiple/#creating-and-dropping-tables","text":"The create_all() and drop_all() methods are part of the EllarSQLService . They accept optional database arguments to target specific databases. # Create tables for all binds from ellar.app import current_injector from ellar_sql import EllarSQLService db_service = current_injector . get ( EllarSQLService ) # Create tables for all configured databases db_service . create_all () # Create tables for the 'default' and \"auth\" databases db_service . create_all ( 'default' , \"auth\" ) # Create tables for the \"meta\" database db_service . create_all ( \"meta\" ) # Drop tables for the 'default' database db_service . drop_all ( 'default' )","title":"Creating and Dropping Tables"},{"location":"pagination/","text":"Pagination \u00b6 Pagination is a common practice for large datasets, enhancing user experience by breaking content into manageable pages. It optimizes load times and navigation and allows users to explore extensive datasets with ease while maintaining system performance and responsiveness. EllarSQL offers two styles of pagination: PageNumberPagination : This pagination internally configures items per_page and max item size ( max_size ) and allows users to set the page property. LimitOffsetPagination : This pagination internally configures max item size ( max_limit ) and allows users to set the limit and offset properties. EllarSQL pagination is activated when a route function is decorated with the paginate function. The result of the route function is expected to be a SQLAlchemy.sql.Select instance or a Model type. For example: import ellar.common as ec from ellar_sql import model , paginate from .models import User from .schemas import UserSchema @ec . get ( '/users' ) @paginate ( item_schema = UserSchema ) def list_users (): return model . select ( User ) paginate properties \u00b6 pagination_class : t.Optional[t.Type[PaginationBase]]=None : specifies the pagination style to use. If not set, it defaults to PageNumberPagination model : t.Optional[t.Type[ModelBase]]=None : specifies a Model type from which to get the list of data. If set, the route function can return None or override it by returning a select/filtered statement as_template_context : bool=False : indicates that the paginator object should be added to the template context. See Template Pagination item_schema : t.Optional[t.Type[BaseModel]]=None : This is required if as_template_context is False. It is used to serialize the SQLAlchemy model and create a response-schema/docs . paginator_options : t.Any : keyword arguments for configuring the pagination_class used for pagination.
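For instance, the limit/offset style can be selected by passing pagination_class to paginate ; a minimal sketch (the import location of LimitOffsetPagination is assumed here, so check the package's exports):

import ellar.common as ec
from ellar_sql import model, paginate
from ellar_sql.pagination import LimitOffsetPagination  # assumed import path
from .models import User
from .schemas import UserSchema

@ec.get('/users-offset')
@paginate(item_schema=UserSchema, pagination_class=LimitOffsetPagination)
def list_users_offset():
    # clients control the returned slice through the limit and offset properties
    return model.select(User)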
API Pagination \u00b6 API pagination simply means pagination in an API route function. This requires item_schema for the paginate decorator to create a 200 response documentation for the decorated route and for the paginated result to be serialized to JSON. import ellar.common as ec from ellar_sql import paginate from .models import User class UserSchema ( ec . Serializer ): id : int username : str email : str @ec . get ( '/users' ) @paginate ( item_schema = UserSchema , per_page = 100 ) def list_users (): return User We can also rewrite the illustration above, since we are not making any modification to the User query. ... @ec . get ( '/users' ) @paginate ( model = User , item_schema = UserSchema ) def list_users (): pass Template Pagination \u00b6 This is for route functions decorated with the render function that need to be paginated. For this to happen, the paginate function needs to return a template context, which is achieved by setting as_template_context=True import ellar.common as ec from ellar_sql import model , paginate from .models import User @ec . get ( '/users' ) @ec . render ( 'list.html' ) @paginate ( as_template_context = True ) def list_users (): return model . select ( User ), { 'name' : 'Template Pagination' } # pagination model, template context In the illustration above, a tuple of a select statement and a template context was returned. The template context will be updated with a paginator as an extra key by the paginate function before being processed by the render function. We can re-write the example above to return just the template context, since there is no filter directly affecting the User model query. ... @ec . get ( '/users' ) @ec . render ( 'list.html' ) @paginate ( model = model . select ( User ), as_template_context = True ) def list_users (): return { 'name' : 'Template Pagination' } Also, in list.html we have the following code: < html lang = \"en\" > < h3 > {{ name }} {% macro render_pagination(paginator, endpoint) %} < div > {{ paginator.first }} - {{ paginator.last }} of {{ paginator.total }} < div > {% for page in paginator.iter_pages() %} {% if page %} {% if page != paginator.page %} < a href = \"{{ url_for(endpoint) }}?page={{page}}\" > {{ page }} {% else %} < strong > {{ page }} {% endif %} {% else %} < span class = ellipsis > \u2026 {% endif %} {% endfor %} {% endmacro %} < ul > {% for user in paginator %} < li > {{ user.id }} @ {{ user.name }} {% endfor %} {{render_pagination(paginator=paginator, endpoint=\"list_users\") }} The paginator object in the template context has an iter_pages() method which produces up to three groups of numbers, separated by None . It defaults to showing 2 page numbers at either edge, 2 numbers before the current, the current, and 4 numbers after the current. For example, if there are 20 pages and the current page is 7, the following values are yielded: paginator.iter_pages() [1, 2, None, 5, 6, 7, 8, 9, 10, 11, None, 19, 20] The total attribute showcases the total number of results, while first and last display the range of items on the current page. The accompanying Jinja macro renders a simple pagination widget. {% macro render_pagination(paginator, endpoint) %} < div > {{ paginator.first }} - {{ paginator.last }} of {{ paginator.total }} < div > {% for page in paginator.iter_pages() %} {% if page %} {% if page != paginator.page %} < a href = \"{{ url_for(endpoint) }}?page={{page}}\" > {{ page }} {% else %} < strong > {{ page }} {% endif %} {% else %} < span class = ellipsis > \u2026 {% endif %} {% endfor %} {% endmacro %}","title":"Pagination"},{"location":"pagination/#pagination","text":"Pagination is a common practice for large datasets, enhancing user experience by breaking content into manageable pages. It optimizes load times and navigation and allows users to explore extensive datasets with ease while maintaining system performance and responsiveness. EllarSQL offers two styles of pagination: PageNumberPagination : This pagination internally configures items per_page and max item size ( max_size ) and allows users to set the page property.
LimitOffsetPagination : This pagination internally configures max item size ( max_limit ) and, allows users to set the limit and offset properties. EllarSQL pagination is activated when a route function is decorated with paginate function. The result of the route function is expected to be a SQLAlchemy.sql.Select instance or a Model type. For example: import ellar.common as ec from ellar_sql import model , paginate from .models import User from .schemas import UserSchema @ec . get ( '/users' ) @paginate ( item_schema = UserSchema ) def list_users (): return model . select ( User )","title":"Pagination"},{"location":"pagination/#paginate-properties","text":"pagination_class : t.Optional[t.Type[PaginationBase]]=None : specifies pagination style to use. if not set, it will be set to PageNumberPagination model : t.Optional[t.Type[ModelBase]]=None : specifies a Model type to get list of data. If set, route function can return None or override by returning a select/filtered statement as_template_context : bool=False : indicates that the paginator object be added to template context. See Template Pagination item_schema : t.Optional[t.Type[BaseModel]]=None : This is required if template_context is False. It is used to serialize the SQLAlchemy model and create a response-schema/docs . paginator_options : t.Any : keyword argument for configuring pagination_class set to use for pagination.","title":"paginate properties"},{"location":"pagination/#api-pagination","text":"API pagination simply means pagination in an API route function. This requires item_schema for the paginate decorator to create a 200 response documentation for the decorated route and for the paginated result to be serialized to json. import ellar.common as ec from ellar_sql import paginate from .models import User class UserSchema ( ec . Serializer ): id : int username : str email : str @ec . get ( '/users' ) @paginate ( item_schema = UserSchema , per_page = 100 ) def list_users (): return User We can also rewrite the illustration above since we are not making any modification to the User query. ... @ec . get ( '/users' ) @paginate ( model = User , item_schema = UserSchema ) def list_users (): pass","title":"API Pagination"},{"location":"pagination/#template-pagination","text":"This is for route functions decorated with render function that need to be paginated. For this to happen, paginate function need to return a context and this is achieved by setting as_template_context=True import ellar.common as ec from ellar_sql import model , paginate from .models import User @ec . get ( '/users' ) @ec . render ( 'list.html' ) @paginate ( as_template_context = True ) def list_users (): return model . select ( User ), { 'name' : 'Template Pagination' } # pagination model, template context In the illustration above, a tuple of select statement and a template context was returned. The template context will be updated with a paginator as an extra key by the paginate function before been processed by render function. We can re-write the example above to return just the template context since there is no form of filter directly affecting the User model query. ... @ec . get ( '/users' ) @ec . render ( 'list.html' ) @paginate ( model = model . 
select ( User ), as_template_context = True ) def list_users (): return { 'name' : 'Template Pagination' } Also, in list.html we have the following code: < html lang = \"en\" > < h3 > {{ name }} {% macro render_pagination(paginator, endpoint) %} < div > {{ paginator.first }} - {{ paginator.last }} of {{ paginator.total }} < div > {% for page in paginator.iter_pages() %} {% if page %} {% if page != paginator.page %} < a href = \"{{ url_for(endpoint) }}?page={{page}}\" > {{ page }} {% else %} < strong > {{ page }} {% endif %} {% else %} < span class = ellipsis > \u2026 {% endif %} {% endfor %} {% endmacro %} < ul > {% for user in paginator %} < li > {{ user.id }} @ {{ user.name }} {% endfor %} {{render_pagination(paginator=paginator, endpoint=\"list_users\") }} The paginator object in the template context has an iter_pages() method which produces up to three groups of numbers, separated by None . It defaults to showing 2 page numbers at either edge, 2 numbers before the current, the current, and 4 numbers after the current. For example, if there are 20 pages and the current page is 7, the following values are yielded: paginator.iter_pages() [1, 2, None, 5, 6, 7, 8, 9, 10, 11, None, 19, 20] The total attribute showcases the total number of results, while first and last display the range of items on the current page. The accompanying Jinja macro renders a simple pagination widget. {% macro render_pagination(paginator, endpoint) %} < div > {{ paginator.first }} - {{ paginator.last }} of {{ paginator.total }} < div > {% for page in paginator.iter_pages() %} {% if page %} {% if page != paginator.page %} < a href = \"{{ url_for(endpoint) }}?page={{page}}\" > {{ page }} {% else %} < strong > {{ page }} {% endif %} {% else %} < span class = ellipsis > \u2026 {% endif %} {% endfor %} {% endmacro %}","title":"Template Pagination"},{"location":"testing/","text":"Testing EllarSQL Models \u00b6 There are various approaches to testing SQLAlchemy models, but in this section, we will focus on setting up a good testing environment for EllarSQL models using the Ellar Test factory and pytest. For an effective testing environment, it is recommended to utilize the EllarSQLModule.register_setup() approach to set up the EllarSQLModule . This allows you to add a new configuration for ELLAR_SQL specific to your testing database, preventing interference with production or any other databases in use. Defining TestConfig \u00b6 There are various methods for configuring test settings in Ellar, as outlined here . However, in this section, we will adopt the 'in a file' approach. Within the db_learning/config.py file, include the following code: db_learning/config.py import typing as t ... class DevelopmentConfig ( BaseConfig ): DEBUG : bool = True # Configuration through Config ELLAR_SQL : t . Dict [ str , t . Any ] = { 'databases' : { 'default' : 'sqlite:///project.db' , }, 'echo' : True , 'migration_options' : { 'directory' : 'migrations' }, 'models' : [ 'models' ] } class TestConfig ( BaseConfig ): DEBUG = False ELLAR_SQL : t . Dict [ str , t . Any ] = { ** DevelopmentConfig . ELLAR_SQL , 'databases' : { 'default' : 'sqlite:///test.db' , }, 'echo' : False , } This snippet demonstrates the 'in a file' approach to setting up the TestConfig class within the same db_learning/config.py file. Changes made: \u00b6 Updated the databases section to use sqlite:///test.db for the testing database. Set echo to False to silence SQLAlchemy output during testing for cleaner logs.
Preserved the migration_options and models configurations from DevelopmentConfig . Also, feel free to further adjust it based on your specific testing requirements! Test Fixtures \u00b6 After defining TestConfig , we need to add some pytest fixtures to set up EllarSQLModule and another one that returns a session for testing purposes. Additionally, we need to export ELLAR_CONFIG_MODULE to point to the newly defined TestConfig . tests/conftest.py import os import pytest from ellar.common.constants import ELLAR_CONFIG_MODULE from ellar.testing import Test from ellar_sql import EllarSQLService from db_learning.root_module import ApplicationModule # Setting the ELLAR_CONFIG_MODULE environment variable to TestConfig os . environ . setdefault ( ELLAR_CONFIG_MODULE , \"db_learning.config:TestConfig\" ) # Fixture for creating a test module @pytest . fixture ( scope = 'session' ) def tm (): test_module = Test . create_test_module ( modules = [ ApplicationModule ]) yield test_module # Fixture for creating a database session for testing @pytest . fixture ( scope = 'session' ) def db ( tm ): db_service = tm . get ( EllarSQLService ) # Creating all tables db_service . create_all () yield # Dropping all tables after the tests db_service . drop_all () # Fixture for creating a database session for testing @pytest . fixture ( scope = 'session' ) def db_session ( db , tm ): db_service = tm . get ( EllarSQLService ) yield db_service . session_factory () # Removing the session factory db_service . session_factory . remove () The provided fixtures help in setting up a testing environment for EllarSQL models. The Test.create_test_module method creates a TestModule for initializing your Ellar application, and the db_session fixture initializes a database session for testing, creating and dropping tables as needed. If you are working with asynchronous database drivers, you can convert db_session into an async function to handle coroutines seamlessly. Alembic Migration with Test Fixture \u00b6 In cases where there are already generated database migration files, and there is a need to apply migrations during testing, this can be achieved as shown in the example below: tests/conftest.py import os import pytest from ellar.common.constants import ELLAR_CONFIG_MODULE from ellar.testing import Test from ellar_sql import EllarSQLService from ellar_sql.cli.handlers import CLICommandHandlers from db_learning.root_module import ApplicationModule # Setting the ELLAR_CONFIG_MODULE environment variable to TestConfig os . environ . setdefault ( ELLAR_CONFIG_MODULE , \"db_learning.config:TestConfig\" ) # Fixture for creating a test module @pytest . fixture ( scope = 'session' ) def tm (): test_module = Test . create_test_module ( modules = [ ApplicationModule ]) yield test_module # Fixture for creating a database session for testing @pytest . fixture ( scope = 'session' ) async def db ( tm ): db_service = tm . get ( EllarSQLService ) # Applying migrations using Alembic async with tm . create_application () . application_context (): cli = CLICommandHandlers ( db_service ) cli . migrate () yield # Downgrading migrations after testing async with tm . create_application () . application_context (): cli = CLICommandHandlers ( db_service ) cli . downgrade () # Fixture for creating an asynchronous database session for testing @pytest . fixture ( scope = 'session' ) async def db_session ( db , tm ): db_service = tm . get ( EllarSQLService ) yield db_service . session_factory () # Removing the session factory db_service . 
session_factory . remove () The CLICommandHandlers class wraps all Alembic functions executed through the Ellar command-line interface. It can be used in conjunction with the application context to initialize all model tables during testing as shown in the illustration above. db_session pytest fixture also ensures that migrations are applied and then downgraded after testing, maintaining a clean and consistent test database state. Testing a Model \u00b6 After setting up the testing database and creating a session, let's test the insertion of a user model into the database. In db_learning/models.py , we have a user model: db_learning/model.py from ellar_sql import model class User ( model . Model ): id : model . Mapped [ int ] = model . mapped_column ( model . Integer , primary_key = True ) username : model . Mapped [ str ] = model . mapped_column ( model . String , unique = True , nullable = False ) email : model . Mapped [ str ] = model . mapped_column ( model . String ) Now, create a file named test_user_model.py : tests/test_user_model.py import pytest import sqlalchemy.exc as sa_exc from db_learning.models import User def test_username_must_be_unique ( db_session ): # Creating and adding the first user user1 = User ( username = 'ellarSQL' , email = 'ellarsql@gmail.com' ) db_session . add ( user1 ) db_session . commit () # Attempting to add a second user with the same username user2 = User ( username = 'ellarSQL' , email = 'ellarsql2@gmail.com' ) db_session . add ( user2 ) # Expecting an IntegrityError due to unique constraint violation with pytest . raises ( sa_exc . IntegrityError ): db_session . commit () In this test, we are checking whether the unique constraint on the username field is enforced by attempting to insert two users with the same username. The test expects an IntegrityError to be raised, indicating a violation of the unique constraint. This ensures that the model behaves correctly and enforces the specified uniqueness requirement. Testing Factory Boy \u00b6 factory-boy provides a convenient and flexible way to create mock objects, supporting various ORMs like Django, MongoDB, and SQLAlchemy. EllarSQL extends factory.alchemy.SQLAlchemy to offer a Model factory solution compatible with both synchronous and asynchronous database drivers. To get started, you need to install factory-boy : pip install factory-boy Now, let's create a factory for our user model in tests/factories.py : tests/factories.py import factory from ellar_sql.factory import EllarSQLFactory , SESSION_PERSISTENCE_FLUSH from db_learning.models import User from . import common class UserFactory ( EllarSQLFactory ): class Meta : model = User sqlalchemy_session_persistence = SESSION_PERSISTENCE_FLUSH sqlalchemy_session_factory = lambda : common . Session () username = factory . Faker ( 'username' ) email = factory . Faker ( 'email' ) The UserFactory depends on a database session. Since the pytest fixture we created applies to it, we also need a session factory in tests/common.py : tests/common.py from sqlalchemy import orm Session = orm . scoped_session ( orm . sessionmaker ()) Additionally, we require a fixture responsible for configuring the Factory session in tests/conftest.py : tests/conftest.py import os import pytest import sqlalchemy as sa from ellar.common.constants import ELLAR_CONFIG_MODULE from ellar.testing import Test from ellar_sql import EllarSQLService from db_learning.root_module import ApplicationModule from . import common os . environ . 
setdefault ( ELLAR_CONFIG_MODULE , \"db_learning.config:TestConfig\" ) @pytest . fixture ( scope = 'session' ) def tm (): test_module = Test . create_test_module ( modules = [ ApplicationModule ]) yield test_module # Fixture for creating a database session for testing @pytest . fixture ( scope = 'session' ) def db ( tm ): db_service = tm . get ( EllarSQLService ) # Creating all tables db_service . create_all () yield # Dropping all tables after the tests db_service . drop_all () # Fixture for creating a database session for testing @pytest . fixture ( scope = 'session' ) def db_session ( db , tm ): db_service = tm . get ( EllarSQLService ) yield db_service . session_factory () # Removing the session factory db_service . session_factory . remove () @pytest . fixture def factory_session ( db , tm ): engine = tm . get ( sa . Engine ) common . Session . configure ( bind = engine ) yield common . Session . remove () In the factory_session fixture, we retrieve the Engine registered in the DI container by EllarSQLModule . Using this engine, we configure the common Session . It's important to note that if you are using an async database driver, EllarSQLModule will register AsyncEngine . With this setup, we can rewrite our test_username_must_be_unique test using UserFactory and factory_session : tests/test_user_model.py import pytest import sqlalchemy.exc as sa_exc from .factories import UserFactory def test_username_must_be_unique ( factory_session ): user1 = UserFactory () with pytest . raises ( sa_exc . IntegrityError ): UserFactory ( username = user1 . username ) This test yields the same result as before. Refer to the factory-boy documentation for more features and tutorials.","title":"index"},{"location":"testing/#testing-ellarsql-models","text":"There are various approaches to testing SQLAlchemy models, but in this section, we will focus on setting up a good testing environment for EllarSQL models using the Ellar Test factory and pytest. For an effective testing environment, it is recommended to utilize the EllarSQLModule.register_setup() approach to set up the EllarSQLModule . This allows you to add a new configuration for ELLAR_SQL specific to your testing database, preventing interference with production or any other databases in use.","title":"Testing EllarSQL Models"},{"location":"testing/#defining-testconfig","text":"There are various methods for configuring test settings in Ellar, as outlined here . However, in this section, we will adopt the 'in a file' approach. Within the db_learning/config.py file, include the following code: db_learning/config.py import typing as t ... class DevelopmentConfig ( BaseConfig ): DEBUG : bool = True # Configuration through Config ELLAR_SQL : t . Dict [ str , t . Any ] = { 'databases' : { 'default' : 'sqlite:///project.db' , }, 'echo' : True , 'migration_options' : { 'directory' : 'migrations' }, 'models' : [ 'models' ] } class TestConfig ( BaseConfig ): DEBUG = False ELLAR_SQL : t . Dict [ str , t . Any ] = { ** DevelopmentConfig . ELLAR_SQL , 'databases' : { 'default' : 'sqlite:///test.db' , }, 'echo' : False , } This snippet demonstrates the 'in a file' approach to setting up the TestConfig class within the same db_learning/config.py file.","title":"Defining TestConfig"},{"location":"testing/#changes-made","text":"Updated the databases section to use sqlite+aiosqlite:///test.db for the testing database. Set echo to True to enable SQLAlchemy output during testing for cleaner logs. 
Preserved the migration_options and models configurations from DevelopmentConfig . Also, feel free to further adjust it based on your specific testing requirements!","title":"Changes made:"},{"location":"testing/#test-fixtures","text":"After defining TestConfig , we need to add some pytest fixtures to set up EllarSQLModule and another one that returns a session for testing purposes. Additionally, we need to export ELLAR_CONFIG_MODULE to point to the newly defined TestConfig . tests/conftest.py import os import pytest from ellar.common.constants import ELLAR_CONFIG_MODULE from ellar.testing import Test from ellar_sql import EllarSQLService from db_learning.root_module import ApplicationModule # Setting the ELLAR_CONFIG_MODULE environment variable to TestConfig os . environ . setdefault ( ELLAR_CONFIG_MODULE , \"db_learning.config:TestConfig\" ) # Fixture for creating a test module @pytest . fixture ( scope = 'session' ) def tm (): test_module = Test . create_test_module ( modules = [ ApplicationModule ]) yield test_module # Fixture for creating a database session for testing @pytest . fixture ( scope = 'session' ) def db ( tm ): db_service = tm . get ( EllarSQLService ) # Creating all tables db_service . create_all () yield # Dropping all tables after the tests db_service . drop_all () # Fixture for creating a database session for testing @pytest . fixture ( scope = 'session' ) def db_session ( db , tm ): db_service = tm . get ( EllarSQLService ) yield db_service . session_factory () # Removing the session factory db_service . session_factory . remove () The provided fixtures help in setting up a testing environment for EllarSQL models. The Test.create_test_module method creates a TestModule for initializing your Ellar application, and the db_session fixture initializes a database session for testing, creating and dropping tables as needed. If you are working with asynchronous database drivers, you can convert db_session into an async function to handle coroutines seamlessly.","title":"Test Fixtures"},{"location":"testing/#alembic-migration-with-test-fixture","text":"In cases where there are already generated database migration files, and there is a need to apply migrations during testing, this can be achieved as shown in the example below: tests/conftest.py import os import pytest from ellar.common.constants import ELLAR_CONFIG_MODULE from ellar.testing import Test from ellar_sql import EllarSQLService from ellar_sql.cli.handlers import CLICommandHandlers from db_learning.root_module import ApplicationModule # Setting the ELLAR_CONFIG_MODULE environment variable to TestConfig os . environ . setdefault ( ELLAR_CONFIG_MODULE , \"db_learning.config:TestConfig\" ) # Fixture for creating a test module @pytest . fixture ( scope = 'session' ) def tm (): test_module = Test . create_test_module ( modules = [ ApplicationModule ]) yield test_module # Fixture for creating a database session for testing @pytest . fixture ( scope = 'session' ) async def db ( tm ): db_service = tm . get ( EllarSQLService ) # Applying migrations using Alembic async with tm . create_application () . application_context (): cli = CLICommandHandlers ( db_service ) cli . migrate () yield # Downgrading migrations after testing async with tm . create_application () . application_context (): cli = CLICommandHandlers ( db_service ) cli . downgrade () # Fixture for creating an asynchronous database session for testing @pytest . fixture ( scope = 'session' ) async def db_session ( db , tm ): db_service = tm . 
get ( EllarSQLService ) yield db_service . session_factory () # Removing the session factory db_service . session_factory . remove () The CLICommandHandlers class wraps all Alembic functions executed through the Ellar command-line interface. It can be used in conjunction with the application context to initialize all model tables during testing as shown in the illustration above. db_session pytest fixture also ensures that migrations are applied and then downgraded after testing, maintaining a clean and consistent test database state.","title":"Alembic Migration with Test Fixture"},{"location":"testing/#testing-a-model","text":"After setting up the testing database and creating a session, let's test the insertion of a user model into the database. In db_learning/models.py , we have a user model: db_learning/model.py from ellar_sql import model class User ( model . Model ): id : model . Mapped [ int ] = model . mapped_column ( model . Integer , primary_key = True ) username : model . Mapped [ str ] = model . mapped_column ( model . String , unique = True , nullable = False ) email : model . Mapped [ str ] = model . mapped_column ( model . String ) Now, create a file named test_user_model.py : tests/test_user_model.py import pytest import sqlalchemy.exc as sa_exc from db_learning.models import User def test_username_must_be_unique ( db_session ): # Creating and adding the first user user1 = User ( username = 'ellarSQL' , email = 'ellarsql@gmail.com' ) db_session . add ( user1 ) db_session . commit () # Attempting to add a second user with the same username user2 = User ( username = 'ellarSQL' , email = 'ellarsql2@gmail.com' ) db_session . add ( user2 ) # Expecting an IntegrityError due to unique constraint violation with pytest . raises ( sa_exc . IntegrityError ): db_session . commit () In this test, we are checking whether the unique constraint on the username field is enforced by attempting to insert two users with the same username. The test expects an IntegrityError to be raised, indicating a violation of the unique constraint. This ensures that the model behaves correctly and enforces the specified uniqueness requirement.","title":"Testing a Model"},{"location":"testing/#testing-factory-boy","text":"factory-boy provides a convenient and flexible way to create mock objects, supporting various ORMs like Django, MongoDB, and SQLAlchemy. EllarSQL extends factory.alchemy.SQLAlchemy to offer a Model factory solution compatible with both synchronous and asynchronous database drivers. To get started, you need to install factory-boy : pip install factory-boy Now, let's create a factory for our user model in tests/factories.py : tests/factories.py import factory from ellar_sql.factory import EllarSQLFactory , SESSION_PERSISTENCE_FLUSH from db_learning.models import User from . import common class UserFactory ( EllarSQLFactory ): class Meta : model = User sqlalchemy_session_persistence = SESSION_PERSISTENCE_FLUSH sqlalchemy_session_factory = lambda : common . Session () username = factory . Faker ( 'username' ) email = factory . Faker ( 'email' ) The UserFactory depends on a database session. Since the pytest fixture we created applies to it, we also need a session factory in tests/common.py : tests/common.py from sqlalchemy import orm Session = orm . scoped_session ( orm . 
sessionmaker ()) Additionally, we require a fixture responsible for configuring the Factory session in tests/conftest.py : tests/conftest.py import os import pytest import sqlalchemy as sa from ellar.common.constants import ELLAR_CONFIG_MODULE from ellar.testing import Test from ellar_sql import EllarSQLService from db_learning.root_module import ApplicationModule from . import common os . environ . setdefault ( ELLAR_CONFIG_MODULE , \"db_learning.config:TestConfig\" ) @pytest . fixture ( scope = 'session' ) def tm (): test_module = Test . create_test_module ( modules = [ ApplicationModule ]) yield test_module # Fixture for creating a database session for testing @pytest . fixture ( scope = 'session' ) def db ( tm ): db_service = tm . get ( EllarSQLService ) # Creating all tables db_service . create_all () yield # Dropping all tables after the tests db_service . drop_all () # Fixture for creating a database session for testing @pytest . fixture ( scope = 'session' ) def db_session ( db , tm ): db_service = tm . get ( EllarSQLService ) yield db_service . session_factory () # Removing the session factory db_service . session_factory . remove () @pytest . fixture def factory_session ( db , tm ): engine = tm . get ( sa . Engine ) common . Session . configure ( bind = engine ) yield common . Session . remove () In the factory_session fixture, we retrieve the Engine registered in the DI container by EllarSQLModule . Using this engine, we configure the common Session . It's important to note that if you are using an async database driver, EllarSQLModule will register AsyncEngine . With this setup, we can rewrite our test_username_must_be_unique test using UserFactory and factory_session : tests/test_user_model.py import pytest import sqlalchemy.exc as sa_exc from .factories import UserFactory def test_username_must_be_unique ( factory_session ): user1 = UserFactory () with pytest . raises ( sa_exc . IntegrityError ): UserFactory ( username = user1 . username ) This test yields the same result as before. Refer to the factory-boy documentation for more features and tutorials.","title":"Testing Factory Boy"}]} \ No newline at end of file +{"config":{"indexing":"full","lang":["en"],"min_search_length":3,"prebuild_index":false,"separator":"[\\s\\-,:!=\\[\\]()\"`/]+|\\.(?!\\d)|&[lg]t;|(?!\\b)(?=[A-Z][a-z])"},"docs":[{"location":"","text":".md-content .md-typeset h1 { display: none; } EllarSQL is an SQL database Ellar Module. EllarSQL is an SQL database module, leveraging the robust capabilities of SQLAlchemy to seamlessly interact with SQL databases through Python code and objects. EllarSQL is meticulously designed to streamline the integration of SQLAlchemy within your Ellar application. It introduces discerning usage patterns around pivotal objects such as model , session , and engine , ensuring an efficient and coherent workflow. Notably, EllarSQL refrains from altering the fundamental workings or usage of SQLAlchemy. This documentation is focused on the meticulous setup of EllarSQL. For an in-depth exploration of SQLAlchemy, we recommend referring to the comprehensive SQLAlchemy documentation . Feature Highlights \u00b6 EllarSQL comes packed with a set of awesome features designed: Migration : Enjoy an async-first migration solution that seamlessly handles both single and multiple database setups and for both async and sync database engines configuration. 
Single/Multiple Database : EllarSQL provides an intuitive setup for models with different databases, allowing you to manage your data across various sources effortlessly. Pagination : EllarSQL introduces SQLAlchemy Paginator for API/Templated routes, along with support for other fantastic SQLAlchemy pagination tools. Unlimited Compatibility : EllarSQL plays nice with the entire SQLAlchemy ecosystem. Whether you're using third-party tools or exploring the vast SQLAlchemy landscape, EllarSQL seamlessly integrates with your preferred tooling. Requirements \u00b6 EllarSQL core dependencies includes: Python >= 3.8 Ellar >= 0.6.7 SQLAlchemy >= 2.0.16 Alembic >= 1.10.0 Installation \u00b6 pip install ellar-sql Quick Example \u00b6 Let's create a simple User model. from ellar_sql import model class User ( model . Model ): id : model . Mapped [ int ] = model . mapped_column ( model . Integer , primary_key = True ) username : model . Mapped [ str ] = model . mapped_column ( model . String , unique = True , nullable = False ) email : model . Mapped [ str ] = model . mapped_column ( model . String ) Let's create app.db with User table in it. For that we need to set up EllarSQLService as shown below: from ellar_sql import EllarSQLService db_service = EllarSQLService ( databases = 'sqlite:///app.db' , echo = True , ) db_service . create_all () If you check your execution directory, you will see sqlite directory with app.db . Let's populate our User table. To do, we need a session, which is available at db_service.session_factory from ellar_sql import EllarSQLService , model class User ( model . Model ): id : model . Mapped [ int ] = model . mapped_column ( model . Integer , primary_key = True ) username : model . Mapped [ str ] = model . mapped_column ( model . String , unique = True , nullable = False ) email : model . Mapped [ str ] = model . mapped_column ( model . String ) db_service = EllarSQLService ( databases = 'sqlite:///app.db' , echo = True , ) db_service . create_all () session = db_service . session_factory () for i in range ( 50 ): session . add ( User ( username = f 'username- { i + 1 } ' , email = f 'user { i + 1 } doe@example.com' )) session . commit () rows = session . execute ( model . select ( User )) . scalars () all_users = [ row . dict () for row in rows ] assert len ( all_users ) == 50 session . close () We have successfully seed 50 users to User table in app.db . You can find the source code for this example here . I know at this point you want to know more, so let's dive deep into the documents and get started .","title":"Index"},{"location":"#feature-highlights","text":"EllarSQL comes packed with a set of awesome features designed: Migration : Enjoy an async-first migration solution that seamlessly handles both single and multiple database setups and for both async and sync database engines configuration. Single/Multiple Database : EllarSQL provides an intuitive setup for models with different databases, allowing you to manage your data across various sources effortlessly. Pagination : EllarSQL introduces SQLAlchemy Paginator for API/Templated routes, along with support for other fantastic SQLAlchemy pagination tools. Unlimited Compatibility : EllarSQL plays nice with the entire SQLAlchemy ecosystem. 
Whether you're using third-party tools or exploring the vast SQLAlchemy landscape, EllarSQL seamlessly integrates with your preferred tooling.","title":"Feature Highlights"},{"location":"#requirements","text":"EllarSQL core dependencies includes: Python >= 3.8 Ellar >= 0.6.7 SQLAlchemy >= 2.0.16 Alembic >= 1.10.0","title":"Requirements"},{"location":"#installation","text":"pip install ellar-sql","title":"Installation"},{"location":"#quick-example","text":"Let's create a simple User model. from ellar_sql import model class User ( model . Model ): id : model . Mapped [ int ] = model . mapped_column ( model . Integer , primary_key = True ) username : model . Mapped [ str ] = model . mapped_column ( model . String , unique = True , nullable = False ) email : model . Mapped [ str ] = model . mapped_column ( model . String ) Let's create app.db with User table in it. For that we need to set up EllarSQLService as shown below: from ellar_sql import EllarSQLService db_service = EllarSQLService ( databases = 'sqlite:///app.db' , echo = True , ) db_service . create_all () If you check your execution directory, you will see sqlite directory with app.db . Let's populate our User table. To do, we need a session, which is available at db_service.session_factory from ellar_sql import EllarSQLService , model class User ( model . Model ): id : model . Mapped [ int ] = model . mapped_column ( model . Integer , primary_key = True ) username : model . Mapped [ str ] = model . mapped_column ( model . String , unique = True , nullable = False ) email : model . Mapped [ str ] = model . mapped_column ( model . String ) db_service = EllarSQLService ( databases = 'sqlite:///app.db' , echo = True , ) db_service . create_all () session = db_service . session_factory () for i in range ( 50 ): session . add ( User ( username = f 'username- { i + 1 } ' , email = f 'user { i + 1 } doe@example.com' )) session . commit () rows = session . execute ( model . select ( User )) . scalars () all_users = [ row . dict () for row in rows ] assert len ( all_users ) == 50 session . close () We have successfully seed 50 users to User table in app.db . You can find the source code for this example here . I know at this point you want to know more, so let's dive deep into the documents and get started .","title":"Quick Example"},{"location":"advance/","text":"","title":"Index"},{"location":"migrations/","text":"Migrations \u00b6 EllarSQL also extends Alembic package to add migration functionality and make database operations accessible through EllarCLI commandline interface. EllarSQL with Alembic does not override Alembic action rather provide Alembic all the configs/information it needs to for a proper migration/database operations. Its also still possible to use Alembic outside EllarSQL setup when necessary. This section is inspired by Flask Migrate Quick Example \u00b6 We assume you have set up EllarSQLModule in your application, and you have specified migration_options . Create a simple User model as shown below: from ellar_sql import model class User ( model . Model ): id = model . Column ( model . Integer , primary_key = True ) name = model . Column ( model . String ( 128 )) Initialize migration template \u00b6 With the Model setup, run the command below # Initialize the database ellar db init Executing this command will incorporate a migrations folder into your application structure. Ensure that the contents of this folder are included in version control alongside your other source files. 
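As an aside, the User model in the quick example above is declared with the plain model.Column style. If you prefer the typed mapped_column style used elsewhere in these docs, an equivalent declaration might look like the sketch below; this is purely illustrative, and both forms appear throughout the documentation and work with the migration commands that follow.

```python
from ellar_sql import model


class User(model.Model):
    # Same table as the quick example, written with SQLAlchemy 2.0 typed mappings
    id: model.Mapped[int] = model.mapped_column(model.Integer, primary_key=True)
    name: model.Mapped[str] = model.mapped_column(model.String(128))
```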
Following the initialization, you can generate an initial migration using the command: # Generate the initial migration ellar db migrate -m \"Initial migration.\" Few things to do after generating a migration file: Review and edit the migration script Alembic may not detect certain changes automatically, such as table and column name modifications or unnamed constraints. Refer to the Alembic autogenerate documentation for a comprehensive list of limitations. Add the finalized migration script to version control Ensure that the edited script is committed along with your source code changes Apply the changes described in the migration script to your database ellar db upgrade Whenever there are changes to the database models, it's necessary to repeat the migrate and upgrade commands. For synchronizing the database on another system, simply refresh the migrations folder from the source control repository and execute the upgrade command. This ensures that the database structure aligns with the latest changes in the models. Multiple Database Migration \u00b6 If your application utilizes multiple databases, a distinct Alembic template for migration is required. To enable this, include -m or --multi with the db init command, as demonstrated below: ellar db init --multi Command Reference \u00b6 All Alembic commands are expose to Ellar CLI under db group after a successful EllarSQLModule setup. To see all the commands that are available run this command: ellar db --help # output Usage: ellar db [ OPTIONS ] COMMAND [ ARGS ] ... - Perform Alembic Database Commands - Options: --help Show this message and exit. Commands: branches - Show current branch points check Check if there are any new operations to migrate current - Display the current revision for each database. downgrade - Revert to a previous version edit - Edit a revision file heads - Show current available heads in the script directory history - List changeset scripts in chronological order. init Creates a new migration repository. merge - Merge two revisions together, creating a new revision file migrate - Autogenerate a new revision file ( Alias for 'revision... revision - Create a new revision file. show - Show the revision denoted by the given symbol. stamp - ' stamp ' the revision table with the given revision; don' t... upgrade - Upgrade to a later version ellar db --help Shows a list of available commands. ellar db revision [--message MESSAGE] [--autogenerate] [--sql] [--head HEAD] [--splice] [--branch-label BRANCH_LABEL] [--version-path VERSION_PATH] [--rev-id REV_ID] Creates an empty revision script. The script needs to be edited manually with the upgrade and downgrade changes. See Alembic\u2019s documentation for instructions on how to write migration scripts. An optional migration message can be included. ellar db migrate [--message MESSAGE] [--sql] [--head HEAD] [--splice] [--branch-label BRANCH_LABEL] [--version-path VERSION_PATH] [--rev-id REV_ID] Equivalent to revision --autogenerate. The migration script is populated with changes detected automatically. The generated script should be reviewed and edited as not all types of changes can be detected automatically. This command does not make any changes to the database, just creates the revision script. ellar db check Checks that a migrate command would not generate any changes. If pending changes are detected, the command exits with a non-zero status code. ellar db edit Edit a revision script using $EDITOR. ellar db upgrade [--sql] [--tag TAG] Upgrades the database. 
If revision isn\u2019t given, then \"head\" is assumed. ellar db downgrade [--sql] [--tag TAG] Downgrades the database. If revision isn\u2019t given, then -1 is assumed. ellar db stamp [--sql] [--tag TAG] Sets the revision in the database to the one given as an argument, without performing any migrations. ellar db current [--verbose] Shows the current revision of the database. ellar db history [--rev-range REV_RANGE] [--verbose] Shows the list of migrations. If a range isn\u2019t given, then the entire history is shown. ellar db show Show the revision denoted by the given symbol. ellar db merge [--message MESSAGE] [--branch-label BRANCH_LABEL] [--rev-id REV_ID] Merge two revisions together. Create a new revision file. ellar db heads [--verbose] [--resolve-dependencies] Show current available heads in the revision script directory. ellar db branches [--verbose] Show current branch points.","title":"index"},{"location":"migrations/#migrations","text":"EllarSQL also extends Alembic package to add migration functionality and make database operations accessible through EllarCLI commandline interface. EllarSQL with Alembic does not override Alembic action rather provide Alembic all the configs/information it needs to for a proper migration/database operations. Its also still possible to use Alembic outside EllarSQL setup when necessary. This section is inspired by Flask Migrate","title":"Migrations"},{"location":"migrations/#quick-example","text":"We assume you have set up EllarSQLModule in your application, and you have specified migration_options . Create a simple User model as shown below: from ellar_sql import model class User ( model . Model ): id = model . Column ( model . Integer , primary_key = True ) name = model . Column ( model . String ( 128 ))","title":"Quick Example"},{"location":"migrations/#initialize-migration-template","text":"With the Model setup, run the command below # Initialize the database ellar db init Executing this command will incorporate a migrations folder into your application structure. Ensure that the contents of this folder are included in version control alongside your other source files. Following the initialization, you can generate an initial migration using the command: # Generate the initial migration ellar db migrate -m \"Initial migration.\" Few things to do after generating a migration file: Review and edit the migration script Alembic may not detect certain changes automatically, such as table and column name modifications or unnamed constraints. Refer to the Alembic autogenerate documentation for a comprehensive list of limitations. Add the finalized migration script to version control Ensure that the edited script is committed along with your source code changes Apply the changes described in the migration script to your database ellar db upgrade Whenever there are changes to the database models, it's necessary to repeat the migrate and upgrade commands. For synchronizing the database on another system, simply refresh the migrations folder from the source control repository and execute the upgrade command. This ensures that the database structure aligns with the latest changes in the models.","title":"Initialize migration template"},{"location":"migrations/#multiple-database-migration","text":"If your application utilizes multiple databases, a distinct Alembic template for migration is required. 
To enable this, include -m or --multi with the db init command, as demonstrated below: ellar db init --multi","title":"Multiple Database Migration"},{"location":"migrations/#command-reference","text":"All Alembic commands are expose to Ellar CLI under db group after a successful EllarSQLModule setup. To see all the commands that are available run this command: ellar db --help # output Usage: ellar db [ OPTIONS ] COMMAND [ ARGS ] ... - Perform Alembic Database Commands - Options: --help Show this message and exit. Commands: branches - Show current branch points check Check if there are any new operations to migrate current - Display the current revision for each database. downgrade - Revert to a previous version edit - Edit a revision file heads - Show current available heads in the script directory history - List changeset scripts in chronological order. init Creates a new migration repository. merge - Merge two revisions together, creating a new revision file migrate - Autogenerate a new revision file ( Alias for 'revision... revision - Create a new revision file. show - Show the revision denoted by the given symbol. stamp - ' stamp ' the revision table with the given revision; don' t... upgrade - Upgrade to a later version ellar db --help Shows a list of available commands. ellar db revision [--message MESSAGE] [--autogenerate] [--sql] [--head HEAD] [--splice] [--branch-label BRANCH_LABEL] [--version-path VERSION_PATH] [--rev-id REV_ID] Creates an empty revision script. The script needs to be edited manually with the upgrade and downgrade changes. See Alembic\u2019s documentation for instructions on how to write migration scripts. An optional migration message can be included. ellar db migrate [--message MESSAGE] [--sql] [--head HEAD] [--splice] [--branch-label BRANCH_LABEL] [--version-path VERSION_PATH] [--rev-id REV_ID] Equivalent to revision --autogenerate. The migration script is populated with changes detected automatically. The generated script should be reviewed and edited as not all types of changes can be detected automatically. This command does not make any changes to the database, just creates the revision script. ellar db check Checks that a migrate command would not generate any changes. If pending changes are detected, the command exits with a non-zero status code. ellar db edit Edit a revision script using $EDITOR. ellar db upgrade [--sql] [--tag TAG] Upgrades the database. If revision isn\u2019t given, then \"head\" is assumed. ellar db downgrade [--sql] [--tag TAG] Downgrades the database. If revision isn\u2019t given, then -1 is assumed. ellar db stamp [--sql] [--tag TAG] Sets the revision in the database to the one given as an argument, without performing any migrations. ellar db current [--verbose] Shows the current revision of the database. ellar db history [--rev-range REV_RANGE] [--verbose] Shows the list of migrations. If a range isn\u2019t given, then the entire history is shown. ellar db show Show the revision denoted by the given symbol. ellar db merge [--message MESSAGE] [--branch-label BRANCH_LABEL] [--rev-id REV_ID] Merge two revisions together. Create a new revision file. ellar db heads [--verbose] [--resolve-dependencies] Show current available heads in the revision script directory. ellar db branches [--verbose] Show current branch points.","title":"Command Reference"},{"location":"migrations/env/","text":"Alembic Env \u00b6 In the generated migration template, EllarSQL adopts an async-first approach for handling migration file generation. 
This approach simplifies the execution of migrations for both Session , Engine , AsyncSession , and AsyncEngine , but it also introduces a certain level of complexity. from logging.config import fileConfig from alembic import context from ellar.app import current_injector from ellar.threading import execute_coroutine_with_sync_worker from ellar_sql.migrations import SingleDatabaseAlembicEnvMigration from ellar_sql.services import EllarSQLService # this is the Alembic Config object, which provides # access to the values within the .ini file in use. config = context . config # Interpret the config file for Python logging. # This line sets up loggers basically. fileConfig ( config . config_file_name ) # type:ignore[arg-type] # logger = logging.getLogger(\"alembic.env\") # other values from the config, defined by the needs of env.py, # can be acquired: # my_important_option = config.get_main_option(\"my_important_option\") # ... etc. async def main () -> None : db_service : EllarSQLService = current_injector . get ( EllarSQLService ) # initialize migration class alembic_env_migration = SingleDatabaseAlembicEnvMigration ( db_service ) if context . is_offline_mode (): alembic_env_migration . run_migrations_offline ( context ) # type:ignore[arg-type] else : await alembic_env_migration . run_migrations_online ( context ) # type:ignore[arg-type] execute_coroutine_with_sync_worker ( main ()) The EllarSQL migration package provides two main migration classes: SingleDatabaseAlembicEnvMigration : Manages migrations for a single database configuration, catering to both Engine and AsyncEngine . MultipleDatabaseAlembicEnvMigration : Manages migrations for multiple database configurations, covering both Engine and AsyncEngine . Customizing the Env file \u00b6 To customize or edit the Env file, it is recommended to inherit from either SingleDatabaseAlembicEnvMigration or MultipleDatabaseAlembicEnvMigration based on your specific configuration. Make the necessary changes within the inherited class. If you prefer to write something from scratch, then the abstract class AlembicEnvMigrationBase is the starting point. This class includes three abstract methods and expects a EllarSQLService during initialization, as demonstrated below: class AlembicEnvMigrationBase : def __init__ ( self , db_service : EllarSQLService ) -> None : self . db_service = db_service self . use_two_phase = db_service . migration_options . use_two_phase @abstractmethod def default_process_revision_directives ( self , context : \"MigrationContext\" , revision : RevisionArgs , directives : t . List [ \"MigrationScript\" ], ) -> t . Any : pass @abstractmethod def run_migrations_offline ( self , context : \"EnvironmentContext\" ) -> None : pass @abstractmethod async def run_migrations_online ( self , context : \"EnvironmentContext\" ) -> None : pass The run_migrations_online and run_migrations_offline are all similar to the same function from Alembic env.py template. 
The default_process_revision_directives is a callback is used to prevent an auto-migration from being generated when there are no changes to the schema described in details here Example \u00b6 import logging from logging.config import fileConfig from alembic import context from ellar_sql.migrations import AlembicEnvMigrationBase from ellar_sql.model.database_binds import get_metadata from ellar.app import current_injector from ellar.threading import execute_coroutine_with_sync_worker from ellar_sql.services import EllarSQLService # This is the Alembic Config object, which provides # access to the values within the .ini file in use. config = context . config logger = logging . getLogger ( \"alembic.env\" ) # Interpret the config file for Python logging. # This line sets up loggers essentially. fileConfig ( config . config_file_name ) # type:ignore[arg-type] class MyCustomMigrationEnv ( AlembicEnvMigrationBase ): def default_process_revision_directives ( self , context , revision , directives , ) -> None : if getattr ( context . config . cmd_opts , \"autogenerate\" , False ): script = directives [ 0 ] if script . upgrade_ops . is_empty (): directives [:] = [] logger . info ( \"No changes in schema detected.\" ) def run_migrations_offline ( self , context : \"EnvironmentContext\" ) -> None : \"\"\"Run migrations in 'offline' mode. This configures the context with just a URL and not an Engine, though an Engine is acceptable here as well. By skipping the Engine creation we don't even need a DBAPI to be available. Calls to context.execute() here emit the given string to the script output. \"\"\" pass async def run_migrations_online ( self , context : \"EnvironmentContext\" ) -> None : \"\"\"Run migrations in 'online' mode. In this scenario, we need to create an Engine and associate a connection with the context. \"\"\" key , engine = self . db_service . engines . popitem () metadata = get_metadata ( key , certain = True ) . metadata conf_args = {} conf_args . setdefault ( \"process_revision_directives\" , self . default_process_revision_directives ) with engine . connect () as connection : context . configure ( connection = connection , target_metadata = metadata , ** conf_args ) with context . begin_transaction (): context . run_migrations () async def main () -> None : db_service : EllarSQLService = current_injector . get ( EllarSQLService ) # initialize migration class alembic_env_migration = MyCustomMigrationEnv ( db_service ) if context . is_offline_mode (): alembic_env_migration . run_migrations_offline ( context ) else : await alembic_env_migration . run_migrations_online ( context ) execute_coroutine_with_sync_worker ( main ()) This migration environment class, MyCustomMigrationEnv , inherits from AlembicEnvMigrationBase and provides the necessary methods for offline and online migrations. It utilizes the EllarSQLService to obtain the database engines and metadata for the migration process. The main function initializes and executes the migration class, with specific handling for offline and online modes.","title":"customizing env"},{"location":"migrations/env/#alembic-env","text":"In the generated migration template, EllarSQL adopts an async-first approach for handling migration file generation. This approach simplifies the execution of migrations for both Session , Engine , AsyncSession , and AsyncEngine , but it also introduces a certain level of complexity. 
from logging.config import fileConfig from alembic import context from ellar.app import current_injector from ellar.threading import execute_coroutine_with_sync_worker from ellar_sql.migrations import SingleDatabaseAlembicEnvMigration from ellar_sql.services import EllarSQLService # this is the Alembic Config object, which provides # access to the values within the .ini file in use. config = context . config # Interpret the config file for Python logging. # This line sets up loggers basically. fileConfig ( config . config_file_name ) # type:ignore[arg-type] # logger = logging.getLogger(\"alembic.env\") # other values from the config, defined by the needs of env.py, # can be acquired: # my_important_option = config.get_main_option(\"my_important_option\") # ... etc. async def main () -> None : db_service : EllarSQLService = current_injector . get ( EllarSQLService ) # initialize migration class alembic_env_migration = SingleDatabaseAlembicEnvMigration ( db_service ) if context . is_offline_mode (): alembic_env_migration . run_migrations_offline ( context ) # type:ignore[arg-type] else : await alembic_env_migration . run_migrations_online ( context ) # type:ignore[arg-type] execute_coroutine_with_sync_worker ( main ()) The EllarSQL migration package provides two main migration classes: SingleDatabaseAlembicEnvMigration : Manages migrations for a single database configuration, catering to both Engine and AsyncEngine . MultipleDatabaseAlembicEnvMigration : Manages migrations for multiple database configurations, covering both Engine and AsyncEngine .","title":"Alembic Env"},{"location":"migrations/env/#customizing-the-env-file","text":"To customize or edit the Env file, it is recommended to inherit from either SingleDatabaseAlembicEnvMigration or MultipleDatabaseAlembicEnvMigration based on your specific configuration. Make the necessary changes within the inherited class. If you prefer to write something from scratch, then the abstract class AlembicEnvMigrationBase is the starting point. This class includes three abstract methods and expects a EllarSQLService during initialization, as demonstrated below: class AlembicEnvMigrationBase : def __init__ ( self , db_service : EllarSQLService ) -> None : self . db_service = db_service self . use_two_phase = db_service . migration_options . use_two_phase @abstractmethod def default_process_revision_directives ( self , context : \"MigrationContext\" , revision : RevisionArgs , directives : t . List [ \"MigrationScript\" ], ) -> t . Any : pass @abstractmethod def run_migrations_offline ( self , context : \"EnvironmentContext\" ) -> None : pass @abstractmethod async def run_migrations_online ( self , context : \"EnvironmentContext\" ) -> None : pass The run_migrations_online and run_migrations_offline are all similar to the same function from Alembic env.py template. 
The default_process_revision_directives is a callback is used to prevent an auto-migration from being generated when there are no changes to the schema described in details here","title":"Customizing the Env file"},{"location":"migrations/env/#example","text":"import logging from logging.config import fileConfig from alembic import context from ellar_sql.migrations import AlembicEnvMigrationBase from ellar_sql.model.database_binds import get_metadata from ellar.app import current_injector from ellar.threading import execute_coroutine_with_sync_worker from ellar_sql.services import EllarSQLService # This is the Alembic Config object, which provides # access to the values within the .ini file in use. config = context . config logger = logging . getLogger ( \"alembic.env\" ) # Interpret the config file for Python logging. # This line sets up loggers essentially. fileConfig ( config . config_file_name ) # type:ignore[arg-type] class MyCustomMigrationEnv ( AlembicEnvMigrationBase ): def default_process_revision_directives ( self , context , revision , directives , ) -> None : if getattr ( context . config . cmd_opts , \"autogenerate\" , False ): script = directives [ 0 ] if script . upgrade_ops . is_empty (): directives [:] = [] logger . info ( \"No changes in schema detected.\" ) def run_migrations_offline ( self , context : \"EnvironmentContext\" ) -> None : \"\"\"Run migrations in 'offline' mode. This configures the context with just a URL and not an Engine, though an Engine is acceptable here as well. By skipping the Engine creation we don't even need a DBAPI to be available. Calls to context.execute() here emit the given string to the script output. \"\"\" pass async def run_migrations_online ( self , context : \"EnvironmentContext\" ) -> None : \"\"\"Run migrations in 'online' mode. In this scenario, we need to create an Engine and associate a connection with the context. \"\"\" key , engine = self . db_service . engines . popitem () metadata = get_metadata ( key , certain = True ) . metadata conf_args = {} conf_args . setdefault ( \"process_revision_directives\" , self . default_process_revision_directives ) with engine . connect () as connection : context . configure ( connection = connection , target_metadata = metadata , ** conf_args ) with context . begin_transaction (): context . run_migrations () async def main () -> None : db_service : EllarSQLService = current_injector . get ( EllarSQLService ) # initialize migration class alembic_env_migration = MyCustomMigrationEnv ( db_service ) if context . is_offline_mode (): alembic_env_migration . run_migrations_offline ( context ) else : await alembic_env_migration . run_migrations_online ( context ) execute_coroutine_with_sync_worker ( main ()) This migration environment class, MyCustomMigrationEnv , inherits from AlembicEnvMigrationBase and provides the necessary methods for offline and online migrations. It utilizes the EllarSQLService to obtain the database engines and metadata for the migration process. The main function initializes and executes the migration class, with specific handling for offline and online modes.","title":"Example"},{"location":"models/","text":"Quick Start \u00b6 In this segment, we will walk through the process of configuring EllarSQL within your Ellar application, ensuring that all essential services are registered, configurations are set, and everything is prepared for immediate use. 
Before we delve into the setup instructions, it is assumed that you possess a comprehensive understanding of how Ellar Modules operate. Installation \u00b6 Let us install all the required packages, assuming that your Python environment has been properly configured: For Existing Project: \u00b6 pip install ellar-sql For New Project : \u00b6 pip install ellar ellar-cli ellar-sql After a successful package installation, we need to scaffold a new project using ellar cli tool ellar new db-learning This will scaffold db-learning project with necessary file structure shown below. path/to/db-learning/ \u251c\u2500 db_learning/ \u2502 \u251c\u2500 apps/ \u2502 \u2502 \u251c\u2500 __init__.py \u2502 \u251c\u2500 core/ \u2502 \u251c\u2500 config.py \u2502 \u251c\u2500 domain \u2502 \u251c\u2500 root_module.py \u2502 \u251c\u2500 server.py \u2502 \u251c\u2500 __init__.py \u251c\u2500 tests/ \u2502 \u251c\u2500 __init__.py \u251c\u2500 pyproject.toml \u251c\u2500 README.md Next, in db_learning/ directory, we need to create a models.py . It will hold all our SQLAlchemy ORM Models for now. Creating a Model \u00b6 In models.py , we use ellar_sql.model.Model to create our SQLAlchemy ORM Models. db_learning/model.py from ellar_sql import model class User ( model . Model ): id : model . Mapped [ int ] = model . mapped_column ( model . Integer , primary_key = True ) username : model . Mapped [ str ] = model . mapped_column ( model . String , unique = True , nullable = False ) email : model . Mapped [ str ] = model . mapped_column ( model . String ) Info ellar_sql.model also exposes sqlalchemy , sqlalchemy.orm and sqlalchemy.event imports just for ease of import reference Create A UserController \u00b6 Let's create a controller that exposes our user data. db_learning/controller.py import ellar.common as ecm from ellar.pydantic import EmailStr from ellar_sql import model , get_or_404 from .models import User @ecm . Controller class UsersController ( ecm . ControllerBase ): @ecm . post ( \"/users\" ) def create_user ( self , username : ecm . Body [ str ], email : ecm . Body [ EmailStr ], session : ecm . Inject [ model . Session ]): user = User ( username = username , email = email ) session . add ( user ) session . commit () session . refresh ( user ) return user . dict () @ecm . get ( \"/users/{user_id:int}\" ) def user_by_id ( self , user_id : int ): user = get_or_404 ( User , user_id ) return user . dict () @ecm . get ( \"/\" ) async def user_list ( self , session : ecm . Inject [ model . Session ]): stmt = model . select ( User ) rows = session . execute ( stmt . offset ( 0 ) . limit ( 100 )) . scalars () return [ row . dict () for row in rows ] @ecm . get ( \"/{user_id:int}\" ) async def user_delete ( self , user_id : int , session : ecm . Inject [ model . Session ]): user = get_or_404 ( User , user_id ) session . delete ( user ) return { 'detail' : f 'User id= { user_id } Deleted successfully' } EllarSQLModule Setup \u00b6 In the root_module.py file, two main tasks need to be performed: Register the UsersController to make the /users endpoint available when starting the application. Configure the EllarSQLModule , which will set up and register essential services such as EllarSQLService , Session , and Engine . 
db_learning/root_module.py from ellar.common import Module , exception_handler , IExecutionContext , JSONResponse , Response , IApplicationStartup from ellar.app import App from ellar.core import ModuleBase from ellar_sql import EllarSQLModule , EllarSQLService from .controller import UsersController @Module ( modules = [ EllarSQLModule . setup ( databases = { 'default' : { 'url' : 'sqlite:///app.db' , 'echo' : True } }, migration_options = { 'directory' : 'migrations' } )], controllers = [ UsersController ] ) class ApplicationModule ( ModuleBase , IApplicationStartup ): async def on_startup ( self , app : App ) -> None : db_service = app . injector . get ( EllarSQLService ) db_service . create_all () @exception_handler ( 404 ) def exception_404_handler ( cls , ctx : IExecutionContext , exc : Exception ) -> Response : return JSONResponse ( dict ( detail = \"Resource not found.\" ), status_code = 404 ) In the provided code snippet: We registered UserController and EllarSQLModule with specific configurations for the database and migration options. For more details on EllarSQLModule configurations . In the on_startup method, we obtained the EllarSQLService from the Ellar Dependency Injection container using EllarSQLModule . Subsequently, we invoked the create_all() method to generate the necessary SQLAlchemy tables. With these configurations, the application is now ready for testing. ellar runserver --reload Additionally, please remember to uncomment the configurations for the OpenAPIModule in the server.py file to enable visualization and interaction with the /users endpoint. Once done, you can access the OpenAPI documentation at http://127.0.0.1:8000/docs . You can find the source code for this project here .","title":"Get Started"},{"location":"models/#quick-start","text":"In this segment, we will walk through the process of configuring EllarSQL within your Ellar application, ensuring that all essential services are registered, configurations are set, and everything is prepared for immediate use. Before we delve into the setup instructions, it is assumed that you possess a comprehensive understanding of how Ellar Modules operate.","title":"Quick Start"},{"location":"models/#installation","text":"Let us install all the required packages, assuming that your Python environment has been properly configured:","title":"Installation"},{"location":"models/#for-existing-project","text":"pip install ellar-sql","title":"For Existing Project:"},{"location":"models/#for-new-project","text":"pip install ellar ellar-cli ellar-sql After a successful package installation, we need to scaffold a new project using ellar cli tool ellar new db-learning This will scaffold db-learning project with necessary file structure shown below. path/to/db-learning/ \u251c\u2500 db_learning/ \u2502 \u251c\u2500 apps/ \u2502 \u2502 \u251c\u2500 __init__.py \u2502 \u251c\u2500 core/ \u2502 \u251c\u2500 config.py \u2502 \u251c\u2500 domain \u2502 \u251c\u2500 root_module.py \u2502 \u251c\u2500 server.py \u2502 \u251c\u2500 __init__.py \u251c\u2500 tests/ \u2502 \u251c\u2500 __init__.py \u251c\u2500 pyproject.toml \u251c\u2500 README.md Next, in db_learning/ directory, we need to create a models.py . It will hold all our SQLAlchemy ORM Models for now.","title":"For New Project:"},{"location":"models/#creating-a-model","text":"In models.py , we use ellar_sql.model.Model to create our SQLAlchemy ORM Models. db_learning/model.py from ellar_sql import model class User ( model . Model ): id : model . Mapped [ int ] = model . 
mapped_column ( model . Integer , primary_key = True ) username : model . Mapped [ str ] = model . mapped_column ( model . String , unique = True , nullable = False ) email : model . Mapped [ str ] = model . mapped_column ( model . String ) Info ellar_sql.model also exposes sqlalchemy , sqlalchemy.orm and sqlalchemy.event imports just for ease of import reference","title":"Creating a Model"},{"location":"models/#create-a-usercontroller","text":"Let's create a controller that exposes our user data. db_learning/controller.py import ellar.common as ecm from ellar.pydantic import EmailStr from ellar_sql import model , get_or_404 from .models import User @ecm . Controller class UsersController ( ecm . ControllerBase ): @ecm . post ( \"/users\" ) def create_user ( self , username : ecm . Body [ str ], email : ecm . Body [ EmailStr ], session : ecm . Inject [ model . Session ]): user = User ( username = username , email = email ) session . add ( user ) session . commit () session . refresh ( user ) return user . dict () @ecm . get ( \"/users/{user_id:int}\" ) def user_by_id ( self , user_id : int ): user = get_or_404 ( User , user_id ) return user . dict () @ecm . get ( \"/\" ) async def user_list ( self , session : ecm . Inject [ model . Session ]): stmt = model . select ( User ) rows = session . execute ( stmt . offset ( 0 ) . limit ( 100 )) . scalars () return [ row . dict () for row in rows ] @ecm . get ( \"/{user_id:int}\" ) async def user_delete ( self , user_id : int , session : ecm . Inject [ model . Session ]): user = get_or_404 ( User , user_id ) session . delete ( user ) return { 'detail' : f 'User id= { user_id } Deleted successfully' }","title":"Create A UserController"},{"location":"models/#ellarsqlmodule-setup","text":"In the root_module.py file, two main tasks need to be performed: Register the UsersController to make the /users endpoint available when starting the application. Configure the EllarSQLModule , which will set up and register essential services such as EllarSQLService , Session , and Engine . db_learning/root_module.py from ellar.common import Module , exception_handler , IExecutionContext , JSONResponse , Response , IApplicationStartup from ellar.app import App from ellar.core import ModuleBase from ellar_sql import EllarSQLModule , EllarSQLService from .controller import UsersController @Module ( modules = [ EllarSQLModule . setup ( databases = { 'default' : { 'url' : 'sqlite:///app.db' , 'echo' : True } }, migration_options = { 'directory' : 'migrations' } )], controllers = [ UsersController ] ) class ApplicationModule ( ModuleBase , IApplicationStartup ): async def on_startup ( self , app : App ) -> None : db_service = app . injector . get ( EllarSQLService ) db_service . create_all () @exception_handler ( 404 ) def exception_404_handler ( cls , ctx : IExecutionContext , exc : Exception ) -> Response : return JSONResponse ( dict ( detail = \"Resource not found.\" ), status_code = 404 ) In the provided code snippet: We registered UserController and EllarSQLModule with specific configurations for the database and migration options. For more details on EllarSQLModule configurations . In the on_startup method, we obtained the EllarSQLService from the Ellar Dependency Injection container using EllarSQLModule . Subsequently, we invoked the create_all() method to generate the necessary SQLAlchemy tables. With these configurations, the application is now ready for testing. 
ellar runserver --reload Additionally, please remember to uncomment the configurations for the OpenAPIModule in the server.py file to enable visualization and interaction with the /users endpoint. Once done, you can access the OpenAPI documentation at http://127.0.0.1:8000/docs . You can find the source code for this project here .","title":"EllarSQLModule Setup"},{"location":"models/configuration/","text":"EllarSQLModule Config \u00b6 EllarSQLModule is an Ellar Dynamic Module that offers two ways of configuration: EllarSQLModule.register_setup() : This method registers a ModuleSetup that depends on the application config. EllarSQLModule.setup() : This method immediately sets up the module with the provided options. While we've explored many examples using EllarSQLModule.setup() , this section will focus on the usage of EllarSQLModule.register_setup() . Before delving into that, let's first explore the setup options available for EllarSQLModule . EllarSQLModule Configuration Parameters \u00b6 databases : typing.Union[str, typing.Dict[str, typing.Any]] : This field describes the options for your database engine, utilized by SQLAlchemy Engine , Metadata , and Sessions . There are three methods for setting these options, as illustrated below: ## CASE 1 databases = \"sqlite//:memory:\" # This will result to # databases = { # 'default': { # 'url': 'sqlite//:memory:' # } # } ## CASE 2 databases = { 'default' : \"sqlite//:memory:\" , 'db2' : \"sqlite//:memory:\" , } # This will result to # databases = { # 'default': { # 'url': 'sqlite//:memory:' # }, # 'db2': { # 'url': 'sqlite//:memory:' # }, # } ## CASE 3 - With Extra Engine Options databases = { 'default' : { \"url\" : \"sqlite//:memory:\" , \"echo\" : True , \"connect_args\" : { \"check_same_thread\" : False } } } migration_options : typing.Union[typing.Dict[str, typing.Any], MigrationOption] : The migration options can be specified either in a dictionary object or as a MigrationOption schema instance. These configurations are essential for defining the necessary settings for database migrations. The available options include: directory = migrations :directory to save alembic migration templates/env and migration versions use_two_phase = True : bool value that indicates use of two in migration SQLAlchemy session context_configure = {compare_type: True, render_as_batch: True, include_object: callable} : key-value pair that will be passed to EnvironmentContext.configure . Default context_configure set by EllarSQL: compare_type=True : This option configures the automatic migration generation subsystem to detect column type changes. render_as_batch=True : This option generates migration scripts using batch mode , an operational mode that works around limitations of many ALTER commands in the SQLite database by implementing a \u201cmove and copy\u201d workflow. include_object : Skips model from auto gen when it's defined in table args eg: __table_args__ = {\"info\": {\"skip_autogen\": True}} session_options : t.Optional[t.Dict[str, t.Any]] : A default key-value pair pass to SQLAlchemy.Session() when creating a session. engine_options : t.Optional[t.Dict[str, t.Any]] : A default key-value pair to pass to every database configuration engine configuration for SQLAlchemy.create_engine() . This overriden by configurations provided in databases parameters models : t.Optional[t.List[str]] : list of python modules that defines model.Model models. 
By providing this, EllarSQL ensures models are discovered before Alembic CLI migration actions or any other database interactions with SQLAlchemy. echo : bool : The default value for echo and echo_pool for every engine. This is useful to quickly debug the connections and queries issued from SQLAlchemy. root_path : t.Optional[str] : The root_path for sqlite databases and migration base directory. Defaults to the execution path of EllarSQLModule Connection URL Format \u00b6 Refer to SQLAlchemy\u2019s documentation on Engine Configuration for a comprehensive overview of syntax, dialects, and available options. The standard format for a basic database connection URL is as follows: Username, password, host, and port are optional parameters based on the database type and specific configuration. dialect://username:password@host:port/database Here are some example connection strings: # SQLite, relative to Flask instance path sqlite:///project.db # PostgreSQL postgresql://scott:tiger@localhost/project # MySQL / MariaDB mysql://scott:tiger@localhost/project Default Driver Options \u00b6 To enhance usability for web applications, default options have been configured for SQLite and MySQL engines. For SQLite, relative file paths are now relative to the root_path option rather than the current working directory. Additionally, in-memory databases utilize a static pool and check_same_thread to ensure seamless operation across multiple requests. For MySQL (and MariaDB ) servers, a default idle connection timeout of 8 hours has been set. This configuration helps avoid errors, such as 2013: Lost connection to MySQL server during query. To preemptively recreate connections before hitting this timeout, a default pool_recycle value of 2 hours ( 7200 seconds) is applied. Timeout \u00b6 Certain databases, including MySQL and MariaDB , might be set to close inactive connections after a certain duration, which can lead to errors like 2013: Lost connection to MySQL server during query. While this behavior is configured by default in MySQL and MariaDB, it could also be implemented by other database services. If you encounter such errors, consider adjusting the pool_recycle option in the engine settings to a value less than the database's timeout. Alternatively, you can explore setting pool_pre_ping if you anticipate frequent closure of connections, especially in scenarios like running the database in a container that may undergo periodic restarts. For more in-depth information on dealing with disconnects , refer to SQLAlchemy's documentation on handling connection issues. EllarSQLModule RegisterSetup \u00b6 As mentioned earlier, EllarSQLModule can be configured from the application through EllarSQLModule.register_setup . This process registers a ModuleSetup factory that depends on the Application Config object. The factory retrieves the ELLAR_SQL attribute from the config and validates the data before passing it to EllarSQLModule for setup. It's essential to note that ELLAR_SQL will be a dictionary object with the configuration parameters mentioned above as keys. Here's a quick example: db_learning/root_module.py from ellar.common import Module , exception_handler , IExecutionContext , JSONResponse , Response , IApplicationStartup from ellar.app import App from ellar.core import ModuleBase from ellar_sql import EllarSQLModule , EllarSQLService from .controller import UsersController @Module ( modules = [ EllarSQLModule . 
register_setup ()], controllers = [ UsersController ] ) class ApplicationModule ( ModuleBase , IApplicationStartup ): async def on_startup ( self , app : App ) -> None : db_service = app . injector . get ( EllarSQLService ) db_service . create_all () @exception_handler ( 404 ) def exception_404_handler ( cls , ctx : IExecutionContext , exc : Exception ) -> Response : return JSONResponse ( dict ( detail = \"Resource not found.\" ), status_code = 404 ) Let's update config.py . import typing as t ... class DevelopmentConfig ( BaseConfig ): DEBUG : bool = True ELLAR_SQL : t . Dict [ str , t . Any ] = { 'databases' : { 'default' : 'sqlite:///app.db' , }, 'echo' : True , 'migration_options' : { 'directory' : 'migrations' # root directory will be determined based on where the module is instantiated. }, 'models' : [] } The registered ModuleSetup factory reads the ELLAR_SQL value and configures the EllarSQLModule appropriately. This approach is particularly useful when dealing with multiple environments. It allows for seamless modification of the ELLAR_SQL values in various environments such as Continuous Integration (CI), Development, Staging, or Production. You can easily change the settings for each environment and export the configurations as a string to be imported into ELLAR_CONFIG_MODULE .","title":"Configuration"},{"location":"models/configuration/#ellarsqlmodule-config","text":"EllarSQLModule is an Ellar Dynamic Module that offers two ways of configuration: EllarSQLModule.register_setup() : This method registers a ModuleSetup that depends on the application config. EllarSQLModule.setup() : This method immediately sets up the module with the provided options. While we've explored many examples using EllarSQLModule.setup() , this section will focus on the usage of EllarSQLModule.register_setup() . Before delving into that, let's first explore the setup options available for EllarSQLModule .","title":"EllarSQLModule Config"},{"location":"models/configuration/#ellarsqlmodule-configuration-parameters","text":"databases : typing.Union[str, typing.Dict[str, typing.Any]] : This field describes the options for your database engine, utilized by SQLAlchemy Engine , Metadata , and Sessions . There are three methods for setting these options, as illustrated below: ## CASE 1 databases = \"sqlite//:memory:\" # This will result to # databases = { # 'default': { # 'url': 'sqlite//:memory:' # } # } ## CASE 2 databases = { 'default' : \"sqlite//:memory:\" , 'db2' : \"sqlite//:memory:\" , } # This will result to # databases = { # 'default': { # 'url': 'sqlite//:memory:' # }, # 'db2': { # 'url': 'sqlite//:memory:' # }, # } ## CASE 3 - With Extra Engine Options databases = { 'default' : { \"url\" : \"sqlite//:memory:\" , \"echo\" : True , \"connect_args\" : { \"check_same_thread\" : False } } } migration_options : typing.Union[typing.Dict[str, typing.Any], MigrationOption] : The migration options can be specified either in a dictionary object or as a MigrationOption schema instance. These configurations are essential for defining the necessary settings for database migrations. The available options include: directory = migrations :directory to save alembic migration templates/env and migration versions use_two_phase = True : bool value that indicates use of two in migration SQLAlchemy session context_configure = {compare_type: True, render_as_batch: True, include_object: callable} : key-value pair that will be passed to EnvironmentContext.configure . 
Default context_configure set by EllarSQL: compare_type=True : This option configures the automatic migration generation subsystem to detect column type changes. render_as_batch=True : This option generates migration scripts using batch mode , an operational mode that works around limitations of many ALTER commands in the SQLite database by implementing a \u201cmove and copy\u201d workflow. include_object : Skips model from auto gen when it's defined in table args eg: __table_args__ = {\"info\": {\"skip_autogen\": True}} session_options : t.Optional[t.Dict[str, t.Any]] : A default key-value pair pass to SQLAlchemy.Session() when creating a session. engine_options : t.Optional[t.Dict[str, t.Any]] : A default key-value pair to pass to every database configuration engine configuration for SQLAlchemy.create_engine() . This overriden by configurations provided in databases parameters models : t.Optional[t.List[str]] : list of python modules that defines model.Model models. By providing this, EllarSQL ensures models are discovered before Alembic CLI migration actions or any other database interactions with SQLAlchemy. echo : bool : The default value for echo and echo_pool for every engine. This is useful to quickly debug the connections and queries issued from SQLAlchemy. root_path : t.Optional[str] : The root_path for sqlite databases and migration base directory. Defaults to the execution path of EllarSQLModule","title":"EllarSQLModule Configuration Parameters"},{"location":"models/configuration/#connection-url-format","text":"Refer to SQLAlchemy\u2019s documentation on Engine Configuration for a comprehensive overview of syntax, dialects, and available options. The standard format for a basic database connection URL is as follows: Username, password, host, and port are optional parameters based on the database type and specific configuration. dialect://username:password@host:port/database Here are some example connection strings: # SQLite, relative to Flask instance path sqlite:///project.db # PostgreSQL postgresql://scott:tiger@localhost/project # MySQL / MariaDB mysql://scott:tiger@localhost/project","title":"Connection URL Format"},{"location":"models/configuration/#default-driver-options","text":"To enhance usability for web applications, default options have been configured for SQLite and MySQL engines. For SQLite, relative file paths are now relative to the root_path option rather than the current working directory. Additionally, in-memory databases utilize a static pool and check_same_thread to ensure seamless operation across multiple requests. For MySQL (and MariaDB ) servers, a default idle connection timeout of 8 hours has been set. This configuration helps avoid errors, such as 2013: Lost connection to MySQL server during query. To preemptively recreate connections before hitting this timeout, a default pool_recycle value of 2 hours ( 7200 seconds) is applied.","title":"Default Driver Options"},{"location":"models/configuration/#timeout","text":"Certain databases, including MySQL and MariaDB , might be set to close inactive connections after a certain duration, which can lead to errors like 2013: Lost connection to MySQL server during query. While this behavior is configured by default in MySQL and MariaDB, it could also be implemented by other database services. If you encounter such errors, consider adjusting the pool_recycle option in the engine settings to a value less than the database's timeout. 
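As a purely illustrative sketch (the MySQL URL and the numeric values below are placeholder assumptions, not recommendations), such engine options can be supplied per database in the EllarSQLModule configuration, in the same style as CASE 3 above; the pool_pre_ping option discussed next can be set the same way:
from ellar_sql import EllarSQLModule

# Hypothetical values: keep pool_recycle below your server's idle timeout.
EllarSQLModule.setup(
    databases={
        'default': {
            'url': 'mysql://scott:tiger@localhost/project',  # placeholder connection URL
            'pool_recycle': 7200,   # recycle connections every 2 hours
            'pool_pre_ping': True,  # test connections before use if disconnects are frequent
        }
    }
)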
Alternatively, you can explore setting pool_pre_ping if you anticipate frequent closure of connections, especially in scenarios like running the database in a container that may undergo periodic restarts. For more in-depth information on dealing with disconnects , refer to SQLAlchemy's documentation on handling connection issues.","title":"Timeout"},{"location":"models/configuration/#ellarsqlmodule-registersetup","text":"As mentioned earlier, EllarSQLModule can be configured from the application through EllarSQLModule.register_setup . This process registers a ModuleSetup factory that depends on the Application Config object. The factory retrieves the ELLAR_SQL attribute from the config and validates the data before passing it to EllarSQLModule for setup. It's essential to note that ELLAR_SQL will be a dictionary object with the configuration parameters mentioned above as keys. Here's a quick example: db_learning/root_module.py from ellar.common import Module , exception_handler , IExecutionContext , JSONResponse , Response , IApplicationStartup from ellar.app import App from ellar.core import ModuleBase from ellar_sql import EllarSQLModule , EllarSQLService from .controller import UsersController @Module ( modules = [ EllarSQLModule . register_setup ()], controllers = [ UsersController ] ) class ApplicationModule ( ModuleBase , IApplicationStartup ): async def on_startup ( self , app : App ) -> None : db_service = app . injector . get ( EllarSQLService ) db_service . create_all () @exception_handler ( 404 ) def exception_404_handler ( cls , ctx : IExecutionContext , exc : Exception ) -> Response : return JSONResponse ( dict ( detail = \"Resource not found.\" ), status_code = 404 ) Let's update config.py . import typing as t ... class DevelopmentConfig ( BaseConfig ): DEBUG : bool = True ELLAR_SQL : t . Dict [ str , t . Any ] = { 'databases' : { 'default' : 'sqlite:///app.db' , }, 'echo' : True , 'migration_options' : { 'directory' : 'migrations' # root directory will be determined based on where the module is instantiated. }, 'models' : [] } The registered ModuleSetup factory reads the ELLAR_SQL value and configures the EllarSQLModule appropriately. This approach is particularly useful when dealing with multiple environments. It allows for seamless modification of the ELLAR_SQL values in various environments such as Continuous Integration (CI), Development, Staging, or Production. You can easily change the settings for each environment and export the configurations as a string to be imported into ELLAR_CONFIG_MODULE .","title":"EllarSQLModule RegisterSetup"},{"location":"models/extra-fields/","text":"Extra Column Types \u00b6 EllarSQL comes with extra column type descriptor that will come in handy in your project. They include GUID IPAddress GUID Column \u00b6 GUID, Global Unique Identifier of 128-bit text string can be used as a unique identifier in a table. For applications that require a GUID type of primary, this can be a use resource. It uses UUID type in Postgres and CHAR(32) in other SQL databases. import uuid from ellar_sql import model class Guid ( model . Model ): id : model . Mapped [ uuid . uuid4 ] = model . mapped_column ( \"id\" , model . GUID (), nullable = False , unique = True , primary_key = True , default = uuid . uuid4 , ) IPAddress Column \u00b6 GenericIP column type validates and converts column value to ipaddress.IPv4Address or ipaddress.IPv6Address . It uses INET type in Postgres and CHAR(45) in other SQL databases. 
import typing as t import ipaddress from ellar_sql import model class IPAddress ( model . Model ): id = model . Column ( model . Integer , primary_key = True ) ip : model . Mapped [ t . Union [ ipaddress . IPv4Address , ipaddress . IPv6Address ]] = model . Column ( model . GenericIP )","title":"Extra Fields"},{"location":"models/extra-fields/#extra-column-types","text":"EllarSQL comes with extra column type descriptor that will come in handy in your project. They include GUID IPAddress","title":"Extra Column Types"},{"location":"models/extra-fields/#guid-column","text":"GUID, Global Unique Identifier of 128-bit text string can be used as a unique identifier in a table. For applications that require a GUID type of primary, this can be a use resource. It uses UUID type in Postgres and CHAR(32) in other SQL databases. import uuid from ellar_sql import model class Guid ( model . Model ): id : model . Mapped [ uuid . uuid4 ] = model . mapped_column ( \"id\" , model . GUID (), nullable = False , unique = True , primary_key = True , default = uuid . uuid4 , )","title":"GUID Column"},{"location":"models/extra-fields/#ipaddress-column","text":"GenericIP column type validates and converts column value to ipaddress.IPv4Address or ipaddress.IPv6Address . It uses INET type in Postgres and CHAR(45) in other SQL databases. import typing as t import ipaddress from ellar_sql import model class IPAddress ( model . Model ): id = model . Column ( model . Integer , primary_key = True ) ip : model . Mapped [ t . Union [ ipaddress . IPv4Address , ipaddress . IPv6Address ]] = model . Column ( model . GenericIP )","title":"IPAddress Column"},{"location":"models/models/","text":"Models and Tables \u00b6 The ellar_sql.model.Model class acts as a factory for creating SQLAlchemy models, and associating the generated models with the corresponding Metadata through their designated __database__ key. This class can be configured through the __base_config__ attribute, allowing you to specify how your SQLAlchemy model should be created. The __base_config__ attribute can be of type ModelBaseConfig , which is a dataclass, or a dictionary with keys that match the attributes of ModelBaseConfig . Attributes of ModelBaseConfig : as_base : Indicates whether the class should be treated as a Base class for other model definitions, similar to creating a Base from a DeclarativeBase or DeclarativeBaseNoMeta class. (Default: False) use_base : Specifies the base classes that will be used to create the SQLAlchemy model. (Default: []) Creating a Base Class \u00b6 Model treats each model as a standalone entity. Each instance of model.Model creates a distinct declarative base for itself, using the __database__ key as a reference to determine its associated Metadata . Consequently, models sharing the same __database__ key will utilize the same Metadata object. Let's explore how we can create a Base model using Model , similar to the approach in traditional SQLAlchemy . from ellar_sql import model , ModelBaseConfig class Base ( model . Model ): __base_config__ = ModelBaseConfig ( as_base = True , use_bases = [ model . DeclarativeBase ]) assert issubclass ( Base , model . DeclarativeBase ) If you are interested in SQLAlchemy\u2019s native support for data classes , then you can add MappedAsDataclass to use_bases as shown below: from ellar_sql import model , ModelBaseConfig class Base ( model . Model ): __base_config__ = ModelBaseConfig ( as_base = True , use_bases = [ model . DeclarativeBase , model . 
MappedAsDataclass ]) assert issubclass ( Base , model . MappedAsDataclass ) In the examples above, Base classes are created, all subclassed from the use_bases provided, and with the as_base option, the factory creates the Base class as a Base . Create base with MetaData \u00b6 You can also configure the SQLAlchemy object with a custom MetaData object. For instance, you can define a specific naming convention for constraints, ensuring consistency and predictability in constraint names. This can be particularly beneficial during migrations, as detailed by Alembic . For example: from ellar_sql import model , ModelBaseConfig class Base ( model . Model ): __base_config__ = ModelBaseConfig ( as_base = True , use_bases = [ model . DeclarativeBase ]) metadata = model . MetaData ( naming_convention = { \"ix\" : 'ix_ %(column_0_label)s ' , \"uq\" : \"uq_ %(table_name)s _ %(column_0_name)s \" , \"ck\" : \"ck_ %(table_name)s _ %(constraint_name)s \" , \"fk\" : \"fk_ %(table_name)s _ %(column_0_name)s _ %(referred_table_name)s \" , \"pk\" : \"pk_ %(table_name)s \" }) Abstract Models and Mixins \u00b6 If the desired behavior is only applicable to specific models rather than all models, you can use an abstract model base class to customize only those models. For example, if certain models need to track their creation or update timestamps , t his approach allows for targeted customization. from datetime import datetime , timezone from ellar_sql import model from sqlalchemy.orm import Mapped , mapped_column class TimestampModel ( model . Model ): __abstract__ = True created : Mapped [ datetime ] = mapped_column ( default = lambda : datetime . now ( timezone . utc )) updated : Mapped [ datetime ] = mapped_column ( default = lambda : datetime . now ( timezone . utc ), onupdate = lambda : datetime . now ( timezone . utc )) class BookAuthor ( model . Model ): id : Mapped [ int ] = mapped_column ( primary_key = True ) name : Mapped [ str ] = mapped_column ( unique = True ) class Book ( TimestampModel ): id : Mapped [ int ] = mapped_column ( primary_key = True ) title : Mapped [ str ] This can also be done with a mixin class, inherited separately. from datetime import datetime , timezone from ellar_sql import model from sqlalchemy.orm import Mapped , mapped_column class TimestampModel : created : Mapped [ datetime ] = mapped_column ( default = lambda : datetime . now ( timezone . utc )) updated : Mapped [ datetime ] = mapped_column ( default = lambda : datetime . now ( timezone . utc ), onupdate = lambda : datetime . now ( timezone . utc )) class Book ( model . Model , TimestampModel ): id : Mapped [ int ] = mapped_column ( primary_key = True ) title : Mapped [ str ] Defining Models \u00b6 Unlike plain SQLAlchemy, EllarSQL models will automatically generate a table name if the __tablename__ attribute is not set, provided a primary key column is defined. from ellar_sql import model class User ( model . Model ): id : model . Mapped [ int ] = model . mapped_column ( primary_key = True ) username : model . Mapped [ str ] = model . mapped_column ( unique = True ) email : model . Mapped [ str ] class UserAddress ( model . Model ): __tablename__ = 'user-address' id : model . Mapped [ int ] = model . mapped_column ( primary_key = True ) address : model . Mapped [ str ] = model . mapped_column ( unique = True ) assert User . __tablename__ == 'user' assert UserAddress . __tablename__ == 'user-address' For a comprehensive guide on defining model classes declaratively, refer to SQLAlchemy\u2019s declarative documentation . 
This resource provides detailed information and insights into the declarative approach for defining model classes. Defining Tables \u00b6 The Table class is designed to receive a table name, followed by columns and other table components such as constraints. EllarSQL enhances the functionality of the SQLAlchemy Table by facilitating the selection of Metadata based on the __database__ argument. Directly creating a table proves particularly valuable when establishing many-to-many relationships. In such cases, the association table doesn't need its own dedicated model class; rather, it can be conveniently accessed through the relevant relationship attributes on the associated models. from ellar_sql import model author_book_m2m = model . Table ( \"author_book\" , model . Column ( \"book_author_id\" , model . ForeignKey ( BookAuthor . id ), primary_key = True ), model . Column ( \"book_id\" , model . ForeignKey ( Book . id ), primary_key = True ), ) Quick Tutorial \u00b6 In this section, we'll delve into straightforward CRUD operations using the ORM objects. However, if you're not well-acquainted with SQLAlchemy, feel free to explore their tutorial on ORM for a more comprehensive understanding. Having understood Model usage, let's create a User model from ellar_sql import model class User ( model . Model ): id : model . Mapped [ int ] = model . mapped_column ( primary_key = True ) username : model . Mapped [ str ] = model . mapped_column ( unique = True ) full_name : model . Mapped [ str ] = model . mapped_column ( model . String ) We have created a User model, but its table does not exist yet. Let's fix that from ellar.app import current_injector from ellar_sql import EllarSQLService db_service = current_injector . get ( EllarSQLService ) db_service . create_all () Insert \u00b6 To insert data, you need a session import ellar.common as ecm from .model import User @ecm . post ( '/create' ) def create_user (): session = User . get_db_session () squidward = User ( username = \"squidward\" , full_name = \"Squidward Tentacles\" ) session . add ( squidward ) session . commit () return squidward . dict ( exclude = { 'id' }) In the above illustration, the squidward data was converted to a dictionary object by calling .dict() and excluding the id as shown above. It's important to note that this functionality has not been extended to relationship objects on an SQLAlchemy ORM object . Update \u00b6 To update, make changes to the ORM object and commit. import ellar.common as ecm from .model import User @ecm . put ( '/update' ) def update_user (): session = User . get_db_session () squidward = session . get ( User , 1 ) squidward . full_name = 'EllarSQL' session . commit () return squidward . dict () Delete \u00b6 To delete, pass the ORM object to session.delete() . import ellar.common as ecm from .model import User @ecm . delete ( '/delete' ) def delete_user (): session = User . get_db_session () squidward = session . get ( User , 1 ) session . delete ( squidward ) session . commit () return '' After modifying data, you must call session.commit() to commit the changes to the database. Otherwise, changes may not be persisted to the database. View Utilities \u00b6 EllarSQL provides some utility query functions that check for missing entities and raise a 404 Not Found error when nothing is found. get_or_404 : It will raise a 404 error if the row with the given id does not exist; otherwise, it will return the corresponding instance.
first_or_404 : It will raise a 404 error if the query does not return any results; otherwise, it will return the first result. one_or_404 (): It will raise a 404 error if the query does not return exactly one result; otherwise, it will return the result. import ellar.common as ecm from ellar_sql import get_or_404 , one_or_404 , model @ecm . get ( \"/user-by-id/{user_id:int}\" ) def user_by_id ( user_id : int ): user = get_or_404 ( User , user_id ) return user . dict () @ecm . get ( \"/user-by-name/{name:str}\" ) def user_by_username ( name : str ): user = one_or_404 ( model . select ( User ) . filter_by ( name = name ), error_message = f \"No user named ' { name } '.\" ) return user . dict () Accessing Metadata and Engines \u00b6 In the process of EllarSQLModule setup, three services are registered to the Ellar IoC container. EllarSQLService : Which manages models, metadata, engines and sessions Engine : SQLAlchemy Engine of the default database configuration Session SQLAlchemy Session of the default database configuration Although with EllarSQLService you can get the engine and session . It's there for easy of access. import sqlalchemy as sa import sqlalchemy.orm as sa_orm from ellar.app import current_injector from ellar_sql import EllarSQLService db_service = current_injector . get ( EllarSQLService ) assert isinstance ( db_service . engine , sa . Engine ) assert isinstance ( db_service . session_factory (), sa_orm . Session ) Important Constraints \u00b6 EllarSQLModule databases options for SQLAlchemy.ext.asyncio.AsyncEngine will register SQLAlchemy.ext.asyncio.AsyncEngine and SQLAlchemy.ext.asyncio.AsyncSession EllarSQLModule databases options for SQLAlchemy.Engine will register SQLAlchemy.Engine and SQLAlchemy.orm.Session . EllarSQL.get_all_metadata() retrieves all configured metadatas EllarSQL.get_metadata() retrieves metadata by __database__ key or default is no parameter is passed. import sqlalchemy as sa import sqlalchemy.orm as sa_orm from ellar.app import current_injector # get engine from DI default_engine = current_injector . get ( sa . Engine ) # get session from DI session = current_injector . get ( sa_orm . Session ) assert isinstance ( default_engine , sa . Engine ) assert isinstance ( session , sa_orm . Session ) For Async Database options from sqlalchemy.ext.asyncio import AsyncSession , AsyncEngine from ellar.app import current_injector # get engine from DI default_engine = current_injector . get ( AsyncEngine ) # get session from DI session = current_injector . get ( AsyncSession ) assert isinstance ( default_engine , AsyncEngine ) assert isinstance ( session , AsyncSession )","title":"index"},{"location":"models/models/#models-and-tables","text":"The ellar_sql.model.Model class acts as a factory for creating SQLAlchemy models, and associating the generated models with the corresponding Metadata through their designated __database__ key. This class can be configured through the __base_config__ attribute, allowing you to specify how your SQLAlchemy model should be created. The __base_config__ attribute can be of type ModelBaseConfig , which is a dataclass, or a dictionary with keys that match the attributes of ModelBaseConfig . Attributes of ModelBaseConfig : as_base : Indicates whether the class should be treated as a Base class for other model definitions, similar to creating a Base from a DeclarativeBase or DeclarativeBaseNoMeta class. (Default: False) use_base : Specifies the base classes that will be used to create the SQLAlchemy model. 
(Default: [])","title":"Models and Tables"},{"location":"models/models/#creating-a-base-class","text":"Model treats each model as a standalone entity. Each instance of model.Model creates a distinct declarative base for itself, using the __database__ key as a reference to determine its associated Metadata . Consequently, models sharing the same __database__ key will utilize the same Metadata object. Let's explore how we can create a Base model using Model , similar to the approach in traditional SQLAlchemy . from ellar_sql import model , ModelBaseConfig class Base ( model . Model ): __base_config__ = ModelBaseConfig ( as_base = True , use_bases = [ model . DeclarativeBase ]) assert issubclass ( Base , model . DeclarativeBase ) If you are interested in SQLAlchemy\u2019s native support for data classes , then you can add MappedAsDataclass to use_bases as shown below: from ellar_sql import model , ModelBaseConfig class Base ( model . Model ): __base_config__ = ModelBaseConfig ( as_base = True , use_bases = [ model . DeclarativeBase , model . MappedAsDataclass ]) assert issubclass ( Base , model . MappedAsDataclass ) In the examples above, Base classes are created, all subclassed from the use_bases provided, and with the as_base option, the factory creates the Base class as a Base .","title":"Creating a Base Class"},{"location":"models/models/#create-base-with-metadata","text":"You can also configure the SQLAlchemy object with a custom MetaData object. For instance, you can define a specific naming convention for constraints, ensuring consistency and predictability in constraint names. This can be particularly beneficial during migrations, as detailed by Alembic . For example: from ellar_sql import model , ModelBaseConfig class Base ( model . Model ): __base_config__ = ModelBaseConfig ( as_base = True , use_bases = [ model . DeclarativeBase ]) metadata = model . MetaData ( naming_convention = { \"ix\" : 'ix_ %(column_0_label)s ' , \"uq\" : \"uq_ %(table_name)s _ %(column_0_name)s \" , \"ck\" : \"ck_ %(table_name)s _ %(constraint_name)s \" , \"fk\" : \"fk_ %(table_name)s _ %(column_0_name)s _ %(referred_table_name)s \" , \"pk\" : \"pk_ %(table_name)s \" })","title":"Create base with MetaData"},{"location":"models/models/#abstract-models-and-mixins","text":"If the desired behavior is only applicable to specific models rather than all models, you can use an abstract model base class to customize only those models. For example, if certain models need to track their creation or update timestamps , t his approach allows for targeted customization. from datetime import datetime , timezone from ellar_sql import model from sqlalchemy.orm import Mapped , mapped_column class TimestampModel ( model . Model ): __abstract__ = True created : Mapped [ datetime ] = mapped_column ( default = lambda : datetime . now ( timezone . utc )) updated : Mapped [ datetime ] = mapped_column ( default = lambda : datetime . now ( timezone . utc ), onupdate = lambda : datetime . now ( timezone . utc )) class BookAuthor ( model . Model ): id : Mapped [ int ] = mapped_column ( primary_key = True ) name : Mapped [ str ] = mapped_column ( unique = True ) class Book ( TimestampModel ): id : Mapped [ int ] = mapped_column ( primary_key = True ) title : Mapped [ str ] This can also be done with a mixin class, inherited separately. 
from datetime import datetime , timezone from ellar_sql import model from sqlalchemy.orm import Mapped , mapped_column class TimestampModel : created : Mapped [ datetime ] = mapped_column ( default = lambda : datetime . now ( timezone . utc )) updated : Mapped [ datetime ] = mapped_column ( default = lambda : datetime . now ( timezone . utc ), onupdate = lambda : datetime . now ( timezone . utc )) class Book ( model . Model , TimestampModel ): id : Mapped [ int ] = mapped_column ( primary_key = True ) title : Mapped [ str ]","title":"Abstract Models and Mixins"},{"location":"models/models/#defining-models","text":"Unlike plain SQLAlchemy, EllarSQL models will automatically generate a table name if the __tablename__ attribute is not set, provided a primary key column is defined. from ellar_sql import model class User ( model . Model ): id : model . Mapped [ int ] = model . mapped_column ( primary_key = True ) username : model . Mapped [ str ] = model . mapped_column ( unique = True ) email : model . Mapped [ str ] class UserAddress ( model . Model ): __tablename__ = 'user-address' id : model . Mapped [ int ] = model . mapped_column ( primary_key = True ) address : model . Mapped [ str ] = model . mapped_column ( unique = True ) assert User . __tablename__ == 'user' assert UserAddress . __tablename__ == 'user-address' For a comprehensive guide on defining model classes declaratively, refer to SQLAlchemy\u2019s declarative documentation . This resource provides detailed information and insights into the declarative approach for defining model classes.","title":"Defining Models"},{"location":"models/models/#defining-tables","text":"The table class is designed to receive a table name, followed by columns and other table components such as constraints. EllarSQL enhances the functionality of the SQLAlchemy Table by facilitating the selection of Metadata based on the __database__ argument. Directly creating a table proves particularly valuable when establishing many-to-many relationships. In such cases, the association table doesn't need its dedicated model class; rather, it can be conveniently accessed through the relevant relationship attributes on the associated models. from ellar_sql import model author_book_m2m = model . Table ( \"author_book\" , model . Column ( \"book_author_id\" , model . ForeignKey ( BookAuthor . id ), primary_key = True ), model . Column ( \"book_id\" , model . ForeignKey ( Book . id ), primary_key = True ), )","title":"Defining Tables"},{"location":"models/models/#quick-tutorial","text":"In this section, we'll delve into straightforward CRUD operations using the ORM objects. However, if you're not well-acquainted with SQLAlchemy, feel free to explore their tutorial on ORM for a more comprehensive understanding. Having understood, Model usage. Let's create a User model from ellar_sql import model class User ( model . Model ): id : model . Mapped [ int ] = model . mapped_column ( primary_key = True ) username : model . Mapped [ str ] = model . mapped_column ( unique = True ) full_name : model . Mapped [ str ] = model . mapped_column ( model . String ) We have created a User model but the data does not exist. Let's fix that from ellar.app import current_injector from ellar_sql import EllarSQLService db_service = current_injector . get ( EllarSQLService ) db_service . create_all ()","title":"Quick Tutorial"},{"location":"models/models/#insert","text":"To insert a data, you need a session import ellar.common as ecm from .model import User @ecm . 
post ( '/create' ) def create_user (): session = User . get_db_session () squidward = User ( name = \"squidward\" , fullname = \"Squidward Tentacles\" ) session . add ( squidward ) session . commit () return squidward . dict ( exclude = { 'id' }) In the above illustration, squidward data was converted to dictionary object by calling .dict() and excluding the id as shown below. It's important to note this functionality has not been extended to a relationship objects in an SQLAlchemy ORM object .","title":"Insert"},{"location":"models/models/#update","text":"To update, make changes to the ORM object and commit. import ellar.common as ecm from .model import User @ecm . put ( '/update' ) def update_user (): session = User . get_db_session () squidward = session . get ( User , 1 ) squidward . fullname = 'EllarSQL' session . commit () return squidward . dict ()","title":"Update"},{"location":"models/models/#delete","text":"To delete, pass the ORM object to session.delete() . import ellar.common as ecm from .model import User @ecm . delete ( '/delete' ) def delete_user (): session = User . get_db_session () squidward = session . get ( User , 1 ) session . delete ( squidward ) session . commit () return '' After modifying data, you must call session.commit() to commit the changes to the database. Otherwise, changes may not be persisted to the database.","title":"Delete"},{"location":"models/models/#view-utilities","text":"EllarSQL provides some utility query functions to check missing entities and raise 404 Not found if not found. get_or_404 : It will raise a 404 error if the row with the given id does not exist; otherwise, it will return the corresponding instance. first_or_404 : It will raise a 404 error if the query does not return any results; otherwise, it will return the first result. one_or_404 (): It will raise a 404 error if the query does not return exactly one result; otherwise, it will return the result. import ellar.common as ecm from ellar_sql import get_or_404 , one_or_404 , model @ecm . get ( \"/user-by-id/{user_id:int}\" ) def user_by_id ( user_id : int ): user = get_or_404 ( User , user_id ) return user . dict () @ecm . get ( \"/user-by-name/{name:str}\" ) def user_by_username ( name : str ): user = one_or_404 ( model . select ( User ) . filter_by ( name = name ), error_message = f \"No user named ' { name } '.\" ) return user . dict ()","title":"View Utilities"},{"location":"models/models/#accessing-metadata-and-engines","text":"In the process of EllarSQLModule setup, three services are registered to the Ellar IoC container. EllarSQLService : Which manages models, metadata, engines and sessions Engine : SQLAlchemy Engine of the default database configuration Session SQLAlchemy Session of the default database configuration Although with EllarSQLService you can get the engine and session . It's there for easy of access. import sqlalchemy as sa import sqlalchemy.orm as sa_orm from ellar.app import current_injector from ellar_sql import EllarSQLService db_service = current_injector . get ( EllarSQLService ) assert isinstance ( db_service . engine , sa . Engine ) assert isinstance ( db_service . session_factory (), sa_orm . 
Session )","title":"Accessing Metadata and Engines"},{"location":"models/models/#important-constraints","text":"EllarSQLModule databases options for SQLAlchemy.ext.asyncio.AsyncEngine will register SQLAlchemy.ext.asyncio.AsyncEngine and SQLAlchemy.ext.asyncio.AsyncSession EllarSQLModule databases options for SQLAlchemy.Engine will register SQLAlchemy.Engine and SQLAlchemy.orm.Session . EllarSQL.get_all_metadata() retrieves all configured Metadata objects EllarSQL.get_metadata() retrieves metadata by __database__ key, or the default if no parameter is passed. import sqlalchemy as sa import sqlalchemy.orm as sa_orm from ellar.app import current_injector # get engine from DI default_engine = current_injector . get ( sa . Engine ) # get session from DI session = current_injector . get ( sa_orm . Session ) assert isinstance ( default_engine , sa . Engine ) assert isinstance ( session , sa_orm . Session ) For Async Database options from sqlalchemy.ext.asyncio import AsyncSession , AsyncEngine from ellar.app import current_injector # get engine from DI default_engine = current_injector . get ( AsyncEngine ) # get session from DI session = current_injector . get ( AsyncSession ) assert isinstance ( default_engine , AsyncEngine ) assert isinstance ( session , AsyncSession )","title":"Important Constraints"},{"location":"multiple/","text":"Multiple Databases \u00b6 SQLAlchemy has the capability to establish connections with multiple databases simultaneously, referring to these connections as \"binds.\" EllarSQL simplifies the management of binds by associating each engine with a short string identifier, __database__ . Subsequently, each model and table is linked to a __database__ , and during a query, the session selects the appropriate engine based on the __database__ of the entity being queried. In the absence of a specified __database__ , the default engine is employed. Configuring Multiple Databases \u00b6 In EllarSQL, database configuration begins with the setup of the default database, followed by additional databases, as exemplified in the EllarSQLModule configurations: from ellar_sql import EllarSQLModule EllarSQLModule . setup ( databases = { \"default\" : \"postgresql:///main\" , \"meta\" : \"sqlite:////path/to/meta.db\" , \"auth\" : { \"url\" : \"mysql://localhost/users\" , \"pool_recycle\" : 3600 , }, }, migration_options = { 'directory' : 'migrations' } ) Defining Models and Tables with Different Databases \u00b6 EllarSQL creates Metadata and an Engine for each configured database. Models and tables associated with a specific __database__ key are registered with the corresponding Metadata . During a session query, the session employs the related Engine . To designate the database for a model, set the __database__ class attribute. Not specifying a __database__ key is equivalent to setting it to default : In Models \u00b6 from ellar_sql import model class User ( model . Model ): __database__ = \"auth\" id = model . Column ( model . Integer , primary_key = True ) Models inheriting from an already existing model will share the same database key unless it is overridden. Info It's important to note that model.Model has a __database__ value equal to default In Tables \u00b6 To specify the database for a table, utilize the __database__ keyword argument: from ellar_sql import model user_table = model . Table ( \"user\" , model . Column ( \"id\" , model .
Integer , primary_key = True ), __database__ = \"auth\" , ) Info Ultimately, the session references the database key associated with the metadata or table, an association established during creation. Consequently, changing the database key after creating a model or table has no effect . Creating and Dropping Tables \u00b6 The create_all() and drop_all() methods operating are all part of the EllarSQLService . It also requires the database argument to target a specific database. # Create tables for all binds from ellar.app import current_injector from ellar_sql import EllarSQLService db_service = current_injector . get ( EllarSQLService ) # Create tables for all configured databases db_service . create_all () # Create tables for the 'default' and \"auth\" databases db_service . create_all ( 'default' , \"auth\" ) # Create tables for the \"meta\" database db_service . create_all ( \"meta\" ) # Drop tables for the 'default' database db_service . drop_all ( 'default' )","title":"Multiple Database"},{"location":"multiple/#multiple-databases","text":"SQLAlchemy has the capability to establish connections with multiple databases simultaneously, referring to these connections as \"binds.\" EllarSQL simplifies the management of binds by associating each engine with a short string identifier, __database__ . Subsequently, each model and table is linked to a __database__ , and during a query, the session selects the appropriate engine based on the __database__ of the entity being queried. In the absence of a specified __database__ , the default engine is employed.","title":"Multiple Databases"},{"location":"multiple/#configuring-multiple-databases","text":"In EllarSQL, database configuration begins with the setup of the default database, followed by additional databases, as exemplified in the EllarSQLModule configurations: from ellar_sql import EllarSQLModule EllarSQLModule . setup ( databases = { \"default\" : \"postgresql:///main\" , \"meta\" : \"sqlite:////path/to/meta.db\" , \"auth\" : { \"url\" : \"mysql://localhost/users\" , \"pool_recycle\" : 3600 , }, }, migration_options = { 'directory' : 'migrations' } )","title":"Configuring Multiple Databases"},{"location":"multiple/#defining-models-and-tables-with-different-databases","text":"EllarSQL creates Metadata and an Engine for each configured database. Models and tables associated with a specific __database__ key are registered with the corresponding Metadata . During a session query, the session employs the related Engine . To designate the database for a model, set the __database__ class attribute. Not specifying a __database__ key is equivalent to setting it to default :","title":"Defining Models and Tables with Different Databases"},{"location":"multiple/#in-models","text":"from ellar_sql import model class User ( model . Model ): __database__ = \"auth\" id = model . Column ( model . Integer , primary_key = True ) Models inheriting from an already existing model will share the same database key unless they are overriden. Info Its importance to not that model.Model has __database__ value equals default","title":"In Models"},{"location":"multiple/#in-tables","text":"To specify the database for a table, utilize the __database__ keyword argument: from ellar_sql import model user_table = model . Table ( \"user\" , model . Column ( \"id\" , model . Integer , primary_key = True ), __database__ = \"auth\" , ) Info Ultimately, the session references the database key associated with the metadata or table, an association established during creation. 
Consequently, changing the database key after creating a model or table has no effect .","title":"In Tables"},{"location":"multiple/#creating-and-dropping-tables","text":"The create_all() and drop_all() methods operating are all part of the EllarSQLService . It also requires the database argument to target a specific database. # Create tables for all binds from ellar.app import current_injector from ellar_sql import EllarSQLService db_service = current_injector . get ( EllarSQLService ) # Create tables for all configured databases db_service . create_all () # Create tables for the 'default' and \"auth\" databases db_service . create_all ( 'default' , \"auth\" ) # Create tables for the \"meta\" database db_service . create_all ( \"meta\" ) # Drop tables for the 'default' database db_service . drop_all ( 'default' )","title":"Creating and Dropping Tables"},{"location":"pagination/","text":"Pagination \u00b6 Pagination is a common practice for large datasets, enhancing user experience by breaking content into manageable pages. It optimizes load times and navigation and allows users to explore extensive datasets with ease while maintaining system performance and responsiveness. EllarSQL offers two styles of pagination: PageNumberPagination : This pagination internally configures items per_page and max item size ( max_size ) and, allows users to set the page property. LimitOffsetPagination : This pagination internally configures max item size ( max_limit ) and, allows users to set the limit and offset properties. EllarSQL pagination is activated when a route function is decorated with paginate function. The result of the route function is expected to be a SQLAlchemy.sql.Select instance or a Model type. For example: import ellar.common as ec from ellar_sql import model , paginate from .models import User from .schemas import UserSchema @ec . get ( '/users' ) @paginate ( item_schema = UserSchema ) def list_users (): return model . select ( User ) paginate properties \u00b6 pagination_class : t.Optional[t.Type[PaginationBase]]=None : specifies pagination style to use. if not set, it will be set to PageNumberPagination model : t.Optional[t.Type[ModelBase]]=None : specifies a Model type to get list of data. If set, route function can return None or override by returning a select/filtered statement as_template_context : bool=False : indicates that the paginator object be added to template context. See Template Pagination item_schema : t.Optional[t.Type[BaseModel]]=None : This is required if template_context is False. It is used to serialize the SQLAlchemy model and create a response-schema/docs . paginator_options : t.Any : keyword argument for configuring pagination_class set to use for pagination. API Pagination \u00b6 API pagination simply means pagination in an API route function. This requires item_schema for the paginate decorator to create a 200 response documentation for the decorated route and for the paginated result to be serialized to json. import ellar.common as ec from ellar_sql import paginate from .models import User class UserSchema ( ec . Serializer ): id : int username : str email : str @ec . get ( '/users' ) @paginate ( item_schema = UserSchema , per_page = 100 ) def list_users (): return User We can also rewrite the illustration above since we are not making any modification to the User query. ... @ec . 
get ( '/users' ) @paginate ( model = User , item_schema = UserSchema ) def list_users (): pass Template Pagination \u00b6 This is for route functions decorated with render function that need to be paginated. For this to happen, paginate function need to return a context and this is achieved by setting as_template_context=True import ellar.common as ec from ellar_sql import model , paginate from .models import User @ec . get ( '/users' ) @ec . render ( 'list.html' ) @paginate ( as_template_context = True ) def list_users (): return model . select ( User ), { 'name' : 'Template Pagination' } # pagination model, template context In the illustration above, a tuple of select statement and a template context was returned. The template context will be updated with a paginator as an extra key by the paginate function before been processed by render function. We can re-write the example above to return just the template context since there is no form of filter directly affecting the User model query. ... @ec . get ( '/users' ) @ec . render ( 'list.html' ) @paginate ( model = model . select ( User ), as_template_context = True ) def list_users (): return { 'name' : 'Template Pagination' } Also, in the list.html we have the following codes: < html lang = \"en\" > < h3 > {{ name }} {% macro render_pagination(paginator, endpoint) %} < div > {{ paginator.first }} - {{ paginator.last }} of {{ paginator.total }} < div > {% for page in paginator.iter_pages() %} {% if page %} {% if page != paginator.page %} < a href = \"{{ url_for(endpoint) }}?page={{page}}\" > {{ page }} {% else %} < strong > {{ page }} {% endif %} {% else %} < span class = ellipsis > \u2026 {% endif %} {% endfor %} {% endmacro %} < ul > {% for user in paginator %} < li > {{ user.id }} @ {{ user.name }} {% endfor %} {{render_pagination(paginator=paginator, endpoint=\"list_users\") }} The paginator object in the template context has a iter_pages() method which produces up to three group of numbers, seperated by None . It defaults to showing 2 page numbers at either edge, 2 numbers before the current, the current, and 4 numbers after the current. For example, if there are 20 pages and the current page is 7, the following values are yielded. paginator.iter_pages() [1, 2, None, 5, 6, 7, 8, 9, 10, 11, None, 19, 20] The total attribute showcases the total number of results, while first and last display the range of items on the current page. The accompanying Jinja macro renders a simple pagination widget. {% macro render_pagination(paginator, endpoint) %} < div > {{ paginator.first }} - {{ paginator.last }} of {{ paginator.total }} < div > {% for page in paginator.iter_pages() %} {% if page %} {% if page != paginator.page %} < a href = \"{{ url_for(endpoint) }}?page={{page}}\" > {{ page }} {% else %} < strong > {{ page }} {% endif %} {% else %} < span class = ellipsis > \u2026 {% endif %} {% endfor %} {% endmacro %}","title":"Pagination"},{"location":"pagination/#pagination","text":"Pagination is a common practice for large datasets, enhancing user experience by breaking content into manageable pages. It optimizes load times and navigation and allows users to explore extensive datasets with ease while maintaining system performance and responsiveness. EllarSQL offers two styles of pagination: PageNumberPagination : This pagination internally configures items per_page and max item size ( max_size ) and, allows users to set the page property. 
LimitOffsetPagination : This pagination internally configures max item size ( max_limit ) and, allows users to set the limit and offset properties. EllarSQL pagination is activated when a route function is decorated with paginate function. The result of the route function is expected to be a SQLAlchemy.sql.Select instance or a Model type. For example: import ellar.common as ec from ellar_sql import model , paginate from .models import User from .schemas import UserSchema @ec . get ( '/users' ) @paginate ( item_schema = UserSchema ) def list_users (): return model . select ( User )","title":"Pagination"},{"location":"pagination/#paginate-properties","text":"pagination_class : t.Optional[t.Type[PaginationBase]]=None : specifies pagination style to use. if not set, it will be set to PageNumberPagination model : t.Optional[t.Type[ModelBase]]=None : specifies a Model type to get list of data. If set, route function can return None or override by returning a select/filtered statement as_template_context : bool=False : indicates that the paginator object be added to template context. See Template Pagination item_schema : t.Optional[t.Type[BaseModel]]=None : This is required if template_context is False. It is used to serialize the SQLAlchemy model and create a response-schema/docs . paginator_options : t.Any : keyword argument for configuring pagination_class set to use for pagination.","title":"paginate properties"},{"location":"pagination/#api-pagination","text":"API pagination simply means pagination in an API route function. This requires item_schema for the paginate decorator to create a 200 response documentation for the decorated route and for the paginated result to be serialized to json. import ellar.common as ec from ellar_sql import paginate from .models import User class UserSchema ( ec . Serializer ): id : int username : str email : str @ec . get ( '/users' ) @paginate ( item_schema = UserSchema , per_page = 100 ) def list_users (): return User We can also rewrite the illustration above since we are not making any modification to the User query. ... @ec . get ( '/users' ) @paginate ( model = User , item_schema = UserSchema ) def list_users (): pass","title":"API Pagination"},{"location":"pagination/#template-pagination","text":"This is for route functions decorated with render function that need to be paginated. For this to happen, paginate function need to return a context and this is achieved by setting as_template_context=True import ellar.common as ec from ellar_sql import model , paginate from .models import User @ec . get ( '/users' ) @ec . render ( 'list.html' ) @paginate ( as_template_context = True ) def list_users (): return model . select ( User ), { 'name' : 'Template Pagination' } # pagination model, template context In the illustration above, a tuple of select statement and a template context was returned. The template context will be updated with a paginator as an extra key by the paginate function before been processed by render function. We can re-write the example above to return just the template context since there is no form of filter directly affecting the User model query. ... @ec . get ( '/users' ) @ec . render ( 'list.html' ) @paginate ( model = model . 
select ( User ), as_template_context = True ) def list_users (): return { 'name' : 'Template Pagination' } Also, in the list.html we have the following code: < html lang = \"en\" > < h3 > {{ name }} {% macro render_pagination(paginator, endpoint) %} < div > {{ paginator.first }} - {{ paginator.last }} of {{ paginator.total }} < div > {% for page in paginator.iter_pages() %} {% if page %} {% if page != paginator.page %} < a href = \"{{ url_for(endpoint) }}?page={{page}}\" > {{ page }} {% else %} < strong > {{ page }} {% endif %} {% else %} < span class = ellipsis > \u2026 {% endif %} {% endfor %} {% endmacro %} < ul > {% for user in paginator %} < li > {{ user.id }} @ {{ user.name }} {% endfor %} {{render_pagination(paginator=paginator, endpoint=\"list_users\") }} The paginator object in the template context has an iter_pages() method which produces up to three groups of numbers, separated by None . It defaults to showing 2 page numbers at either edge, 2 numbers before the current, the current, and 4 numbers after the current. For example, if there are 20 pages and the current page is 7, the following values are yielded: paginator.iter_pages() [1, 2, None, 5, 6, 7, 8, 9, 10, 11, None, 19, 20] The total attribute showcases the total number of results, while first and last display the range of items on the current page. The accompanying Jinja macro renders a simple pagination widget. {% macro render_pagination(paginator, endpoint) %} < div > {{ paginator.first }} - {{ paginator.last }} of {{ paginator.total }} < div > {% for page in paginator.iter_pages() %} {% if page %} {% if page != paginator.page %} < a href = \"{{ url_for(endpoint) }}?page={{page}}\" > {{ page }} {% else %} < strong > {{ page }} {% endif %} {% else %} < span class = ellipsis > \u2026 {% endif %} {% endfor %} {% endmacro %}","title":"Template Pagination"},{"location":"testing/","text":"Testing EllarSQL Models \u00b6 There are various approaches to testing SQLAlchemy models, but in this section, we will focus on setting up a good testing environment for EllarSQL models using the Ellar Test factory and pytest. For an effective testing environment, it is recommended to utilize the EllarSQLModule.register_setup() approach to set up the EllarSQLModule . This allows you to add a new configuration for ELLAR_SQL specific to your testing database, preventing interference with production or any other databases in use. Defining TestConfig \u00b6 There are various methods for configuring test settings in Ellar, as outlined here . However, in this section, we will adopt the 'in a file' approach. Within the db_learning/config.py file, include the following code: db_learning/config.py import typing as t ... class DevelopmentConfig ( BaseConfig ): DEBUG : bool = True # Configuration through Config ELLAR_SQL : t . Dict [ str , t . Any ] = { 'databases' : { 'default' : 'sqlite:///project.db' , }, 'echo' : True , 'migration_options' : { 'directory' : 'migrations' }, 'models' : [ 'models' ] } class TestConfig ( BaseConfig ): DEBUG = False ELLAR_SQL : t . Dict [ str , t . Any ] = { ** DevelopmentConfig . ELLAR_SQL , 'databases' : { 'default' : 'sqlite:///test.db' , }, 'echo' : False , } This snippet demonstrates the 'in a file' approach to setting up the TestConfig class within the same db_learning/config.py file. Changes made: \u00b6 Updated the databases section to use sqlite:///test.db for the testing database. Set echo to False to disable SQLAlchemy output during testing for cleaner logs.
Preserved the migration_options and models configurations from DevelopmentConfig . Also, feel free to further adjust it based on your specific testing requirements! Test Fixtures \u00b6 After defining TestConfig , we need to add some pytest fixtures to set up EllarSQLModule and another one that returns a session for testing purposes. Additionally, we need to export ELLAR_CONFIG_MODULE to point to the newly defined TestConfig . tests/conftest.py import os import pytest from ellar.common.constants import ELLAR_CONFIG_MODULE from ellar.testing import Test from ellar_sql import EllarSQLService from db_learning.root_module import ApplicationModule # Setting the ELLAR_CONFIG_MODULE environment variable to TestConfig os . environ . setdefault ( ELLAR_CONFIG_MODULE , \"db_learning.config:TestConfig\" ) # Fixture for creating a test module @pytest . fixture ( scope = 'session' ) def tm (): test_module = Test . create_test_module ( modules = [ ApplicationModule ]) yield test_module # Fixture for creating a database session for testing @pytest . fixture ( scope = 'session' ) def db ( tm ): db_service = tm . get ( EllarSQLService ) # Creating all tables db_service . create_all () yield # Dropping all tables after the tests db_service . drop_all () # Fixture for creating a database session for testing @pytest . fixture ( scope = 'session' ) def db_session ( db , tm ): db_service = tm . get ( EllarSQLService ) yield db_service . session_factory () # Removing the session factory db_service . session_factory . remove () The provided fixtures help in setting up a testing environment for EllarSQL models. The Test.create_test_module method creates a TestModule for initializing your Ellar application, and the db_session fixture initializes a database session for testing, creating and dropping tables as needed. If you are working with asynchronous database drivers, you can convert db_session into an async function to handle coroutines seamlessly. Alembic Migration with Test Fixture \u00b6 In cases where there are already generated database migration files, and there is a need to apply migrations during testing, this can be achieved as shown in the example below: tests/conftest.py import os import pytest from ellar.common.constants import ELLAR_CONFIG_MODULE from ellar.testing import Test from ellar_sql import EllarSQLService from ellar_sql.cli.handlers import CLICommandHandlers from db_learning.root_module import ApplicationModule # Setting the ELLAR_CONFIG_MODULE environment variable to TestConfig os . environ . setdefault ( ELLAR_CONFIG_MODULE , \"db_learning.config:TestConfig\" ) # Fixture for creating a test module @pytest . fixture ( scope = 'session' ) def tm (): test_module = Test . create_test_module ( modules = [ ApplicationModule ]) yield test_module # Fixture for creating a database session for testing @pytest . fixture ( scope = 'session' ) async def db ( tm ): db_service = tm . get ( EllarSQLService ) # Applying migrations using Alembic async with tm . create_application () . application_context (): cli = CLICommandHandlers ( db_service ) cli . migrate () yield # Downgrading migrations after testing async with tm . create_application () . application_context (): cli = CLICommandHandlers ( db_service ) cli . downgrade () # Fixture for creating an asynchronous database session for testing @pytest . fixture ( scope = 'session' ) async def db_session ( db , tm ): db_service = tm . get ( EllarSQLService ) yield db_service . session_factory () # Removing the session factory db_service . 
session_factory . remove () The CLICommandHandlers class wraps all Alembic functions executed through the Ellar command-line interface. It can be used in conjunction with the application context to initialize all model tables during testing as shown in the example above. The db_session pytest fixture also ensures that migrations are applied and then downgraded after testing, maintaining a clean and consistent test database state. Testing a Model \u00b6 After setting up the testing database and creating a session, let's test the insertion of a user model into the database. In db_learning/models.py , we have a user model: db_learning/models.py from ellar_sql import model class User ( model . Model ): id : model . Mapped [ int ] = model . mapped_column ( model . Integer , primary_key = True ) username : model . Mapped [ str ] = model . mapped_column ( model . String , unique = True , nullable = False ) email : model . Mapped [ str ] = model . mapped_column ( model . String ) Now, create a file named test_user_model.py : tests/test_user_model.py import pytest import sqlalchemy.exc as sa_exc from db_learning.models import User def test_username_must_be_unique ( db_session ): # Creating and adding the first user user1 = User ( username = 'ellarSQL' , email = 'ellarsql@gmail.com' ) db_session . add ( user1 ) db_session . commit () # Attempting to add a second user with the same username user2 = User ( username = 'ellarSQL' , email = 'ellarsql2@gmail.com' ) db_session . add ( user2 ) # Expecting an IntegrityError due to unique constraint violation with pytest . raises ( sa_exc . IntegrityError ): db_session . commit () In this test, we are checking whether the unique constraint on the username field is enforced by attempting to insert two users with the same username. The test expects an IntegrityError to be raised, indicating a violation of the unique constraint. This ensures that the model behaves correctly and enforces the specified uniqueness requirement. Testing Factory Boy \u00b6 factory-boy provides a convenient and flexible way to create mock objects, supporting various ORMs like Django, MongoDB, and SQLAlchemy. EllarSQL extends factory.alchemy.SQLAlchemyModelFactory to offer a Model factory solution compatible with both synchronous and asynchronous database drivers. To get started, you need to install factory-boy : pip install factory-boy Now, let's create a factory for our user model in tests/factories.py : tests/factories.py import factory from ellar_sql.factory import EllarSQLFactory , SESSION_PERSISTENCE_FLUSH from db_learning.models import User from . import common class UserFactory ( EllarSQLFactory ): class Meta : model = User sqlalchemy_session_persistence = SESSION_PERSISTENCE_FLUSH sqlalchemy_session_factory = lambda : common . Session () username = factory . Faker ( 'user_name' ) email = factory . Faker ( 'email' ) The UserFactory depends on a database session, which will be configured by a pytest fixture, so we also need a session factory in tests/common.py : tests/common.py from sqlalchemy import orm Session = orm . scoped_session ( orm . sessionmaker ()) Additionally, we require a fixture responsible for configuring the Factory session in tests/conftest.py : tests/conftest.py import os import pytest import sqlalchemy as sa from ellar.common.constants import ELLAR_CONFIG_MODULE from ellar.testing import Test from ellar_sql import EllarSQLService from db_learning.root_module import ApplicationModule from . import common os . environ . 
setdefault ( ELLAR_CONFIG_MODULE , \"db_learning.config:TestConfig\" ) @pytest . fixture ( scope = 'session' ) def tm (): test_module = Test . create_test_module ( modules = [ ApplicationModule ]) yield test_module # Fixture for creating a database session for testing @pytest . fixture ( scope = 'session' ) def db ( tm ): db_service = tm . get ( EllarSQLService ) # Creating all tables db_service . create_all () yield # Dropping all tables after the tests db_service . drop_all () # Fixture for creating a database session for testing @pytest . fixture ( scope = 'session' ) def db_session ( db , tm ): db_service = tm . get ( EllarSQLService ) yield db_service . session_factory () # Removing the session factory db_service . session_factory . remove () @pytest . fixture def factory_session ( db , tm ): engine = tm . get ( sa . Engine ) common . Session . configure ( bind = engine ) yield common . Session . remove () In the factory_session fixture, we retrieve the Engine registered in the DI container by EllarSQLModule . Using this engine, we configure the common Session . It's important to note that if you are using an async database driver, EllarSQLModule will register AsyncEngine . With this setup, we can rewrite our test_username_must_be_unique test using UserFactory and factory_session : tests/test_user_model.py import pytest import sqlalchemy.exc as sa_exc from .factories import UserFactory def test_username_must_be_unique ( factory_session ): user1 = UserFactory () with pytest . raises ( sa_exc . IntegrityError ): UserFactory ( username = user1 . username ) This test yields the same result as before. Refer to the factory-boy documentation for more features and tutorials.\",\"title\":\"index\"},{\"location\":\"testing/#testing-ellarsql-models\",\"text\":\"There are various approaches to testing SQLAlchemy models, but in this section, we will focus on setting up a good testing environment for EllarSQL models using the Ellar Test factory and pytest. For an effective testing environment, it is recommended to utilize the EllarSQLModule.register_setup() approach to set up the EllarSQLModule . This allows you to add a new configuration for ELLAR_SQL specific to your testing database, preventing interference with production or any other databases in use.\",\"title\":\"Testing EllarSQL Models\"},{\"location\":\"testing/#defining-testconfig\",\"text\":\"There are various methods for configuring test settings in Ellar, as outlined here . However, in this section, we will adopt the 'in a file' approach. Within the db_learning/config.py file, include the following code: db_learning/config.py import typing as t ... class DevelopmentConfig ( BaseConfig ): DEBUG : bool = True # Configuration through Config ELLAR_SQL : t . Dict [ str , t . Any ] = { 'databases' : { 'default' : 'sqlite:///project.db' , }, 'echo' : True , 'migration_options' : { 'directory' : 'migrations' }, 'models' : [ 'models' ] } class TestConfig ( BaseConfig ): DEBUG = False ELLAR_SQL : t . Dict [ str , t . Any ] = { ** DevelopmentConfig . ELLAR_SQL , 'databases' : { 'default' : 'sqlite:///test.db' , }, 'echo' : False , } This snippet demonstrates the 'in a file' approach to setting up the TestConfig class within the same db_learning/config.py file.\",\"title\":\"Defining TestConfig\"},{\"location\":\"testing/#changes-made\",\"text\":\"Updated the databases section to use sqlite:///test.db for the testing database. Set echo to False to disable SQLAlchemy's engine output during testing for cleaner logs. 
Preserved the migration_options and models configurations from DevelopmentConfig . Also, feel free to further adjust it based on your specific testing requirements!","title":"Changes made:"},{"location":"testing/#test-fixtures","text":"After defining TestConfig , we need to add some pytest fixtures to set up EllarSQLModule and another one that returns a session for testing purposes. Additionally, we need to export ELLAR_CONFIG_MODULE to point to the newly defined TestConfig . tests/conftest.py import os import pytest from ellar.common.constants import ELLAR_CONFIG_MODULE from ellar.testing import Test from ellar_sql import EllarSQLService from db_learning.root_module import ApplicationModule # Setting the ELLAR_CONFIG_MODULE environment variable to TestConfig os . environ . setdefault ( ELLAR_CONFIG_MODULE , \"db_learning.config:TestConfig\" ) # Fixture for creating a test module @pytest . fixture ( scope = 'session' ) def tm (): test_module = Test . create_test_module ( modules = [ ApplicationModule ]) yield test_module # Fixture for creating a database session for testing @pytest . fixture ( scope = 'session' ) def db ( tm ): db_service = tm . get ( EllarSQLService ) # Creating all tables db_service . create_all () yield # Dropping all tables after the tests db_service . drop_all () # Fixture for creating a database session for testing @pytest . fixture ( scope = 'session' ) def db_session ( db , tm ): db_service = tm . get ( EllarSQLService ) yield db_service . session_factory () # Removing the session factory db_service . session_factory . remove () The provided fixtures help in setting up a testing environment for EllarSQL models. The Test.create_test_module method creates a TestModule for initializing your Ellar application, and the db_session fixture initializes a database session for testing, creating and dropping tables as needed. If you are working with asynchronous database drivers, you can convert db_session into an async function to handle coroutines seamlessly.","title":"Test Fixtures"},{"location":"testing/#alembic-migration-with-test-fixture","text":"In cases where there are already generated database migration files, and there is a need to apply migrations during testing, this can be achieved as shown in the example below: tests/conftest.py import os import pytest from ellar.common.constants import ELLAR_CONFIG_MODULE from ellar.testing import Test from ellar_sql import EllarSQLService from ellar_sql.cli.handlers import CLICommandHandlers from db_learning.root_module import ApplicationModule # Setting the ELLAR_CONFIG_MODULE environment variable to TestConfig os . environ . setdefault ( ELLAR_CONFIG_MODULE , \"db_learning.config:TestConfig\" ) # Fixture for creating a test module @pytest . fixture ( scope = 'session' ) def tm (): test_module = Test . create_test_module ( modules = [ ApplicationModule ]) yield test_module # Fixture for creating a database session for testing @pytest . fixture ( scope = 'session' ) async def db ( tm ): db_service = tm . get ( EllarSQLService ) # Applying migrations using Alembic async with tm . create_application () . application_context (): cli = CLICommandHandlers ( db_service ) cli . migrate () yield # Downgrading migrations after testing async with tm . create_application () . application_context (): cli = CLICommandHandlers ( db_service ) cli . downgrade () # Fixture for creating an asynchronous database session for testing @pytest . fixture ( scope = 'session' ) async def db_session ( db , tm ): db_service = tm . 
get ( EllarSQLService ) yield db_service . session_factory () # Removing the session factory db_service . session_factory . remove () The CLICommandHandlers class wraps all Alembic functions executed through the Ellar command-line interface. It can be used in conjunction with the application context to initialize all model tables during testing as shown in the example above. The db_session pytest fixture also ensures that migrations are applied and then downgraded after testing, maintaining a clean and consistent test database state.\",\"title\":\"Alembic Migration with Test Fixture\"},{\"location\":\"testing/#testing-a-model\",\"text\":\"After setting up the testing database and creating a session, let's test the insertion of a user model into the database. In db_learning/models.py , we have a user model: db_learning/models.py from ellar_sql import model class User ( model . Model ): id : model . Mapped [ int ] = model . mapped_column ( model . Integer , primary_key = True ) username : model . Mapped [ str ] = model . mapped_column ( model . String , unique = True , nullable = False ) email : model . Mapped [ str ] = model . mapped_column ( model . String ) Now, create a file named test_user_model.py : tests/test_user_model.py import pytest import sqlalchemy.exc as sa_exc from db_learning.models import User def test_username_must_be_unique ( db_session ): # Creating and adding the first user user1 = User ( username = 'ellarSQL' , email = 'ellarsql@gmail.com' ) db_session . add ( user1 ) db_session . commit () # Attempting to add a second user with the same username user2 = User ( username = 'ellarSQL' , email = 'ellarsql2@gmail.com' ) db_session . add ( user2 ) # Expecting an IntegrityError due to unique constraint violation with pytest . raises ( sa_exc . IntegrityError ): db_session . commit () In this test, we are checking whether the unique constraint on the username field is enforced by attempting to insert two users with the same username. The test expects an IntegrityError to be raised, indicating a violation of the unique constraint. This ensures that the model behaves correctly and enforces the specified uniqueness requirement.\",\"title\":\"Testing a Model\"},{\"location\":\"testing/#testing-factory-boy\",\"text\":\"factory-boy provides a convenient and flexible way to create mock objects, supporting various ORMs like Django, MongoDB, and SQLAlchemy. EllarSQL extends factory.alchemy.SQLAlchemyModelFactory to offer a Model factory solution compatible with both synchronous and asynchronous database drivers. To get started, you need to install factory-boy : pip install factory-boy Now, let's create a factory for our user model in tests/factories.py : tests/factories.py import factory from ellar_sql.factory import EllarSQLFactory , SESSION_PERSISTENCE_FLUSH from db_learning.models import User from . import common class UserFactory ( EllarSQLFactory ): class Meta : model = User sqlalchemy_session_persistence = SESSION_PERSISTENCE_FLUSH sqlalchemy_session_factory = lambda : common . Session () username = factory . Faker ( 'user_name' ) email = factory . Faker ( 'email' ) The UserFactory depends on a database session, which will be configured by a pytest fixture, so we also need a session factory in tests/common.py : tests/common.py from sqlalchemy import orm Session = orm . scoped_session ( orm . 
sessionmaker ()) Additionally, we require a fixture responsible for configuring the Factory session in tests/conftest.py : tests/conftest.py import os import pytest import sqlalchemy as sa from ellar.common.constants import ELLAR_CONFIG_MODULE from ellar.testing import Test from ellar_sql import EllarSQLService from db_learning.root_module import ApplicationModule from . import common os . environ . setdefault ( ELLAR_CONFIG_MODULE , \"db_learning.config:TestConfig\" ) @pytest . fixture ( scope = 'session' ) def tm (): test_module = Test . create_test_module ( modules = [ ApplicationModule ]) yield test_module # Fixture for creating a database session for testing @pytest . fixture ( scope = 'session' ) def db ( tm ): db_service = tm . get ( EllarSQLService ) # Creating all tables db_service . create_all () yield # Dropping all tables after the tests db_service . drop_all () # Fixture for creating a database session for testing @pytest . fixture ( scope = 'session' ) def db_session ( db , tm ): db_service = tm . get ( EllarSQLService ) yield db_service . session_factory () # Removing the session factory db_service . session_factory . remove () @pytest . fixture def factory_session ( db , tm ): engine = tm . get ( sa . Engine ) common . Session . configure ( bind = engine ) yield common . Session . remove () In the factory_session fixture, we retrieve the Engine registered in the DI container by EllarSQLModule . Using this engine, we configure the common Session . It's important to note that if you are using an async database driver, EllarSQLModule will register AsyncEngine . With this setup, we can rewrite our test_username_must_be_unique test using UserFactory and factory_session : tests/test_user_model.py import pytest import sqlalchemy.exc as sa_exc from .factories import UserFactory def test_username_must_be_unique ( factory_session ): user1 = UserFactory () with pytest . raises ( sa_exc . IntegrityError ): UserFactory ( username = user1 . username ) This test yields the same result as before. Refer to the factory-boy documentation for more features and tutorials.","title":"Testing Factory Boy"}]} \ No newline at end of file diff --git a/sitemap.xml.gz b/sitemap.xml.gz index e171498..79e76c8 100644 Binary files a/sitemap.xml.gz and b/sitemap.xml.gz differ