Fundamentally, without SQLAlchemy we would use a DBAPI directly to establish connections and execute SQL statements. That is simple, but it does not scale well as complexity grows.
SQLAlchemy offers several layers of abstraction and convenient tools for interacting with a database.
One of the key reasons for choosing SQLAlchemy is that it lets us largely forget which database system we are using. For example, you can switch between SQLite for development and PostgreSQL for production. We can do this because of dialects: the dialect layer abstracts away the flavor of SQL we are using, handling the quirks of the specific database system underneath.
The dialect is the system SQLAlchemy uses to communicate with various types of DBAPI implementations and databases.
SQLAlchemy ships with dialects for the most common databases, including:
- PostgreSQL
- MySQL and MariaDB
- SQLite
- Oracle
- Microsoft SQL Server
All dialects require that an appropriate DBAPI driver is installed.
Beyond these, a number of externally maintained dialect projects support other databases.
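As a minimal sketch of how the dialect is selected, the scheme at the start of the database URL determines which dialect (and DBAPI driver) the engine uses; only the URL needs to change to target a different database. This example uses an in-memory SQLite database so it runs without a server:

```python
from sqlalchemy import create_engine

# The dialect is chosen from the URL scheme: "sqlite" here.
# "postgresql+psycopg2://user:pass@localhost:5432/mydb" would select
# the PostgreSQL dialect with the psycopg2 driver instead.
engine = create_engine("sqlite:///:memory:")

# The engine exposes the dialect it resolved from the URL.
print(engine.dialect.name)
```

The rest of your code stays the same regardless of which URL you pass, which is exactly the portability the dialect layer provides.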
The Connection Pool
A connection pool is a standard technique used to maintain long-running connections in memory for efficient re-use, as well as to provide management for the total number of connections an application might use simultaneously.
Particularly for server-side web applications, a connection pool is the standard way to maintain a “pool” of active database connections in memory which are reused across requests.
Because SQLAlchemy provides connection pooling, you no longer have to open and close connections yourself as you would when using a DBAPI such as psycopg2 directly. With a connection pool, the opening and closing of connections, and which connection is used when you execute statements within a session, is completely abstracted away from you.
As a result, connections are easily reused once they have been opened, which avoids the overhead of continually opening and closing a connection every time we want to change data in our database. A connection pool also handles dropped connections for us, for example when there are network issues. Lastly, it helps avoid making very many small, separate calls to the database when we are continually making changes.
The Engine is one of three main layers through which you may choose to interact with the database. It is the lowest level of abstraction, and using it is much like using the DBAPI directly. That is, it is very similar to using psycopg2 in that you have to manage the connection yourself.
For example, look at the below snippet:
```python
from sqlalchemy import create_engine, text

engine = create_engine('postgresql://shravan:kuchkula@localhost:5432/mydatabase')
conn = engine.connect()
result = conn.execute(text("SELECT * FROM todos"))  # raw SQL, wrapped in text()
row = result.fetchone()
rows = result.fetchall()
result.close()
conn.close()
```
This looks very similar to the psycopg2 way of doing things. You use the connection object to execute raw SQL queries and retrieve the results with the fetchone() or fetchall() methods. Lastly, you have to close the result and the connection yourself. This is why using the engine directly is considered the lowest level of abstraction.
SQLAlchemy gives us a choice of interacting with our database from more than just a connection and engine level. The next layer of abstraction is SQL Expressions.
- The Engine in SQLAlchemy comprises the Dialect and the Connection Pool, which work together to interface with our database.
- A connection pool gets automatically created when we create a SQLAlchemy engine.
Lazy initialization: The above engine creates a Dialect object tailored towards PostgreSQL, as well as a Pool object which will establish a DBAPI connection at localhost:5432 when a connection request is first received. Note that the Engine and its underlying Pool do not establish the first actual DBAPI connection until the Engine.connect() method is called, or an operation which depends on it, such as Engine.execute(), is invoked. In this way, the Engine and Pool can be said to have lazy initialization behavior.
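Lazy initialization can be verified with a pool-level "connect" event listener, which fires each time a real DBAPI connection is established. In this sketch (again using in-memory SQLite so it runs standalone), the counter stays at zero after create_engine() and only increments on the first connect():

```python
from sqlalchemy import create_engine, event, text

engine = create_engine("sqlite://")
counter = {"connects": 0}

@event.listens_for(engine, "connect")
def on_connect(dbapi_connection, connection_record):
    # Fires once per actual DBAPI connection the pool establishes.
    counter["connects"] += 1

# create_engine() alone has not opened any DBAPI connection yet.
before = counter["connects"]

with engine.connect() as conn:
    conn.execute(text("SELECT 1"))

# The first real connection was made only when connect() was called.
after = counter["connects"]
print(before, after)
```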
If we don’t want to just send raw SQL statements to the engine, as we have been doing with the DBAPI, SQLAlchemy offers SQL Expressions, which let you compose SQL statements by building Python objects instead. The only difference is that we are composing Python objects rather than sending raw SQL strings to the engine. You still need to call engine.connect() and use the connection to execute, e.g. conn.execute(ins), the SQL statements that are written in Python.
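A minimal sketch of the SQL Expression style, mirroring the earlier raw-SQL todos example (the table and column names here are illustrative, and an in-memory SQLite database stands in for PostgreSQL):

```python
from sqlalchemy import (
    create_engine, MetaData, Table, Column, Integer, String, insert, select,
)

engine = create_engine("sqlite:///:memory:")
metadata = MetaData()

# An illustrative "todos" table, defined as a Python object.
todos = Table(
    "todos",
    metadata,
    Column("id", Integer, primary_key=True),
    Column("description", String, nullable=False),
)
metadata.create_all(engine)

# SQL composed as Python objects instead of raw strings.
ins = insert(todos).values(description="learn SQLAlchemy")
stmt = select(todos).where(todos.c.description == "learn SQLAlchemy")

with engine.begin() as conn:
    conn.execute(ins)

with engine.connect() as conn:
    rows = conn.execute(stmt).fetchall()

print(rows[0].description)
```

Note that the expression objects render to the active dialect's SQL flavor automatically, so the same ins and stmt would work unchanged against PostgreSQL.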
The general idea is that SQLAlchemy offers more than one way of doing the exact same thing to a database: you can interact at the engine layer alone and send raw SQL statements to the engine, or you can compose Python objects and expressions and use these SQL Expressions without writing SQL.
There is a third way of interacting with the database, which is the highest level of abstraction offered by SQLAlchemy. The SQLAlchemy ORM lets you compose SQL expressions by building Python classes whose objects are mapped to the tables you are creating and managing in your database.
- Lets you compose SQL expressions by mapping Python classes to tables in the database
- Is the highest layer of abstraction in SQLAlchemy
- Wraps the SQL Expression language and the Engine so they work together to interact with the database
Moreover, SQLAlchemy is split into two libraries:
- SQLAlchemy Core
- SQLAlchemy ORM (Object Relational Mapping library). The SQLAlchemy ORM is offered as an optional library, so you don’t have to use the ORM in order to use the rest of SQLAlchemy.
- The ORM uses the Core library internally
- The ORM lets you map from the database schema to the application’s Python objects
- The ORM persists objects into corresponding database tables
Finally, to sum everything up: the Engine (with its Dialect and connection pool), the SQL Expression language, and the ORM are SQLAlchemy's three layers of abstraction, from lowest to highest.