DB Basics – What are ACID properties of a Transaction in an RDBMS?
In order to perform a Transaction in a database system and make sure it works without any issues, there are a few rules every Database Transaction should follow. These rules are standard across all Relational Database Management Systems (RDBMS) and are called the ACID rules.
ACID stands for Atomicity, Consistency, Isolation and Durability. Let’s check what each of these rules states.
–> A: Atomicity states that every Transaction should be atomic in nature. A Transaction in a Relational Database can contain either a single SQL statement or multiple SQL statements. Thus, an Atomic Transaction means “all or none”: either all SQL statements/steps in the transaction execute successfully, or they fail as a single unit, none of them is treated as executed, and the system is returned to its original state.
For example: If account-A & account-B both have a $2000 balance and you have to transfer $1000 from account-A to account-B, this involves 2 steps: first, the withdrawal from account-A, and second, the deposit into account-B. Both steps should be treated as a single, atomic unit, and at the end account-A should have a $1000 & account-B a $3000 balance. If the system fails or any error occurs after the first step, then that step should also be rolled back and the $1000 withdrawn from account-A should be re-deposited, restoring the $2000 balance in both accounts. Thus there should be no intermediate state where account-A has $1000 while account-B still has $2000.
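The two-step transfer above can be sketched in T-SQL, assuming a hypothetical [Accounts] table with AccountID and Balance columns:

```sql
BEGIN TRY
    BEGIN TRANSACTION

    -- Step 1: withdraw $1000 from account-A
    UPDATE Accounts SET Balance = Balance - 1000 WHERE AccountID = 'A'

    -- Step 2: deposit $1000 into account-B
    UPDATE Accounts SET Balance = Balance + 1000 WHERE AccountID = 'B'

    COMMIT TRANSACTION   -- both steps succeed as one atomic unit
END TRY
BEGIN CATCH
    -- any failure undoes BOTH steps, restoring the original balances
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION
END CATCH
```

If the deposit fails, the CATCH block rolls back the withdrawal as well, so no intermediate state is ever committed.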
–> C: Consistency states that any Transaction on a database takes it from one consistent state to another consistent state. The data finally recorded in the database must be valid according to the defined Rules, Constraints, Cascades, Triggers, etc. If any of these rules is violated, the changes made by the transaction should be rolled back, putting the system back in its earlier consistent state.
For example: Suppose the money deposit process has a Trigger built on top of it. If at the time of money transfer that Trigger fails, or a database node goes down, the system should automatically roll back the complete transaction and switch back to the consistent state it was in before the transaction started. If everything executes successfully, the system is committed to a new consistent state.
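A simpler way to see consistency at work is a CHECK constraint; this sketch again assumes the hypothetical [Accounts] table:

```sql
-- Rule: a balance can never go negative
ALTER TABLE Accounts
    ADD CONSTRAINT CK_Accounts_Balance CHECK (Balance >= 0)

BEGIN TRY
    BEGIN TRANSACTION
    -- Withdrawing $3000 from an account holding $2000 violates the
    -- CHECK constraint, so this statement fails...
    UPDATE Accounts SET Balance = Balance - 3000 WHERE AccountID = 'A'
    COMMIT TRANSACTION
END TRY
BEGIN CATCH
    -- ...and the rollback returns the database to its last consistent state
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION
END CATCH
```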
–> I: Isolation means concurrent Transactions working on the same data should run in Isolation from each other, so that one transaction’s intermediate state is never visible to another, avoiding dirty reads & writes. One needs to use proper Transaction Isolation Levels and locking in order to prevent such anomalies.
For example: Two people access a joint account with a $5000 balance from 2 terminals to withdraw money. Let’s say at the same time John & Mary each request to withdraw $4000 from two different ATMs. If the two Transactions do not run in Isolation and run in parallel, then both John & Mary will be able to withdraw $4000 each, i.e. $8000 in total from an account holding only $5000. To make sure this won’t happen, the Transactions should not be allowed to interleave on that row, by setting Transaction Isolation Levels and/or locking methods on the database objects.
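One way to prevent the double withdrawal is to lock the row while the balance is checked, again assuming the hypothetical [Accounts] table:

```sql
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE

BEGIN TRANSACTION

DECLARE @balance MONEY

-- UPDLOCK holds the row, so a parallel withdrawal must wait
-- and cannot read a stale balance
SELECT @balance = Balance
FROM Accounts WITH (UPDLOCK)
WHERE AccountID = 'JOINT'

IF @balance >= 4000
    UPDATE Accounts SET Balance = Balance - 4000 WHERE AccountID = 'JOINT'

COMMIT TRANSACTION
```

Whichever ATM session acquires the lock first completes its withdrawal; the second session then sees the reduced balance and is refused.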
–> D: Durability states that a transaction should be durable: once committed, its data is stored permanently and remains available even after a power failure, system crash, or any other error. All in all, the data should not get lost in any such mishap, and one should be able to recover it through restores, logging and other methods.
Getting Started with Hekaton (Memory Optimized) tables | SQL Server 2014
Hekaton is the Microsoft code name for the In-Memory OLTP feature (Memory Optimized Tables) in the new version of SQL Server, SQL Server 2014. This version mainly focuses on high-performance in-memory database workloads: it includes Memory Optimized Tables which reside in the server’s memory and provide efficient execution of business logic by drastically reducing I/O between disk & memory.
So, as soon as a Memory Optimized Table is created, the data it contains is also loaded into memory. Let’s do a hands-on with Hekaton tables through simple T-SQL scripts:
–> Let’s first create a Test Database:
USE [master]
GO

CREATE DATABASE [ManTest]
 CONTAINMENT = NONE
 ON PRIMARY
( NAME = N'ManTest',
  FILENAME = N'D:\SQLServer2014\MSSQL\DATA\ManTest.mdf',
  SIZE = 5120KB, FILEGROWTH = 1024KB )
 LOG ON
( NAME = N'ManTest_log',
  FILENAME = N'D:\SQLServer2014\MSSQL\Log\ManTest_log.ldf',
  SIZE = 1024KB, FILEGROWTH = 10% )
GO
–> Now we will create a MEMORY_OPTIMIZED_DATA FileGroup and add it to the [ManTest] Database:
USE [ManTest]
GO

-- Add MEMORY_OPTIMIZED_DATA filegroup to the database:
ALTER DATABASE [ManTest]
ADD FILEGROUP [ManTestFG] CONTAINS MEMORY_OPTIMIZED_DATA
GO

-- Add file to the MEMORY_OPTIMIZED_DATA filegroup:
ALTER DATABASE [ManTest]
ADD FILE (
  NAME = ManTestFG_file1,
  FILENAME = N'D:\SQLServer2014\MSSQL\DATA\Datasample_database_1') -- You might need to check and correct the path here.
TO FILEGROUP ManTestFG
GO
A SQL Server FileGroup is required to store memory_optimized_data in a SQL Server Database. As per MSDN this FileGroup will be used to guarantee durability of memory-resident data in the event of a server crash or restart. During the crash recovery phase in server startup, the data is recovered from this FileGroup and loaded back into memory.
–> Now we will create a Memory Optimized Table with the new options added to the CREATE TABLE syntax:
-- Create memory optimized table and a hash index on it:
CREATE TABLE dbo.Person_memoryOptimizedTable
(
  PersonID INT NOT NULL
    PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 19972),
  PersonName NVARCHAR(100) NOT NULL,
  DateAdded DATETIME NOT NULL
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA)
GO
The Options:
– NONCLUSTERED HASH WITH : a Clustered Index is not allowed (indexes are not persisted to disk, but re-created when SQL Server restarts), so the PK must be defined as a NonClustered Index of type ‘Hash’.
– BUCKET_COUNT : Indicates the number of buckets that should be created in the Hash index.
– MEMORY_OPTIMIZED=ON : makes the table Memory Optimized.
– DURABILITY = SCHEMA_AND_DATA : makes both the Table schema and its Data durable, so both are recovered after a restart or crash.
Note:
– Indexes can only be created inline with the CREATE TABLE statement.
– If you don’t provide the DURABILITY option, SCHEMA_AND_DATA is the default.
– The option DURABILITY = SCHEMA_ONLY does not make the Table’s Data durable on disk: the data will be lost after a server restart or crash, but the Table Schema will still be available.
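For comparison, a non-durable table can be declared like this (a sketch; the table name and columns are just an example):

```sql
-- Non-durable table: the schema survives a restart, the data does not
CREATE TABLE dbo.Session_memoryOptimizedTable
(
  SessionID INT NOT NULL
    PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1024),
  SessionData NVARCHAR(100) NOT NULL
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY)
GO
```

SCHEMA_ONLY tables are useful for transient data such as session state or staging, where losing the rows on a restart is acceptable.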
–> Let’s insert some records in the table created above:
-- Inserting records into the Memory Optimized Table from the [Person] table in the [AdventureWorks] DB:
INSERT INTO dbo.Person_memoryOptimizedTable
SELECT
  [BusinessEntityID],
  CONCAT([FirstName], ' ', [MiddleName], ' ', [LastName]) AS PersonName,
  GETDATE() AS DateAdded
FROM [AdventureWorks2012].[Person].[Person]
You will get the following error:
Msg 41317, Level 16, State 3, Line 34
A user transaction that accesses memory optimized tables or natively compiled procedures cannot access more than one user database or databases model and msdb, and it cannot write to master.
This means that a transaction which accesses a Memory Optimized Table cannot span user databases, so you cannot insert records into it directly from another database.
So we have a workaround: we will insert the records into a #temp table first and then insert them from this #temp table.
-- So, I'll insert records from the [AdventureWorks] Database into a #temp table first:
SELECT
  [BusinessEntityID],
  CONCAT([FirstName], ' ', [MiddleName], ' ', [LastName]) AS PersonName,
  GETDATE() AS DateAdded
INTO #tempPerson
FROM [AdventureWorks2012].[Person].[Person]
-- (19972 row(s) affected)

-- Now, I can insert records from the #temp table into the Memory Optimized Table:
INSERT INTO dbo.Person_memoryOptimizedTable
SELECT [BusinessEntityID], PersonName, DateAdded
FROM #tempPerson
-- (19972 row(s) affected)

-- Let's check the contents of the Table:
SELECT * FROM Person_memoryOptimizedTable
GO
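To confirm the table really is memory optimized, you can query the catalog view sys.tables, which in SQL Server 2014 exposes the is_memory_optimized and durability_desc columns:

```sql
-- List the memory optimized tables in the current database and their durability
SELECT name, is_memory_optimized, durability_desc
FROM sys.tables
WHERE is_memory_optimized = 1
```

For the table created above this should report durability_desc = SCHEMA_AND_DATA.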
So, here we saw how to create Memory Optimized Tables and how to set up a Database to store them by allocating a MEMORY_OPTIMIZED_DATA FileGroup to it.
-- Final Cleanup:
DROP TABLE dbo.Person_memoryOptimizedTable
DROP TABLE #tempPerson
GO

USE master
GO
DROP DATABASE ManTest
GO
Update: Know more about In-Memory tables:
In the next post I’ll discuss more about Memory Optimized tables, their benefits and limitations.