Design a reliable data access layer to access SQL Azure

May include but is not limited to: define client data access standards, connection timeout scenarios

There’s a nice article on TechNet covering both this topic and the next one, Design an efficient strategy to avoid data access throttling. I won’t publish a separate post for the latter, because there’s such a huge overlap between the two subjects; the linked article explains both of them pretty well.

As I have stated numerous times in these posts (I know I’m constantly repeating myself), SQL Azure has the unique feature of killing your connections if you don’t play by the rules and try to monopolize the resources of the server hosting your database. This feature is called throttling, and you should be aware of it and code your data access logic accordingly. That means building in retry logic; judging by the online examples floating around, the preferred way is to use extension methods on the basic ADO.NET classes, such as SqlConnection and SqlCommand. The typical approach (and the Microsoft recommendation) is to wait ten seconds and try again. Of course, you can increase this interval if needed.
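
Here’s a minimal sketch of what such an extension method could look like. This is my own illustration, not official code: the error numbers, retry count and ten-second wait are assumptions you’d tune for your own workload.

    using System;
    using System.Data;
    using System.Data.SqlClient;
    using System.Threading;

    public static class SqlCommandExtensions
    {
        // A few error numbers SQL Azure returns when it throttles or kills a
        // connection: 40501 (service busy), 40197 (service error),
        // 40613 (database unavailable). Extend the list as needed.
        private static readonly int[] ThrottlingErrors = { 40501, 40197, 40613 };

        public static int ExecuteNonQueryWithRetry(this SqlCommand command,
                                                   int maxRetries = 3)
        {
            for (int attempt = 1; ; attempt++)
            {
                try
                {
                    // Reopen the connection if a previous attempt got killed.
                    if (command.Connection.State != ConnectionState.Open)
                        command.Connection.Open();
                    return command.ExecuteNonQuery();
                }
                catch (SqlException ex)
                {
                    bool throttled = Array.IndexOf(ThrottlingErrors, ex.Number) >= 0;
                    if (!throttled || attempt > maxRetries)
                        throw; // not a throttling error, or out of attempts

                    // The Microsoft recommendation: wait, then try again.
                    command.Connection.Close();
                    Thread.Sleep(TimeSpan.FromSeconds(10));
                }
            }
        }
    }

Calling command.ExecuteNonQueryWithRetry() instead of command.ExecuteNonQuery() then gives you the wait-and-retry behavior everywhere, without cluttering the calling code.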

Trying to force something that fails constantly isn’t such a bright idea, though. By the second or third retry, you might as well find out what the real problem is and change your workload accordingly. If all else fails, you can partition your database horizontally, but that’s a whole different topic.

Microsoft says your data access layer should do three things: execute transactions in a retry loop, catch connection termination errors, and wait a little before reconnecting (exactly the pattern sketched above). Now let’s see some possible reasons for database throttling. I remind you again to check out the TechNet article that is the basis of this post for a complete reference. The following conditions result in throttling:

  • Consuming more than a million locks.
  • Uncommitted transactions.
  • Locking a system resource for more than 20 seconds.
  • A single transaction’s log file size exceeds 1 GB.
  • Using more than 5 GB of tempdb space.
  • Consuming more than 16 MB of memory for more than 20 seconds when memory is limited.
  • Exceeding the maximum database size.
  • Connections idle for more than 30 minutes.
  • A transaction running for more than 24 hours.
  • DoS attack (connections from the offending IP address or addresses get throttled).
  • Network errors.
  • Failover errors.

These are the main reasons for connection throttling. Of course there’s a cure, and it comes in the form of some general best practices recommended by Microsoft. They make such a perfect set of exam questions that I won’t leave out anything. So the best practices to avoid throttling (a short code sketch combining a few of them follows the list):

  • Minimize network latency (choose the closest data center).
  • Reduce network usage: cache data and minimize round-trips to the server.
  • Keep your connections open for as short a time as possible.
  • Set up short timeout durations in the connection string.
  • Use connection pooling.
  • Wrap all database operations in transactions with TRY CATCH blocks.
  • Fine-tune your T-SQL.
  • Keep (or push) business logic in SQL Azure.
  • Use stored procedures and batching.
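
To make a few of these concrete, here’s a small sketch combining some of them: a short Connect Timeout in the connection string, pooling left on (the ADO.NET default), a connection opened as late and disposed as early as possible, and a stored procedure instead of ad hoc T-SQL. The server, database and procedure names are made-up placeholders.

    using System.Data;
    using System.Data.SqlClient;

    class OrdersRepository
    {
        // Placeholder connection string: short timeout, pooling enabled.
        private const string ConnectionString =
            "Server=tcp:myserver.database.windows.net;Database=MyDb;" +
            "User ID=user@myserver;Password=...;" +
            "Connect Timeout=30;Pooling=true;Encrypt=true;";

        public void ArchiveOldOrders(int cutoffDays)
        {
            using (var connection = new SqlConnection(ConnectionString))
            using (var command = new SqlCommand("dbo.ArchiveOldOrders", connection))
            {
                command.CommandType = CommandType.StoredProcedure;
                command.Parameters.AddWithValue("@CutoffDays", cutoffDays);

                connection.Open();          // open as late as possible...
                command.ExecuteNonQuery();  // ...and let using dispose it right away
            }
        }
    }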