Azure DocumentDB – Preparing, Loading and Querying Data

 

Recently I have been using Azure DocumentDB to validate several business use cases across a variety of applications.

For those SQL DBAs and others who are new to Azure DocumentDB, it’s a recent entrant to the NoSQL document database world, and as it’s a PaaS document database cloud service it has the agility, scalability and availability of the Azure Cloud.

As a schema-less Azure PaaS “document database”, for my use case I wanted to verify…

  • basic costing and performance levels
  • methods to create valid JSON documents from SQL Server (see the sketch after this list)
  • methods to load JSON documents into Azure DocumentDB
  • performing basic like-for-like document query comparisons with SQL Server
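
On the second point above, here is a minimal hedged sketch of one possible approach, assuming SQL Server 2016 or later (which introduced the FOR JSON clause) and a purely hypothetical dbo.Customer table – not necessarily the exact method used in the full post.

-- Hypothetical example only: shape relational rows as JSON documents using FOR JSON PATH.
-- Table and column names are illustrative; DocumentDB expects each document to carry an "id" property.
SELECT  CAST(c.CustomerID AS VARCHAR(50)) AS [id],
        c.FirstName                       AS [name.first],
        c.LastName                        AS [name.last],
        c.CreatedDate                     AS [createdDate]
FROM    dbo.Customer AS c
FOR JSON PATH;

Each row becomes one JSON object in the returned array, and the dotted aliases produce nested properties, which keeps the output in a shape a document database can ingest directly.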

Some homework reading for those interested…

 

22 May 2017 [EDIT]

Microsoft have just recently added significant new functionality and also formally renamed Azure DocumentDB as Azure Cosmos DB – a major evolution of the NoSQL database engine.  The details on the renamed service are here – https://azure.microsoft.com/en-au/blog/dear-documentdb-customers-welcome-to-azure-cosmos-db/

 

And so… let’s get into the belly of Azure DocumentDB (now Cosmos DB)!

Continue reading

PASS – A Business Case for Attendance

For those tech professionals who play in the SQL Server and Business Intelligence world (and let me say that’s a huge number!) the PASS Summit provides one of the most fantastic and focused learning and networking experiences available.

I simply cannot think of any other SQL/BI event that brings together such a range of deep expertise and learning, open networking opportunities, knowledge sharing and community involvement as PASS does.  In fact I know of several of my Aussie counterparts who attend no other training throughout the year except for the complete 5 day immersion that only PASS can offer.

For those new to PASS – see here http://www.sqlpass.org/summit/2016/Welcome.aspx

I have attended a few PASS events now (and have been lucky enough to present in 2015!) and each event has been better than the last.  With the Call For Speakers now closed I can see that this year’s PASS 2016 in Seattle (Oct 25-28) is shaping up to be yet another firecracker!

Will I be presenting this year?  Well, I actually didn’t submit any abstracts.  I started a job with Microsoft (Azure) and needed to focus on the ever-expanding learning curve!

Anyway – I have blogged about my experiences submitting and attending PASS many times…

Unfortunately such amazing resources don’t always come free, and these uncertain financial times can mean a tight training budget for many organisations and individuals alike.

As such, for those attendees who find themselves in this position I have prepared a brief below which can hopefully help you put together a convincing business case for attendance.  It is one I wrote a few years back, after presenting at a few SQL Saturday and PASS virtual chapter sessions, and it helped me position the key benefits of PASS.

Take from this what you will – and be sure to update this with the most recent content and also to reflect your personal and business circumstances.

GOOD LUCK!  And I hope to see you at PASS !

Continue reading

Compression Performance with Low Key Selectivity

When it comes to enabling data compression in SQL Server, I suspect most people don’t do a lot of testing of the impacts of either enabling compression or its ongoing maintenance.  I assume most are interested in its ongoing performance for queries, and you know that’s damn fair enough too!

If you read the Microsoft Data Compression Whitepaper (and I mean who’s got time to do that!) then it has some interesting technical tidbits buried deep in there which affect the impacts of applying and managing data compression.

https://technet.microsoft.com/en-us/library/dd894051(v=sql.100).aspx

Here are some of the ones of interest…

SQL Server uses statistics on the leading column to distribute work amongst multiple CPUs, thus multiple CPUs are not beneficial when creating, rebuilding, or compressing an index where the leading column of the index has relatively few unique values or when the data is heavily skewed to just a small number of leading key values – only limited effective parallelism will be achieved in this case.


And this…

Compressing or rebuilding a heap with ONLINE set to ON uses a single CPU for compression or rebuild. However, SQL Server first needs to scan the table—the scan is parallelized, and after the table scan is complete, the rest of the compression processing of the heap is single-threaded.


And this…

When a heap is compressed, if there are any nonclustered indexes on the heap, they are rebuilt as follows:
(a) With ONLINE set to OFF, the nonclustered indexes are rebuilt one by one.
(b) With ONLINE set to ON, all the nonclustered indexes are rebuilt simultaneously.
You must account for the workspace required to rebuild the nonclustered indexes, because the space for the uncompressed heap is not released until the rebuild of the nonclustered indexes is complete.
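
To make that heap scenario concrete, here is a minimal hedged sketch (the table name is illustrative, and PAGE compression is just one choice):

-- Illustrative only: compress an existing heap with PAGE compression.
-- With ONLINE = ON the initial table scan is parallelised, but the compression itself
-- runs single-threaded, and all nonclustered indexes on the heap are rebuilt simultaneously
-- (so budget the extra workspace for the rebuilds).
ALTER TABLE dbo.MyHeapTable
    REBUILD WITH (DATA_COMPRESSION = PAGE, ONLINE = ON);

-- With ONLINE = OFF the nonclustered indexes are instead rebuilt one by one.
ALTER TABLE dbo.MyHeapTable
    REBUILD WITH (DATA_COMPRESSION = PAGE, ONLINE = OFF);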

 

That first one is a cracker – it hit me once when compressing a SQL Server table (600M+ rows) on a 64-core Enterprise SQL Server.  After benchmarking several other data compression activities I thought I had a basic “rule of thumb” (based on GB data size and number of rows)… which just happened to be coincidence!

This also raises the question: why would you use low selectivity indexes at all?  Well I can think of a few cases – but the one which stands out the most is identifying a small number of rows within a much larger collection – such as an index on TYPE columns (e.g. [ProcessingStatusFlag] CHAR(1) = [P]rocessed, [U]nprocessed, [W]orking, [F]ailed, etc).
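
As a hedged sketch of that scenario (the table, index and column names are illustrative only), a low-selectivity status flag index might look like the below, with MAXDOP capped on the rebuild since the whitepaper note above suggests parallelism gains little on such a skewed leading column:

-- Illustrative only: a low-selectivity index used to find the small number of
-- unprocessed rows within a much larger table.
CREATE NONCLUSTERED INDEX IX_WorkQueue_ProcessingStatusFlag
    ON dbo.WorkQueue (ProcessingStatusFlag)
    WITH (DATA_COMPRESSION = PAGE);

-- Rebuilding/compressing on a skewed leading column gains little from parallelism,
-- so capping MAXDOP is one option rather than tying up many cores for no benefit.
ALTER INDEX IX_WorkQueue_ProcessingStatusFlag ON dbo.WorkQueue
    REBUILD WITH (DATA_COMPRESSION = PAGE, MAXDOP = 1);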

… AND SO – let’s do some testing to validate this puppy!

Continue reading

SQL Filegroup/File Proportional Fill Algorithm

I had a question at work recently where there was some confusion around how SQL Server allocates data across data files within a filegroup in a user database.  There was a mention that data was not being distributed evenly across files and also that a trace flag was needed for SQL Server to distribute data evenly.  I am uncertain if those circumstances were database config related or something else outside of proportional fill.

So I thought I’d do a quick post just to clarify how proportional fill works via demonstration.

SQL Server has used a proportional fill strategy across data files in a filegroup for some time (as long as I care to remember anyway) and this has been pretty well documented in SQL BoL and a number of blog posts on the web already.

Filegroups use a proportional fill strategy across all the files within each filegroup. As data is written to the filegroup, the SQL Server Database Engine writes an amount proportional to the free space in the file to each file within the filegroup, instead of writing all the data to the first file until full. It then writes to the next file.

As soon as all the files in a filegroup are full, the Database Engine automatically expands one file at a time in a round-robin manner to allow for more data, provided that the database is set to grow automatically.

https://technet.microsoft.com/en-us/library/ms187087(v=sql.105).aspx

When multiple files are involved, and if these are ideally located on different physical spindles on the underlying disk subsystem, then rather nicely performing data striping can be achieved for the database.  If proportional fill kicks in and starts to focus on files with more free space then you may get hot spots on those files.  However nowadays with auto-tiering SANs, SSD and (abstracted) cloud storage (for IaaS deployments) this is beginning to matter less and less.
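
As a hedged sketch of how you might observe proportional fill for yourself (the database name, file names, paths and sizes are all illustrative), create a filegroup with two differently sized files, load some data into a table on that filegroup, and then compare how much of each file has been used:

-- Illustrative only: two differently sized files in one filegroup.
CREATE DATABASE PropFillDemo
ON PRIMARY
    (NAME = PropFillDemo_Sys,   FILENAME = 'C:\SQLData\PropFillDemo_Sys.mdf'),
FILEGROUP FG_Data
    (NAME = PropFillDemo_Data1, FILENAME = 'C:\SQLData\PropFillDemo_Data1.ndf', SIZE = 100MB),
    (NAME = PropFillDemo_Data2, FILENAME = 'C:\SQLData\PropFillDemo_Data2.ndf', SIZE = 200MB)
LOG ON
    (NAME = PropFillDemo_Log,   FILENAME = 'C:\SQLData\PropFillDemo_Log.ldf');
GO

USE PropFillDemo;
GO

-- After loading data into a table on FG_Data, writes should land roughly in
-- proportion to the free space in each data file.
SELECT  name,
        size / 128                            AS size_mb,
        FILEPROPERTY(name, 'SpaceUsed') / 128 AS used_mb
FROM    sys.database_files
WHERE   type_desc = 'ROWS';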

However – let’s get into breaking down the proportional fill algorithm!

Continue reading

SQL Saturday Melbourne (#464) – 20 Feb 2016

For those not aware SQL Saturday is coming to Melbourne on Sat 20 Feb 2016.

SQL Saturday is an excellent free learning resource for all things SQL Server – all costs are covered by donations and sponsorships.  Some of the excellent sponsors this year are PASS, RockSolid SQL, MelissaData, and Jade.

Some of the session focus areas include SQL 2016, SQL On-Prem solutions/technology, SQL / SQL DW in Azure solutions/technology, SQL MPP, Machine Learning, Agile Methods, Power BI, Powershell, BIML …and more!

For those wanting to come along here are the links you need to know.  Please go to the website and register to attend.

The event is being held at Monash University (Caulfield Campus, 888 Dandenong Road, Caulfield East, Victoria)

For those interested I am presenting a session on Practical Partitioning  which will show some interesting demos and should be a lot of fun… feel free to pop in and introduce yourself!

All of my presentation content will be posted on the SQL Saturday site at the completion of the event. http://www.sqlsaturday.com/464/Sessions/Details.aspx?sid=40479

The presentation demos are based on my 7 part blog post series on partitioning;

  1. https://mrfoxsql.wordpress.com/2015/04/26/rebuild-a-standard-table-to-a-partitioned-table/
  2. https://mrfoxsql.wordpress.com/2015/05/13/deciding-whether-to-align-non-clustered-indexes/
  3. https://mrfoxsql.wordpress.com/2015/05/21/performance-impacts-of-partitioning-dml-triggers/
  4. https://mrfoxsql.wordpress.com/2015/06/10/rebuilding-existing-partitioned-tables-to-a-new-partition-scheme/
  5. https://mrfoxsql.wordpress.com/2015/07/07/implementing-partial-backups-and-restores/
  6. https://mrfoxsql.wordpress.com/2015/11/10/implementing-partition-aware-index-optimisation-procedures/
  7. https://mrfoxsql.wordpress.com/2015/11/24/calculating-table-partition-sizes-in-advance/

I hope to see you all in Melbourne at SQL Saturday!


Disclaimer: all content on Mr. Fox SQL blog is subject to the disclaimer found here

A “Business Intelligence” (BI) Definition

Over the years I have presented many times to various clients describing Business Intelligence (BI) solutions using the Microsoft BI solutions stack.

In all of my sessions, regardless of the specific focus, I always start with the same 2 content slides.

  • The first is always the positioning graphic from the Gartner Magic Quadrant for Business Intelligence & Analytics.  This shows all the top BI vendors and how they are positioned against each other, and is republished yearly (typically in Feb).  It often makes a great introduction as to why Microsoft BI is such a strong play.  If interested, an introduction to the paper is here, but you can also find the full report on various web sites (https://www.gartner.com/doc/2989518/magic-quadrant-business-intelligence-analytics)
  • The second is always a definition of exactly what Business Intelligence (BI) actually is, and this is the purpose of this short blog.

And so, let’s dig down into a business definition for Business Intelligence!

Continue reading

Calculating Table Partition Sizes in Advance

Continuing on with my Partitioning post series, this is part 7.

The partitioning series includes several major components of work (linked below);

  1. partitioning large existing non-partitioned tables
  2. measuring performance impacts of partitioned aligned indexes
  3. measuring performance impacts of DML triggers for enforcing partitioned unique indexes
  4. rebuilding tables that are already partitioned (i.e. applying a new partitioning scheme)
  5. implementing partial backups and restores (via leveraging partitions)
  6. implementing partition aware index optimisation procedures
  7. Calculating table partition sizes in advance

This blog post deals with calculating partitioning sizes in advance.

Sometimes (just sometimes) you need to calculate the size of your table partitions upfront, before you actually go to the pain and effort of partitioning (or repartitioning) a table.  Doing this helps with pre-sizing the database files in advance instead of having them auto-grow many many times over in small increments as you cut data over into the partitions.

As a quick aside…

  • The negative performance impacts of auto-shrink are universally well known (er, for DBAs that is!), however I rarely hear people talk quite so much about the less well known negative performance impacts of auto-grow.
  • Auto-growing your database files in small increments can cause physical fragmentation of the database files on the storage subsystem and reduce IO performance (see the sketch after this list).  If you are interested you can read about this here https://support.microsoft.com/en-us/kb/315512
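
A minimal hedged sketch of the alternative – pre-sizing the data file once, up front, and setting a fixed, reasonably large growth increment (the database and file names, sizes and increments are all illustrative):

-- Illustrative only: grow the data file once, in advance, and leave a sensible
-- fixed growth increment as a safety net rather than many tiny auto-grow events.
ALTER DATABASE MyPartitionedDB
    MODIFY FILE (NAME = MyPartitionedDB_Data, SIZE = 50GB, FILEGROWTH = 1GB);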

Now – back to what I was saying about pre-sizing table partitions…!

I prepared a SQL script which, given some parameters, can review an existing table and its indexes (whether they are already partitioned or not) and tell you what your partition sizing breakdown would be should that table be partitioned with a given partition function.

I wrote it just for what I needed but it could be expanded more if you are feeling energetic.  The script is at the end of this post.
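
As a simplified hedged sketch of the idea (not the full estimation script, and the partition function, table and column names are illustrative only), the $PARTITION function can show how many rows of a candidate table would land in each partition before you actually partition it; multiplying the row counts by the table’s average row size (e.g. from sp_spaceused) then gives a rough per-partition data size.

-- Simplified sketch only: count how many rows of a candidate table would fall
-- into each partition of an existing partition function, before the table is
-- actually partitioned against it.
SELECT  $PARTITION.pf_MyDateRange(t.TransactionDate) AS partition_number,
        COUNT(*)                                     AS row_count
FROM    dbo.MyLargeTable AS t
GROUP BY $PARTITION.pf_MyDateRange(t.TransactionDate)
ORDER BY partition_number;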

And so, let’s get into the nitty-gritty of this estimation script!

Continue reading

Implementing Partition Aware Index Optimisation Procedures

Continuing on with my Partitioning post series, this is part 6.

The partitioning series includes several major components of work (linked below);

  1. partitioning large existing non-partitioned tables
  2. measuring performance impacts of partitioned aligned indexes
  3. measuring performance impacts of DML triggers for enforcing partitioned unique indexes
  4. rebuilding tables that are already partitioned (i.e. applying a new partitioning scheme)
  5. implementing partial backups and restores (via leveraging partitions)
  6. implementing partition aware index optimisation procedures
  7. Calculating table partition sizes in advance

This blog post deals with implementing partition aware index optimisation procedures.

And so, let’s get into the nitty-gritty of the partitioning details!

Continue reading

PASS 2015 Session Report – Azure SQL DW Integration with the Azure Ecosystem & End of PASS 2015

PASS 2015 continues (and finishes up today!) in Seattle.

It’s been an amazing conference this year, with a few things really hitting home;

  • Amazing technology announcements around SQL 2016 CTP3
  • Incredible advances in almost every component in Azure Data Services
  • Full and seamless SQL/Azure ecosystem integration – and by that I mean both On-Prem and/or within the Azure Cloud.  The story of either On-Prem or Azure Cloud is compelling enough individually, however the Hybrid story is now a reality for SQL and enables dynamic and flexible architectures well beyond what competitors can offer.
  • BUT what astounds me the most is actually the pace of change – barely a day goes by where I don’t receive a new service or feature update related to SQL 2016 CTP3 or Azure.
  • I don’t recall a time (in recent memory) where the step changes have come so thick and fast – it’s certainly changed from when I started as a DBA on RDB/VMS back in 1994, when patches arrived by mail on tape cartridge! 🙂
  • (As a quick aside, a chief designer on RDB was Jim Gray, the same Jim Gray who joined Microsoft to lead the SQL Server architecture to stardom soon after Oracle bought out and shelved DEC’s RDB product around 1995)

 

Enough reminiscing already – moving along – today I attended 5 back-to-back sessions, and again I cannot blog about all of them in the time I have (or want to spend), but the one which stands out the most was Azure SQL Data Warehouse and Integration with the Azure Ecosystem by Drew DiPalma of Microsoft.

This session focused specifically on the Azure ecosystem surrounding the Azure SQL Data Warehouse (SQL DW) and how it can seamlessly interact with other Azure components to create different operational solutions.  To me this was very compelling, not necessarily due to the SQL DW technology (which I know well already as the on-prem APS appliance), but more so because it showed just how easily all parts of Azure can happily work together.

Continue reading

PASS 2015 Session Report – Understanding Real World Big Data Scenarios

PASS 2015 continues in Seattle, and today was my session at 10:45am on Using Azure Machine Learning (ML) to Predict Seattle House Prices.  The background and info on my session is here http://www.sqlpass.org/summit/2015/Sessions/Details.aspx?sid=7794

Overall I was pretty happy with how it went - and I think everyone who attended had a lot of fun with some of the games and tests I injected into the presentation.  Everyone had a chance to be a Real Estate Agent :) - and at the same time learn some great methods around performing Azure ML Regression Predictive Analytics.

 

BUT – moving right along – I also attended 3 other sessions today, and again I cannot blog about all of them in the time I have, but the one which made me think the most about technology implementations and how they can improve lives was Understanding Real World Big Data Scenarios by Nishant Thacker of Microsoft.

It wasn’t about use cases for big data (that horse has already bolted), but more around really innovative and interesting ways the ecosystem of Azure technologies could be deployed to solve some complex business problems, or more simply, ways to make our lives better!

Continue reading