SQL Saturday Melbourne (#464) – 20 Feb 2016

For those not aware, SQL Saturday is coming to Melbourne on Sat 20 Feb 2016.

SQL Saturday is an excellent free learning resource for all things SQL Server – all costs are covered by donations and sponsorships.  Some of the sponsors this year include PASS, RockSolid SQL, MelissaData, and Jade.

Some of the session focus areas include SQL 2016, SQL On-Prem solutions/technology, SQL / SQL DW in Azure solutions/technology, SQL MPP, Machine Learning, Agile Methods, Power BI, PowerShell, BIML… and more!

For those wanting to come along, here are the links you need to know.  Please go to the website and register to attend.

The event is being held at Monash University (Caulfield Campus, 888 Dandenong Road, Caulfield East, Victoria).

For those interested, I am presenting a session on Practical Partitioning which will show some interesting demos and should be a lot of fun… feel free to pop in and introduce yourself!

All of my presentation content will be posted on the SQL Saturday site at the completion of the event: http://www.sqlsaturday.com/464/Sessions/Details.aspx?sid=40479

The presentation demos are based on my 7-part blog post series on partitioning;

  1. https://mrfoxsql.wordpress.com/2015/04/26/rebuild-a-standard-table-to-a-partitioned-table/
  2. https://mrfoxsql.wordpress.com/2015/05/13/deciding-whether-to-align-non-clustered-indexes/
  3. https://mrfoxsql.wordpress.com/2015/05/21/performance-impacts-of-partitioning-dml-triggers/
  4. https://mrfoxsql.wordpress.com/2015/06/10/rebuilding-existing-partitioned-tables-to-a-new-partition-scheme/
  5. https://mrfoxsql.wordpress.com/2015/07/07/implementing-partial-backups-and-restores/
  6. https://mrfoxsql.wordpress.com/2015/11/10/implementing-partition-aware-index-optimisation-procedures/
  7. https://mrfoxsql.wordpress.com/2015/11/24/calculating-table-partition-sizes-in-advance/

I hope to see you all in Melbourne at SQL Saturday!


Disclaimer: all content on Mr. Fox SQL blog is subject to the disclaimer found here

Calculating Table Partition Sizes in Advance

Continuing on with my Partitioning post series, this is part 7.

The partitioning series includes several major components of work;

  1. partitioning large existing non-partitioned tables
  2. measuring performance impacts of partition aligned indexes
  3. measuring performance impacts of DML triggers for enforcing partitioned unique indexes
  4. rebuilding tables that are already partitioned (i.e. applying a new partitioning scheme)
  5. implementing partial backups and restores (via leveraging partitions)
  6. implementing partition aware index optimisation procedures
  7. calculating table partition sizes in advance

This blog post deals with calculating table partition sizes in advance.

Sometimes (just sometimes) you need to calculate the size of your table partitions upfront, before you actually go to the pain and effort of partitioning (or repartitioning) a table.  Doing this helps with pre-sizing the database files in advance instead of having them auto-grow many times over in small increments as you cut data over into the partitions.

As a quick aside…

  • The negative performance impacts of auto-shrink are universally well known (er, for DBAs that is!), however I rarely hear people talk quite so much about the negative performance impacts of auto-grow.
  • Auto-growing your database files in small increments can cause physical fragmentation of the database files on the storage subsystem, reducing IO performance.  If you are interested you can read about this here https://support.microsoft.com/en-us/kb/315512
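
So if you know in advance that (say) a new partition filegroup will land at around 10GB, you can create and pre-size its file upfront rather than letting it creep up from a small default.  A minimal sketch of the idea – the database, filegroup, file name, path, and sizes below are all hypothetical;

    -- Create a filegroup for the new partition(s), with its file pre-sized to the
    -- estimated partition size so it never has to auto-grow in small increments
    ALTER DATABASE MyPartitionedDB
        ADD FILEGROUP FG_2016;

    ALTER DATABASE MyPartitionedDB
        ADD FILE
        (
            NAME = N'FG_2016_Data',
            FILENAME = N'D:\Data\FG_2016_Data.ndf',
            SIZE = 10GB,        -- pre-sized from the estimate, not a small default
            FILEGROWTH = 1GB    -- and a sensible fixed growth increment just in case
        )
        TO FILEGROUP FG_2016;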

Now – back to what I was saying about pre-sizing table partitions…!

I prepared a SQL script which, given some parameters, can review an existing table and its indexes (whether they are already partitioned or not) and tell you what your partition sizing breakdown would be should that table be partitioned with a given partition function.

I wrote it just for what I needed but it could be expanded more if you are feeling energetic.  The script is at the end of this post.
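
To give a flavour of the approach before you dig into the full script – the trick is that the $PARTITION function can evaluate any partition function against any value, so you can bucket an existing table's rows by the boundaries of a proposed partition function before you actually commit to it.  A simplified sketch only (the partition function, table, and column below are hypothetical);

    -- A proposed yearly partition function (not yet attached to any table)
    CREATE PARTITION FUNCTION pf_OrderDate (datetime)
        AS RANGE RIGHT FOR VALUES ('2014-01-01', '2015-01-01', '2016-01-01');
    GO

    -- Count how many rows of the existing table would fall into each partition;
    -- multiply EstRows by the average row size (e.g. avg_record_size_in_bytes
    -- from sys.dm_db_index_physical_stats) to estimate MB per partition
    SELECT
        $PARTITION.pf_OrderDate(o.OrderDate) AS PartitionNumber,
        COUNT_BIG(*)                         AS EstRows
    FROM dbo.Orders AS o
    GROUP BY $PARTITION.pf_OrderDate(o.OrderDate)
    ORDER BY PartitionNumber;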

And so, let's get into the nitty gritty of this estimation script!

Continue reading

Implementing Partition Aware Index Optimisation Procedures

Continuing on with my Partitioning post series, this is part 6.

The partitioning series includes several major components of work;

  1. partitioning large existing non-partitioned tables
  2. measuring performance impacts of partition aligned indexes
  3. measuring performance impacts of DML triggers for enforcing partitioned unique indexes
  4. rebuilding tables that are already partitioned (i.e. applying a new partitioning scheme)
  5. implementing partial backups and restores (via leveraging partitions)
  6. implementing partition aware index optimisation procedures
  7. calculating table partition sizes in advance

This blog post deals with implementing partition aware index optimisation procedures.
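
For a taste of the key building blocks – sys.dm_db_index_physical_stats reports fragmentation per partition, and ALTER INDEX can rebuild a single partition, so an optimisation procedure only needs to touch the partitions that actually need work.  A minimal sketch (the table, index, and threshold below are hypothetical);

    -- Find fragmented partitions of a clustered index (index_id = 1)
    SELECT ps.partition_number, ps.avg_fragmentation_in_percent
    FROM sys.dm_db_index_physical_stats(
             DB_ID(), OBJECT_ID('dbo.Orders'), 1, NULL, 'LIMITED') AS ps
    WHERE ps.avg_fragmentation_in_percent > 30;

    -- Rebuild just one offending partition rather than the whole index
    ALTER INDEX CIX_Orders ON dbo.Orders
        REBUILD PARTITION = 42;   -- a partition number returned by the query above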

And so, let's get into the nitty gritty of the partitioning details!

Continue reading

PASS 2015 Session Report – Azure SQL DW Integration with the Azure Ecosystem & End of PASS 2015

PASS 2015 continues (and finishes up today!) in Seattle.

It's been an amazing conference this year with a few things really hitting home;

  • Amazing technology announcements around SQL 2016 CTP3
  • Incredible advances in almost every component in Azure Data Services
  • Full and seamless SQL/Azure ecosystem integration – and by that I mean both On-Prem and within the Azure Cloud.  The story of either On-Prem or Azure Cloud is compelling enough individually, however the Hybrid story is now a reality for SQL and enables dynamic and flexible architectures well beyond what competitors can offer.
  • BUT what astounds me the most is actually the pace of change – barely a day goes by where I don’t receive a new service or feature update related to SQL 2016 CTP3 or Azure.
  • I don’t recall a time (in recent memory) where the step changes have come so thick and fast – it’s certainly changed from when I started as a DBA on Rdb/VMS back in 1994, where patches arrived by mail on tape cartridge! 🙂
  • (As a quick aside, a chief designer on Rdb was Jim Gray, the same Jim Gray who joined Microsoft to lead the SQL Server architecture to stardom soon after Oracle bought out DEC’s Rdb around 1994)


Enough reminiscing already – moving along – today I attended 5 back-to-back sessions, and again I cannot blog about all of them in the time I have (or want to spend), but the one which stands out the most was Azure SQL Data Warehouse and Integration with the Azure Ecosystem by Drew DiPalma of Microsoft.

This session focused specifically on the Azure ecosystem surrounding the Azure SQL Data Warehouse (SQL DW) and how it can seamlessly interact with other Azure components to create different operational solutions.  To me this was very compelling, not necessarily due to the SQL DW technology (which I know well already as the on-prem APS appliance), but more so as it showed just how easily all parts of Azure can happily work together.

Continue reading

PASS 2015 Session Report – Understanding Real World Big Data Scenarios

PASS 2015 continues in Seattle, and today was my session at 10:45am on Using Azure Machine Learning (ML) to Predict Seattle House Prices.  The background and info on my session is here http://www.sqlpass.org/summit/2015/Sessions/Details.aspx?sid=7794

Overall I was pretty happy with how it went – and I think everyone who attended had a lot of fun with some of the games and tests I injected into the presentation.  Everyone had a chance to be a Real Estate Agent :) – and at the same time learn some great methods around performing Azure ML Regression Predictive Analytics.


BUT – moving right along – I also attended 3 other sessions today, again I cannot blog about all of them in the time I have, but the one which made me think the most about technology implementations and how they can improve lives was Understanding Real World Big Data Scenarios by Nishant Thacker of Microsoft.

It wasn’t about use cases for big data (that horse has already bolted), but more around really innovative and interesting ways the ecosystem of Azure technologies could be deployed to solve some complex business problems, or more simply, ways to make our lives better!

Continue reading

PASS 2015 Session Report – KeyNote and SQL Server on Azure Virtual Machines: Features, Best Practices & Roadmap

So PASS officially kicked off this morning, leading into the next 3 days of back-to-back sessions.

You could certainly tell that the keynote was on… I mean the dining room was pumping…!

[Photo: the empty dining room – where the bloody hell are you?]

Oh that’s right, everyone is at the keynote!

[Photo: the Day 1 keynote crowd]

So the Keynote session was hosted by Joseph Sirosh, Group Vice President, Data Group.

The big reveal of the keynote was undoubtedly SQL Server 2016 CTP3 and just what's packed to the rafters within the software.  If you want to learn more about that then I recommend you step across to this link here http://blogs.technet.com/b/dataplatforminsider/archive/2015/10/28/sql-server-2016-everything-built-in.aspx

Key Takeaways from the Keynote;

  • SQL 2016 is a major release that really solidifies the Microsoft view of a firm foot in both the On-Prem and In-Cloud data platform camps.
  • “The future is both earth and sky!”
  • The release offers much On-Prem capability like PolyBase (to APS), R integration (advanced analytics), Always Encrypted, and SSAS/SSRS improvements.
  • The release also provides the ability to seamlessly integrate from On-Prem to the Azure Cloud – and/or back – like PolyBase (to HDInsight) and Stretch Database – and SQL already has the capability to use Azure VMs for SQL AAG solutions and Azure backups.
  • An interesting takeaway – the size of the human genome is approx 1.5 gigabytes, or about 2 CDs worth of storage space.  How small do you feel now?


I then attended 4 sessions, but today there is really only time to blog about this one – for me it was the most impressive in regards to capability and just how far it’s come!

The session was SQL Server in Azure Virtual Machines – Features and Best Practices, presented by Luis Vargas, a Senior Program Manager Lead in the SQL Server team.

Continue reading

PASS 2015 Session Report – Optimize “All Data” with a Modern Data Warehouse Solution

PASS 2015 has kicked off in Seattle – well, the pre-cons have anyway on Mon & Tue.  The actual conference runs Wed–Fri!

I attended a precon session today called Optimize “All Data” with a Modern Data Warehouse Solution held by Bradley Ball and Josh Luedeman of Pragmatic Works.

The session focused on modernising the corporate data warehouse via Data Lifecycle Optimisation.

What does that mean?  

Well – it means focusing on a defined set of critical technology and business areas around the corporate data warehouse, and strategically implementing a managed approach to improving it via the introduction of new technologies and processes.  Specifically, the session looked at 6 areas around the corporate data warehouse to consider in your approach to modernisation;

  1. Architecture and Configuration
  2. Availability and Continuity
  3. Maintenance and Optimisation
  4. Enterprise BI
  5. Big Data Architecture and Deployment
  6. Business and Predictive Analytics

Continue reading

Implementing Partial Backups and Restores (via Leveraging Partitions)

Continuing on with my Partitioning post series, this is part 5.

The partitioning series includes several major components of work;

  1. partitioning large existing non-partitioned tables
  2. measuring performance impacts of partition aligned indexes
  3. measuring performance impacts of DML triggers for enforcing partitioned unique indexes
  4. rebuilding tables that are already partitioned (i.e. applying a new partitioning scheme)
  5. implementing partial backups and restores (via leveraging partitions)
  6. implementing partition aware index optimisation procedures
  7. calculating table partition sizes in advance

This blog post deals with implementing partition aware partial backup and restore procedures.

I will blog about the other steps later.
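
For a taste of the idea before the detail – once historical partitions sit on their own filegroups, you can mark those filegroups read-only, back them up once, and then routinely back up only the active read-write filegroups.  A minimal sketch (the database, filegroup, and backup paths below are hypothetical);

    -- Mark a historical partition filegroup read-only, then take a one-off backup of it
    ALTER DATABASE MyPartitionedDB MODIFY FILEGROUP FG_2014 READ_ONLY;

    BACKUP DATABASE MyPartitionedDB
        FILEGROUP = 'FG_2014'
        TO DISK = 'D:\Backup\MyPartitionedDB_FG_2014.bak';

    -- Routine backups then only need the primary + read-write filegroups
    BACKUP DATABASE MyPartitionedDB
        READ_WRITE_FILEGROUPS
        TO DISK = 'D:\Backup\MyPartitionedDB_Partial.bak';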

And so, let's get into the nitty gritty of the partitioning details!

Continue reading

Rebuilding Existing Partitioned Tables to a New Partition Scheme

Continuing on with my Partitioning post series, this is part 4.

The partitioning series includes several major components of work;

  1. partitioning large existing non-partitioned tables
  2. measuring performance impacts of partition aligned indexes
  3. measuring performance impacts of DML triggers for enforcing partitioned unique indexes
  4. rebuilding tables that are already partitioned (i.e. applying a new partitioning scheme)
  5. implementing partial backups and restores (via leveraging partitions)
  6. implementing partition aware index optimisation procedures
  7. calculating table partition sizes in advance

This blog post deals with rebuilding tables that are already partitioned (i.e. applying a new partitioning scheme).

I will blog about the other steps later.
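
The punchline, for those who want a taste upfront – you don't have to copy the table out and back in again; you can rebuild the clustered index directly onto a new partition scheme using DROP_EXISTING.  A minimal sketch (all object names below are hypothetical);

    -- The new partition function and scheme to move the table onto
    CREATE PARTITION FUNCTION pf_OrderDate_New (datetime)
        AS RANGE RIGHT FOR VALUES ('2015-01-01', '2016-01-01');

    CREATE PARTITION SCHEME ps_OrderDate_New
        AS PARTITION pf_OrderDate_New ALL TO ([PRIMARY]);

    -- Rebuild the existing clustered index onto the new scheme in one operation
    CREATE UNIQUE CLUSTERED INDEX CIX_Orders
        ON dbo.Orders (OrderDate, OrderID)
        WITH (DROP_EXISTING = ON)
        ON ps_OrderDate_New (OrderDate);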

And so, let's get into the nitty gritty of the partitioning details!

Continue reading

Performance Impacts of Partitioning DML Triggers

Continuing on with my Partitioning post series, this is part 3.

The partitioning series includes several major components of work;

  1. partitioning large existing non-partitioned tables
  2. measuring performance impacts of partition aligned indexes
  3. measuring performance impacts of DML triggers for enforcing partitioned unique indexes
  4. rebuilding tables that are already partitioned (i.e. applying a new partitioning scheme)
  5. implementing partial backups and restores (via leveraging partitions)
  6. implementing partition aware index optimisation procedures
  7. calculating table partition sizes in advance

This blog post deals with partition aligning non-clustered indexes (unique and non-unique) and then measuring performance impacts of DML triggers for enforcing partitioned unique indexes.

I will blog about the other steps later.
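
For context on why a trigger comes into it at all – a unique index can only be partition aligned if the partition column is part of its key, so when business uniqueness is on some other column it has to be enforced another way, such as a DML trigger (and that enforcement has a measurable cost).  A minimal sketch of such a trigger (the table and column names below are hypothetical);

    -- Enforce uniqueness of OrderNumber across ALL partitions, since a partition
    -- aligned unique index would have to include the partition key in its key
    CREATE TRIGGER trg_Orders_UniqueOrderNumber
    ON dbo.Orders
    AFTER INSERT, UPDATE
    AS
    BEGIN
        SET NOCOUNT ON;
        -- If any just-inserted OrderNumber now appears more than once, fail the statement
        IF EXISTS (
            SELECT 1
            FROM dbo.Orders AS o
            INNER JOIN inserted AS i
                ON o.OrderNumber = i.OrderNumber
            GROUP BY o.OrderNumber
            HAVING COUNT_BIG(*) > 1
        )
        BEGIN
            RAISERROR('Duplicate OrderNumber violates partition-wide uniqueness.', 16, 1);
            ROLLBACK TRANSACTION;
        END;
    END;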

And so, let's get into the nitty gritty of the partitioning details!

Continue reading