The “modern data platform” architecture is becoming more and more popular as organisations shift towards identifying, collecting and centralising their data assets and driving towards embracing a “data-driven culture”.
Microsoft Azure has a suite of best-of-breed PaaS based services which can be plugged together by organisations wishing to create large scale Data Lake / Data Warehouse type platforms to host their critical corporate data.
When working with customers going down the Modern Data Platform path, I often hear very similar questions:
- What is the most suitable and scalable architecture for my use case?
- How should I logically structure my Data Lake or Data Warehouse?
- What is the most efficient ETL/ELT tool to use?
- How do I manage batch and streaming data simultaneously?
While these are all very valid questions – sorry, but that’s not what this blog is about! (One for another blog, perhaps?)
In my view – what often doesn’t get enough attention up front are the critical aspects of monitoring, auditing and availability. Thankfully, these are generally not too difficult to plug in at any point in the delivery cycle, but as with most things in the cloud there are just so many different options to consider!
So the purpose of this blog is to focus on the key areas of Azure Services Monitoring and Auditing for the Azure Modern Data Platform architecture.
I had a recent requirement to integrate multi-language support into a SQL DW via a SQL SSIS ETL solution. Specifically, the SQL DW platform only supported English text for all Dimension tables, but the business was expanding internationally, so there was a need to include other language translations of the Dimension attributes.
We wanted to do this without having to manually translate English text attributes that already exist, or new ones that are added or modified over time. We wanted an automated method that simply “worked”.
Enter Azure Cognitive Services Translator Text API service!
So the purpose of this blog is to outline the code/pattern we used to integrate the Azure Cognitive Services API into SQL SSIS ETL packages.
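Before getting into the SSIS detail, here is a minimal sketch of the kind of target structure such a pattern can load into – a language-keyed translation table sitting alongside the English Dimension attributes. To be clear, the table and column names below are purely illustrative assumptions, not our actual DW schema:

```sql
-- Illustrative schema only: hypothetical names, not the actual DW design
CREATE TABLE dbo.DimProduct
(
    ProductKey  int           NOT NULL PRIMARY KEY,
    ProductName nvarchar(200) NOT NULL              -- English source attribute
);

CREATE TABLE dbo.DimProductTranslation
(
    ProductKey   int           NOT NULL REFERENCES dbo.DimProduct (ProductKey),
    LanguageCode char(5)       NOT NULL,            -- e.g. 'fr-FR', 'de-DE'
    ProductName  nvarchar(200) NOT NULL,            -- Translator Text API output
    CONSTRAINT PK_DimProductTranslation PRIMARY KEY (ProductKey, LanguageCode)
);

-- Candidate rows for translation: English attributes with no 'fr-FR' entry
-- yet, so the ETL only calls the API for new or changed attributes
SELECT p.ProductKey, p.ProductName
FROM   dbo.DimProduct AS p
LEFT   JOIN dbo.DimProductTranslation AS t
       ON  t.ProductKey   = p.ProductKey
       AND t.LanguageCode = 'fr-FR'
WHERE  t.ProductKey IS NULL;
```

With a structure like this, the SSIS package only needs to call the Translator Text API for the rows that query returns, writing the results back into the translation table.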
For those not aware, SQL Saturday is coming to Melbourne on Sat 20 Feb 2016.
SQL Saturday is an excellent free learning resource for all things SQL Server – all costs are covered by donations and sponsorships. Some of the excellent sponsors this year are PASS, RockSolid SQL, MelissaData, and Jade.
Some of the session focus areas include SQL 2016, SQL On-Prem solutions/technology, SQL / SQL DW in Azure solutions/technology, SQL MPP, Machine Learning, Agile Methods, Power BI, PowerShell, BIML …and more!
For those wanting to come along, here are the links you need. Please go to the website and register to attend.
The event is being held at Monash University (Caulfield Campus, 888 Dandenong Road, Caulfield East, Victoria).
For those interested I am presenting a session on Practical Partitioning which will show some interesting demos and should be a lot of fun… feel free to pop in and introduce yourself!
All of my presentation content will be posted on the SQL Saturday site at the completion of the event: http://www.sqlsaturday.com/464/Sessions/Details.aspx?sid=40479
The presentation demos are based on my 7-part blog post series on partitioning.
I hope to see you all in Melbourne at SQL Saturday!
Continuing on with my partitioning post series, this is part 7.
The series includes several major components of work (each linked below):
- partitioning large existing non-partitioned tables
- measuring performance impacts of partition aligned indexes
- measuring performance impacts of DML triggers for enforcing partitioned unique indexes
- rebuilding tables that are already partitioned (i.e. applying a new partitioning scheme)
- implementing partial backups and restores (via leveraging partitions)
- implementing partition aware index optimisation procedures
- calculating table partition sizes in advance
This blog post deals with calculating partitioning sizes in advance.
Sometimes (just sometimes) you need to calculate the size of your table partitions up front, before you actually go to the pain and effort of partitioning (or repartitioning) a table. Doing this helps with pre-sizing the database files in advance instead of having them auto-grow many times over in small increments as you cut data over into the partitions.
As a quick aside…
- The negative performance impacts of auto-shrink are universally well known (er, for DBAs, that is!); however, I rarely hear people talk quite so much about the less well known negative performance impacts of auto-grow.
- Auto-growing your database files in small increments can cause physical fragmentation of the database files on the storage subsystem and reduce IO performance (see the pre-sizing sketch just below this list). If you are interested, you can read about this here: https://support.microsoft.com/en-us/kb/315512
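As an example, a one-off pre-size of the data file to (near) its expected final size avoids those repeated small growth events entirely. A minimal sketch, with purely illustrative database/file names and sizes:

```sql
-- Illustrative only: grow the file once to its expected post-partitioning
-- size, and set a sensible fixed growth increment as a safety net
ALTER DATABASE SalesDW
MODIFY FILE (NAME = SalesDW_Data, SIZE = 200GB, FILEGROWTH = 4GB);
```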
Now – back to what I was saying about pre-sizing table partitions…!
I prepared a SQL script which, given some parameters, can review an existing table and its indexes (whether they are already partitioned or not) and tell you what your partition sizing breakdown would be should that table be partitioned with a given partition function.
I wrote it just for what I needed, but it could be expanded further if you are feeling energetic. The script is at the end of this post.
And so, let’s get into the nitty-gritty of this estimation script!
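Before the script itself, the core trick it relies on is worth calling out: $PARTITION can evaluate a partition function against any value, even for a table that isn’t (yet) partitioned by it, so you can group an existing table by its would-be partition number. A simplified sketch of the idea (the table name and boundary values below are hypothetical):

```sql
-- Candidate partition function for the estimate (boundary values illustrative)
CREATE PARTITION FUNCTION pf_OrderDate (date)
AS RANGE RIGHT FOR VALUES ('2014-01-01', '2015-01-01', '2016-01-01');
GO

-- Estimate the row spread of an existing (non-partitioned) table against
-- the candidate function; dbo.FactOrders is a hypothetical table name
SELECT  $PARTITION.pf_OrderDate(o.OrderDate) AS would_be_partition,
        COUNT_BIG(*)                         AS estimated_rows
FROM    dbo.FactOrders AS o
GROUP BY $PARTITION.pf_OrderDate(o.OrderDate)
ORDER BY would_be_partition;
```

Multiplying each estimated row count by the table’s average row size (e.g. avg_record_size_in_bytes from sys.dm_db_index_physical_stats) then gives a workable size estimate per partition – and hence per filegroup and database file.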
PASS 2015 continues (and finishes up today!) in Seattle.
It’s been an amazing conference this year, with a few things really hitting home:
- Amazing technology announcements around SQL 2016 CTP3
- Incredible advances in almost every component in Azure Data Services
- Full and seamless SQL/Azure ecosystem integration – and by that I mean both On-Prem and within the Azure Cloud. The story of either On-Prem or Azure Cloud is compelling enough individually; however, the Hybrid story is now a reality for SQL and enables dynamic and flexible architectures well beyond what competitors can offer.
- BUT what astounds me the most is actually the pace of change – barely a day goes by where I don’t receive a new service or feature update related to SQL 2016 CTP3 or Azure.
- I don’t recall a time (in recent memory) where the step changes have come so thick and fast – it’s certainly changed from where I started as a DBA on RDB/VMS back in 1994, when patches arrived by mail on tape cartridge! 🙂
- (As a quick aside, a chief designer on RDB was Jim Gray – the same Jim Gray who joined Microsoft around 1995, soon after Oracle bought the RDB product from DEC, and went on to lead the SQL Server architecture to stardom.)
Enough reminiscing already – moving along – today I attended 5 back-to-back sessions, and again I cannot blog about all of them in the time I have (or want to spend), but the one which stands out the most was Azure SQL Data Warehouse and Integration with the Azure Ecosystem by Drew DiPalma of Microsoft.
This session focused specifically on the Azure ecosystem surrounding the Azure SQL Data Warehouse (SQL DW) and how it can seamlessly interact with other Azure components to create different operational solutions. To me this was very compelling, not necessarily due to the SQL DW technology (which I know well already as the on-prem APS appliance), but more so as it showed just how easily all parts of Azure can happily work together.
PASS 2015 continues in Seattle, and today was my session at 10:45am on Using Azure Machine Learning (ML) to Predict Seattle House Prices. The background and info on my session is here: http://www.sqlpass.org/summit/2015/Sessions/Details.aspx?sid=7794
Overall I was pretty happy with how it went - and I think everyone who attended had a lot of fun with some of the games and tests I injected into the presentation. Everyone had a chance to be a Real Estate Agent :) - and at the same time learn some great methods around performing Azure ML Regression Predictive Analytics.
BUT – moving right along – I also attended 3 other sessions today, and again I cannot blog about all of them in the time I have, but the one which made me think the most about technology implementations and how they can improve lives was Understanding Real World Big Data Scenarios by Nishant Thacker of Microsoft.
It wasn’t about use cases for big data (that horse has already bolted), but more around really innovative and interesting ways the ecosystem of Azure technologies could be deployed to solve some complex business problems – or, more simply, ways to make our lives better!
So PASS officially kicked off this morning, leading into the next 3 days of back-to-back sessions.
You could certainly tell that the keynote was on… I mean the dining room was pumping…!
Oh that’s right, everyone is at the keynote!
So the Keynote session was hosted by Joseph Sirosh, Group Vice President, Data Group.
The big tell for the keynote was undoubtedly SQL Server 2016 CTP3 and just what’s packed to the rafters within the software. If you want to learn more about that then I recommend stepping across to this link: http://blogs.technet.com/b/dataplatforminsider/archive/2015/10/28/sql-server-2016-everything-built-in.aspx
Key Takeaways from the Keynote:
- SQL 2016 is a major release that really solidifies the Microsoft strategy of keeping a solid foot in both the On-Prem and In-Cloud data platform camps.
- “The future is both earth and sky!”
- The release offers much On-Prem capability, like PolyBase (to APS), R integration (advanced analytics), Always Encrypted, and SSAS/SSRS improvements.
- The release also provides the ability to seamlessly integrate from On-Prem to the Azure Cloud – and/or back – like PolyBase (to HDInsight) and Stretch Database; and SQL already has the capability to use Azure VMs for SQL AAG solutions and Azure backups.
- An interesting takeaway – the human genome is approx 1.5 gigabytes in size, or about 2 CDs’ worth of storage space. How small do you feel now?
I then attended 4 sessions, but today there is really only time to blog about this one – for me it was the most impressive in regards to capability and just how far it’s come!
The session was SQL Server in Azure Virtual Machines – Features and Best Practices, presented by Luis Vargas, a Senior Program Manager Lead in the SQL Server team.
PASS 2015 has kicked off in Seattle – well, the precons have anyway, on Mon & Tue. The actual conference runs Wed–Fri!
I attended a precon session today called Optimize “All Data” with a Modern Data Warehouse Solution held by Bradley Ball and Josh Luedeman of Pragmatic Works.
The session focused on modernising the corporate data warehouse via Data Lifecycle Optimisation.
What does that mean?
Well – it means focusing on a defined set of critical technology and business areas around the corporate data warehouse, and strategically implementing a managed approach to improving it via the introduction of technologies and processes. Specifically, the session looked at 6 areas around the corporate data warehouse to consider in your approach to modernisation:
- Architecture and Configuration
- Availability and Continuity
- Maintenance and Optimisation
- Enterprise BI
- Big Data Architecture and Deployment
- Business and Predictive Analytics