Historically, users have been constrained to consuming data on fixed platforms: one specific screen size for digital viewing and a handful of printed layouts. Nowadays, with mobile devices ubiquitous, management expects to see key business indicators and day-to-day operational performance on the go. With well-designed data models and visualization techniques, Power View can deliver timely dashboards for exactly that kind of on-the-go analysis.
This session will discuss how to design an appropriate data model to enable self-service data exploration and insightful analysis in Power View, and how to create pixel-perfect visualizations for mobile devices. You will also learn how these techniques differ from those for traditional platforms. Using live demos, we will walk through ways to channel users' focus toward actionable analytics.
General Session (75 minutes)
BI Information Delivery
The performance metrics DBAs use to troubleshoot performance issues are very different in a virtual environment. Understanding the difference between the physical and virtual environment requires understanding these new metrics. This Lightning Talk describes the metrics DBAs have to monitor.
Lightning Talk (10 minutes)
Enterprise Database Administration & Deployment
Would you like to know how to keep end users and management happy? Then join us in this session as we discuss data quality and data cleansing. Those of us who wrestled with the Data Profiling Task in SQL Server 2008 were pleasantly surprised by the great advances Microsoft made with the introduction of Data Quality Services in the SQL Server 2012 release. In this hands-on presentation, we will look at how to set up a new knowledge base based upon an existing one, set up rules, perform knowledge discovery within the new knowledge base, and, finally, cleanse the data through a data quality project.
BI Platform Architecture, Development & Administration
Testing is critical to managing a high-quality data lifecycle. Unfortunately, SSIS has no built-in support for test authoring and the tools for relational database testing are limited. As a result, most organizations forgo automated testing and focus entirely on manual user testing, which is both expensive and often occurs too late in the process to address all of the issues discovered.
In this session, we will discuss a new approach to unit testing (verifying the correctness of individual packages, tasks, or dataflows) and integration testing (validating the data produced by an entire sequence of transformations). This approach uses metadata authored by analysts in Excel either to automatically generate testing logic within the corresponding packages or to generate standalone test packages that contain all of the necessary validation logic. All code will be shared with attendees for free and unrestricted use within their own projects.
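The idea of turning analyst-authored metadata into executable tests can be sketched in a few lines of Python. This is an illustrative toy, not the session's actual tooling: the rule names, column names, and helper functions below are all hypothetical.

```python
# Hypothetical metadata an analyst might author in Excel, exported as rows.
rules = [
    {"table": "DimCustomer", "column": "Email",  "rule": "not_null"},
    {"table": "FactSales",   "column": "Amount", "rule": "non_negative"},
]

# Each rule name maps to a simple value-level check.
CHECKS = {
    "not_null":     lambda v: v is not None,
    "non_negative": lambda v: v is not None and v >= 0,
}

def generate_tests(rules):
    """Turn each metadata row into a callable test that returns the
    offending rows -- analogous to generating validation logic in a package."""
    def make(rule):
        check = CHECKS[rule["rule"]]
        def test(rows):
            return [r for r in rows if not check(r.get(rule["column"]))]
        test.__name__ = f"test_{rule['table']}_{rule['column']}_{rule['rule']}"
        return test
    return [make(r) for r in rules]
```

A generated test simply takes a batch of rows and reports violations, so the same metadata can drive both in-package checks and standalone test runs.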
What is the Microsoft Azure SQL Database all about? What does this mean for me as a DBA? What is the process for setting up a Microsoft Azure database? How would I go about migrating one of my databases to the cloud? How do I possibly leverage this new technology in my production environment?
If you are a production DBA and have contemplated one or more of these questions, this session is for you! Maybe you have been tasked with finding out about the cloud. Join me in exploring the cloud where I will show you how SQL Server works in the Microsoft Azure SQL Database world. We will run through the simple process of configuring a Microsoft Azure SQL database, and then we’ll discuss the similarities and differences between on-premises SQL Server and Microsoft Azure SQL Databases. We will even look into the DR, HA, monitoring and performance tuning options available with Microsoft Azure.
This is an imaginary tale about data wizards and any resemblance with reality is purely coincidental.
In this 10-minute Lightning Talk, Regis Baccaro will illustrate how to do rapid data mashups with Excel, Power Query, Power View, Power Pivot, and Power BI... and show you which of the wizards claims the "Power" victory.
A good understanding of join algorithms is essential in diagnosing and fixing issues related to bad query plans. However, one of today’s realities is that a lot of database professionals do not have a computer science degree and didn’t sit through a formal “Introduction to Relational Databases” course.
This session seeks to fill in some of the gaps that might exist by looking in-depth at the three types of join operations. We’ll visualize how the join operations’ algorithms work so you can understand how query plans are computed, and then we’ll look at why these join operations have very different performance characteristics and why the optimizer chooses a specific join operator to use in a query plan. You’ll see through demonstrations that empirical cost calculations are similar to what the query optimizer actually returns.
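For readers who want a preview, two of the three join algorithms can be sketched in a few lines of Python. This is only a conceptual model of what the engine does internally (the merge join, which walks two sorted inputs in step, is omitted for brevity); all names here are our own.

```python
from collections import defaultdict

def hash_join(build_rows, probe_rows, build_key, probe_key):
    """Hash join: build a hash table on one input, then probe it with the
    other. Cost is roughly O(|build| + |probe|), which is why the optimizer
    favors it for large, unsorted, unindexed inputs."""
    table = defaultdict(list)
    for row in build_rows:                      # build phase
        table[row[build_key]].append(row)
    for row in probe_rows:                      # probe phase
        for match in table.get(row[probe_key], []):
            yield {**match, **row}

def nested_loops_join(outer_rows, inner_rows, outer_key, inner_key):
    """Nested loops join: O(|outer| * |inner|) comparisons in this naive
    form, but cheap to start and ideal when the outer input is tiny."""
    for o in outer_rows:
        for i in inner_rows:
            if o[outer_key] == i[inner_key]:
                yield {**o, **i}
```

Both functions return the same rows; their very different cost profiles are exactly why the optimizer picks one operator over another.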
Power BI for Office 365 is Microsoft's new self-service BI offering. But just because it emphasizes self-service doesn't mean the system administrator's role isn't important!
In this session, we will discuss the overall system components and how a Power BI site in SharePoint Online differs from an on-premises SharePoint BI site. We will walk through how to best handle setting up connectivity to data sources, when a gateway is needed, and what data refresh capabilities exist. We will also consider how and when to create OData feeds from your corporate on-premises data sources and how those OData feeds affect Enterprise Data Search functionality.
Disasters happen, plain and simple. When disaster strikes a database you're responsible for, and backups and repair fail, how can you salvage data, and possibly your company and your job? This is where advanced data recovery techniques come in. Using undocumented tools and deep knowledge of database structures, you can manually patch up the database enough to extract critical data.
This demo-heavy session will show you never-before-seen methods the speaker has used extensively over the last year to salvage data for real-life clients after catastrophic corruption. You won't believe what it's possible to do!
This session goes beyond classical star schema modeling, exploring new techniques for modeling data with Power Pivot and SSAS Tabular. You will see how brute-force power in DAX enables data models that differ from those used in SSAS Multidimensional. You will see several practical examples, including creating a virtual relationship (without a physical relationship in the data model); dynamic warehouse evaluation without a snapshot; dynamic currency conversion; counting events in a particular state for a given period; surveys; and basket analysis. The goal of this session is to show you how to solve classical problems in an unconventional way.
Are you finally ready to unlock the power in your spatial data? In this session, we will explore some advanced spatial analysis techniques, including clustering, binning, and the basic use of spatial statistics. We will then discuss several options for visualizing the results in SQL Server Reporting Services and PowerPivot. Get ready to go beyond bars and bubble charts!
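As a taste of the binning technique mentioned above, here is a minimal sketch of square-grid binning in Python. The function and cell size are illustrative assumptions, not the session's actual code; in SQL Server the same idea would be expressed over spatial columns.

```python
from collections import Counter

def grid_bin(points, cell_size):
    """Spatial binning in its simplest form: assign each (x, y) point to a
    square grid cell and count points per cell, yielding a density map
    suitable for a heat-map style visualization."""
    return Counter(
        (int(x // cell_size), int(y // cell_size)) for x, y in points
    )
```

Each cell's count can then drive the color or size of a marker in a report, which is how binning tames large point datasets that would overwhelm a plain scatter plot.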
Agile methodologies, such as Scrum and Extreme Programming, are popular and effective approaches to software development that focus on creating a high-value, maintainable product quickly through iterative development and close collaboration between the team and the customers. These approaches enable rapid delivery and development, along with flexibility and a focus on delivering high-value features. These methodologies work great for standard software development… but what about Business Intelligence projects?
This session will cover an introduction to Agile methodologies (with a focus on Scrum), what you will need to do to make it work for Business Intelligence projects, and common challenges that you will need to consider. This “from the trenches” presentation will also cover real world examples of projects where this approach was extremely successful, and a few that weren't.
BI and DWH are all about trust. Business users will use the data and information delivered to them if, and only if, they trust it. But they also ask for frequent changes, so how can one be sure not to introduce bugs and errors that will compromise that trust?
Test-driven development and continuous integration are two extremely interesting processes used in software development, and they can also be applied to BI solutions in order to bring agility into the BI field. In this session, we'll see which tools can be used to unit test your solution, how data can be unit tested, and how we can automatically start and test the ETL phase, or a cube process, each time someone checks in a change to the solution.
The Data Warehouse plays a central role in any BI solution: it's the back end upon which everything in the coming years will be built. It must be flexible enough to support the fast changes today's business demands, yet have a well-known, well-defined structure that supports the "engineerization" of its development process and keeps it cost-effective. In this full-day session, we will discuss architectural design details and techniques, Agile Modeling, unit testing, automation, and software engineering applied to a Data Warehouse project.
The only way to do this is to have a clear idea of the architecture, understand the concepts of measures and dimensions, and follow a proven, engineered way of building the warehouse so that quality and stability go hand-in-hand with cost reduction and scalability. This will allow you to start your BI project in the best way possible: avoiding errors, making implementation effective and efficient, building the groundwork for a winning Agile approach, and helping you define how your team should work so that your BI solution stands the test of time.
Pre-Conference Session (full day)
Self-service BI functionality is one of the big goals of the modern Microsoft BI stack. But bringing business intelligence functionality to a much broader audience can increase the risks of inconsistency, redundant information, unclear accountability, security concerns, and data quality problems.
In this session, we will show data governance best practices and the data governance capabilities Power BI has in place. In addition, we'll discuss how you can enhance your self-service BI landscape with Master Data Services and Data Quality Services for improved governance.
SQL Server is often I/O bound, but why? Do you feel lost when talking to your storage administrator? Are your storage subsystems like a mysterious black box where your databases live but you can’t go visit? This session will get you up to speed with the fundamentals of storage subsystems for SQL Server.
You will learn about the different types of storage that are available, and how to decide what type of storage to use for different workload types. You will also learn useful tips and techniques for configuring your storage for the best performance and reliability. We'll cover methods to effectively measure and monitor your storage performance so that you will have valuable information and evidence available the next time you have to discuss I/O performance with your storage administrator. Come to this session to learn how to analyze I/O as well as options for reducing bottlenecks.
There are many books and articles on how to design a dashboard or create a pretty chart. But how do you know which graphical representation is the right one for your data? What kind of chart will give you the insights into the data you are looking for? When should you use multiple series on a chart versus using small multiples? What is the best way to show contribution to the whole? Why would you mix a line and bar chart?
This session will answer these questions by taking a case study approach. Different data sets will be studied and we will see how different charts bring out different aspects of the data. Perhaps more importantly, we will take a look at which charts don’t really show anything interesting at all. Time will also be spent looking at the differences between using charts in reports and dashboards as opposed to visual analytics.
What exactly does it mean to have optimistic concurrency? What is the alternative? Is SQL Server 2012's SNAPSHOT Isolation optimistic? How can SQL Server 2014's In-Memory OLTP provide truly optimistic concurrency?
In this session, we'll look at what guarantees the various isolation levels provide, the difference between pessimistic and optimistic concurrency, and the new data structures in SQL Server 2014 that allow the enormous benefits of having totally in-memory storage with no waiting!
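The core of optimistic concurrency can be captured in a tiny sketch: read a row's version, work without holding any locks, and validate at commit time. This is a conceptual toy, not SQL Server's actual implementation; the class and method names are invented for illustration.

```python
class VersionedRow:
    """Toy optimistic concurrency control via row versioning.

    Readers never block; a writer records the version it read and, at
    commit, checks that no other writer got there first. On conflict the
    commit fails and the caller must retry -- no locks, no waiting."""
    def __init__(self, value):
        self.value = value
        self.version = 0

    def read(self):
        # Snapshot-style read: value plus the version it belongs to.
        return self.value, self.version

    def commit(self, new_value, read_version):
        if self.version != read_version:   # someone else wrote in between
            return False                   # write-write conflict: retry
        self.value = new_value
        self.version += 1
        return True
```

The pessimistic alternative would take a lock at read time and make the second transaction wait; here the conflict is detected only at commit, which is the trade-off the session explores.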
Application & Database Development
Queries need your help! Your mission, should you choose to accept it, is to make great decisions about what indexes are best for your workload. In this session, we'll review the difference between clustered and nonclustered indexes, show when to use included columns, understand what sargability means, and introduce statistics. You'll leave this session with the ability to confidently determine why, or why not, SQL Server uses your indexes when executing queries.
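Sargability, in particular, can be previewed with a simple analogy: a nonclustered index keeps keys in sorted order, so a predicate on the bare column can seek, while wrapping the column in a function forces a scan. The following Python sketch (our own illustration, using a sorted list as a stand-in for a B-tree) shows the difference.

```python
import bisect

# A toy "index": keys kept sorted, like the leaf level of a B-tree.
keys = sorted(["alvarez", "baker", "chen", "diaz", "evans"])

def seek(key):
    """Sargable predicate (e.g. LastName = 'chen'): the search argument
    matches the index order, so a binary search finds it in O(log n)."""
    i = bisect.bisect_left(keys, key)
    return i < len(keys) and keys[i] == key

def scan_upper(key):
    """Non-sargable predicate (e.g. UPPER(LastName) = 'CHEN'): the function
    hides the column from the index, so every key must be transformed and
    examined -- a full scan, O(n)."""
    return any(k.upper() == key for k in keys)
```

Both return the same answer; the point is that only the first one can exploit the index's sort order, which is exactly what SQL Server's seek-versus-scan choice reflects.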
Most data warehouse design efforts produce a large amount of metadata in the form of handwritten documentation. If done properly, this documentation provides everything the development team requires to build the dimensional data model and load that model with transformed data using the patterns and best practices that were also specified as part of the warehouse design. Unfortunately, the process of translating requirements to T-SQL scripts and SSIS packages requires tremendous effort and creates assets that are expensive to maintain.
In this session, we will show how to instead use a metadata database to store the requirements information, and then use this metadata to automatically generate EVERYTHING you need for your data warehouse project: documentation, schemas, SSIS packages, deployment scripts, etc. The code for the solution, which is presently in use in production data warehouses in major international companies, will be provided for all attendees to freely use.