How to Optimize/Debug the Slow Reports in Power BI

Hello Everyone,

Hope everyone is good. :slightly_smiling_face:

Today I would like to share a few links on the topic "How to Optimize/Debug the Slow Reports in Power BI". In our forum, almost once or twice a week we come across a question about how to optimize a slow report. Most of us are probably familiar with the "Performance Analyzer" option available in Power BI, but we may still have doubts about how to use it and how to analyze the main factors that impact performance.

Most of us are also familiar with the tool "DAX Studio" and how it helps in analyzing and debugging slow reports. To understand these concepts better and in more depth, I'm providing a few links below.

For those who enjoy reading, I'm providing a few links to articles on the topic, and for those with time constraints I'm providing a few links to videos as well.

For a better understanding of the concept, you can also refer to the book "The Definitive Guide to DAX", in particular the chapters on "The VertiPaq Engine", "Optimizing Data Models", "Analyzing DAX Query Plans" and "Optimizing DAX", which all deal with performance optimization.

Hoping you'll find this useful and that it helps with your report optimization.

Happy learning, stay well and stay safe during these times.

Have a great day. :slightly_smiling_face:

Thanks & Warm Regards,
Harsh

https://www.designmind.com/business-intelligence/power-bi-report-is-too-slow/

Nice one

I think I might create some videos about this myself

Just marking the post as "Solution".

Hi @Harsh. This is very useful. I ran the Performance Analyser and found that my variance DAX columns are taking ca. 7 seconds to load. Hoping you could help optimise my formula:

v LY. = 
SWITCH( TRUE() , 
    [Exec Var Table Data] = "Var." && [Table Data] = "Actual" , [CY] - [LY] , 
    [Exec Var Table Data] = "Var." && [Table Data] = "PAR" , [CY] - [LY] ,
    [Exec Var Table Data] = "Var." && [Table Data] = "% TORE" && [Period Table Data] = "MTD" , FORMAT( [% TORE MTD] - [% TORE LY MTD] , "0.0%" ), 
    [Exec Var Table Data] = "Var." && [Table Data] = "% TORE" && [Period Table Data] = "YTD" , FORMAT( [% TORE YTD] - [% TORE LY YTD] , "0.0%" ), 
    [Exec Var Table Data] = "Var." && [Table Data] = "POR" , [CY] - [LY] , 
    [Exec Var Table Data] = "% Var."  && [Table Data] = "Actual" , FORMAT(DIVIDE([CY] - [LY] , [LY]) , "0.0%") ,     
    [Exec Var Table Data] = "% Var."  && [Table Data] = "PAR" , FORMAT(DIVIDE([CY] - [LY], [LY]) , "0.0%") ,
    [Exec Var Table Data] = "% Var."  && [Table Data] = "% TORE" && [Period Table Data] = "MTD" , FORMAT( [% TORE MTD] - [% TORE LY MTD] , "0.0%" ), 
    [Exec Var Table Data] = "% Var."  && [Table Data] = "% TORE" && [Period Table Data] = "YTD" , FORMAT( [% TORE YTD] - [% TORE LY YTD] , "0.0%" ), 
    [Exec Var Table Data] = "% Var."  && [Table Data] = "POR" , FORMAT( DIVIDE([CY] - [LY] , [LY]) , "0.0%" ) )

Is there anything I can do to optimise this formula?

Hello @Stuart,

Since the measure you've provided is a subsidiary/secondary measure built on top of multiple primary measures, you'll need to thoroughly review your individual measures to find out where the problem lies. I'm providing a video from Alberto Ferrari where he talks about how to find the problem and fix it, and also a link that explains the options available to optimize a slow report.

You can follow the method he shows to find out where the problem lies. Please check your dependent measures as well as your individual measures in order to evaluate the issue.

And this is really the only way to find where the problem lies; I know it is a tedious and lengthy process. As the experts say in their videos, please don't resort to the "Trial and Error Method" unless you're absolutely sure where the problem has occurred.

Hoping this video helps you in optimizing the performance of your report. :slightly_smiling_face:

Please note: I've also provided a link in the first post where you can find in-depth articles, posts and videos from SQLBI, updated from time to time, on topics related to optimization.

Thanks & Warm Regards,
Harsh

The main issue (well, my best guess without seeing the underlying data) is that since you are using SWITCH (which internally generates a bunch of nested IFs), it ends up using the slower of the two engines (the Formula Engine) instead of the faster Storage Engine. I bet if you run this in DAX Studio you will see CallbackDataID pop up, which is not good for performance.
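
If it helps, here is a rough sketch of the kind of query you could paste into DAX Studio (with Server Timings turned on) to check for that. The 'Dates'[Month] grouping column is just a placeholder on my side, so swap in a column from your own model; [v LY.] is the measure posted above. Any CallbackDataID entries will show up in the Server Timings pane.

// Sketch only: run in DAX Studio with Server Timings enabled,
// then check the Storage Engine queries for CallbackDataID.
// 'Dates'[Month] is a placeholder column; use one from your model.
EVALUATE
SUMMARIZECOLUMNS (
    'Dates'[Month],
    "v LY test", [v LY.]
)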

Have you looked into the new calculation groups that are available in the latest version of Power BI?

Hi @Nick_M

Where can I find the new Power BI calculation groups? Based on some feedback I have fine-tuned the formula to:
Bud =
VAR Actuals = [Table Data] = "Actual"
VAR PercTore = [Table Data] = "% TORE"
VAR PeriodTableMTD = [Period Table Data] = "MTD"
VAR PeriodTableYTD = [Period Table Data] = "YTD"
RETURN
SWITCH (
    TRUE (),
    Actuals && PeriodTableMTD, [CY Bud MTD],
    Actuals && PeriodTableYTD, [CY Bud YTD],
    PercTore && PeriodTableMTD, FORMAT ( [% TORE Bud MTD], "0.0%" ),
    PercTore && PeriodTableYTD, FORMAT ( [% TORE Bud YTD], "0.0%" )
)

But it's still taking around 2 seconds to update. Any guidance on how to remove the SWITCH function from the above?

@Stuart

They are pretty new, so I'm still looking into them as well. I'm sure there will be some info about them here in the near future.

Variables are good. Hard to say why it's still slow, but I'd have to assume it has more to do with the underlying measures than with the SWITCH. Any chance you can upload a file?

You can’t access them directly from PBI Desktop at this point - you need to go through Tabular Editor.
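
Just to give a flavour of what you end up writing there, below is a minimal sketch of the DAX expressions for two calculation items you might define in Tabular Editor. The item names and the 'Dates' table are purely illustrative assumptions, not something from your model.

// Sketch only: illustrative calculation item expressions created in Tabular Editor.
// "Current" item: return whatever measure is currently in the visual.
SELECTEDMEASURE ()

// "vs LY" item: the same measure shifted back one year
// (assumes a marked date table named 'Dates').
CALCULATE (
    SELECTEDMEASURE (),
    DATEADD ( 'Dates'[Date], -1, YEAR )
)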

Here are a couple of great introductory videos:

I hope this is helpful.

  • Brian

Thanks for posting this. Our company has an unusually large number of journal entries, with about 350MM lines to process. It's nearly impossible, and I'm not sure Power BI will have the "power" to handle and process all these records without too many challenges.

I will refer to these resources and have my team members do so as well.

@yaney,

The number of records you're processing alone won't necessarily be a problem. I saw a demo at MBAS where Microsoft was running a real-time report on over a billion records in Power BI at a reasonable speed. Instead, it will come down to the nature and efficiency of the measures you're running.

  • Brian

Thanks for that info, @BrianJ. I've heard of millions and millions of records being handled elsewhere, but we've had challenges, so we will have to leverage these resources. Appreciate you all!

@Stuart, I think @Nick_M is on the money with this.

Combining a multi-level SWITCH and a concatenation with FORMAT is where the problem lies. It's a similar issue to what others have reported in the finance template support category.

Are you running this for a financial statement template?

Try halving the SWITCH lines to see which group of lines is taking the longest time to calculate… maybe that's a way to identify the slowest lines?

@BrianJ is correct in saying the number of records really isn't the problem, and Power BI can handle billions of records. This assumes you set up a data model that is best optimized for VertiPaq (the compression engine used). Some high-level things to consider:

  • Only import the columns you need. Don't just import anything and everything.
  • Reduce the uniqueness of data in the columns as much as possible. For example, if you are storing a DateTime, break it out into one column for Date and one for Time. (DAX Studio can show you column cardinality.)
  • Be sure there are no referential integrity violations (DAX Studio can show you those).
  • Do not store partial results in calculated columns. Also try to compute calculated columns at the source (e.g. SQL Server or Power Query) rather than using DAX.
  • If you are using complex DAX filters or measures, see whether those can be moved from DAX to a calculated column.
  • Consider aggregating your fact table to a higher granularity. For example, if your main fact table has every single transaction for every single day, maybe create an aggregation table of that same data with the daily values aggregated together, and then use that for your measures (see the sketch after this list).
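
On that last point, here is a minimal sketch of what such an aggregation table could look like as a DAX calculated table. The 'Sales' table and its column names are made up for illustration; substitute your own fact table, and building the aggregation at the source (SQL Server or Power Query) is usually even better.

// Sketch only: a daily aggregation of a hypothetical 'Sales' fact table.
Daily Sales Agg =
SUMMARIZECOLUMNS (
    'Sales'[Order Date],
    'Sales'[StoreKey],
    "Total Amount", SUM ( 'Sales'[Amount] ),
    "Total Quantity", SUM ( 'Sales'[Quantity] )
)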

Then, after all of that, setting up the data model correctly is paramount. VertiPaq works best with a star schema. Then, and only then, does trying to optimize the DAX queries make sense. But assuming the above is accounted for, I would use DAX Studio, run each measure separately, and see what is really happening. That's a whole other topic, but here are some high-level DAX optimization strategies:

  1. Be extremely careful with nested iterators. Remember that when you have a row context (like you do inside an iterator) and you call a measure, there is a context transition. Depending on the size of the table you are iterating, this can be very slow (see the sketch after this list).
  2. Do not use an entire table as a filter if you can help it; there are tremendous performance implications and some side effects (some good, some bad).
  3. If you can use CALCULATE, use it.
  4. When you run a query in DAX Studio, look to see if there are CallbackDataID entries (they will be highlighted and bolded).
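
On point 1, here is a quick illustrative sketch of the pattern to watch out for. The 'Sales' table and the [Total Margin] measure are made up; assuming 'Sales' has no duplicate rows, both measures return the same total, but the first one forces a context transition on every row.

// Sketch only: hypothetical 'Sales' table and [Total Margin] measure.
// Calling a measure per row triggers a context transition for every row of 'Sales'.
Margin Slow Pattern =
SUMX ( 'Sales', [Total Margin] )

// Working with the row's columns directly avoids the per-row context transition.
Margin Column Pattern =
SUMX ( 'Sales', 'Sales'[Quantity] * ( 'Sales'[Price] - 'Sales'[Cost] ) )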

More or less, the strategy here is to make sure your data is set up as well as possible for VertiPaq and then to use DAX Studio to start seeing what is really happening. It would be easier to help optimize with sight of the data itself, but I'm not sure whether sharing it would be possible.
