Follow-up: the results are accurate and work 100%, but the solution has one limitation, performance. It will be very slow if you have big data; above roughly 2M rows it gives an out-of-memory problem. We will try to dig into how to solve the performance issue.
Sorry, didn't see your question. FYI, in the future you'll get a faster response if, instead of responding to a closed thread, you start a new topic.
While I'm not saying there aren't ways to potentially optimize the solution I provided, I don't think the improvement will be significant enough to give you a satisfactory result given the size of your dataset. Certain DAX operations, such as RANKX, are very calculation intensive and, even when optimized, are relatively slow. The previous-value pattern at the core of this solution is another of those calculation-intensive operations. Thus, I think the best bet for your situation would be to rework the solution in Power Query. It may still take significant time to execute there, but at least you can do it on scheduled refresh, and then once the report is open it should perform quite well.
Imke Feldmann has provided a simple and highly optimized approach to obtaining the previous value in Power Query, using two indexes, one starting at 1 and the other at 0. Here's a blog entry that explains the process in detail:
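The core idea can be sketched in M as follows (the table and column names here are illustrative, not taken from the blog post):

```
let
    // A small illustrative table of already-sorted values
    Source = Table.FromRecords({[Value = 10], [Value = 25], [Value = 40]}),
    // One index starting at 0, one starting at 1
    Add0 = Table.AddIndexColumn(Source, "Index0", 0, 1),
    Add1 = Table.AddIndexColumn(Add0, "Index1", 1, 1),
    // Self-join: row n's Index0 equals row n-1's Index1,
    // so each row picks up the previous row's Value
    Joined = Table.NestedJoin(Add1, {"Index0"}, Add1, {"Index1"}, "Prev", JoinKind.LeftOuter),
    Expanded = Table.ExpandTableColumn(Joined, "Prev", {"Value"}, {"Prev Value"})
in
    Expanded
```

The first row has no match (there is no Index1 = 0), so its Prev Value comes back null, which is exactly what you want for the first record.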
I hope this is helpful, and invite others with more expertise in optimization to chime in here with their thoughts.
Hi Brian, I tried to implement this solution in my dashboard, but when I try to load a table with my Customer Name and Order Date and then drop in Prev Order Date, I just get the circle spinning in the top left corner indefinitely. There's no issue with relationships, and the file size is relatively small.
Sorry, but there's not enough information for me to diagnose what the problem is. I've run this solution on some pretty large datasets without a problem. If you can post your PBIX file, I'd be glad to take a look.
I tested the DAX solution on your dataset, and it produces the right results but runs unacceptably slowly.
If you can also send me the underlying data file in Excel or CSV, I will work on the Power Query solution and see if I can get it to run at an acceptable speed using the Feldmann shift technique.
Thank you for your email. The data is the same as the (EDNA) data; it is a public DB you can take from this forum. But I took your solution and practiced it on my data. You are right that it makes performance slow, but the solution works 100%.
Thanks for your message. I was actually requesting @Ndz1985 's data file, since he is working on the same problem that you and I worked on, but his dataset is over a million rows, so the DAX solution is just not workable. I'm going to rework this in Power Query to try to speed it up.
Really appreciate your assistance here. I've exported the maximum I can to .csv. SQL Server didn't export the headers, so I've entered what I think the important ones are. If you need others, please let me know.
OK, I think I've got a pretty good outcome for you on this now.
The DAX-only visual took minutes to run, whereas by doing most of the heavy lifting in PQ, I've got the results below rendering in under 3.5 seconds:
In truth, the PQ portion still takes a long time to run, but that can perhaps be done on scheduled refresh so that the user doesn't bear the refresh-time delay.
Basically, what I did was add a 0-based index and a 1-based index to the sales table and do a self-join on the two index columns (i.e., the "Feldmann Shift", named for Imke Feldmann, who first published this technique). This is an efficient way to calculate any previous value in PQ. I then wrote a couple of simple custom M statements to check that previous dates weren't crossing over between customers, and subtracted the previous date from the current date for each line.
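The steps above can be sketched in M like this (a sketch only; the table and column names are placeholders, not the exact ones in the attached file):

```
let
    Source = Sales,
    // Sort so each customer's rows are contiguous and in date order
    Sorted = Table.Sort(Source, {{"Customer Name", Order.Ascending}, {"Order Date", Order.Ascending}}),
    Buffered = Table.Buffer(Sorted),
    // Feldmann Shift: one 0-based and one 1-based index
    Add0 = Table.AddIndexColumn(Buffered, "Index0", 0, 1),
    Add1 = Table.AddIndexColumn(Add0, "Index1", 1, 1),
    // Self-join Index0 against Index1 to pull the previous row alongside the current one
    Joined = Table.NestedJoin(Add1, {"Index0"}, Add1, {"Index1"}, "Prev", JoinKind.LeftOuter),
    Expanded = Table.ExpandTableColumn(Joined, "Prev",
        {"Customer Name", "Order Date"}, {"Prev Customer", "Prev Order Date"}),
    // Keep the previous date only when it belongs to the same customer
    AddPrev = Table.AddColumn(Expanded, "Prev Date Same Customer",
        each if [Customer Name] = [Prev Customer] then [Prev Order Date] else null, type date),
    // Days between the current order and the previous one
    AddDays = Table.AddColumn(AddPrev, "Days Between",
        each if [Prev Date Same Customer] = null then null
             else Duration.Days([Order Date] - [Prev Date Same Customer]), Int64.Type)
in
    AddDays
```

Table.Buffer before adding the indexes is a common performance precaution here, since it prevents the sort from being re-evaluated on each downstream step.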
All that's left to do is calculate the average days between by customer, which I did via the following DAX measure (although it could be done in PQ if you don't need this to be dynamic, say, sliced by date).
I hope this is helpful. Full solution file attached. (Note: I had trouble with your CSV file, so I pushed the whole set of CSV files out of the model with Power BI Exporter and then reimported them; that's why some of the column names are wonky…)
That's great to hear. Definitely some cool tricks in this one that I learned from Imke and @Melissa that I'm happy to be able to pass along.
If you haven't yet, I cannot recommend highly enough going through Melissa's new Power Query course on the EDNA learning portal. It is filled with gems like this, and it has really helped elevate my own skill set.