Publish data model from a .pbit file?

Hello,

I refreshed my local data model. When I went to save it, I got an error saying there is not enough space on disk. I see this error is actually a recurring problem for many people. Although I could not save my .pbix data model locally, I did manage to save it as a template. So my questions are:

  1. Can I publish to the service via the template file?
  2. Will the changes I made in my model (which I was not able to save as a .pbix) have been saved into my .pbit file?

Many thanks
Michelle

My understanding of PBIT files is that you have now created a version of your report that does not have a data model included (which is why you were able to save it). I believe your measures will have been saved, but not the data itself.

And with that understanding, I don’t see how you will be able to publish it to the service and have it be useful.

You probably need to focus some time on optimizing your data model.

@Harsh has recently shared some great resources on this topic: How to speed up Power BI reports for opening and refreshing? - #3 by Harsh


Hi Heather, thanks for your reply. Yes, I see that the template file is out of the question. But no, this is a Power BI bug. There are a lot of people complaining about it (or at least I see quite a few hits). The model that I have open here is 500 MB, so not at all enormous. The other day a colleague of mine was able to fix this problem by going into C:\Users\Michelle\AppData\Local\Power BI Desktop\User Data\ and clearing out some temp files there. I could then save .pbix files containing a model once again. Now the problem has returned and the same fix isn’t doing it. :frowning:
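(For anyone hitting the same error: a generic way to spot the oversized temp files is to walk that folder and list the biggest files. The Python sketch below is just an illustration, not a Power BI tool; any disk-usage utility does the same thing, and the path is only the one mentioned above.)

```python
from pathlib import Path

def largest_files(root: str, top_n: int = 10):
    """Return (size_in_bytes, path) for the biggest files under root."""
    files = [(p.stat().st_size, p) for p in Path(root).rglob("*") if p.is_file()]
    return sorted(files, reverse=True)[:top_n]

# Example (path from the post above; adjust the user name for your machine):
# for size, path in largest_files(r"C:\Users\Michelle\AppData\Local\Power BI Desktop\User Data"):
#     print(f"{size / 1024**2:8.1f} MB  {path}")
```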


Hi @michellepace. Yup, a PBIT is a template and contains no data, just schema, so in your case it will have all your code. You can’t publish a PBIT; you can only use it to create another PBIX file. As for the saving, it sounds to me like an operating system, hardware, or policy limitation rather than a Power BI limitation.
Greg,


Hi @michellepace, did the response provided by @ help you solve your query? If not, how far did you get, and what further help do you need? If yes, kindly mark the thread as solved. Thanks!

Hi @michellepace, we’ve noticed that no response has been received from you since the 10th of January. We just want to check if you still need further help with this post. If there isn’t any activity on it in the next few days, we’ll be tagging this post as Solved. If you have a follow-up question or concern related to this topic, please remove the Solution tag first by clicking the three dots beside Reply and then unticking the checkbox. Thanks!

A response on this post has been tagged as “Solution”. If you have a follow-up question or concern related to this topic, please remove the Solution tag first by clicking the three dots beside Reply and then unticking the checkbox. Thanks!

I concur with @Greg. I regularly use the PBIT file for version/source code control for all of my reports, as it includes all of the markup for the report, the data model, and the data refresh steps (M code) but none of the imported data. If you open a PBIT and click Refresh (assuming you have access to all data sources), it will pull in all data and look exactly as you expect it to. With all of my reports versioned, I can also go back to a prior revision at any time and just refresh the data. I just have to remember to do a File - Export - Report Template each time I make report changes so the PBIT is updated. This applies to question 2.

For question 1, you will only publish a PBIX since opening a PBIT creates a PBIX equivalent in the desktop IDE. However, if you have the data refresh working in the service (as opposed to a static data set), you could publish the “empty” PBIX to the service and let the service refresh the data. You could also refresh the model locally, then publish the loaded PBIX to the service, then close the local version without saving the data because the PBIT has all the report details in it.

John C. Pratt


Thank you for all your replies. In the end, I downloaded TreeSize and ran it on my Windows user folder. I had a huge amount of Power BI temp files hanging around. I deleted those and was then able to save. Thanks @jpratt and @Greg; how I didn’t think of putting the .pbit into source control instead of the .pbix, I don’t know! Just one last quick question, although I think I know the answer:

If I have Mon, Tues and Weds data on the service, and then I republish my .pbix which only has Mon, does this mean Tues and Weds get wiped out until the dataset refreshes automatically?


@michellepace, whenever you publish the PBIX, it will only include the data loaded into the PBIX’s model. I deal with that all the time. So yes, in your situation you would lose Tuesday and Wednesday until the data refreshes again.

My largest models (so far) run in the 50-70 MB range for PBIX size, so I am able to refresh my PBIX locally when I am making report changes and then publish the full data model along with the changes.

I’ve found that a habit needs to be formed of either always refreshing the data locally when making changes, or always refresh the data after deploying the changed report. I happen to have scheduled refreshes on-prem that I can kick off whenever I want manually, but my habit is to refresh in the Desktop IDE and then deploy.

John C. Pratt

Goodness John, be thankful for small datasets. We’ve been extracting from SAP. My largest table is just over 9M rows (never counted the columns), a combination of the EKKO and EKPO tables for anyone who has scratched around in SAP. The procurement data model, last I checked, was about 890 MB. We eventually (thank goodness) ramped our Synapse capacity to 400 DW, which helped resolve most of the timeouts on refreshing data. I know, I know: why don’t you drop columns? The short answer is that we show everything first and reduce down second.

This is really just a convoluted way to say: I watch a lot of circles turn on my screen. Round and round and round. And then round again. I don’t have the luxury of refreshing on desktop.


@michellepace, there have been quite a few posts dealing with issues associated with large datasets in the desktop. I have over 2 million rows in my largest model, but relatively few columns. I am intrigued to pursue a mechanism by which it may be possible to import only a subset of data when the report is loading within the desktop IDE, but the full dataset once the report is published. Perhaps one of the gurus on here like @Melissa or @BrianJ may know how to do it. Within my SQL Server environments, I can detect the server name, and if it includes “TEST” in the name I can change query/system behavior.
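The server-name check described above could be sketched like this. Python is used here purely to illustrate the logic (in Power BI the equivalent branch would live in a Power Query step), and every name below (`row_limit`, `build_query`, the server and table names) is a hypothetical example, not an existing API:

```python
def row_limit(server_name: str, dev_limit: int = 1000):
    """Cap rows when the server name marks a TEST environment; None means no cap."""
    return dev_limit if "TEST" in server_name.upper() else None

def build_query(server_name: str, table: str) -> str:
    """Build a T-SQL SELECT with an optional TOP clause for dev/test servers."""
    limit = row_limit(server_name)
    top = f"TOP {limit} " if limit is not None else ""
    return f"SELECT {top}* FROM {table}"

print(build_query("sqlsrv-test-01", "dbo.Orders"))  # SELECT TOP 1000 * FROM dbo.Orders
print(build_query("sqlsrv-prod-01", "dbo.Orders"))  # SELECT * FROM dbo.Orders
```

The same idea in Power Query would be an `if Text.Contains(...)` step that wraps the source table in a row-limiting function only when the connection string points at a test server.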

John