Dataflow and Dataset Best Practices

Update to this thread:
My dataflow setup is working well as-is.
The real problem was having a single golden dataset containing all of the fact tables: that was bad practice and the cause of the pain points mentioned in the OP.

Background:

I decided to keep the dataflows as they are and be more liberal with report creation. Every report is a new .pbix file; I've moved away from thin reports for the time being.
I use Power Automate to refresh each dataset, with the upstream dataflow refresh as the trigger. Only one automation per report is needed (keyed to the dataflow behind the report's primary fact table), though more than one is possible.
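The flow itself is built in the Power Automate designer ("When a dataflow refresh completes" trigger plus a refresh-dataset action), but the call it effectively makes can be sketched against the Power BI REST API. Everything below is illustrative: the workspace and dataset IDs, the token handling, and the helper names are assumptions, not part of my actual setup.

```python
import urllib.request

# Base endpoint of the Power BI REST API (real, documented base URL).
API_BASE = "https://api.powerbi.com/v1.0/myorg"

def refresh_url(group_id: str, dataset_id: str) -> str:
    """Build the endpoint that queues a refresh for one dataset in a workspace."""
    return f"{API_BASE}/groups/{group_id}/datasets/{dataset_id}/refreshes"

def build_refresh_request(group_id: str, dataset_id: str, token: str) -> urllib.request.Request:
    """Construct (but do not send) the POST that queues a dataset refresh.

    Power BI replies 202 Accepted and refreshes asynchronously, which is
    why chaining the automation off the dataflow refresh works well.
    """
    return urllib.request.Request(
        refresh_url(group_id, dataset_id),
        data=b"{}",  # empty JSON body; defaults are fine for a plain refresh
        headers={
            "Authorization": f"Bearer {token}",  # AAD token; acquisition not shown
            "Content-Type": "application/json",
        },
        method="POST",
    )
```

In practice Power Automate handles the authentication and the trigger wiring for you; this sketch just shows why one automation per upstream dataflow is enough, since a single POST queues the whole dataset refresh.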

I created a golden dataset for each of the fact tables. The option is there if I want to spin up a quick ad-hoc thin report, but my main use case for these golden datasets is Analyze in Excel.

My old golden dataset had grown to over 100 measures and 6 or 7 fact tables, plus additional tables needed for the odd special-case visual.
Navigating all those fields and measures in an Excel pivot table was cumbersome.
In thin reports built on that dataset, I lost track of which measures came from the golden dataset and which were local to the thin report.