Hi @BrianJ,
Thanks for your response.
I have renamed the columns to make the scenario a little more useful. As you note, I have made the data anonymous and generic, so I had to think of a scenario that would make sense to an analyst.
The scenario goes:
We have data entry operators who enter records. Each record is assessed by the database for invalid data, and if there are any data entry errors, an error ID is produced that identifies the specific error. For example, a missing post code may be error 55 and a missing surname error 88. So there could be many errors for a single entry.
Each data entry operator can enter data for one or more customers.
Each customer can have one or more data entry operators.
Each month a report is exported that shows the total entries for each customer and the error types made by each data entry operator.
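To make that concrete, here is a minimal sketch of the base measures I have in mind. The table and column names (Entries, Errors, Errors[ErrorID]) are placeholders rather than the exact names in the pbix, since I have anonymised everything:

```
-- One row per record entered (placeholder table name)
Total Entries = COUNTROWS ( Entries )

-- One row per error raised against an entry (placeholder table name)
Total Errors = COUNTROWS ( Errors )

-- Distinct error types in the current filter context
-- (e.g. per customer or per data entry operator)
Error Types = DISTINCTCOUNT ( Errors[ErrorID] )
```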
The 5 error IDs I chose were just the top ones by volume. I was thinking that if a specific set of error IDs (say 5 of them) were "more important" than others, we could segment them out and track them over time. I have no specific IDs at the moment that are considered more important than others, but it is the type of analysis I was thinking of, so I thought of perhaps using the IN operator to do that, tracking them as a group total and individually within that group.
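As a rough sketch of that IN idea, using the placeholder names from the previous sketch (the IDs below are made up apart from 55 and 88 from the example above):

```
-- Hand-picked 'important' error IDs tracked as one group
Priority Errors =
CALCULATE (
    [Total Errors],
    Errors[ErrorID] IN { 55, 88, 12, 34, 56 }
)

-- Share of all errors that the priority group represents
Priority Error % =
DIVIDE ( [Priority Errors], [Total Errors] )
```

Putting Errors[ErrorID] on the rows of a visual with [Priority Errors] as the value would then give the individual tracking within the group.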
So I suppose I am looking for some ideas around aligning the DAX functions with the analysis. I sometimes get a bit stressed about which path to take at the start of a dataset. I will plug away at different approaches, using SUMMARIZE, TOPN, RANKX, custom groups like the ones you build for top-profit customers, and whatever else pops into mind, but any ideas from the forum would be well received.
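For example, one RANKX direction I might try, again with the placeholder names above, so that the top 5 is derived dynamically rather than hard-coded:

```
-- Rank each error ID by volume across all IDs
Error Rank =
RANKX (
    ALL ( Errors[ErrorID] ),
    [Total Errors],
    ,
    DESC,
    DENSE
)

-- Errors attributable to whatever the current top 5 IDs are
Top 5 Errors =
CALCULATE (
    [Total Errors],
    FILTER ( VALUES ( Errors[ErrorID] ), [Error Rank] <= 5 )
)
```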
So much for a 'simple dataset'. The options sometimes flummox me, but I suppose it is just a matter of starting small each time, then 'building it out', and not getting caught up too much in the holistic picture right at the start.
Updated pbix here: play error file.pbix (60.3 KB)