Calculating an Ordinal in the Calendar Table: Resolving a Circular Dependency Error in DAX

Hi Everyone,
I am stuck on a DAX circular dependency error that I am unable to resolve on my own.

In the attached model, I have simplified the issue for clarity. Each period, usually monthly, I receive a set of data that includes actual sales and daily predictions. The current month’s submission contains actual sales for the current month and forecasts for the following months. This pattern repeats in subsequent periods.

My objective is to perform calculations that require assigning an ordinal number to each data submission in the calendar table, where ‘1’ represents the most recent submission, ‘2’ the previous one, and so on. I managed to achieve this when the only relationship was between the calendar date and the submission date, but once I had to switch to an inactive relationship, every attempt ran into a circular dependency error.
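
For illustration, my failing attempts followed roughly this pattern (a simplified sketch, not the exact formula; the relationship columns are assumed from the attached model), activating the inactive relationship from inside the calculated column:

SubmissionRanking =
// Simplified sketch of a failing attempt, for illustration only.
// USERELATIONSHIP can only be applied through CALCULATE /
// CALCULATETABLE, and context transition inside a calculated column
// can trigger circular dependency errors.
CALCULATE (
    RANKX (
        ALL ( PPMC[ExtractionDate] ),
        PPMC[ExtractionDate],
        'Date'[Date],
        DESC,
        DENSE
    ),
    USERELATIONSHIP ( 'Date'[Date], PPMC[ExtractionDate] )
)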

Any insights or suggestions on how to resolve this issue would be greatly appreciated. If needed, I am available for a meeting to discuss this in more detail.

Thank you for your assistance.

Best regards,
Roberto
data.xlsx (15.6 KB)
Ordinal.pbix (30.7 KB)

After many attempts, I was able to solve the issue by creating a calculated column on the calendar table.
This is the code that works as expected.

Thanks Everyone

Roberto

SubmissionRanking =
VAR CurrentDate = 'Date'[Date]
// Keep only the PPMC rows for the current calendar date that have data
VAR RankedDates =
    ADDCOLUMNS (
        FILTER (
            PPMC,
            [# Projects] > 0
                && PPMC[ExtractionDate] = CurrentDate
        ),
        // Rank the extraction date against all extraction dates,
        // most recent = 1
        "Rank", RANKX (
            ALL ( PPMC[ExtractionDate] ),
            PPMC[ExtractionDate],
            ,
            DESC,
            DENSE
        )
    )
RETURN
    // Return the single rank value for this date row
    SELECTCOLUMNS ( RankedDates, "Rank", [Rank] )
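
With the column in place, the ordinal works as an ordinary filter. A minimal usage sketch: [# Projects] is the measure already used above, while the measure names below are assumptions for illustration.

// Hypothetical measures for illustration: read the latest and the
// previous submission by filtering on the new ordinal column.
Projects Latest Submission =
CALCULATE ( [# Projects], 'Date'[SubmissionRanking] = 1 )

Projects Previous Submission =
CALCULATE ( [# Projects], 'Date'[SubmissionRanking] = 2 )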

Just ran this through Data Mentor for an explanation as well. Good output.


@SamMcKay Your comment convinced me to explore the eDNA suite of documentation generators.

I started with the M doc generator because it’s something I’ve done on my own, and I wanted to see whether there is any advantage to using that app versus just writing the prompt myself. I noticed it’s capped at 2,000 characters. That seems a bit low for PQ steps, but I’m assuming that’s a limitation of the free plan (I also just realized that access to the eDNA AI tools is in addition to membership fees).

I’ve barely scratched the surface of the offerings, but my initial thoughts follow.

You guys have obviously invested resources in developing these tools. Well spent, as they seem quite useful. I’m wondering about the practicality, though, specifically the advantage of using this suite of tools versus an OpenAI Plus plan and thoughtful prompt engineering. I’m guessing all of these AI tools, the creators, debuggers, visualizers, and advisors (and the set of tools within each group, and then the language-specific tools within those), sit on top of a prompt “preface” that narrowly and precisely defines the parameters. If that’s the case, it seems like it would be more efficient (but more work) to invest time in engineering prompts and simply feed those to OpenAI.

Admittedly, I have not fully explored your suite of tools, though I intend to. My first reaction, however, is that rather than loads of seemingly separate, special-purpose apps, what might be more useful is a single interface that a user could control, customize, save, and share.

For example, a user could land on a page with some high-level controls to identify their main interest, like a switch to choose one or more of creator, debugger, visualizer, or advisor. Another set could allow users to select specific platforms, applications, languages, and so forth. There could be a list box of required output types to control result post-processing, and a place to add instructions not covered by the available drop-downs and selectors. All of these options would be translated into the prompt preface, and you could allow users to bookmark those settings in their personal space to re-use with different code they want to paste. They could also share those settings with the community.

The benefits of that approach over investing time engineering individual prompts are clear. You could achieve the same goal without needing to navigate to, or swap among, loads of separate apps. You would be guiding prompt engineering while allowing customization. And you could expect crowd-sourced community feedback to lead to improvements over time that you could incorporate back into the interface, making it even more attractive to start with your app rather than OpenAI.

Crafting effective prompts is challenging, and I like the idea of guided prompt engineering, but I’m definitely more likely to choose a single interface with controls than choose among a large number of separate specialized apps, especially if I can save, re-use, and share those instructions and just swap out the content. I also like the idea of increasing the visibility of the underlying prompt, post-processing the return, and the potential of gathering feedback from other users.

But, like I said, I haven’t yet explored all that you guys have packaged up. I’m eager to do so.

Yep, these are all good points, and we have plans to enhance the suite around some of the ideas you’ve had.

We are going all in on our AI suite and will be continuing to innovate like crazy around this.

There will be a lot more personalization, and we’ll get you to a detailed solution as fast as possible.

Sam

I love this. It has given me some more ideas around what we could do. You can of course already do this, but we would be minimising clicks and personalizing more.

One of our priorities is to be able to run multiple tools at the same time on the same piece of code, creating a detailed thread of information and recommendations for the same formula or snippet. You would then be able to continue the thread if you want to dive into anything further.

It’s going to be an incredible tool, trust me. I’m determined to make it so!