Power BI Challenge 8 - Jira IT Service Desk Analysis Entry from Brad

Here’s Brad’s entry for Power BI Challenge 8. @bradsmith, feel free to add other details of your work.

(Report page screenshots)

Here is the link to the report:

And here’s how Brad described it:

Hello everyone!

For my submission I added an additional “Description” field, which is common in Jira tickets, but primarily limited the use of this field to the topic modelling report. There are several machine learning algorithms in this report:

  • Topic Modelling (Python)
  • K-Means Clustering (Python)
  • Anomaly Detection (Python)
  • Auto ML Forecasting (R)
    • Auto ARIMA
    • Auto ARIMA with XGBoost Errors
    • ETS
    • Prophet
    • LM
    • XGBoost
    • XGBoost Workflow
    • Multivariate Adaptive Regression Splines (MARS)

If R or Python visuals had the same interactivity/reactivity and all the features they have in Shiny or in standard R visuals, I’d have run these live. Instead, so that I could fully take advantage of Power BI’s visuals and suite of features, I had Power Query run the scripts.

This is just the second draft of the dashboard and I’d want to go through another round of development before presenting it to a client/project lead, but I think it turned out pretty well. I’d want to at least add some navigation features, tune the ML algorithms, and optimize performance, but I’d need to understand more from the client/project lead before I could do that.

To learn about the real-life scenario presented for the challenge, be sure to click on the image below.

power-bi-chal-1

4 Likes

Before I get started: I used some slightly different terminology than most, so here’s a quick cheat sheet:

Descriptions

Summary

Problem: A cause of one or more incidents. The cause is not usually known at the time a problem record is created, and the problem management process is responsible for further investigation.
Incident: An unplanned interruption to an IT service or reduction in the quality of an IT service. Failure of a configuration item that has not yet affected service is also an incident – for example, failure of one disk from a mirror set.
Request: A formal request from a user for something to be provided – for example, a request for information or advice; to reset a password; or to install a workstation for a new user. Service requests are managed by the request fulfillment process, usually in conjunction with the service desk. Service requests may be linked to a request for change as part of fulfilling the request.
Case: A single incident, problem, or request ticket.
Anomaly Detection: Anomaly detection (aka outlier analysis) uses machine learning to identify data points, events, and/or observations that deviate from a dataset’s normal behavior.
Anomaly Score: The anomaly score (aka the z-score) is the number of standard deviations away from the mean.
Topic Modelling: Unsupervised machine learning to detect word and phrase patterns and automatically cluster word groups and similar expressions that best characterize a dataset.
K-Means Clustering: Unsupervised machine learning to group similar data points together and discover underlying patterns by looking for a fixed number ( k ) of clusters in a dataset.
Cluster: A collection of data points aggregated together because of certain similarities.
Ensemble Forecasting: A form of Monte Carlo forecasting that produces a forecast from a set of forecast models.

Power Query

Summary

I originally added several duplicate fields to the dataset so I could play around with the data and compare the results in the new columns without changing the original dataset. Since a lot of the date fields in the dataset were all over the place (due dates before created dates, created dates after resolved dates, etc.), I used some helper columns to try to clean up the data. These became the ‘Opened date (modified)’ (aka Created) and the ‘Due date (modified)’ (aka Due Date) columns.
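For illustration only, here is a minimal pandas sketch of that kind of date cleanup. The report itself does this with helper columns in Power Query, so the column names and the rule below are assumptions rather than the exact logic used:

import pandas as pd

# Hypothetical columns: 'Created' and 'Due Date'. The rule (blank out due dates that
# fall before the created date) is only an example of the kind of fix described above.
def clean_dates(df: pd.DataFrame) -> pd.DataFrame:
    out = df.copy()
    bad_due = out["Due Date"].notna() & (out["Due Date"] < out["Created"])
    out["Opened date (modified)"] = out["Created"]                # keep the originals intact
    out["Due date (modified)"] = out["Due Date"].mask(bad_due)    # drop impossible due dates
    return out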

Anomaly Detection: Also known as outlier analysis. I chose to ignore the issue type, assignee, and due date as features so I could determine whether there are any outliers that aren’t being driven by any of these three columns. The due date was already identified as something we need to address. The issue type is something identified by the creator and is out of the team’s control. The assignee is often determined by a standard revolving ticketing system based on teams, but I should have used the creator instead, since the main goal is to determine whether any of the fields within our control are leading to an inconsistent number of issues arising, or to issues being assigned disproportionately. The results provide a Label column, which identifies whether the issue is an outlier, and a Score column, which holds the z-score (the number of standard deviations away from the mean).
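For context, here is a minimal scikit-learn sketch of this style of anomaly detection. It is not the script used in the report (the library and feature list there aren’t shown); it just produces analogous Label and Score columns, and the feature names are hypothetical:

import pandas as pd
from sklearn.ensemble import IsolationForest
from sklearn.preprocessing import StandardScaler

# In a Power BI "Run Python script" step the source table arrives as a pandas
# DataFrame named `dataset`. The feature columns below are assumptions.
features = ["Priority", "Time to Close", "Reopen count"]
X = StandardScaler().fit_transform(dataset[features].fillna(0))

model = IsolationForest(contamination=0.05, random_state=42)
pred = model.fit_predict(X)                        # -1 = outlier, 1 = normal

dataset["Label"] = (pred == -1).astype(int)        # 1 if the case is an outlier
raw = model.decision_function(X)                   # higher = more normal
dataset["Score"] = (raw - raw.mean()) / raw.std()  # standardized, z-score-style score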

Topic Modelling: A number of things are happening here. First, I am using a standard English corpus (a corpus is just a collection of words, phrases, or terminology) and selected the Description column to be analyzed. Normally I’d select a specific model and set the number of topics to identify, but I chose to go with the safest option: a Latent Dirichlet Allocation (LDA) model with 4 topics. This calculates a dominant topic for each issue, the percentage of that dominant topic in the description, and the percentage of each topic in the description. When someone raises an issue they can only select a single issue type, but in the description they may describe several related issues, or reveal that the issue really belongs to a different type. This allows us to better classify issues, assign the best individuals to deal with them in the most efficient way, and address concerns that are starting to arise before they become a major problem.
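As a rough illustration (again, not the exact script in the report), an LDA pass over the Description field with scikit-learn could look like the following; apart from the Description column and the 4-topic choice, everything here is an assumption:

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# `dataset` is the table Power BI hands to a Python script step.
docs = dataset["Description"].fillna("")
dtm = CountVectorizer(stop_words="english").fit_transform(docs)    # standard English stop words

lda = LatentDirichletAllocation(n_components=4, random_state=42)   # 4 topics, as in the report
topic_share = lda.fit_transform(dtm)               # per-ticket topic percentages (rows sum to 1)

dataset["Dominant Topic"] = topic_share.argmax(axis=1) + 1
dataset["Dominant Topic %"] = topic_share.max(axis=1)
for i in range(4):
    dataset[f"Topic {i + 1} %"] = topic_share[:, i]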

Clustering: Unsupervised machine learning to group similar data points together and discover underlying patterns by looking for a fixed number (k) of clusters in a dataset. I chose to ignore the issue type and description as clustering features, since the anomaly detection and topic modelling will already determine groups for those columns and any issue will be identified there. A cluster refers to a collection of data points aggregated together because of certain similarities. The K-means algorithm will identify the number of
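Here is a minimal K-means sketch with scikit-learn; the feature list and k = 4 are illustrative assumptions rather than the settings used in the report:

from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# `dataset` is the table Power BI hands to a Python script step; the features are hypothetical.
cluster_features = ["Priority", "Time to Close", "Related Incidents"]
X = StandardScaler().fit_transform(dataset[cluster_features].fillna(0))

kmeans = KMeans(n_clusters=4, n_init=10, random_state=42)
dataset["Cluster"] = kmeans.fit_predict(X)         # cluster id (0..k-1) for each case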

Auto ML: The most complicated machine learning models in the report by far. First I needed to group all of the data by month with the count of issues for that month. I tried a weekly approach, but only one of the models worked, and that doesn’t give us much data to benchmark and calibrate the models with. Once the data is formatted, the R script goes through the steps below (a rough Python sketch of the overall flow follows the list):
1. Data
 1.1. Import and format the data so that R can read it
 1.2. Split the data into a training and a testing dataset
2. Modelling
 2.1. Model 1: Auto ARIMA
 2.2. Model 2: Boosted Auto ARIMA
 2.3. Model 3: ETS
 2.4. Model 4: Prophet
 2.5. Model 5: Linear Regression
 2.6. Model 6: MARS
 2.7. Model 7: XGBoost
 2.8. Model 8: XGBoost Workflow
3. Model Table: Used for batch forecasting, calibration, accuracy, and creating an ensemble forecast.
4. Calibration: Calibrate each model using the model table.
5. Accuracy: Assess each model's accuracy.
6. Refit & Forecast: Refit the calibrated models to the original dataset.
7. Forecast: Forecast each model 12 months out.
8. Ensemble: Create an ensemble forecast model from the mean of each model.
9. Clean: Remove unused tables.
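The pipeline above runs in R; as a rough Python analogue of the overall flow (split, fit simplified models, check accuracy on the hold-out, refit on everything, forecast 12 months, and average into an ensemble), assuming a monthly frame `monthly_df` with Date and Issues columns:

import pandas as pd
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.tsa.holtwinters import ExponentialSmoothing

series = monthly_df.set_index("Date")["Issues"].asfreq("MS")
train, test = series[:-6], series[-6:]                       # hold back 6 months for accuracy

def mae(fit):                                                # mean absolute error on the hold-out
    return (fit.forecast(len(test)) - test).abs().mean()

models = {
    "ARIMA": ARIMA(train, order=(1, 1, 1)).fit(),            # simplified stand-in for Auto ARIMA
    "ETS": ExponentialSmoothing(train, trend="add").fit(),
}
print({name: round(mae(fit), 2) for name, fit in models.items()})

# Refit on the full series, forecast 12 months out; the ensemble is the mean of the models.
refit = {
    "ARIMA": ARIMA(series, order=(1, 1, 1)).fit(),
    "ETS": ExponentialSmoothing(series, trend="add").fit(),
}
forecasts = pd.concat({name: fit.forecast(12) for name, fit in refit.items()}, axis=1)
forecasts["Ensemble"] = forecasts.mean(axis=1)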

These were all based entirely on the original Excel file and created these tables:
• Auto ML – Accuracy
• Auto ML – Forecasts
• Auto ML – Predictions
• JIRA

The other tables in the dataset are listed below:
• Date Opened (using the function: fnDateTable)
• Max Year (static table)
• Priority (static table)

Report

Summary

The report contains 13 total pages (eight standard pages, one information page, and four tooltip pages). The pages are:
• Executive Dashboard
• Incident Report
• Active Incident Tracker
• Case Overview
• Forecasting
• Clustering
• Anomaly Detection
• Topic Modelling
• Glossary (I originally titled this glossary, but as you can see in the background changed it to be an information page instead)
• Tooltip - Incident Report
• Tooltip - Incident Breakdown by Priority
• Tooltip - Problem Breakdown by Priority
• Tooltip - Request Breakdown by Priority

Measures

Summary

Total Incidents
// Calculates total distinct incidents.
CALCULATE(
 DISTINCTCOUNT(JIRA[Issue id]),
 FILTER(JIRA, [Class] = "Incident")
)

Total Open Cases
// Calculates total active open cases.
CALCULATE(
 DISTINCTCOUNT(JIRA[Issue id]) + 0,
 REMOVEFILTERS(
  JIRA[Opened (modified)],
  JIRA[Overdue],
  JIRA[Max Year?]
 ),
 FILTER(
  JIRA,
  [Active (Excluding Resolved)] = "Active"
 )
)

Total Overdue Incidents
// Total distinct overdue incidents.
CALCULATE(
 DISTINCTCOUNT(JIRA[Issue id]),
 FILTER(
  JIRA,
  JIRA[Overdue] = "Overdue" && [Class] = "Incident"
 )
)

Total Closed Cases
// Calculates total closed or resolved cases.
CALCULATE(
 DISTINCTCOUNT(JIRA[Issue id]) + 0,
 REMOVEFILTERS(
  JIRA[Opened (modified)],
  JIRA[Overdue],
  JIRA[Max Year?]),
 FILTER(
  JIRA,
  [Active (Excluding Resolved)] = "Closed/Resolved"
 )
)

Total Cases
// Calculates total cases.
CALCULATE(
 DISTINCTCOUNT(JIRA[Issue id]) + 0,
 REMOVEFILTERS(
  JIRA[Opened (modified)],
  JIRA[Overdue],
  JIRA[Max Year?]
 )
)

Total Active Incidents
// Calculates total distinct active incidents.
CALCULATE(
 DISTINCTCOUNT([Issue id]),
 REMOVEFILTERS(
  JIRA[Opened (modified)],
  JIRA[Overdue],
  JIRA[Max Year?]
 ),
 FILTER(
  JIRA,
  [Active] = TRUE && [Class] = "Incident"
 )
)

Time to Close an Incident (seconds)
// Calculates the difference in seconds between opening and closing an incident.
CALCULATE(
 AVERAGEX(JIRA,
  DATEDIFF([Opened (modified)], [Closed], SECOND)
 ),
 FILTER(JIRA,
  [Closed] > [Opened (modified)] && [Class] = "Incident"
 )
)

Time to Close an Incident
// Calculates the difference in days between opening and closing an incident.
CALCULATE(
 AVERAGEX(JIRA,
  DATEDIFF([Opened (modified)], [Closed], DAY)
 ),
 FILTER(JIRA,
  [Closed] > [Opened (modified)] && [Class] = "Incident"
 )
)

Show Group
// Shows selected groups.
SELECTEDVALUE(
 JIRA[Issue Type],
 "Showing All Groups"
)

Show Assignees
// Shows selected assignees.
SELECTEDVALUE(
 JIRA[Assignee],
 "All Assignees"
)

Last Year Total Incidents
// Total incidents last year.
CALCULATE(
 DISTINCTCOUNT([Issue id]),
 FILTER(
  JIRA,
  [Year] = YEAR(MAX([Opened (modified)])) - 1 && [Class] = "Incident"
 )
)

Incidents Solved by Assigned
// Counts each distinct incident solved by the assigned employee.
CALCULATE(
 DISTINCTCOUNT([Issue id]),
 FILTER(
  JIRA,
  [Assignee] = [Reporter] && [Class] = "Incident"
 )
)

Current Year Total Incidents
// Counts each distinct incident opened in the current (most recent) year.
CALCULATE(
 DISTINCTCOUNT(JIRA[Issue id]),
 REMOVEFILTERS(
  JIRA[Opened (modified)],
  JIRA[Overdue],
  JIRA[Max Year?]
 ),
 FILTER(
  JIRA,
  [Max Year?] = TRUE && [Class] = "Incident"
 )
)

Incident Average Age (DD:HH:MM:SS)
// Transforms time in seconds into DD:HH:MM:SS format.

// Formats time in seconds into days
FORMAT(
 ROUNDDOWN(
  CALCULATE(
   AVERAGEX(JIRA, [Time to Close an Incident (seconds)]) / 86400,
   FILTER(JIRA, [Class] = "Incident")
  ),
0
 ),
 "d"
) & " day(s) " &

// Formats time in seconds into hours, minutes, and seconds
FORMAT(
 CALCULATE(
  AVERAGEX(JIRA, [Time to Close an Incident (seconds)]) / 86400,
  FILTER(JIRA, [Class] = "Incident")
 ),
 "hh:mm:ss"
)

% of Overdue Incidents
// Shows the percentage of incidents which are overdue.
CALCULATE(
 DISTINCTCOUNT(JIRA[Issue id]),
 FILTER(
  JIRA,
  JIRA[Overdue] = "Overdue" && [Class] = "Incident"
 )
) / CALCULATE(
  DISTINCTCOUNT(JIRA[Issue id]),
  REMOVEFILTERS(JIRA[Overdue]),
   FILTER(JIRA, [Class] = "Incident")
 )

% of Critical and High Priority Incidents
// Shows the percentage of critical or high priority incidents.
CALCULATE(
 DISTINCTCOUNT([Issue id]),
 FILTER(
  JIRA,
  [Priority] = "Highest" && [Class] = "Incident" ||
  [Priority] = "High" && [Class] = "Incident"
 )
) / CALCULATE(
  DISTINCTCOUNT([Issue id]),
   FILTER(JIRA, [Class] = "Incident")
 )

% Difference Incident YOY
// Calculates the total change in incidents from last year to this year.
([Current Year Total Incidents] - [Last Year Total Incidents]) / [Last Year Total Incidents]

Reopen count (Incident)
// Calculates total distinct incidents reopened after resolved date.
CALCULATE(
 DISTINCTCOUNT([Issue id]),
 FILTER(
  JIRA,
  [Updated] > [Resolved] && [Class] = "Incident"
 )
)

Diff Between Suggested vs. User Due Date
// Calculates difference between original user input due date and the suggested system due date.
CALCULATE(
 AVERAGEX(JIRA,
  DATEDIFF(
   JIRA[Due Date],
   JIRA[Suggested Due Date],
   DAY
  )
 ),
 FILTER(JIRA, [Due Date] <> BLANK()))

% of Overdue Requests
// Shows the percentage of requests which are overdue.
CALCULATE(
 DISTINCTCOUNT(JIRA[Issue id]),
 FILTER(
  JIRA,
  JIRA[Overdue] = "Overdue" && [Class] = "Request"
 )
) / CALCULATE(
  DISTINCTCOUNT(JIRA[Issue id]),
  REMOVEFILTERS(JIRA[Overdue]),
   FILTER(JIRA, [Class] = "Request")
 )

Current Year Total Requests
// Counts each distinct request opened in the current (most recent) year.
CALCULATE(
 DISTINCTCOUNT(JIRA[Issue id]),
 REMOVEFILTERS(
  JIRA[Opened (modified)],
  JIRA[Overdue],
  JIRA[Max Year?]
 ),
 FILTER(
  JIRA,
  [Max Year?] = TRUE && [Class] = "Request"
 )
)

Last Year Total Requests
// Total requests last year.
CALCULATE(
 DISTINCTCOUNT([Issue id]) + 0,
 FILTER(
  JIRA,
  [Year] = YEAR(MAX([Opened (modified)])) - 1 && [Class] = "Request"
 )
)

Total # Requests
// Calculates total distinct requests.
VAR TotalRequests = CALCULATE(
 DISTINCTCOUNT(JIRA[Issue id]),
 FILTER(JIRA, [Class] = "Request"))
RETURN
 IF(ISBLANK(TotalRequests), 0, TotalRequests)

Total Active Requests
// Calculates total distinct active requests.
CALCULATE(
 DISTINCTCOUNT([Issue id]) + 0,
 REMOVEFILTERS(
  JIRA[Opened (modified)],
  JIRA[Overdue],
  JIRA[Max Year?]
 ),
 FILTER(
  JIRA,
  [Active] = TRUE && [Class] = "Request"
 )
)

% Difference Problems YOY
// Calculates the total change in problems from last year to this year.
([Current Year Total Problems] - [Last Year Total Problems]) / [Last Year Total Problems]

% of Critical and High Priority
// Shows the percentage of critical or high priority problems.
CALCULATE(
 DISTINCTCOUNT([Issue id]) + 0,
 FILTER(
  JIRA,
  [Priority] = "Highest" && [Class] = "Problem" ||
  [Priority] = "High" && [Class] = "Problem"
 )
) / CALCULATE(
  DISTINCTCOUNT([Issue id]),
   FILTER(JIRA, [Class] = "Problem")
 )

% of Overdue Problems
// Shows the percentage of problems which are overdue.
CALCULATE(
 DISTINCTCOUNT(JIRA[Issue id]),
 FILTER(
  JIRA,
  JIRA[Overdue] = "Overdue" && [Class] = "Problem"
 )
) / CALCULATE(
  DISTINCTCOUNT(JIRA[Issue id]),
  REMOVEFILTERS(JIRA[Overdue]),
   FILTER(JIRA, [Class] = "Problem")
 )

Current Year Total Problems
// Counts each distinct problem opened in the current (most recent) year.
CALCULATE(
 DISTINCTCOUNT(JIRA[Issue id]),
 REMOVEFILTERS(
  JIRA[Opened (modified)],
  JIRA[Overdue],
  JIRA[Max Year?]
 ),
 FILTER(
  JIRA,
  [Max Year?] = TRUE && [Class] = "Problem"
 )
)

Last Year Total Problems
// Total problems last year.
CALCULATE(
 DISTINCTCOUNT([Issue id]),
 FILTER(
  JIRA,
  [Year] = YEAR(MAX([Opened (modified)])) - 1 && [Class] = "Problem"
 )
)

Problems with Related Incidents
// Counts each distinct problem that has a related incident.
CALCULATE(
 DISTINCTCOUNT([Issue id]),
 FILTER(JIRA,
  JIRA[Related Incidents] > 0 &&
  ISBLANK([Related Incidents]) = FALSE &&
  [Class] = "Problem"
 )
)

Total # Problems
// Calculates total distinct problems.
CALCULATE(
 DISTINCTCOUNT(JIRA[Issue id]),
 FILTER(JIRA, [Class] = "Problem")
)

Total Active Problems
// Calculates total distinct active problems.
CALCULATE(
 DISTINCTCOUNT([Issue id]),
 REMOVEFILTERS(
  JIRA[Opened (modified)],
  JIRA[Overdue],
  JIRA[Max Year?]
 ),
 FILTER(
  JIRA,
  [Active] = TRUE && [Class] = "Problem"
 )
)

% made SLA
// Shows the percentage of requests which made SLA
CALCULATE(
 DISTINCTCOUNT(JIRA[Issue id]),
 FILTER(JIRA, JIRA[Made SLA] = TRUE)
) / DISTINCTCOUNT(JIRA[Issue id])

% of known error
// Shows the percentage of problems that are known errors
VAR KnownError = CALCULATE(
 DISTINCTCOUNT([Issue id]),
 FILTER(JIRA,
  [Issue Type] = "Break/fix Response" && [Class] = "Problem"
 )
) / CALCULATE(
  DISTINCTCOUNT([Issue id]),
   FILTER(JIRA, [Class] = "Problem")
 )

RETURN
 IF(ISBLANK(KnownError), 0, KnownError)

% Difference Requests YOY
// Calculates the total change in requests from last year to this year.
IFERROR(
 (([Current Year Total Requests] - [Last Year Total Requests]) / [Last Year Total Requests]),
 BLANK()
)

Total Problems Last Two Years
// Calculates Total Problems last two years.
VAR TotalProblems = [Current Year Total Problems] + [Last Year Total Problems]
RETURN
 IF(ISBLANK(TotalProblems), 0, TotalProblems)

Total Anamolies
// Total count of anomalies detected.
CALCULATE(
 DISTINCTCOUNT([Issue id]),
 FILTER(JIRA, JIRA[Label] = 1)
)

Time to Close a Case
// Calculates the difference in days between opening and closing a case.
CALCULATE(
 AVERAGEX(JIRA,
  DATEDIFF([Opened (modified)], [Closed], DAY)
 ),
 FILTER(JIRA,
  [Closed] > [Opened (modified)]
 )
)

Time to Close a Case (seconds)
// Calculates the difference in seconds between opening and closing a case.
CALCULATE(
 AVERAGEX(JIRA,
  DATEDIFF([Opened (modified)], [Closed], SECOND)
 ),
 FILTER(JIRA,
  [Closed] > [Opened (modified)]
 )
)

Case Average Age (DD:HH:MM:SS)
// Transforms time in seconds into DD:HH:MM:SS format.

// Formats time in seconds into days
FORMAT(
 ROUNDDOWN(
  CALCULATE(
   AVERAGEX(JIRA, [Time to Close a Case (seconds)]) / 86400
  ),
0
 ),
 "d"
) & " day(s) " &

// Formats time in seconds into hours, minutes, and seconds
FORMAT(
 CALCULATE(
  AVERAGEX(JIRA, [Time to Close a Case (seconds)]) / 86400
 ),
 "hh:mm:ss"
)

Total Overdue Cases
// Total distinct overdue cases.
CALCULATE(
 DISTINCTCOUNT(JIRA[Issue id]),
 FILTER(
  JIRA,
  JIRA[Overdue] = "Overdue"
 )
)

Total Anamolies Overdue
// Total count of anomalies overdue.
CALCULATE(
 COUNT(JIRA[Label]),
 FILTER(JIRA,
  JIRA[Label] = 1 &&
  [Overdue] = "Overdue"
 )
) + 0

% of Critical and High Priority Anomalies
// Shows the percentage of critical or high priority anomalies.
CALCULATE(
 DISTINCTCOUNT([Issue id]) + 0,
 FILTER(
  JIRA,
  [Priority] = "Highest" ||
  [Priority] = "High"
 )
) / CALCULATE(
  DISTINCTCOUNT([Issue id])
 )

% of Overdue Cases
// Shows the percentage of cases which are overdue.
CALCULATE(
 DISTINCTCOUNT(JIRA[Issue id]),
 FILTER(
  JIRA,
  JIRA[Overdue] = "Overdue"
 )
) / CALCULATE(
  DISTINCTCOUNT(JIRA[Issue id]),
  REMOVEFILTERS(JIRA[Overdue])
 )

Historic
// Returns only the historic data.
CALCULATE (
   AVERAGEX('Auto ML - Forecasts', 'Auto ML - Forecasts'[.value]),
  FILTER (
    ALL ( 'Auto ML - Forecasts'[Model] ),
    'Auto ML - Forecasts'[Model] = "Actual"
  )
 )

Forecast
// If a forecast model has not been selected then return an ensemble forecast else return the forecast model (excluding actual data or ensemble)
SWITCH(TRUE(),
 [Selected Model] = "Ensemble", CALCULATE(
  AVERAGEX('Auto ML - Forecasts', 'Auto ML - Forecasts'[.value]),
  'Auto ML - Forecasts'[Model] = "Ensemble"
 ),
 CALCULATE(
  AVERAGEX('Auto ML - Forecasts', 'Auto ML - Forecasts'[.value]),
  'Auto ML - Forecasts'[Model] <> "Actual" &&
  'Auto ML - Forecasts'[Model] <> "Ensemble"
 )
)

Selected Model
// Returns the selected forecast model.
SELECTEDVALUE('Auto ML - Forecasts'[Model], "Ensemble")

Lower
// If a forecast model has not been selected then return an ensemble lower confidence bound else return the forecast model (excluding actual data or ensemble)
SWITCH(TRUE(),
 [Selected Model] = "Ensemble", CALCULATE(
  AVERAGEX('Auto ML - Forecasts', 'Auto ML - Forecasts'[.conf_lo]),
  'Auto ML - Forecasts'[Model] = "Ensemble"
 ),
 CALCULATE(
  AVERAGEX('Auto ML - Forecasts', 'Auto ML - Forecasts'[.conf_lo]),
  'Auto ML - Forecasts'[Model] <> "Actual" &&
  'Auto ML - Forecasts'[Model] <> "Ensemble"
 )
)

Upper
// If a forecast model has not been selected then return an ensemble upper confidence bound else return the forecast model (excluding actual data or ensemble)
SWITCH(TRUE(),
 [Selected Model] = "Ensemble", CALCULATE(
  AVERAGEX('Auto ML - Forecasts', 'Auto ML - Forecasts'[.conf_hi]),
  'Auto ML - Forecasts'[Model] = "Ensemble"
 ),
 CALCULATE(
  AVERAGEX('Auto ML - Forecasts', 'Auto ML - Forecasts'[.conf_hi]),
  'Auto ML - Forecasts'[Model] <> "Actual" &&
  'Auto ML - Forecasts'[Model] <> "Ensemble"
 )
)

Columns

Summary

Max Year?
IF(
 YEAR([Opened (modified)]) = YEAR(MAX([Opened (modified)])),
 TRUE,
 FALSE
)

Overdue
// Calculates incident overdue status.
IF(
 [Active] = FALSE && [Closed] > [Suggested Due Date] && [Suggested Due Date] <> BLANK() ||
 [Active] = TRUE && NOW() > [Suggested Due Date] && [Suggested Due Date] <> BLANK(),
 "Overdue",
 "Non Overdue"
)

Opened Month Tooltip
// Formats the opening date as the month name.
FORMAT([Opened (modified)], "MMMM")

Active (Excluding Resolved)
// Calculates if an incident is active and open.
IF(
 [Active] = FALSE(),
 "Closed/Resolved",
 "Active"
)

Opened (mmm YYYY)
// Formats the opened date as mmm YYYY.
LEFT([Opened Month Tooltip], 3) & " " & [Year]

OpenedMonthYearSort
// Formats the opening year and month as an integer.
[Year] * 100 + MONTH([Opened (modified)])

ActiveExcludingResolvedSort
// Calculates if an incident is active and open.
IF(
 [Active] = FALSE(),
 1,
2
)

Priority Status
// Defines priority status in standardized terminology.
SWITCH(TRUE(),
 [Priority] = "Highest", "Critical",
 [Priority] = "High", "High",
 [Priority] = "Medium", "Moderate",
 [Priority] = "Low", "Low",
 [Priority] = "Minor", "Planning",
 BLANK()
)

PrioritySort
// Calculates priority status sort order.
SWITCH(TRUE(),
 [Priority] = "Highest", 5,
 [Priority] = "High", 4,
 [Priority] = "Medium", 3,
 [Priority] = "Low", 2,
 [Priority] = "Minor", 1,
 BLANK()
)

Opened Date
// Removes the time from the opening date to link to the Date Opened table.
FORMAT([Opened (modified)], "mm/dd/YYYY")

Overdue Status
// Calculates incident overdue status flag.
IF(
 [Active] = FALSE && [Closed] > [Suggested Due Date] && [Suggested Due Date] <> BLANK() ||
 [Active] = TRUE && NOW() > [Suggested Due Date] && [Suggested Due Date] <> BLANK(),
 1,
0
)

Class
// Adds an issue type classification group.
VAR LevelNumber = 1
VAR LevelPath = PATH ( JIRA[Issue id], JIRA[Parent id] )
VAR LevelKey = PATHITEM ( LevelPath, LevelNumber, INTEGER )
VAR LevelName = LOOKUPVALUE ( JIRA[Issue id], JIRA[Issue id], LevelKey )
VAR LevelCount = CALCULATE(COUNTROWS(JIRA), ALL(JIRA), JIRA[Parent id] = EARLIER(JIRA[Issue id]))
VAR Result = IF(LevelName = [Issue id], IF(LevelCount <> BLANK(), "Problem", "Incident"), "Incident")

RETURN

SWITCH(TRUE(),
 [Issue Type] = "New User Request", "Request",
 [Issue Type] = "Service Request", "Request",
 [Issue Type] = "Service Request with Approvals", "Request",

// I only included these for visualization purposes. There's not enough information in the dataset to determine whether any of these would be classified as a problem or an incident.
 [Issue Type] = "Task", "Problem",
 [Issue Type] = "Sub-task", "Problem",
 [Issue Type] = "Corrective Action Plan", "Problem",
 [Issue Type] = "POA&M Milestone Subtask", "Problem",
 Result
)

Watchers (grouped)
// Concatenates all watchers, skipping blanks.
IF([Watchers] = "", "", [Watchers]) &
IF([Watchers_4] = "", "", ", " & [Watchers_4]) &
IF([Watchers_5] = "", "", ", " & [Watchers_5]) &
IF([Watchers_6] = "", "", ", " & [Watchers_6]) &
IF([Watchers_7] = "", "", ", " & [Watchers_7])

Suggested Due Date
// Calculates a smart due date.
VAR LevelNumber = 1
VAR LevelPath = PATH ( JIRA[Issue id], JIRA[Parent id] )
VAR LevelKey = PATHITEM ( LevelPath, LevelNumber, INTEGER )
VAR LevelName = LOOKUPVALUE ( JIRA[Due Date], JIRA[Issue id], LevelKey )
VAR DateModifier = CALCULATE(ROUND(AVERAGEX(JIRA, [Resolved] - [Created]), 0), FILTER(JIRA, [Resolved] <> BLANK()))
VAR ResultDate = IF(ISBLANK([Due Date]), LevelName, [Due Date])
VAR UpdatedResult = IF(ISBLANK(ResultDate), [Due Date (modified)], ResultDate)
VAR Result = IF(ISBLANK(UpdatedResult), [Created] + DateModifier, UpdatedResult)
RETURN
Result

Related Incidents
// Calculates total incidents related to a distinct problem.
CALCULATE(
 COUNTROWS(JIRA),
 ALL(JIRA),
 JIRA[Parent id] = EARLIER(JIRA[Issue id])
)

Made SLA
// Calculates if the service level agreement was met.
IF([Suggested Due Date] >= [Resolved], TRUE(), FALSE())

Time Span Breakdown
// Calculates a time span age category.
SWITCH(TRUE(),
 DATEDIFF([Opened (modified)], NOW(), DAY) < 180, "< 180 d",
 DATEDIFF([Opened (modified)], NOW(), DAY) > 360, "> 360 d",
 "180-360 d"
)

Time Span Breakdown Sort
// Calculates a time span age category sort value.
SWITCH(TRUE(),
 DATEDIFF([Opened (modified)], NOW(), DAY) < 180, 3,
 DATEDIFF([Opened (modified)], NOW(), DAY) > 360, 1,
2
)

Anomaly Status
// Returns the anomaly status based on the label flag.
IF([Label] = 1, "Anomaly", "Normal")

Time to Close
// Calculates the difference in days between opening and closing a case.
DATEDIFF([Opened (modified)], [Closed], DAY)

Priority Status
// Defines priority status in standardized terminology.
SWITCH(TRUE(),
 [Priority] = 5, "Critical",
 [Priority] = 4, "High",
 [Priority] = 3, "Moderate",
 [Priority] = 2, "Low",
 [Priority] = 1, "Planning",
 BLANK()
)

Model
// Creates a shortened forecast model name for reporting purposes.
SWITCH(TRUE(),
 [.model_id] = 0, "Actual",
 [.model_id] = 1, "Auto ARIMA",
 [.model_id] = 2, "Auto ARIMA with XGBoost",
 [.model_id] = 3, "ETS",
 [.model_id] = 4, "Prophet",
 [.model_id] = 5, "LM",
 [.model_id] = 6, "XGBoost",
 [.model_id] = 7, "XGBoost Workflow",
 [.model_id] = 8, "MARS",
 [.model_id] = 9, "Ensemble",
 BLANK()
)

4 Likes

@bradsmith,

Thanks so much for taking the time on the detailed writeup. Much to explore and learn from here.

Also, what’s the secret to the expandable hierarchy in your forum post? I’ve never seen that trick before.

  • Brian
3 Likes

@bradsmith,

I’m just as curious as @BrianJ on the expandable hierarchy on your forum post. Please Tell!

Thanks
Jarrett

2 Likes

@JarrettM & @BrianJ you go into the Gear icon on the top right-hand side of the message header bar.

Select the text you want to have hidden/expandable, then click Hide Details.

Summary

Dummy Data
Dummy Data

5 Likes

@DavieJoe,

Thanks! I learn new stuff here every day…

  • Brian
3 Likes

@BrianJ @JarrettM @DavieJoe I literally found out how to do that about 15 minutes after I had originally posted it while trying to figure out if I could underline text… Happy accident!

3 Likes

It’s a nifty little feature!

@bradsmith Lol I checked your writeup and saw nothing except the headings as the details were hidden and thought you forgot to add the details. Then I checked after some time and saw comments below about expanding/collapsing feature :smile:
This is an excellent writeup (in English now :smile:) and many things to learn from what you did in your report.
Great Stuff!!
Thanks for the detailed explanation :+1:

3 Likes

Hey Brad

I’m really impressed by this submission and equally impressed by your write up which is substantial.

I like how you’ve really gone out there and utilized some of the unique custom visualizations available to us in Power BI. I think you’ve utilized these really effectively in combination which I find can sometimes be quite difficult with the custom visuals just being so different to each other generally.

This report is also really detailed, which showcases that you’ve put a lot of effort into the analysis part and utilized DAX formulas really effectively.

Also, super impressed by some of the advanced analysis, especially around forecasting, that you’ve completed. I haven’t been able to do a real deep dive, but from what I can see you have really worked some advanced analytical techniques into your model and have produced a variety of insights that no one else has really shown.

Super work on this report and a really valuable contribution to what we’re doing with the challenges. Everyone is learning so much from what you have produced here.

Can’t wait to see more from you in the near future.

Sam

3 Likes

Hi everyone,

I have a question about creating a line chart based on two date columns, in this challenge for instance “Creation Date” and “Resolution Date”. Would you please tell me how you can create charts based on two different date columns?
Another question: do we need to create separate “date tables” for each date column in our “Fact Table” when we have more than one date column?

Thanks in advance
Hossein

@sedhosen

It depends upon what you are trying to achieve, but it is always easier to have ONE CONTINUOUS DATE table filtering the Fact Table. In Challenge 8, there is no need to create separate date tables, as one date table is enough to filter the results in the fact table, like Tickets Created & Tickets Resolved.

Moreover, if you have separate date columns in the Fact Table, you can handle the dates easily with the TREATAS and USERELATIONSHIP DAX functions as follows:

Hope it answers your question.

3 Likes

Dear @MudassirAli,
Thanks for your explanation. It really helped me.

Best Wishes
Hossein

3 Likes

Dear @MudassirAli

I have another question about handling 2 date columns in a Fact Table.
Watching the videos you have mentioned, I learned how to use the “USERELATIONSHIP” function for creating MEASURES based on different dates.

But how can I use these date columns in a “Table”? How can I use the related columns, like “Year” or “Year Month”, for different dates (Tickets Created & Tickets Resolved)?

Thanks in advance for your help
Hossein

Hi @sedhosen. As @MudassirAli said, a single date table should always be the goal; there should be no need for multiple date tables to handle multiple date columns in your fact table. I think at this point that a mockup of your specific issue and desired outcome (in Excel?) would be a great help to forum members, and a copy of your work-in-progress PBIX as well.
Greg

(Also looping in @EnterpriseDNA … split from post 10 into its own thread?)

3 Likes

Dear @Greg Thanks for your feedback.
I will explain my issue specifically.
As in the picture below, I need to have “Resolution Month-Year” & “Creation Month-Year” columns in the table.
I put these columns in from two different date tables, and based on what I understand from your notes, that is not the goal. So I wanted to ask: can I create a table like the one in the picture with only one date table? And if the answer is yes, how can I do that?

Hi @sedhosen.

(Edit: delete reference to inactive relationships; I was following two paths before I had success with one and forgot to clean-up; both post and PBIX attachment are now revised.)

As is often the case in Power BI, there are many ways to accomplish the same thing. Here’s one:

  • add a single date table (e.g., eDNA extended date table)
    Extended Date Table (Power Query M function)

  • as the dates of interest in the [Issues] table are at a “seconds” grain and the dates in the [Dates] table are at a “day” grain, in Power Query add two columns to your [Issues] table

    • Issues[Created - Date Only]
    • Issues[Resolved - Date Only]
    • (for each, select the “base” column, duplicate it, then change its datatype to “Date”)

Then, add measures for your months:

Created Month-Year = 
VAR _CurrentCreatedDate = SELECTEDVALUE( Issues[Created - Date Only] ) 
VAR _CurrentCreatedMonthYear = LOOKUPVALUE( Dates[Month & Year], Dates[Date], _CurrentCreatedDate )
RETURN
_CurrentCreatedMonthYear

Resolution Month-Year = 
VAR _CurrentResolvedDate = SELECTEDVALUE( Issues[Resolved - Date Only] ) 
VAR _CurrentResolvedMonthYear = LOOKUPVALUE( Dates[Month & Year], Dates[Date], _CurrentResolvedDate )
RETURN
_CurrentResolvedMonthYear

Hope this helps.
Greg
eDNA Forum - Single Date Table for Multiple Dates.pbix (562.5 KB)

3 Likes

@Greg

Whoa! that is great. Thank you for your thorough explanation.
It is really helpful.

Best
Hossein

1 Like

Oh my, what a professional report / dashboard. I am brand new to eDNA and Power BI. I have been learning from some of the Power BI desktop files that I can find online. I am a big learner from reverse engineering things. Do you by any chance share the .pbix files for this project? There are so many elements that I would really enjoy reviewing and dissecting so that I can really learn from them. Thank you in advance for any help.

Also, if I am not allowed to do this then please delete the post and accept my apologies.

This is just about the most thorough report I have seen and I think I could learn an absolute ton from it.

2 Likes

Here you go, enjoy!

Power BI Challenge 8 PBIX Files - Enterprise DNA | Knowledge Base

1 Like