Power BI Challenge 8 - Jira IT Service Desk Analysis Entry from Brad

Before I get started: I used slightly different terminology than most, so here’s a quick cheat sheet:

Descriptions

Problem: A cause of one or more incidents. The cause is not usually known at the time a problem record is created, and the problem management process is responsible for further investigation.
Incident: An unplanned interruption to an IT service or reduction in the quality of an IT service. Failure of a configuration item that has not yet affected service is also an incident – for example, failure of one disk from a mirror set.
Request: A formal request from a user for something to be provided – for example, a request for information or advice; to reset a password; or to install a workstation for a new user. Service requests are managed by the request fulfillment process, usually in conjunction with the service desk. Service requests may be linked to a request for change as part of fulfilling the request.
Case: A single incident, problem, or request ticket.
Anomaly Detection: Anomaly detection (aka outlier analysis) uses machine learning to identify data points, events, and/or observations that deviate from a dataset’s normal behavior.
Anomaly Score: The anomaly score (aka the z-score) is the number of standard deviations a data point sits away from the mean.
Topic Modelling: Unsupervised machine learning that detects word and phrase patterns and automatically clusters the word groups and similar expressions that best characterize a dataset.
K-Means Clustering: Unsupervised machine learning to group similar data points together and discover underlying patterns by looking for a fixed number ( k ) of clusters in a dataset.
Cluster: A collection of data points aggregated together because of certain similarities.
Ensemble Forecasting: A form of Monte Carlo forecasting that produces a forecast from a set of forecast models.

Power Query

I originally added several duplicate fields to the dataset so I could experiment with the data and compare the results of the new columns without changing the original dataset. Since a lot of the date fields in the dataset were all over the place (due dates before created dates, created dates after resolved dates, etc.), I used some helper columns to clean up the data. These became the ‘Opened date (modified)’ (aka Created) and the ‘Due date (modified)’ (aka Due Date) columns.

Anomaly Detection: Also known as outlier analysis. I chose to ignore the issue type, assignee, and due date as features, to determine whether any outliers exist that are not driven by these three columns. The due date was already identified as something we need to address. The issue type is selected by the creator and is out of the team’s control. The assignee is often determined by a standard revolving ticketing system based on teams, though in hindsight I should have used the creator instead. The main goal is to determine whether any of the fields within our control are leading to an inconsistent number of issues arising, or whether issues are being assigned disproportionately. The results provide a Label column, which identifies whether an issue is an outlier, and a Score column, which holds the z-score (the number of standard deviations away from the mean).
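The Label/Score pairing can be sketched with plain z-scores in Python. This is only an illustration of the idea (the report uses Power BI's built-in anomaly detection); the function name and the 3-standard-deviation cutoff are assumptions, not the tool's actual defaults:

```python
from statistics import mean, stdev

def anomaly_scores(values, threshold=3.0):
    """Score each value by its distance from the mean in standard deviations.

    Returns one record per value: the raw value, its z-score, and a
    0/1 label marking it as an outlier when |z| exceeds the threshold.
    """
    mu = mean(values)
    sigma = stdev(values)
    results = []
    for v in values:
        z = (v - mu) / sigma
        results.append({"value": v, "score": z, "label": 1 if abs(z) > threshold else 0})
    return results
```

With a tighter threshold of 2, a run like `anomaly_scores([10, 11, 9, 10, 10, 50], threshold=2.0)` flags only the 50 as an outlier, mirroring how the Label and Score columns behave.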

Topic Modelling: A number of things are happening here. First, I am using a standard English corpus (a corpus is just a collection of words, phrases, or terminology) and selected the description column to be analyzed. Normally I’d select a specific model and set the number of topics to identify, but I chose to go with the safest option: a Latent Dirichlet Allocation (LDA) model with 4 topics. This calculates a dominant topic for each issue, the percentage of that dominant topic in the description, and the percentage of each topic in the description. When someone logs an issue they can only select a single issue type, but in the description they may describe several related issues, or describe something that doesn’t really belong in that type at all. Topic modelling lets us classify issues more accurately, assign the best individuals to deal with them efficiently, and catch emerging concerns before they become a major problem.
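How the per-topic percentages roll up to a dominant topic can be shown in a few lines of Python. The LDA fitting itself happens inside the report; this hypothetical helper only demonstrates the last step, picking the largest share from a document's topic distribution:

```python
def dominant_topic(topic_weights):
    """Given a document's topic distribution (topic id -> share of the
    description, summing to ~1), return the dominant topic and its
    percentage of the description."""
    topic = max(topic_weights, key=topic_weights.get)
    return topic, round(topic_weights[topic] * 100, 1)
```

For example, a description that is 60% topic 2 and spread thinly across the rest gets topic 2 as its dominant topic at 60.0%.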

Clustering: Unsupervised machine learning to group similar data points together and discover underlying patterns by looking for a fixed number (k) of clusters in a dataset. I chose to ignore the issue type and description as clustering features, since anomaly detection and topic modelling already determine groups for those columns and any issue will be identified there. A cluster refers to a collection of data points aggregated together because of certain similarities. The K-means algorithm places k cluster centers and assigns each case to its nearest cluster.
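A minimal sketch of the K-means idea in pure Python (Lloyd's algorithm: assign each point to its nearest centroid, then recompute centroids, and repeat). The report uses Power BI's built-in clustering, so the function and sample data here are purely illustrative:

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Cluster a list of coordinate tuples into k groups via Lloyd's algorithm."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)  # start from k of the data points
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid's cluster.
        clusters = [[] for _ in range(k)]
        for p in points:
            idx = min(range(k),
                      key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centroids[i])))
            clusters[idx].append(p)
        # Update step: move each centroid to the mean of its members.
        for i, members in enumerate(clusters):
            if members:
                centroids[i] = tuple(sum(c) / len(members) for c in zip(*members))
    return centroids, clusters
```

On two well-separated blobs of three points each, the algorithm converges to a 3/3 split regardless of which points seed the centroids.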

Auto ML: The most complicated machine learning models in the report by far. First, I needed to group all of the data by month with the count of issues opened that month. I tried a weekly approach, but only one of the models was able to work with it, and that doesn’t give us much to benchmark and calibrate the models against. Once the data is formatted, the R script goes through these steps:
1. Data
1.1. Import and format the data so that R can read it
1.2. Split the data into a training and testing dataset
2. Modelling
2.1. Model 1: Auto ARIMA
2.2. Model 2: Boosted Auto ARIMA
2.3. Model 3: ETS
2.4. Model 4: Prophet
2.5. Model 5: Linear Regression
2.6. Model 6: MARS
2.7. Model 7: XGBoost
2.8. Model 8: XGBoost Workflow
3. Model Table: Used for batch forecasting, calibration, accuracy, and creating an ensemble forecast.
4. Calibration: Calibrate each model using the model table.
5. Accuracy: Assess each model's accuracy.
6. Refit & Forecast: Refit the calibrated models to the original dataset.
7. Forecast: Forecast each model 12 months out.
8. Ensemble: Create an ensemble forecast model from the mean of each model.
9. Clean: Remove unused tables.
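Step 8 above (the ensemble as the mean of each model) can be sketched as follows. This is a hypothetical Python helper for illustration only; the actual work happens in the R script:

```python
def ensemble_forecast(model_forecasts):
    """Average the per-period forecasts of several models into one
    ensemble series. model_forecasts maps a model name to its list of
    forecast values; all lists cover the same forecast horizon."""
    horizons = zip(*model_forecasts.values())  # group values by period
    return [sum(vals) / len(vals) for vals in horizons]
```

For instance, three models forecasting two months each produce one averaged value per month, which is exactly what the ensemble series holds.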

All of these steps were based entirely on the original Excel file and created these tables:
• Auto ML – Accuracy
• Auto ML – Forecasts
• Auto ML – Predictions
• JIRA

The other tables in the dataset are listed below:
• Date Opened (using the function: fnDateTable)
• Max Year (static table)
• Priority (static table)

Report

The report contains 13 total pages (eight standard pages, one information page, and four tooltip pages). The pages are:
• Executive Dashboard
• Incident Report
• Active Incident Tracker
• Case Overview
• Forecasting
• Clustering
• Anomaly Detection
• Topic Modelling
• Information (I originally titled this page Glossary but, as you can see in the background, changed it to an information page instead)
• Tooltip - Incident Report
• Tooltip - Incident Breakdown by Priority
• Tooltip - Problem Breakdown by Priority
• Tooltip - Request Breakdown by Priority

Measures

Total Incidents
// Calculates total distinct incidents.
CALCULATE(
 DISTINCTCOUNT(JIRA[Issue id]),
 FILTER(JIRA, [Class] = "Incident")
)

Total Open Cases
// Calculates total active open cases.
CALCULATE(
 DISTINCTCOUNT(JIRA[Issue id]) + 0,
 REMOVEFILTERS(
  JIRA[Opened (modified)],
  JIRA[Overdue],
  JIRA[Max Year?]
 ),
 FILTER(
  JIRA,
  [Active (Excluding Resolved)] = "Active"
 )
)

Total Overdue Incidents
// Total distinct overdue incidents.
CALCULATE(
 DISTINCTCOUNT(JIRA[Issue id]),
 FILTER(
  JIRA,
  JIRA[Overdue] = "Overdue" && [Class] = "Incident"
 )
)

Total Closed Cases
// Calculates total closed or resolved cases.
CALCULATE(
 DISTINCTCOUNT(JIRA[Issue id]) + 0,
 REMOVEFILTERS(
  JIRA[Opened (modified)],
  JIRA[Overdue],
  JIRA[Max Year?]),
 FILTER(
  JIRA,
  [Active (Excluding Resolved)] = "Closed/Resolved"
 )
)

Total Cases
// Calculates total cases.
CALCULATE(
 DISTINCTCOUNT(JIRA[Issue id]) + 0,
 REMOVEFILTERS(
  JIRA[Opened (modified)],
  JIRA[Overdue],
  JIRA[Max Year?]
 )
)

Total Active Incidents
// Calculates total distinct active incidents.
CALCULATE(
 DISTINCTCOUNT([Issue id]),
 REMOVEFILTERS(
  JIRA[Opened (modified)],
  JIRA[Overdue],
  JIRA[Max Year?]
 ),
 FILTER(
  JIRA,
  [Active] = TRUE && [Class] = "Incident"
 )
)

Time to Close an Incident (seconds)
// Calculates the difference in seconds between opening and closing an incident.
CALCULATE(
 AVERAGEX(JIRA,
  DATEDIFF([Opened (modified)], [Closed], SECOND)
 ),
 FILTER(JIRA,
  [Closed] > [Opened (modified)] && [Class] = "Incident"
 )
)

Time to Close an Incident
// Calculates the difference in days between opening and closing an incident.
CALCULATE(
 AVERAGEX(JIRA,
  DATEDIFF([Opened (modified)], [Closed], DAY)
 ),
 FILTER(JIRA,
  [Closed] > [Opened (modified)] && [Class] = "Incident"
 )
)

Show Group
// Shows selected groups.
SELECTEDVALUE(
 JIRA[Issue Type],
 "Showing All Groups"
)

Show Assignees
// Shows selected assignees.
SELECTEDVALUE(
 JIRA[Assignee],
 "All Assignees"
)

Last Year Total Incidents
// Total incidents last year.
CALCULATE(
 DISTINCTCOUNT([Issue id]),
 FILTER(
  JIRA,
  [Year] = YEAR(MAX([Opened (modified)])) - 1 && [Class] = "Incident"
 )
)

Incidents Solved by Assigned
// Counts each distinct incident where the assignee is also the reporter.
CALCULATE(
 DISTINCTCOUNT([Issue id]),
 FILTER(
  JIRA,
  [Assignee] = [Reporter] && [Class] = "Incident"
 )
)

Current Year Total Incidents
// Counts each distinct incident opened in the current (max) year.
CALCULATE(
 DISTINCTCOUNT(JIRA[Issue id]),
 REMOVEFILTERS(
  JIRA[Opened (modified)],
  JIRA[Overdue],
  JIRA[Max Year?]
 ),
 FILTER(
  JIRA,
  [Max Year?] = TRUE && [Class] = "Incident"
 )
)

Incident Average Age (DD:HH:MM:SS)
// Transforms time in seconds into DD:HH:MM:SS format.

// Formats time in seconds into days
FORMAT(
 ROUNDDOWN(
  CALCULATE(
   AVERAGEX(JIRA, [Time to Close an Incident (seconds)]) / 86400,
   FILTER(JIRA, [Class] = "Incident")
  ),
0
 ),
 "d"
) & " day(s) " &

// Formats time in seconds into hours, minutes, and seconds
FORMAT(
 CALCULATE(
  AVERAGEX(JIRA, [Time to Close an Incident (seconds)]) / 86400,
  FILTER(JIRA, [Class] = "Incident")
 ),
 "hh:mm:ss"
)
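The DD:HH:MM:SS conversion above can be sketched in Python to make the arithmetic explicit (86,400 seconds in a day; the function name is illustrative, not part of the report):

```python
def format_age(total_seconds):
    """Render an age in seconds as 'D day(s) hh:mm:ss',
    mirroring the two-part FORMAT expression in the DAX measure."""
    days, rem = divmod(int(total_seconds), 86400)   # whole days
    hours, rem = divmod(rem, 3600)                  # leftover hours
    minutes, seconds = divmod(rem, 60)              # leftover minutes/seconds
    return f"{days} day(s) {hours:02d}:{minutes:02d}:{seconds:02d}"
```

For example, 90,061 seconds is one day plus 3,661 seconds, i.e. "1 day(s) 01:01:01".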

% of Overdue Incidents
// Shows the percentage of incidents which are overdue.
CALCULATE(
 DISTINCTCOUNT(JIRA[Issue id]),
 FILTER(
  JIRA,
  JIRA[Overdue] = "Overdue" && [Class] = "Incident"
 )
) / CALCULATE(
  DISTINCTCOUNT(JIRA[Issue id]),
  REMOVEFILTERS(JIRA[Overdue]),
  FILTER(JIRA, [Class] = “Incident”)
 )

% of Critical and High Priority Incidents
// Shows the percentage of critical or high priority incidents.
CALCULATE(
 DISTINCTCOUNT([Issue id]),
 FILTER(
  JIRA,
  [Priority] = "Highest" && [Class] = "Incident" ||
  [Priority] = "High" && [Class] = "Incident"
 )
) / CALCULATE(
  DISTINCTCOUNT([Issue id]),
  FILTER(JIRA, [Class] = “Incident”)
 )

% Difference Incident YOY
// Calculates the total change in incidents from last year to this year.
([Current Year Total Incidents] - [Last Year Total Incidents]) / [Last Year Total Incidents]

Reopen count (Incident)
// Calculates total distinct incidents reopened after resolved date.
CALCULATE(
 DISTINCTCOUNT([Issue id]),
 FILTER(
  JIRA,
  [Updated] > [Resolved] && [Class] = "Incident"
 )
)

Diff Between Suggested vs. User Due Date
// Calculates difference between original user input due date and the suggested system due date.
CALCULATE(
 AVERAGEX(JIRA,
  DATEDIFF(
   JIRA[Due Date],
   JIRA[Suggested Due Date],
   DAY
  )
 ),
 FILTER(JIRA, [Due Date] <> BLANK()))

% of Overdue Requests
// Shows the percentage of requests which are overdue.
CALCULATE(
 DISTINCTCOUNT(JIRA[Issue id]),
 FILTER(
  JIRA,
  JIRA[Overdue] = "Overdue" && [Class] = "Request"
 )
) / CALCULATE(
  DISTINCTCOUNT(JIRA[Issue id]),
  REMOVEFILTERS(JIRA[Overdue]),
  FILTER(JIRA, [Class] = “Request”)
 )

Current Year Total Requests
// Counts each distinct request made in the last year.
CALCULATE(
 DISTINCTCOUNT(JIRA[Issue id]),
 REMOVEFILTERS(
  JIRA[Opened (modified)],
  JIRA[Overdue],
  JIRA[Max Year?]
 ),
 FILTER(
  JIRA,
  [Max Year?] = TRUE && [Class] = "Request"
 )
)

Last Year Total Requests
// Total requests last year.
CALCULATE(
 DISTINCTCOUNT([Issue id]) + 0,
 FILTER(
  JIRA,
  [Year] = YEAR(MAX([Opened (modified)])) - 1 && [Class] = "Request"
 )
)

Total # Requests
// Calculates total distinct requests.
VAR TotalRequests = CALCULATE(
 DISTINCTCOUNT(JIRA[Issue id]),
 FILTER(JIRA, [Class] = "Request"))
RETURN
 IF(ISBLANK(TotalRequests), 0, TotalRequests)

Total Active Requests
// Calculates total distinct active requests.
CALCULATE(
 DISTINCTCOUNT([Issue id]) + 0,
 REMOVEFILTERS(
  JIRA[Opened (modified)],
  JIRA[Overdue],
  JIRA[Max Year?]
 ),
 FILTER(
  JIRA,
  [Active] = TRUE && [Class] = "Request"
 )
)

% Difference Problems YOY
// Calculates the total change in problems from last year to this year.
([Current Year Total Problems] - [Last Year Total Problems]) / [Last Year Total Problems]

% of Critical and High Priority
// Shows the percentage of critical or high priority problems.
CALCULATE(
 DISTINCTCOUNT([Issue id]) + 0,
 FILTER(
  JIRA,
  [Priority] = "Highest" && [Class] = "Problem" ||
  [Priority] = "High" && [Class] = "Problem"
 )
) / CALCULATE(
  DISTINCTCOUNT([Issue id]),
  FILTER(JIRA, [Class] = “Problem”)
 )

% of Overdue Problems
// Shows the percentage of problems which are overdue.
CALCULATE(
 DISTINCTCOUNT(JIRA[Issue id]),
 FILTER(
  JIRA,
  JIRA[Overdue] = "Overdue" && [Class] = "Problem"
 )
) / CALCULATE(
  DISTINCTCOUNT(JIRA[Issue id]),
  REMOVEFILTERS(JIRA[Overdue]),
  FILTER(JIRA, [Class] = “Problem”)
 )

Current Year Total Problems
// Counts each distinct problem made in the last year.
CALCULATE(
 DISTINCTCOUNT(JIRA[Issue id]),
 REMOVEFILTERS(
  JIRA[Opened (modified)],
  JIRA[Overdue],
  JIRA[Max Year?]
 ),
 FILTER(
  JIRA,
  [Max Year?] = TRUE && [Class] = "Problem"
 )
)

Last Year Total Problems
// Total problems last year.
CALCULATE(
 DISTINCTCOUNT([Issue id]),
 FILTER(
  JIRA,
  [Year] = YEAR(MAX([Opened (modified)])) - 1 && [Class] = "Problem"
 )
)

Problems with Related Incidents
// Counts each distinct problem that has a related incident.
CALCULATE(
 DISTINCTCOUNT([Issue id]),
 FILTER(JIRA,
  JIRA[Related Incidents] > 0 &&
  ISBLANK([Related Incidents]) = FALSE &&
  [Class] = "Problem"
 )
)

Total # Problems
// Calculates total distinct problems.
CALCULATE(
 DISTINCTCOUNT(JIRA[Issue id]),
 FILTER(JIRA, [Class] = "Problem")
)

Total Active Problems
// Calculates total distinct active problems.
CALCULATE(
 DISTINCTCOUNT([Issue id]),
 REMOVEFILTERS(
  JIRA[Opened (modified)],
  JIRA[Overdue],
  JIRA[Max Year?]
 ),
 FILTER(
  JIRA,
  [Active] = TRUE && [Class] = "Problem"
 )
)

% made SLA
// Shows the percentage of requests which made SLA
CALCULATE(
 DISTINCTCOUNT(JIRA[Issue id]),
 FILTER(JIRA, JIRA[Made SLA] = TRUE)
) / DISTINCTCOUNT(JIRA[Issue id])

% of known error
// Shows the percentage of problems that are known errors
VAR KnownError = CALCULATE(
 DISTINCTCOUNT([Issue id]),
 FILTER(JIRA,
  [Issue Type] = "Break/fix Response" && [Class] = "Problem"
 )
) / CALCULATE(
  DISTINCTCOUNT([Issue id]),
  FILTER(JIRA, [Class] = "Problem")
 )

RETURN
 IF(ISBLANK(KnownError), 0, KnownError)

% Difference Requests YOY
// Calculates the total change in requests from last year to this year.
IFERROR(
 (([Current Year Total Requests] - [Last Year Total Requests]) / [Last Year Total Requests]),
 BLANK()
)

Total Problems Last Two Years
// Calculates Total Problems last two years.
VAR TotalProblems = [Current Year Total Problems] + [Last Year Total Problems]
RETURN
 IF(ISBLANK(TotalProblems), 0, TotalProblems)

Total Anomalies
// Total count of anomalies detected.
CALCULATE(
 DISTINCTCOUNT([Issue id]),
 FILTER(JIRA, JIRA[Label] = 1)
)

Time to Close a Case
// Calculates the difference in days between opening and closing a case.
CALCULATE(
 AVERAGEX(JIRA,
  DATEDIFF([Opened (modified)], [Closed], DAY)
 ),
 FILTER(JIRA,
  [Closed] > [Opened (modified)]
 )
)

Time to Close a Case (seconds)
// Calculates the difference in seconds between opening and closing a case.
CALCULATE(
 AVERAGEX(JIRA,
  DATEDIFF([Opened (modified)], [Closed], SECOND)
 ),
 FILTER(JIRA,
  [Closed] > [Opened (modified)]
 )
)

Case Average Age (DD:HH:MM:SS)
// Transforms time in seconds into DD:HH:MM:SS format.

// Formats time in seconds into days
FORMAT(
 ROUNDDOWN(
  CALCULATE(
   AVERAGEX(JIRA, [Time to Close a Case (seconds)]) / 86400
  ),
0
 ),
 "d"
) & " day(s) " &

// Formats time in seconds into hours, minutes, and seconds
FORMAT(
 CALCULATE(
  AVERAGEX(JIRA, [Time to Close a Case (seconds)]) / 86400
 ),
 "hh:mm:ss"
)

Total Overdue Cases
// Total distinct overdue cases.
CALCULATE(
 DISTINCTCOUNT(JIRA[Issue id]),
 FILTER(
  JIRA,
  JIRA[Overdue] = "Overdue"
 )
)

Total Anomalies Overdue
// Total count of anomalies overdue.
CALCULATE(
 COUNT(JIRA[Label]),
 FILTER(JIRA,
  JIRA[Label] = 1 &&
  [Overdue] = "Overdue"
 )
) + 0

% of Critical and High Priority Anomalies
// Shows the percentage of critical or high priority anomalies.
CALCULATE(
 DISTINCTCOUNT([Issue id]) + 0,
 FILTER(
  JIRA,
  [Priority] = "Highest" ||
  [Priority] = "High"
 )
) / CALCULATE(
  DISTINCTCOUNT([Issue id])
 )

% of Overdue Cases
// Shows the percentage of cases which are overdue.
CALCULATE(
 DISTINCTCOUNT(JIRA[Issue id]),
 FILTER(
  JIRA,
  JIRA[Overdue] = "Overdue"
 )
) / CALCULATE(
  DISTINCTCOUNT(JIRA[Issue id]),
  REMOVEFILTERS(JIRA[Overdue])
 )

Historic
// Returns only the historic data.
CALCULATE (
  AVERAGEX('Auto ML - Forecasts', 'Auto ML - Forecasts'[.value]),
  FILTER (
   ALL ( 'Auto ML - Forecasts'[Model] ),
   'Auto ML - Forecasts'[Model] = "Actual"
  )
 )

Forecast
// If a forecast model has not been selected then return an ensemble forecast else return the forecast model (excluding actual data or ensemble)
SWITCH(TRUE(),
 [Selected Model] = "Ensemble", CALCULATE(
  AVERAGEX('Auto ML - Forecasts', 'Auto ML - Forecasts'[.value]),
  'Auto ML - Forecasts'[Model] = "Ensemble"
 ),
 CALCULATE(
  AVERAGEX('Auto ML - Forecasts', 'Auto ML - Forecasts'[.value]),
  'Auto ML - Forecasts'[Model] <> "Actual" &&
  'Auto ML - Forecasts'[Model] <> "Ensemble"
 )
)

Selected Model
// Returns the selected forecast model.
SELECTEDVALUE('Auto ML - Forecasts'[Model], "Ensemble")

Lower
// If a forecast model has not been selected then return an ensemble lower confidence bound else return the forecast model (excluding actual data or ensemble)
SWITCH(TRUE(),
 [Selected Model] = "Ensemble", CALCULATE(
  AVERAGEX('Auto ML - Forecasts', 'Auto ML - Forecasts'[.conf_lo]),
  'Auto ML - Forecasts'[Model] = "Ensemble"
 ),
 CALCULATE(
  AVERAGEX('Auto ML - Forecasts', 'Auto ML - Forecasts'[.conf_lo]),
  'Auto ML - Forecasts'[Model] <> "Actual" &&
  'Auto ML - Forecasts'[Model] <> "Ensemble"
 )
)

Upper
// If a forecast model has not been selected then return an ensemble upper confidence bound else return the forecast model (excluding actual data or ensemble)
SWITCH(TRUE(),
 [Selected Model] = "Ensemble", CALCULATE(
  AVERAGEX('Auto ML - Forecasts', 'Auto ML - Forecasts'[.conf_hi]),
  'Auto ML - Forecasts'[Model] = "Ensemble"
 ),
 CALCULATE(
  AVERAGEX('Auto ML - Forecasts', 'Auto ML - Forecasts'[.conf_hi]),
  'Auto ML - Forecasts'[Model] <> "Actual" &&
  'Auto ML - Forecasts'[Model] <> "Ensemble"
 )
)

Columns

Max Year?
// Flags rows opened in the most recent year in the dataset.
IF(
 YEAR([Opened (modified)]) = YEAR(MAX([Opened (modified)])),
 TRUE,
 FALSE
)

Overdue
// Calculates incident overdue status.
IF(
 [Active] = FALSE && [Closed] > [Suggested Due Date] && [Suggested Due Date] <> BLANK() ||
 [Active] = TRUE && NOW() > [Suggested Due Date] && [Suggested Due Date] <> BLANK(),
 "Overdue",
 "Non Overdue"
)
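The two overdue branches (closed late vs. still open past the due date) can be sketched in Python. This is an illustrative translation of the DAX above, with hypothetical parameter names, not production code:

```python
from datetime import datetime

def overdue_status(active, closed, suggested_due, now=None):
    """Closed cases are overdue if they closed after the due date;
    open cases are overdue once the due date has passed.
    Cases with no due date are never overdue."""
    if suggested_due is None:
        return "Non Overdue"
    now = now or datetime.now()
    if not active and closed is not None and closed > suggested_due:
        return "Overdue"
    if active and now > suggested_due:
        return "Overdue"
    return "Non Overdue"
```

The explicit `now` parameter stands in for DAX's `NOW()` so the logic can be tested deterministically.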

Opened Month Tooltip
// Formats the opening date as the month name.
FORMAT([Opened (modified)], "MMMM")

Active (Excluding Resolved)
// Calculates if an incident is active and open.
IF(
 [Active] = FALSE(),
 "Closed/Resolved",
 "Active"
)

Opened (mmm YYYY)
// Formats the opened date as mmm YYYY.
LEFT([Opened Month Tooltip], 3) & " " & [Year]

OpenedMonthYearSort
// Formats the opening year and month as an integer.
[Year] * 100 + MONTH([Opened (modified)])

ActiveExcludingResolvedSort
// Calculates if an incident is active and open.
IF(
 [Active] = FALSE(),
 1,
2
)

Priority Status
// Defines priority status in standardized terminology.
SWITCH(TRUE(),
 [Priority] = "Highest", "Critical",
 [Priority] = "High", "High",
 [Priority] = "Medium", "Moderate",
 [Priority] = "Low", "Low",
 [Priority] = "Minor", "Planning",
 BLANK()
)

PrioritySort
// Calculates priority status sort order.
SWITCH(TRUE(),
 [Priority] = "Highest", 5,
 [Priority] = "High", 4,
 [Priority] = "Medium", 3,
 [Priority] = "Low", 2,
 [Priority] = "Minor", 1,
 BLANK()
)

Opened Date
// Removes the time from the opening date to link to the Date Opened table.
FORMAT([Opened (modified)], "mm/dd/YYYY")

Overdue Status
// Calculates incident overdue status flag.
IF(
 [Active] = FALSE && [Closed] > [Suggested Due Date] && [Suggested Due Date] <> BLANK() ||
 [Active] = TRUE && NOW() > [Suggested Due Date] && [Suggested Due Date] <> BLANK(),
 1,
0
)

Class
// Adds an issue type classification group.
VAR LevelNumber = 1
VAR LevelPath = PATH ( JIRA[Issue id], JIRA[Parent id] )
VAR LevelKey = PATHITEM ( LevelPath, LevelNumber, INTEGER )
VAR LevelName = LOOKUPVALUE ( JIRA[Issue id], JIRA[Issue id], LevelKey )
VAR LevelCount = CALCULATE(COUNTROWS(JIRA), ALL(JIRA), JIRA[Parent id] = EARLIER(JIRA[Issue id]))
VAR Result = IF(LevelName = [Issue id], IF(LevelCount <> BLANK(), "Problem", "Incident"), "Incident")

RETURN

SWITCH(TRUE(),
 [Issue Type] = "New User Request", "Request",
 [Issue Type] = "Service Request", "Request",
 [Issue Type] = "Service Request with Approvals", "Request",

// I only mapped these issue types to Problem for visualization purposes. There's not enough information in the dataset to determine whether any of these would be classified as a problem or incident.
 [Issue Type] = "Task", "Problem",
 [Issue Type] = "Sub-task", "Problem",
 [Issue Type] = "Corrective Action Plan", "Problem",
 [Issue Type] = "POA&M Milestone Subtask", "Problem",
 Result
)

Watchers (grouped)
// Concatenates all watchers, skipping blanks.
IF([Watchers] = "", "", [Watchers]) &
IF([Watchers_4] = "", "", ", " & [Watchers_4]) &
IF([Watchers_5] = "", "", ", " & [Watchers_5]) &
IF([Watchers_6] = "", "", ", " & [Watchers_6]) &
IF([Watchers_7] = "", "", ", " & [Watchers_7])

Suggested Due Date
// Calculates a smart due date.
VAR LevelNumber = 1
VAR LevelPath = PATH ( JIRA[Issue id], JIRA[Parent id] )
VAR LevelKey = PATHITEM ( LevelPath, LevelNumber, INTEGER )
VAR LevelName = LOOKUPVALUE ( JIRA[Due Date], JIRA[Issue id], LevelKey )
VAR DateModifier = CALCULATE(ROUND(AVERAGEX(JIRA, [Resolved] - [Created]), 0), FILTER(JIRA, [Resolved] <> BLANK()))
VAR ResultDate = IF(ISBLANK([Due Date]), LevelName, [Due Date])
VAR UpdatedResult = IF(ISBLANK(ResultDate), [Due Date (modified)], ResultDate)
VAR Result = IF(ISBLANK(UpdatedResult), [Created] + DateModifier, UpdatedResult)
RETURN
Result
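The fallback chain in Suggested Due Date (the user's due date, then the parent issue's due date, then the cleaned due date, then the created date plus the average resolution time) can be sketched as follows. This is a hypothetical Python helper that only mirrors the coalescing logic, not the PATH/LOOKUPVALUE mechanics:

```python
from datetime import datetime, timedelta

def suggested_due_date(due_date, parent_due_date, due_date_modified,
                       created, avg_resolution):
    """Return the first non-missing candidate due date, falling back to
    created date plus the dataset's average resolution time."""
    for candidate in (due_date, parent_due_date, due_date_modified):
        if candidate is not None:
            return candidate
    return created + avg_resolution
```

The last fallback guarantees every case gets a due date, which is what makes the Overdue and Made SLA columns computable for every row.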

Related Incidents
// Calculates total incidents related to a distinct problem.
CALCULATE(
 COUNTROWS(JIRA),
 ALL(JIRA),
 JIRA[Parent id] = EARLIER(JIRA[Issue id])
)

Made SLA
// Calculates if the service level agreement was met.
IF([Suggested Due Date] >= [Resolved], TRUE(), FALSE())

Time Span Breakdown
// Calculates a time span age category.
SWITCH(TRUE(),
 DATEDIFF([Opened (modified)], NOW(), DAY) < 180, "< 180 d",
 DATEDIFF([Opened (modified)], NOW(), DAY) > 360, "> 360 d",
 "180-360 d"
)

Time Span Breakdown Sort
// Calculates a time span age category sort value.
SWITCH(TRUE(),
 DATEDIFF([Opened (modified)], NOW(), DAY) < 180, 3,
 DATEDIFF([Opened (modified)], NOW(), DAY) > 360, 1,
2
)

Anomaly Status
// Returns the anomaly status based on the label flag.
IF([Label] = 1, "Anomaly", "Normal")

Time to Close
// Calculates the difference in days between opening and closing a case.
DATEDIFF([Opened (modified)], [Closed], DAY)

Priority Status
// Defines priority status in standardized terminology.
SWITCH(TRUE(),
 [Priority] = 5, "Critical",
 [Priority] = 4, "High",
 [Priority] = 3, "Moderate",
 [Priority] = 2, "Low",
 [Priority] = 1, "Planning",
 BLANK()
)

Model
// Creates a shortened forecast model name for reporting purposes.
SWITCH(TRUE(),
 [.model_id] = 0, "Actual",
 [.model_id] = 1, "Auto ARIMA",
 [.model_id] = 2, "Auto ARIMA with XGBoost",
 [.model_id] = 3, "ETS",
 [.model_id] = 4, "Prophet",
 [.model_id] = 5, "LM",
 [.model_id] = 6, "XGBoost",
 [.model_id] = 7, "XGBoost Workflow",
 [.model_id] = 8, "MARS",
 [.model_id] = 9, "Ensemble",
 BLANK()
)