Count employee allocations

I have a record of employee allocations; the organization allocates employees to various clients and their departments to work on their tasks.

(Refer to the attached sample records.) basedata-r.xlsx (4.7 MB)

E.g. CHARANI is allocated to Godrej from 11 March to 31 December. Since there are no previous allocation records, this is a fresh allocation to Godrej and is considered a “New Allocation”.

E.g. Chaumel is allocated to LG on 15 March 2021 (New Allocation), but on 4 June 2021 she is allocated to another department within the same client, so that is an “Extension”.

E.g. VIGAN is first allocated to Panasonic on 2 February 2021 (New Allocation). On 3 March and 22 March he is moved to other departments, but since these are within the same client they are considered “Extensions”. Then on 4 April 2021 he is allocated to LG, which is considered a “New Allocation”.

Problem Statement
Count “New Allocations” & “Extensions”

Notes

  1. If there is only a single record for a person in this sheet, it is a “New Allocation”.
  2. If there are multiple records for the same person with the same client, the entry with MIN(Start Date) is the “New Allocation” and the rest are “Extensions”.
  3. If there are multiple records for the same person, then each time the employee is allocated to a new client, the entry with MIN(Start Date) for that client is a “New Allocation” and the rest are “Extensions”.

Hi @suhvishal,

Welcome to the Forum.

Paste this into a new blank query:

let
    Source = Table.FromRows(Json.Document(Binary.Decompress(Binary.FromText("jZFLC4JAFIX/i2sH5pm2HCJMUAt7bKSFmIQVM+Dj/3dHbcwIajNzGM7HuedOljnekjius9pImcokBBnoS13eQOzzR9nATQjCDFFjYwQRauTZzRzB8EAe43UEKgrgiPP6XraVuhpQWBDTX+Cu1peuaCutjJ0jvBhJSOeI0q+knVUWhe5UO6K9v5cmdEA5F/B0CgOZmLRc5Y1WVfHWE2bEdAplr3H/IJktSvy/yPc1UTqtaSj7je6XZAP56INAoMWrpO8ZJNimBxnuPyJnCyZeT83/9PwE", BinaryEncoding.Base64), Compression.Deflate)), let _t = ((type nullable text) meta [Serialized.Text = true]) in type table [EmpID = _t, #"Employee Name" = _t, Client = _t, Department = _t, #"Start Date" = _t, #"End Date" = _t]),
    ChType = Table.TransformColumnTypes(Source,{{"EmpID", Int64.Type}, {"Employee Name", type text}, {"Client", type text}, {"Department", type text}, {"Start Date", type date}, {"End Date", type date}}),
    GroupRows = Table.Group(ChType, {"EmpID", "Client"}, {{"First Date", each List.Min([Start Date]), type nullable date}, {"AllRows", each _, type table [EmpID=nullable number, Employee Name=nullable text, Client=nullable text, Department=nullable text, Start Date=nullable date, End Date=nullable date]}}),
    ExpanRows = Table.ExpandTableColumn(GroupRows, "AllRows", {"Employee Name", "Department", "Start Date", "End Date"}, {"Employee Name", "Department", "Start Date", "End Date"}),
    AddStatus = Table.AddColumn(ExpanRows, "Status", each if [First Date] = [Start Date] then "New Allocation" else "Extension", type text)
in
    AddStatus 


Grouped your data, expanded the rows again, and added a conditional column:


With this result:



A simple COUNTROWS will do the trick
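For example, a pair of measures along these lines should do it (the table name Allocations is an assumption here; substitute the name of your loaded query):

```dax
New Allocations =
CALCULATE (
    COUNTROWS ( Allocations ),
    Allocations[Status] = "New Allocation"
)
```

A second measure filtering on "Extension" gives the other count.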


I hope this is helpful.


Thanks Melissa for the reply, I appreciate your help. It’s working well with the dummy data.

But I have one issue: in the actual data there are some records with multiple entries on the same start date. Since the client is the same, the first entry should be considered a “New Allocation” and the rest should be “Extensions”.
Dummy-emp-allocations.pbix (22.5 KB)

But right now, because of the condition in the added column, all rows where First Date equals Start Date are shown as “New Allocation”.

I tried to add one more level of grouping by Start Date, but with that I am losing the extension cases.

Hi @suhvishal,

No worries. Give this a go.

let
    Source = Table.FromRows(Json.Document(Binary.Decompress(Binary.FromText("lZJdC4IwFIb/i9cN9mXa5YgwoSzs40a6EJWwYgO1/9/ZXLaKqK72Ms7Dc/ZqlnnBhHgjL1JlU50gTOdCpCKJIW7yS9XCSQjCDFE9xggiVMfDKPN8huFqEfXUbjlbQFrmzbnqannUoD+AmH4D140qr0VXK6nHOcJjS4KdI0od0l3W0qIo1FV2FjXzJmppj3Lua0su81bJuoC8jyOROO+EHTF9SNl93c+k+1iHZr/Rg5cNNZHwby+lj5L7qlzalPwm5HYOhED794rC4FUYrdKtiDevn4cEhnr+Iw43", BinaryEncoding.Base64), Compression.Deflate)), let _t = ((type nullable text) meta [Serialized.Text = true]) in type table [EmpID = _t, Client = _t, #"Employee Name" = _t, Department = _t, #"Start Date" = _t, #"End Date" = _t]),
    ChType = Table.TransformColumnTypes(Source,{{"EmpID", Int64.Type}, {"Client", type text}, {"Employee Name", type text}, {"Department", type text}, {"Start Date", type text}, {"End Date", type text}}),
    SortRows = Table.Sort( Table.Buffer( ChType ),{{"Start Date", Order.Ascending}, {"End Date", Order.Ascending}}),
    GroupRows = Table.Group(SortRows, {"EmpID", "Client"}, {{"AllRows", each _, type table [EmpID=nullable number, Client=nullable text, Employee Name=nullable text, Department=nullable text, Start Date=nullable text, End Date=nullable text]}}),
    AddStatus = Table.RemoveColumns( Table.AddColumn(GroupRows, "AddStatus", each 
        let
            myTable = [AllRows],
            NewTable = Table.AddColumn(myTable, "Status", each if myTable{0} = _ then "New Allocation" else "Extension", type text )
        in 
            NewTable ), "AllRows" ),
    ExpandData = Table.ExpandTableColumn(AddStatus, "AddStatus", {"Employee Name", "Department", "Start Date", "End Date", "Status"}, {"Employee Name", "Department", "Start Date", "End Date", "Status"})
in
    ExpandData 

With this result.

Here’s your sample. Dummy-emp-allocations.pbix (29.8 KB)
I hope this is helpful.

Hi @suhvishal, welcome to the Forum! It’s great to know that you are having a great experience so far. We’ve recently launched the Enterprise DNA Forum User Experience Survey, please feel free to answer it and give your insights on how we can further improve the Support forum. Thanks!

Yes Melissa, it works. Thanks a lot!

BTW, I tried one more way: after grouping, I added an Index column starting from 1, then expanded the rows again and picked up Index = 1 in the conditional column.
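As a rough sketch, assuming the GroupRows step from the earlier query, that index approach could look like this in M (the step names here are hypothetical):

```m
// Number the rows inside each EmpID/Client group, starting at 1
AddIndex = Table.TransformColumns( GroupRows,
    {{"AllRows", each Table.AddIndexColumn( _, "Index", 1, 1 ), type table}} ),
// Expand the grouped rows again, including the new Index column
ExpandAll = Table.ExpandTableColumn( AddIndex, "AllRows",
    {"Employee Name", "Department", "Start Date", "End Date", "Index"} ),
// The first row in each group is the New Allocation
AddStatus = Table.AddColumn( ExpandAll, "Status",
    each if [Index] = 1 then "New Allocation" else "Extension", type text )
```

Provided the rows are sorted by Start Date before grouping, this should yield the same Status column without comparing whole records.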

Please share your thoughts on this method.

I am happy and enjoying learning on this platform by doing these exercises along with the structured courses. Thanks a lot, I really appreciate your quick response.


Hi @suhvishal,

Good on you!

POTW #9, which is running now, deals with something similar. To advance your learning you might want to check that out (if you haven’t already) and possibly even enter, or explore the solutions provided by others. I always find that fun and insightful.

In general, there are often several ways to deal with a problem and that’s true for M as well. So keep discovering! :+1:

Hi @suhvishal, did the response provided by @Melissa help you solve your query? If not, how far did you get and what kind of help do you need? If yes, kindly mark the answer that solved your query as the solution. Thanks!

Hi @suhvishal, we’ve noticed that no response has been received from you since the 15th of April. We just want to check if you still need further help with this post? In case there won’t be any activity on it in the next few days, we’ll be tagging this post as Solved.

Hi @suhvishal, due to inactivity, a response on this post has been tagged as “Solution”. If you have a follow question or concern related to this topic, please remove the Solution tag first by clicking the three dots beside Reply and then untick the check box.


Yes, it works very well, except for a few discrepancies now that I have applied the same logic to the actual data.

I am working on resolving them and I think I am close; I will update the status here soon.

But as of now, it can happily be marked as the solution to the issue.

Thanks a ton, @Melissa and this forum, I really appreciate it!

@Melissa

I have a further query on the above task, but unfortunately I couldn’t share the data here as a sample. Would you be open to connecting one-to-one so that I can explain what the issue is and seek help?

Hi @suhvishal,

This is an open forum, so that we may all learn from the different scenarios and examples shared here within our community.

I would encourage you to use techniques for masking your data and to supply all relevant details (your query, along with a mockup of the desired outcome) in a new thread.

Thank you.
