Batch Parallelism in AX – Part III

 

Top Picking:

The issue we saw with bundling is uneven distribution of the workload. ‘Individual Task Modeling’ addressed that, but the batch framework overhead of creating an individual task per work item becomes severe when the number of work items is huge, so it has to be weighed carefully. ‘Top Picking’ is another batch technique that addresses the uneven distribution problem, and it does so without suffering the same fate as ‘Individual Task Modeling’ when the number of work items is huge.

In this approach you create a static number of tasks (just as in bundling), but no pre-allocation is done (just as in ‘Individual Task Modeling’). Since no pre-allocation is done and we are not relying on the batch framework to hand out the work items, you maintain a staging table that contains all the work items. Maintaining this staging table to track the progress of the work items has its own overhead, but it is much smaller than the overhead of the batch framework. Once the staging table is populated, the worker threads start processing by fetching the next available item from the staging table, and they continue until there are no work items left. This means no worker thread sits idle while other worker threads are overloaded. To implement this, we use the pessimisticLock hint together with the readPast hint; used together, they let a worker thread fetch the next available (unlocked) item without being blocked by items that other threads are currently processing.
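In isolation, the fetch pattern looks roughly like this (a minimal sketch; the workItem buffer name is only illustrative, and the staging table is the one defined later in this post):

DemoTopPickProcessTrackTable workItem;

// readPast makes the select skip rows that other worker threads have locked,
// instead of waiting for those locks to be released
workItem.readPast(true);

ttsBegin;

// pessimisticLock places an update lock on the row that is picked,
// so no other worker thread can pick the same work item
select pessimisticLock firstOnly workItem
    where workItem.ProcessedStatus == NoYes::No;

if (workItem)
{
    // ... process the work item ...
    workItem.ProcessedStatus = NoYes::Yes;
    workItem.update();
}

ttsCommit;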

The pseudo code for the whole process looks like this:

  • Staging table is populated with the work items.
    • You can use insert_recordset or RecordInsertList to populate it efficiently (a RecordInsertList sketch follows this list).
  • A batch job created with 'N' number of tasks is persisted.
  • Inside each worker thread,
    • A pessimistic lock (with READPAST) is used to fetch the next available work item.
    • Once the work item becomes available, the lock is retained for the rest of the transaction.
    • After the item is finished processing, the status is updated and the next available item is fetched.
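
If RecordInsertList is preferred over insert_recordset (for example, when per-record logic is needed while collecting the work items), the population step could be sketched along these lines (the job name and the filter are only illustrative):

static void populateTopPickStaging(Args _args)
{
    DemoTopPickProcessTrackTable    workItem;
    SalesTable                      salesTable;
    RecordInsertList                insertList;

    insertList = new RecordInsertList(tableNum(DemoTopPickProcessTrackTable));

    // Collect the work items in memory, then flush them to the database in one round trip
    while select SalesId from salesTable
        where salesTable.DocumentStatus == DocumentStatus::None
    {
        workItem.clear();
        workItem.SalesId         = salesTable.SalesId;
        workItem.ProcessedStatus = NoYes::No;
        insertList.add(workItem);
    }

    insertList.insertDatabase();
}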

Continuing the same example of invoicing a bunch of sales orders using this technique:
Note: The code used here is only an example. Do not use it for your Sales Order posting needs. The default AX 2012 Sales Order posting form uses a much more sophisticated and feature-rich way of handling this parallelism.

Class: DemoBatchTopPicking

Staging Table: DemoTopPickProcessTrackTable

Field Name          Field Type
SalesId             EDT: SalesIdBase
ProcessedStatus     Enum: NoYes

public class DemoBatchTopPicking extends RunBaseBatch
{
}

public void new()
{
    super();
}
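
RunBaseBatch also declares abstract pack and unpack methods, so the class needs implementations of them to compile; since this demo class keeps no state between runs, minimal versions along these lines are enough (a sketch, not a production pattern):

public container pack()
{
    // Nothing to persist for this demo class
    return conNull();
}

public boolean unpack(container _packedClass)
{
    // Nothing to restore for this demo class
    return true;
}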

void run()
{
    SalesTable                      salesTable;
    SalesFormLetter                 formLetter;
    DemoTopPickProcessTrackTable    demoTopPickProcessTrackTable;
    Map                             salesMap;

    // Skip work items that are currently locked by other worker threads
    demoTopPickProcessTrackTable.readPast(true);

    do
    {
        ttsBegin;

        // When no unprocessed work item is left, the buffer stays empty and the do-while loop exits
        select pessimisticLock firstOnly demoTopPickProcessTrackTable
            where demoTopPickProcessTrackTable.ProcessedStatus == NoYes::No;

        select salesTable
            where salesTable.SalesId == demoTopPickProcessTrackTable.SalesId
               && salesTable.DocumentStatus == DocumentStatus::None;

        if (salesTable)
        {
            // Invoice the single sales order picked from the staging table
            formLetter = SalesFormLetter::construct(DocumentStatus::Invoice);
            formLetter.getLast();
            formLetter.resetParmListCommonCS();
            formLetter.allowEmptyTable(formLetter.initAllowEmptyTable(true));

            salesMap = new Map(Types::Int64, Types::Record);
            salesMap.insert(salesTable.RecId, salesTable);
            formLetter.parmDataSourceRecordsPacked(salesMap.pack());
            formLetter.createParmUpdateFromParmUpdateRecord(
                SalesFormletterParmData::initSalesParmUpdateFormletter(DocumentStatus::Invoice, formLetter.pack()));
            formLetter.showQueryForm(false);
            formLetter.initLinesQuery();
            formLetter.update(salesTable, systemDateGet(), SalesUpdate::All, AccountOrder::None, false, false);
        }

        if (demoTopPickProcessTrackTable)
        {
            // Mark the work item as processed; the lock is released at ttsCommit
            demoTopPickProcessTrackTable.ProcessedStatus = NoYes::Yes;
            demoTopPickProcessTrackTable.update();
        }

        ttsCommit;
    }
    while (demoTopPickProcessTrackTable);
}

public static DemoBatchTopPicking construct()
{
    DemoBatchTopPicking c;

    c = new DemoBatchTopPicking();
    return c;
}

Job to schedule the above batch:

static void scheduleDemoBatchTopPickingJob(Args _args)
{
    BatchHeader                     batchHeader;
    DemoBatchTopPicking             demoBatchTopPicking;
    DemoTopPickProcessTrackTable    demoTopPickProcessTrackTable;
    SalesTable                      salesTable;
    int                             totalNumberOfTasksNeeded = 10;
    int                             counter;

    ttsBegin;

    select count(RecId) from salesTable
        where salesTable.SalesId >= 'SO-00400001' && salesTable.SalesId <= 'SO-00500000'
           && salesTable.DocumentStatus == DocumentStatus::None;

    if (salesTable.RecId > 0)
    {
        // Populating the staging table with the work items
        insert_recordset demoTopPickProcessTrackTable (SalesId)
            select SalesId from salesTable
                where salesTable.SalesId >= 'SO-00400001' && salesTable.SalesId <= 'SO-00500000'
                   && salesTable.DocumentStatus == DocumentStatus::None;

        // Ensure all work items start as unprocessed
        update_recordset demoTopPickProcessTrackTable setting ProcessedStatus = NoYes::No;

        batchHeader = BatchHeader::construct();
        batchHeader.parmCaption(strFmt('Batch job for demoBatchTopPicking - Invoice SalesOrders %1 thru %2', 'SO-00400001', 'SO-00500000'));

        // Creating the predefined number of tasks
        for (counter = 1; counter <= totalNumberOfTasksNeeded; counter++)
        {
            demoBatchTopPicking = DemoBatchTopPicking::construct();
            batchHeader.addTask(demoBatchTopPicking);
        }

        batchHeader.save();
    }

    ttsCommit;

    info('Done');
}

Assuming I am trying to process 100,000 work items

#Tasks Created: 10
#Batch Threads (in my test server): 10
#Tasks that can be executed in parallel at any time: 10

It is the same 10 tasks in action the whole time; they keep working until there is no more work item left in the queue (that is, no unprocessed work item in the staging table).
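
Because every work item's status lives in the staging table, progress can be checked at any time while the job runs; a minimal sketch (the job name is only illustrative):

static void checkTopPickProgress(Args _args)
{
    DemoTopPickProcessTrackTable workItem;

    // Count the work items that have not been picked up yet
    select count(RecId) from workItem
        where workItem.ProcessedStatus == NoYes::No;

    info(strFmt('%1 work items still waiting to be processed', workItem.RecId));
}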