Message retry using Process Direct and Data Store in SAP CPI

Dec 31, 2021

Introduction:

Currently, CPI (version 6.20.20) does not provide built-in functionality to reprocess failed asynchronous messages. For continuous business support in maintenance projects, it is important to handle failures such as connection failures and message (mapping) failures. There are many blogs available on retry using JMS queues, but given the limit on the number of JMS queues per tenant, that approach is more challenging to implement in real projects. I would like to share an approach to implement message retry using the Process Direct adapter and the data store.

Scenario:

Store the failed message in the data store together with its payload, a message status (a flag that indicates the stage at which processing failed), and the Process Direct endpoint of the iflow containing the actual business logic. A scheduled iflow then picks the message from the data store and processes it again. The iflows below show how to handle both connection errors and mapping errors.

To achieve this, I have divided the solution into three parts:

  1. Passthrough iflow
  2. Actual business logic iflow
  3. Retry iflow

Integration Artifact Details: iFlow 1: Passthrough iflow

This iflow receives the message from the source system. Here you can perform basic conversions (e.g. JSON to XML), log the source payload if required, and add a custom message search entry, which is helpful when monitoring the message flow. For example, if you are receiving an IDoc, you can add the IDoc number as the custom search value. You can set it either with a Groovy script or by setting SAP_ApplicationID in a Content Modifier (Message Header tab).
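As a sketch, a Groovy script like the following could set SAP_ApplicationID from the IDoc number. The path to DOCNUM assumes a standard IDoc XML structure and may need adjusting for your IDoc type; the script only runs inside the CPI runtime, which provides the Message class:

```groovy
import com.sap.gateway.ip.core.customdev.util.Message

def Message processData(Message message) {
    def body = message.getBody(java.lang.String) as String
    // Parse the IDoc XML and read the document number (DOCNUM) from the
    // control record -- adjust the path to match your IDoc structure
    def idoc = new XmlParser().parseText(body)
    def docnum = idoc.IDOC.EDI_DC40.DOCNUM.text()
    // Expose the IDoc number as the custom message search value
    message.setHeader("SAP_ApplicationID", docnum)
    return message
}
```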

Process Direct Configuration:
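For illustration (the address value is an assumption; choose any unique path), the Process Direct receiver channel of the passthrough iflow and the Process Direct sender channel of the business logic iflow both use the same address, for example:

```
Receiver adapter (iFlow 1): ProcessDirect, Address = /retry/businesslogic
Sender adapter   (iFlow 2): ProcessDirect, Address = /retry/businesslogic
```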

Integration Artifact Details: iFlow 2: Actual business logic iflow

In this iflow, implement the actual business logic. I will not explain the business logic used in the flow itself, as our focus is on the parameters required for message retry.

Step 1: Process Direct: Reuse the Process Direct path used in the passthrough iflow.

Step 2: Custom Msg Search: In this script, save the input payload, set the initial message status to “MapFail” (as this step is performed before the mapping), and add the custom header.

import com.sap.gateway.ip.core.customdev.util.Message

def Message processData(Message message) {
    def body = message.getBody(java.lang.String) as String
    def messageLog = messageLogFactory.getMessageLog(message)
    def map = message.getHeaders()
    def Parm1 = map.get("Parm1")
    if (messageLog != null) {
        // Keep the original payload and mark the stage at which a failure would occur
        message.setProperty("InputPayload", body)
        message.setProperty("MsgStatus", 'MapFail')
        // Custom message search value
        message.setHeader("SAP_ApplicationID", Parm1)
    }
    return message
}

Step 3: Perform the actual business logic, which might include mappings, RFC calls, and data conversions.

Step 4: Log MsgStatus

If the message fails due to a mapping or connection issue, the exception subprocess is invoked and the steps below are performed.

Step 5: Retrieve incoming message into body.

Step 6: Log Msg Headers Payload: XSLT code to log the message headers and the payload. Extra parameters are defined in case you need to pass more parameters.

<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet version="3.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
    <!-- <xsl:output method="xml" omit-xml-declaration="yes"/> -->
    <xsl:param name="Endpoint"/>
    <xsl:param name="EntryID"/>
    <xsl:param name="MsgStatus"/>
    <xsl:param name="Parm1"/>
    <xsl:param name="Parm2"/>
    <xsl:param name="Parm3"/>
    <xsl:param name="Parm4"/>
    <xsl:param name="Parm5"/>
    <xsl:template match="node()|@*">
        <xsl:copy>
            <xsl:apply-templates select="node()|@*"/>
        </xsl:copy>
    </xsl:template>
    <xsl:template match="/">
        <RootNode>
            <Payload>
                <xsl:apply-templates select="node()|@*"/>
            </Payload>
            <MsgHeader>
                <Endpoint><xsl:value-of select="$Endpoint"/></Endpoint>
                <EntryID><xsl:value-of select="$EntryID"/></EntryID>
                <MsgStatus><xsl:value-of select="$MsgStatus"/></MsgStatus>
                <Parm1><xsl:value-of select="$Parm1"/></Parm1>
                <Parm2><xsl:value-of select="$Parm2"/></Parm2>
                <Parm3><xsl:value-of select="$Parm3"/></Parm3>
                <Parm4><xsl:value-of select="$Parm4"/></Parm4>
                <Parm5><xsl:value-of select="$Parm5"/></Parm5>
            </MsgHeader>
        </RootNode>
    </xsl:template>
</xsl:stylesheet>

Step 7: Backup: Save the message generated in step 6 in the data store “DS_RetryAllError”. The Entry ID can be a value of your choice, or you can leave it blank; here I have used the message ID. Note that all connection and message error details are saved in the common data store “DS_RetryAllError”.
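As an illustrative sketch of the Data Store Write step configuration (the expression for the Entry ID is an assumption; SAP_MessageProcessingLogID is the standard exchange property that holds the message processing log ID):

```
Data Store Name : DS_RetryAllError
Entry ID        : ${property.SAP_MessageProcessingLogID}
```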

Integration Artifact Details: iFlow 3: Retry iflow – For all errors

This iflow picks all messages from the data store “DS_RetryAllError”. If a message had failed earlier due to a connection issue, it is pushed back to the corresponding iflow; if it had failed due to a mapping issue, it is moved to the data store “DS_RetryMapError”.

Step 1: Start Timer: Schedule the interface to recur daily with a polling interval of every minute.

Step 2: Select Message: Pick a message from the data store “DS_RetryAllError”.

Step 3: Router: Proceeds if any entry is present in the data store; otherwise ends the process execution.

Step 4: Get Header Parameters: Retrieve the Process Direct path and the reason for the message failure (MsgStatus).

Step 5: Router: If the message failed due to a mapping error, move the data to the new data store “DS_RetryMapError”; if not, retrieve only the payload and pass it to the actual business iflow.
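For illustration, the router condition could be a Camel Simple expression on the status retrieved in step 4. The exact expression is an assumption about how you extracted MsgStatus into a property; the value matches what iFlow 2 sets before the mapping:

```
Route "Map Error" : ${property.MsgStatus} = 'MapFail'
Default route     : otherwise (Filter Payload -> Process Direct)
```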

Step 6: Filter Payload: Retrieve the payload.
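Given the RootNode/Payload envelope produced by the XSLT in iFlow 2, the Filter step can use an XPath like the following to keep only the original payload (Value Type: Node):

```
XPath : /RootNode/Payload/node()
```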

Step 7: Process Direct: Send the retrieved payload back to the actual business logic iflow using the Process Direct path retrieved in step 4.

Step 8: Write Map Error DS: Store messages that errored due to a mapping issue in the data store “DS_RetryMapError”.

Integration Artifact Details: iFlow 4: Retry iflow – For all mapping errors

Mapping errors can occur due to data issues or due to the mapping logic implemented for the business requirement. Logical errors can be fixed in some scenarios; in such cases, we can retry the payload stored in “DS_RetryMapError” using the iflow below after the mapping issue has been fixed in the main iflow.

This iflow picks the message with the specified entry ID from the data store “DS_RetryMapError” and pushes it to the corresponding iflow. It runs on demand after the identified mapping issue has been fixed in the actual iflow.

Step 1: Start Timer: Run Once.

Step 2: Get Message: Pick the message from the data store “DS_RetryMapError” with the specified entry ID.

Step 3: Get Header Parameters: Retrieve the Process Direct path and other parameters if required.

Step 4: Filter Payload: Retrieve the payload.

Step 5: Process Direct: Send the payload to the actual business logic iflow.

Messages that failed due to data issues cannot be retried and need to be deleted from the data store “DS_RetryMapError”. For this you can schedule an iflow, as explained in the blog shared in the reference section.

Conclusion:

We have seen how to configure a retry mechanism that handles both connection and mapping failures. This retry mechanism can be reused across all asynchronous interfaces built in a tenant.

References:

https://blogs.sap.com/2021/10/29/automatic-data-store-cleanup-using-sap-api/
