Trigger execution context and Limits

Is the whole data set processed within a single trigger execution context, or does each chunk of 200 records get a separate execution context within the trigger? Let's say we have 400 records that are updated and processed by the trigger in chunks of 200 records (chunk 1 and chunk 2). Do these chunks share the same execution context (and therefore common limits), or does each get a separate execution context: chunk 1 with execution context 1 (first set of limits) and chunk 2 with execution context 2 (another set of limits)?

I am asking with regard to future calls and 2,200 records processed by a trigger. Is it the case that we could hit the limit of 10 future calls (2200 / 200 = 11 > 10)?


Attribution to: Natallia

Possible Suggestion/Solution #1

There is no such thing as passing records to a trigger; a trigger fires on a DML event. However, DML operations act on transaction chunks of 200 records at a time. So, for example, if you have a Visualforce page that edits 200 records and an update trigger on that object, your trigger operates on all 200 records at once, in one chunk. Another example is updating records through the API. Let's say you hit the SOAP API with an update to 200 records. The standard SOAP API lets you specify the batch size, but if your chunk size was 200 and you had an update trigger on that object, the trigger would be acting on a maximum of 200 records in one transaction chunk.
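
To see this chunking in practice, here is a minimal sketch (the trigger and the Anonymous Apex snippet below are made up for illustration, not from the original post): a single DML statement on 400 records fires the trigger twice, once per 200-record chunk, and both chunks draw from the same set of governor limits.

trigger AccountChunkLogger on Account (after insert) {
    // Fires once per chunk of up to 200 records within the same transaction
    System.debug('Records in this chunk: ' + Trigger.new.size()
        + ', SOQL queries used so far: ' + Limits.getQueries());
}

// Anonymous Apex: insert 400 records with one DML statement
List<Account> accs = new List<Account>();
for (Integer i = 0; i < 400; i++) {
    accs.add(new Account(Name = 'Chunk test ' + i));
}
insert accs; // the trigger above logs two chunks of 200 records each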

Your trigger should be made bulk safe so it can handle up to 200 records. When your trigger makes a callout, you need to make sure your code handles up to 200 records without exceeding the callouts-per-transaction limit.
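
As a rough sketch of that pattern (the handler class name and endpoint below are placeholders, not anything from the original post), the trigger can collect all record IDs from the chunk and hand them to a single @future(callout=true) method, so a 200-record chunk costs one future call and one callout rather than one per record.

trigger AccountAfterUpdate on Account (after update) {
    // Pass the whole chunk (up to 200 IDs) to one asynchronous call
    AccountCalloutHandler.enqueue(Trigger.newMap.keySet());
}

public class AccountCalloutHandler {
    public static void enqueue(Set<Id> accountIds) {
        doCallout(accountIds);
    }

    @future(callout=true)
    public static void doCallout(Set<Id> accountIds) {
        // One callout for the whole chunk; the endpoint is a placeholder
        // and would need a matching Remote Site Setting
        HttpRequest req = new HttpRequest();
        req.setEndpoint('https://example.com/sync');
        req.setMethod('POST');
        req.setBody(JSON.serialize(new List<Id>(accountIds)));
        new Http().send(req);
    }
}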

Quote from the documentation:

For Apex saved using Salesforce.com API version 20.0 or earlier, if an API call causes a trigger to fire, the chunk of 200 records to process is further split into chunks of 100 records. For Apex saved using Salesforce.com API version 21.0 and later, no further splits of API chunks occur. Note that static variable values are reset between API batches, but governor limits are not. Do not use static variables to track state information between API batches.


Attribution to: greenstork

Possible Suggestion/Solution #2

It seems that all chunks share the same limits.

I ran a test: insert 2,200 records with a future call in the trigger:

trigger testtrigger on Account (after insert) {
    // Fires once per 200-record chunk of the insert
    testtrigger.handle();
}

public class testtrigger {
    public static void handle() {
        handlefuture();
    }

    @future
    public static void handlefuture() {}
}

and got the following error: EXCEPTION_THROWN [13]|System.LimitException: Too many future calls: 11
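
If the transaction can legitimately exceed ten chunks, one way to avoid that exception is to check the remaining future-call budget with the standard Limits methods before enqueueing. A rough sketch of the handler above, with a placeholder fallback:

public class testtrigger {
    public static void handle() {
        // Enqueue only while the shared transaction limit allows it
        if (Limits.getFutureCalls() < Limits.getLimitFutureCalls()) {
            handlefuture();
        } else {
            // Placeholder: process synchronously, or switch to Queueable/Batch Apex
            System.debug('Future call limit reached for this transaction');
        }
    }

    @future
    public static void handlefuture() {}
}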


Attribution to: Natallia
This content is remixed from stackoverflow or stackexchange. Please visit https://salesforce.stackexchange.com/questions/33395
