Salesforce can skip processing some records in a batch. SalesforceBulkWriter always expects all records to be processed (either accepted or rejected). The unprocessed records are lost - they do not appear on the rejected port as they should.
Here's an example of what can happen:
SalesforceBulkWriter is inserting 100k orders. Salesforce processes 64200 records and ignores the rest. 16600 of the processed records are marked as failed by Salesforce and appear on the rejected port of the SalesforceBulkWriter.
In the clover graph this situation looks as if the 16600 records failed and the rest were inserted, but that is not the case: some of the records were silently ignored by Salesforce.
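The skipped records can at least be detected by auditing the batch results. A minimal sketch, assuming the counts come from the Bulk API BatchInfo resource (`numberRecordsProcessed` includes failed records, per the Salesforce Bulk API docs); the class and method names here are hypothetical, not part of SalesforceBulkWriter:

```java
// Hypothetical helper: detect records Salesforce silently skipped by
// comparing the BatchInfo counts against the number of records submitted.
public final class BatchAudit {

    // numberRecordsProcessed already includes failures, so anything
    // below recordsSent was never attempted at all.
    public static long countSkipped(long recordsSent, long recordsProcessed) {
        return recordsSent - recordsProcessed;
    }

    public static void main(String[] args) {
        // Numbers from the example above: 100k sent, 64200 processed
        // (16600 of those failed).
        long skipped = countSkipped(100_000, 64_200);
        System.out.println(skipped + " records were never processed");
    }
}
```

With the counts above this reports 35800 skipped records, which is exactly the gap that currently goes unnoticed in the graph.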
The skipping of records is caused by how Salesforce processes batches. Detailed information is here: https://developer.salesforce.com/docs/atlas.en-us.api_asynch.meta/api_asynch/asynch_api_planning_guidelines.htm
The root cause is lock contention. SalesforceBulkWriter always uses parallel batch mode, so all batches in the job run at once. In the example case, all batches try to lock the parent Account record at the same time; the lock contention is massive, and a batch fails after it cannot acquire the lock on the parent Account 10 times. This can be solved by supporting serial batch processing.
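Serial processing is selected when the Bulk API job is created. A sketch of the job-creation request body, assuming the XML Bulk API (the `concurrencyMode` element with value `Serial` is documented there; `Order` stands in for whatever object is being loaded):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<jobInfo xmlns="http://www.force.com/2009/06/asyncapi/dataload">
  <operation>insert</operation>
  <object>Order</object>
  <!-- Serial makes batches run one at a time, avoiding lock contention
       on the shared parent record; the default is Parallel. -->
  <concurrencyMode>Serial</concurrencyMode>
  <contentType>CSV</contentType>
</jobInfo>
```

Serial mode trades throughput for reliability, so it should probably be an opt-in component property rather than the new default.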