Lazy binaries from cloud providers (S3) are read prematurely
We're using the cloud plugin to read blobs from S3.
We noticed a lot of metadata queries hitting our S3 bucket. For example, if we ask the document model for one attachment and load the files schema, the cloud provider goes and reads the metadata for all attached files.
This is forced by BinaryBlobProvider in its readBlob method (line 73), where it asks for the LazyBinary's length, making it less lazy than it could be.
We tried reading attachments both from a document model and from the document.get operation, with the same result.
Is there any other operation that fetches a blob without going through readBlob and its getLength() call?
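To make the problem concrete, here is a minimal sketch of the pattern being described. These are hypothetical classes, not the actual Nuxeo BinaryBlobProvider or LazyBinary: the point is only that a binary which defers its length lookup stops being lazy the moment any code path calls getLength(), costing one remote metadata round-trip per binary even when the content itself is never streamed.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical lazy binary: the length is unknown until a remote
// metadata call (think: S3 HEAD request) is made on demand.
class LazyBinary {
    private final String key;          // object key in the remote store
    private Long cachedLength;         // null until metadata is fetched
    private final AtomicInteger remoteMetadataCalls = new AtomicInteger();

    LazyBinary(String key) {
        this.key = key;
    }

    /** Simulates a remote metadata request that returns the object size. */
    private long fetchRemoteLength() {
        remoteMetadataCalls.incrementAndGet();
        return 1024L; // placeholder size for the sketch
    }

    /** Lazy only until someone asks for the length. */
    long getLength() {
        if (cachedLength == null) {
            cachedLength = fetchRemoteLength();
        }
        return cachedLength;
    }

    int remoteCalls() {
        return remoteMetadataCalls.get();
    }
}
```

In this sketch, a readBlob-style method that eagerly fills in the blob length would trigger fetchRemoteLength() once per attachment, which matches the burst of metadata queries observed against the bucket.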
What Nuxeo version are you using? There have been fixes very recently to improve the way we deal with lazy binaries and the fetching of the length metadata in some cases (NXP-18369).
To answer your last question: at this time getLength() is always called, but it usually hits the local cache of S3 files, so it doesn't need to use the network as often as one might fear. There could still be improvements, though; let me know if you still see the issue with a version of Nuxeo marked as fixed in the above ticket.
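The cache-first behavior described above can be sketched as follows. This is an illustrative model, not the actual Nuxeo cache implementation: when the binary is already in the local file cache, the length is answered locally; only a cache miss costs a network request.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical cache-first length resolver: getLength() only goes to
// the network when the binary is absent from the local cache.
class CachedLengthResolver {
    private final Map<String, Long> localCache = new HashMap<>(); // key -> length of cached file
    private int networkCalls = 0;

    /** Simulates a binary already present in the local file cache. */
    void putInCache(String key, long length) {
        localCache.put(key, length);
    }

    long getLength(String key) {
        Long cached = localCache.get(key);
        if (cached != null) {
            return cached;       // served from the local cache, no network
        }
        networkCalls++;          // simulated remote metadata request
        long length = 1024L;     // placeholder remote size for the sketch
        localCache.put(key, length);
        return length;
    }

    int networkCalls() {
        return networkCalls;
    }
}
```

Under this model, the always-called getLength() is cheap for binaries that were already downloaded, which is why the metadata traffic is lower than the call pattern alone would suggest.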
What's the process for getting a patch submitted to the Nuxeo dev team?