
The application reliability team at your company has recently added a debug feature to their backend service to send all server events to Google Cloud Storage for eventual analysis. The event records are at least 50 KB and at most 15 MB in size and are expected to peak at 3,000 events per second. Given the high volume and variability of the event sizes, you want to minimize data loss during this process. Which method should you implement to achieve this?
A
Append metadata to the file body. Compress individual files. Name files using serverName-Timestamp. Create a new bucket if the current bucket is older than 1 hour and save individual files to the new bucket. Otherwise, save files to the existing bucket.
B
Batch every 10,000 events with a single manifest file for metadata. Compress the event files and the manifest file into a single archive file. Name files using serverName-EventSequence. Create a new bucket if the current bucket is older than 1 day and save the single archive file to the new bucket. Otherwise, save the single archive file to the existing bucket.
C
Compress individual files. Name files using serverName-EventSequence. Save files to one bucket. Set custom metadata headers for each object after saving.
D
Append metadata to the file body. Compress individual files. Name files using a random prefix pattern. Save files to one bucket.
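
For reference, the naming technique described in option D (compress each event individually, use a random object-name prefix, write everything to one bucket) might look roughly like the minimal sketch below. It assumes the google-cloud-storage Python client with Application Default Credentials; the bucket name and function names are hypothetical, and the snippet only illustrates the pattern rather than a production uploader.

```python
# Minimal sketch: upload one compressed event record to Cloud Storage
# under a randomized object-name prefix (the pattern named in option D).
import gzip
import uuid

from google.cloud import storage


def upload_event(event_body: bytes, server_name: str, event_id: str) -> str:
    """Compress a single event and write it under a random prefix."""
    client = storage.Client()
    bucket = client.bucket("example-debug-events")  # hypothetical bucket name

    # A random prefix spreads object names across the keyspace, unlike
    # strictly sequential serverName-Timestamp or serverName-EventSequence
    # names, which concentrate writes on adjacent keys.
    object_name = f"{uuid.uuid4().hex[:8]}/{server_name}-{event_id}.json.gz"

    blob = bucket.blob(object_name)
    blob.upload_from_string(
        gzip.compress(event_body),        # compress individual files
        content_type="application/gzip",
    )
    return object_name
```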