The application reliability team at your company has recently added a debug feature to their backend service to send all server events to Google Cloud Storage for eventual analysis. The event records are at least 50 KB and at most 15 MB in size and are expected to peak at 3,000 events per second. Given the high volume and variability of the event sizes, you want to minimize data loss during this process. Which method should you implement to achieve this?
Explanation:
The correct answer is D: append metadata to the file body, compress individual files, name files with a random prefix pattern, and save the files to one bucket. Google Cloud Storage best practice for sustaining high write rates is to avoid sequential object names: a randomized prefix distributes writes evenly across the storage system's index key ranges, letting it auto-scale and avoiding hotspots, and a longer randomized prefix provides more effective auto-scaling when ramping to very high read and write rates. Compressing individual files also reduces the volume written per event, which helps at 3,000 events per second with objects up to 15 MB.
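As a minimal sketch of the random-prefix naming idea (the function and directory names below are illustrative assumptions, not part of any Google API), a short random hex string can be prepended to each object name so that concurrent uploads spread across the bucket's index key range instead of hammering one contiguous range:

```python
import secrets
import uuid

def random_prefixed_name(base_dir: str, extension: str = ".json.gz") -> str:
    """Build a Cloud Storage object name with a short random hex prefix.

    The leading random component spreads sequential uploads across the
    bucket's index key range, which helps Cloud Storage auto-scale under
    high write rates; a longer prefix gives finer-grained distribution.
    """
    prefix = secrets.token_hex(3)  # 6 hex characters, e.g. "a1b2c3"
    return f"{prefix}/{base_dir}/event-{uuid.uuid4()}{extension}"

# Example: generate a name for a compressed event record.
# The actual upload (e.g. via the google-cloud-storage client) is
# omitted here; only the naming scheme is shown.
name = random_prefixed_name("events/2024-05-01")
```

Each call yields a name like `9f3c1a/events/2024-05-01/event-<uuid>.json.gz`; because the first path component is random, two bursts of writes rarely land on adjacent index keys.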