
Answer-first summary for fast verification
Answer: Use rowsLimit and bytesLimitPerFile to sample data and use CloudStorageRegexFileSet to limit scans.
Option C is correct because it directly reduces Cloud DLP inspection costs: rowsLimit samples BigQuery tables instead of scanning every row, bytesLimitPerFile caps how many bytes of each Cloud Storage object are inspected, and CloudStorageRegexFileSet restricts scans to matching objects. All three reduce the volume of data processed, which is what drives DLP scan pricing. The community discussion supports this with full consensus, high upvotes, and references to Google's documentation on DLP sampling. Option A is partially correct but incomplete: it ties the limits to specific locations (BigQuery data outside the US, multiregional buckets), even though sampling applies regardless of location, and it omits regex file sets. Option B is incorrect because minimizing transformation units relates to de-identification, not to the cost of inspection scans. Option D is incorrect: FindingLimits caps the number of findings reported and TimespanConfig narrows a scan by time range, but neither samples the data being scanned.
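The options from answer C map onto fields of the Cloud DLP API's StorageConfig. Below is a minimal sketch of how a job configuration could combine them, written as plain Python dicts whose keys mirror the API's field names; the project, dataset, table, bucket name, regexes, and limit values are all placeholders, not values from the question.

```python
# Sketch: StorageConfig-style dicts applying the sampling/limiting options
# from option C. Field names mirror the Cloud DLP API; all concrete values
# (dataset, table, bucket, regexes, limits) are illustrative placeholders.

def build_storage_configs(project: str):
    # BigQuery: rows_limit samples the table instead of scanning every row.
    bigquery_config = {
        "big_query_options": {
            "table_reference": {
                "project_id": project,
                "dataset_id": "analytics",   # placeholder dataset
                "table_id": "events",        # placeholder table
            },
            "rows_limit": 1000,              # scan at most 1000 rows
            "sample_method": "RANDOM_START",
        }
    }
    # Cloud Storage: bytes_limit_per_file caps how much of each object is
    # inspected, and a CloudStorageRegexFileSet restricts which objects
    # are scanned at all.
    gcs_config = {
        "cloud_storage_options": {
            "file_set": {
                "regex_file_set": {
                    "bucket_name": "example-bucket",      # placeholder bucket
                    "include_regex": [r"exports/.*\.csv"],
                    "exclude_regex": [r".*/tmp/.*"],
                }
            },
            "bytes_limit_per_file": 1048576,  # at most 1 MiB per file
        }
    }
    return bigquery_config, gcs_config

bq, gcs = build_storage_configs("my-project")
print(bq["big_query_options"]["rows_limit"])
print(gcs["cloud_storage_options"]["bytes_limit_per_file"])
```

In a real job these dicts would be passed as the storage_config of an inspect job request; the point here is only to show that every cost lever named in option C is a concrete, co-existing field of the same configuration.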
Author: LeetQuiz Editorial Team
Your organization is expanding its use of the Cloud Data Loss Prevention (Cloud DLP) API and you need to minimize costs. The target data for DLP scans resides in both Cloud Storage and BigQuery. The location and region are specified as a suffix in the resource name.
What cost reduction strategies would you recommend?
A
Set appropriate rowsLimit value on BigQuery data hosted outside the US and set appropriate bytesLimitPerFile value on multiregional Cloud Storage buckets.
B
Set appropriate rowsLimit value on BigQuery data hosted outside the US, and minimize transformation units on multiregional Cloud Storage buckets.
C
Use rowsLimit and bytesLimitPerFile to sample data and use CloudStorageRegexFileSet to limit scans.
D
Use FindingLimits and TimespanConfig to sample data and minimize transformation units.