
Answer-first summary for fast verification
Answer: Batch
The question asks which feature scaling strategy to implement for the local penalty detection data, under the constraint that models must be written in BrainScript (CNTK's network description language). In CNTK/BrainScript, batch normalization is the primary normalization method supported for neural networks, as confirmed by the Microsoft documentation references in the community discussion. Batch normalization accelerates training by reducing internal covariate shift, and it handles the varying input sizes and formats described in the scenario. Cosine normalization might cope with varying formats, but it is not the standard or documented method in BrainScript for this purpose. Streaming and weight normalization are even less relevant: the task is feature scaling for neural networks in CNTK, where batch normalization is the documented and optimal choice.
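As a rough illustration, a BrainScript model definition might insert batch normalization between a dense layer and its nonlinearity using CNTK's layers library. This is a minimal sketch, not taken from the source: it assumes the standard `Sequential`, `DenseLayer`, `BatchNormalizationLayer`, and `ReLU` building blocks, and the dimensions `256` and `numClasses` are illustrative placeholders.

```
model = Sequential (
    DenseLayer {256} :                 # hidden layer (256 is a placeholder width)
    BatchNormalizationLayer {} :       # batch-normalize the activations
    ReLU :                             # nonlinearity applied after normalization
    DenseLayer {numClasses}            # numClasses is a placeholder output dimension
)
```

Placing `BatchNormalizationLayer {}` before the activation function is the usual pattern; the layer learns scale and bias parameters and tracks running statistics for use at inference time.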
Author: LeetQuiz Editorial Team
You need to implement a feature scaling strategy for the local penalty detection data. Which normalization type should you use?
A. Streaming
B. Weight
C. Batch
D. Cosine