Goal-oriented visual search occurs when a person intentionally seeks a target in the visual environment. In augmented reality (AR) environments, visual search can be facilitated by overlaying virtual cues on the person's field of view. Traditional explicit AR cues, however, can degrade visual search performance because they introduce distortions into the scene. An alternative to explicit cueing, known as subtle cueing, has been proposed as a clutter-neutral method of enhancing visual search in video see-through AR. The effects of subtle cueing are still not well understood, and more research is needed to determine the optimal methods of applying it in AR. We performed two experiments investigating the effects of scene clutter and of subtle cue opacity, size, and shape on visual search performance. We also introduce a novel method of experimentally manipulating scene clutter in a natural scene while controlling for other variables. The findings provide supporting evidence for the subtlety of the cue, and show that the clutter conditions of the scene can serve both as a global classifier and as a local measure of performance.