The motivation of this paper is to build a bridge between phonology and phonetics by means of implementation models. The challenge is to explain a wide range of phonetic forms across diverse speaking styles, including laboratory speech, spontaneous speech, and fluent and non-fluent speech, and to model them as orderly variations of one coherent communication system. In this paper, we focus on the effect of sentence planning on articulatory gestures. Data from spontaneous speech provide strong evidence for anticipatory effects and partial reduction effects; how and when they occur reflects the speaker's sentence-planning strategies. It is hypothesized that much of the discrepancy between laboratory speech and spontaneous speech can be accounted for by a model that can represent these effects, such as the weights used in the Stem-ML model.