Building Anticipatory Software

Recently, I have devoted many cycles to the question, "What is 'Personalized Software'?"

In Douglas Hofstadter's Gödel, Escher, Bach: An Eternal Golden Braid, I came across this self-evident but illuminating passage:

"If you punch "1" into an adding machine, and then add 1 to it, and then add 1 again, and again, and again, and continue doing so for hours and hours, the machine will never learn to anicipate you, and do it itself, although any person would pick up the repetitive behavior very quickly. Or, to take a silly example, a car will never pick up the idea, no matter how much or how well it isdriven, that it is supposed to avoid other cars and obstacles on the road; and it will never learn even the most frequently traveled routes of its owner."

Should anticipation (or adaptiveness, or intuition) be considered a P0 or a P1 requirement for the personalized "machines" we are creating? In other words, can we assume you will be just as happy with a dumb car that merely switches its seats and controls from [Driver1_Setting] to [Driver2_Setting] as you approach it as with a smart car that adapts those same controls to your preferences (say, back angle and body temperature) and keeps making incremental adjustments over time?
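The contrast I have in mind looks roughly like the sketch below: the "dumb" car only swaps between stored profiles, while the "smart" car nudges a driver's stored preference toward each adjustment it observes, here with a simple exponential moving average. The class names, fields, and learning rate are illustrative assumptions, not a proposed design.

```python
from dataclasses import dataclass


@dataclass
class SeatPreference:
    back_angle_deg: float
    temperature_c: float


class DumbCar:
    """Reverts to whichever driver's fixed profile is currently active."""

    def __init__(self, profiles):
        self.profiles = profiles  # e.g. {"Driver1": SeatPreference(105.0, 21.0), ...}

    def settings_for(self, driver):
        return self.profiles[driver]


class SmartCar:
    """Adapts each driver's profile incrementally as it observes manual adjustments."""

    def __init__(self, profiles, learning_rate=0.2):
        self.profiles = profiles
        self.learning_rate = learning_rate

    def observe_adjustment(self, driver, observed: SeatPreference):
        current = self.profiles[driver]
        a = self.learning_rate
        # Move a fraction of the way toward what the driver actually set today.
        self.profiles[driver] = SeatPreference(
            back_angle_deg=(1 - a) * current.back_angle_deg + a * observed.back_angle_deg,
            temperature_c=(1 - a) * current.temperature_c + a * observed.temperature_c,
        )

    def settings_for(self, driver):
        return self.profiles[driver]
```

The dumb car is perfectly personalized in the static sense; only the smart car exhibits the anticipation Hofstadter is pointing at, because its stored preferences drift toward observed behavior over time.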