Time-lapse seismic uses repeated seismic surveys to monitor fluid changes in the subsurface. Ideally, the time-lapse data sets should be identical except at the target region (i.e., the reservoir), where the fluid changes occur. Unfortunately, it is almost impossible to acquire identical data, for reasons such as static changes in the near-surface or variations in source and receiver positioning between surveys. To increase the accuracy of the 4D signal and reduce the noise, we propose to process the time-lapse data using a machine-learning methodology. Specifically, we train a recurrent neural network (RNN) model to map the data from the monitor to the baseline. The learned RNN model accounts for the 4D overburden changes; therefore, the difference between the predicted baseline and the actual baseline data sets represents the target signal. We validate the method on synthetic data and demonstrate the improvement of the 4D signal by imaging the reservoir and computing the normalized root mean square (NRMS).
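As a point of reference for the repeatability metric mentioned above, the sketch below computes the standard normalized root mean square (NRMS) between two traces, NRMS = 200 * RMS(a - b) / (RMS(a) + RMS(b)), expressed in percent. The synthetic baseline and monitor traces here are illustrative toys, not the data used in the paper, and the exact NRMS convention the authors adopt may differ in detail.

```python
import numpy as np

def nrms(a, b):
    """Normalized RMS difference between two traces, in percent.

    Common 4D-seismic repeatability metric:
        NRMS = 200 * RMS(a - b) / (RMS(a) + RMS(b))
    0% means identical traces; values near 200% indicate
    anti-correlated traces.
    """
    rms = lambda x: np.sqrt(np.mean(np.square(x)))
    return 200.0 * rms(a - b) / (rms(a) + rms(b))

# Toy example (assumed data): a monitor trace equal to the
# baseline plus weak non-repeatable noise.
rng = np.random.default_rng(0)
baseline = np.sin(np.linspace(0.0, 8.0 * np.pi, 500))
monitor = baseline + 0.05 * rng.standard_normal(500)

print(f"NRMS(baseline, baseline) = {nrms(baseline, baseline):.1f}%")
print(f"NRMS(baseline, monitor)  = {nrms(baseline, monitor):.1f}%")
```

In this setting, a successful monitor-to-baseline mapping would drive the NRMS of the residual toward zero everywhere except at the reservoir, where the remaining difference is the 4D signal of interest.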