This paper investigates the minimum-time trajectory planning problem for an autonomous vehicle. To deal with the unknown and uncertain dynamics of the vehicle, the trajectory planning problem is modeled as a Markov decision process with a continuous action space. To solve it, we propose a continuous advantage learning (CAL) algorithm based on the advantage-value equation, and adopt a stochastic policy in the form of a multivariate Gaussian distribution to encourage exploration. A shared actor-critic architecture is designed to simultaneously approximate the stochastic policy and the value function, which greatly reduces the computational burden compared with standard actor-critic methods. Moreover, the shared actor-critic is updated with a loss function built as the mean-square consistency error of the advantage-value equation, and the update step is performed several times at each time step to improve data efficiency. Simulations validate the effectiveness of the proposed CAL algorithm and show that it outperforms the soft actor-critic algorithm.
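To make the shared actor-critic idea concrete, the following is a minimal PyTorch sketch of one plausible realization: a single network whose shared trunk feeds both a diagonal-Gaussian policy head and a state-value head, trained with a mean-square consistency residual. The layer sizes, the identification of the advantage with a scaled policy log-probability, and the one-step relation used in `consistency_loss` are assumptions for illustration only; the paper's exact advantage-value equation and network design may differ.

```python
import torch
import torch.nn as nn


class SharedActorCritic(nn.Module):
    """Shared trunk feeding a Gaussian policy head and a state-value head,
    so policy and value are approximated by one set of parameters."""

    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 64):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
        )
        self.mu = nn.Linear(hidden, act_dim)               # mean of the Gaussian policy
        self.log_std = nn.Parameter(torch.zeros(act_dim))  # diagonal covariance (log std)
        self.v = nn.Linear(hidden, 1)                      # state-value head

    def forward(self, obs: torch.Tensor):
        h = self.trunk(obs)
        dist = torch.distributions.Normal(self.mu(h), self.log_std.exp())
        return dist, self.v(h).squeeze(-1)


def consistency_loss(net, obs, act, rew, next_obs, gamma=0.99, alpha=1.0):
    """Mean-square residual of an ASSUMED one-step advantage-value relation
    alpha * log pi(a|s) = r + gamma * V(s') - V(s); this is an illustrative
    stand-in, not the paper's exact equation."""
    dist, v = net(obs)
    with torch.no_grad():
        _, v_next = net(next_obs)            # bootstrapped target, no gradient
    log_prob = dist.log_prob(act).sum(-1)    # diagonal-Gaussian log-likelihood
    residual = alpha * log_prob - (rew + gamma * v_next - v)
    return (residual ** 2).mean()
```

In line with the abstract, such a loss would be minimized several times per environment time step on a batch of stored transitions, so each interaction with the vehicle is reused for multiple gradient updates.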