We introduce a general methodology for update rules that account for arbitrary interevent time distributions in simulations of interacting agents. In particular, we consider update rules that depend on the state of the agent, so that the update itself becomes part of the dynamical model. As an illustration, we study the voter model on fully connected, random, and scale-free networks with an update probability inversely proportional to the persistence, that is, the time since the agent's last event. We find that, in contrast to standard update rules, the system orders slowly in the thermodynamic limit. The approach to the absorbing state is characterized by a power-law decay of the density of interfaces, and we observe that the mean time to reach the absorbing state may not be well defined.
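To make the state-dependent update rule concrete, the following is a minimal sketch of a voter-model simulation on a fully connected network in which each agent is updated with probability inversely proportional to its persistence time. All parameter names and conventions here are illustrative assumptions, not the paper's actual implementation; in particular, whether an "event" means any update attempt or only an actual change of state, and how the persistence clock is reset, follow the paper's definitions and are only assumed below.

```python
import random

def simulate(N=1000, steps=10_000, seed=0):
    """Sketch: voter model on a complete graph with update probability 1/tau_i,
    where tau_i is agent i's persistence (time since its last event).
    Conventions (event = any accepted update, tau reset to 1) are assumptions."""
    rng = random.Random(seed)
    state = [rng.randint(0, 1) for _ in range(N)]   # binary opinions
    tau = [1] * N                                   # persistence times

    history = []
    for _ in range(steps):
        for i in range(N):                          # one Monte Carlo step = N attempts
            # update probability inversely proportional to persistence
            if rng.random() < 1.0 / tau[i]:
                j = rng.randrange(N - 1)            # random neighbour != i (complete graph)
                if j >= i:
                    j += 1
                state[i] = state[j]                 # voter rule: copy neighbour's state
                tau[i] = 1                          # event occurred: reset persistence
            else:
                tau[i] += 1                         # no event: persistence grows
        m = sum(state) / N
        rho = 2.0 * m * (1.0 - m)                   # interface density on a complete graph (up to O(1/N))
        history.append(rho)
        if rho == 0.0:                              # absorbing (consensus) state reached
            break
    return history

if __name__ == "__main__":
    rho_t = simulate()
    print(f"ran {len(rho_t)} MC steps, final interface density {rho_t[-1]:.4f}")
```

Averaging the recorded interface density over many realizations would show how the ordering dynamics under this persistence-dependent rule differs from the standard random asynchronous update.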