Most existing iterative learning control algorithms are designed to improve tracking performance with respect to a given trajectory over a fixed time period. In this article, we design two iterative learning-based economic model predictive controllers for repetitive tasks in which no target trajectory is available. The controllers search for suboptimal trajectories with good performance by exploiting information from previous iterations. In contrast to existing works, the objective function is not assumed to be positive definite, so the formulation is not limited to tracking problems but can represent a more general economic performance index. The controllers learn from the previous closed-loop trajectory, yielding a performance that is guaranteed to be no worse than that of the previous iteration. Under standard assumptions in model predictive control, the recursive feasibility of both algorithms is ensured. We show that the fixed operation time algorithm guarantees non-degrading performance across iterations even without a dissipativity assumption. By allowing the operation time to vary, the flexible operation time algorithm can trade off operation time against system performance when a dissipativity assumption is satisfied. For both algorithms, each iteration is guaranteed to be completed within a uniformly bounded time duration.