We introduce Tutor4RL, a method that improves reinforcement learning (RL) performance during training by using external knowledge to guide the agent's decisions and experience. Current RL approaches need extensive experience to deliver good performance, which is not acceptable in many real systems where no simulation environment or substantial prior data is available. In Tutor4RL, external knowledge — such as expert or domain knowledge — is expressed as programmable functions that are fed to the RL agent. During its first steps, the agent uses these knowledge functions to decide the best action, guiding its exploration and providing better performance from the start. As the agent gathers experience, it increasingly exploits its learned policy, eventually leaving its tutor behind. We demonstrate Tutor4RL with a DQN agent. In our tests, Tutor4RL achieves more than 3 times higher reward at the beginning of training than an agent with no external knowledge.
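The hand-over from tutor to learned policy described above can be sketched as an action selector that follows the tutor's knowledge function with a probability that decays as training proceeds. This is a minimal illustration, not the paper's implementation: the function names and the geometric decay schedule are assumptions for the sketch.

```python
import random

def make_tutored_policy(tutor_fn, policy_fn, p0=1.0, decay=0.995):
    """Blend an external knowledge function with a learned policy.

    tutor_fn:  hypothetical programmable knowledge function, obs -> action
    policy_fn: the agent's learned policy (e.g., a DQN argmax), obs -> action
    p0, decay: assumed geometric schedule for the tutor's influence
    """
    state = {"p": p0}

    def act(obs):
        p = state["p"]
        state["p"] = p * decay  # tutor influence shrinks with experience
        if random.random() < p:
            return tutor_fn(obs)   # early on: follow external knowledge
        return policy_fn(obs)      # later: exploit the learned policy

    return act
```

With `decay=0.0`, the selector follows the tutor on the very first step and the learned policy thereafter, making the hand-over easy to see in isolation.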