Due to the highly variable execution context in which edge services run, adapting their behavior to the execution context is crucial to comply with their requirements. However, adapting service behavior is a challenging task because it is hard to anticipate the execution contexts in which a service will be deployed, as well as to assess the impact that each behavior change will produce. In order to provide this adaptation efficiently, we propose a Reinforcement Learning (RL) based Orchestration for Elastic Services. We implement and evaluate this approach by adapting an elastic service in different simulated execution contexts and comparing its performance to a heuristics-based approach. We show that elastic services achieve high precision and requirement-satisfaction rates while creating an overhead of less than 0.5% to the overall service. In particular, the RL approach proves to be more efficient than its rule-based counterpart, yielding 10 to 25% higher precision while being 25% less computationally expensive.