Show simple item record

dc.contributor.author  Ayimba, Constantine
dc.contributor.author  Casari, Paolo
dc.contributor.author  Mancuso, Vincenzo
dc.date.accessioned  2021-07-13T09:49:29Z
dc.date.available  2021-07-13T09:49:29Z
dc.date.issued  2021-06
dc.identifier.issn  1932-4537
dc.identifier.uri  http://hdl.handle.net/20.500.12761/971
dc.description.abstract  As a growing number of service and application providers choose cloud networks to deliver their services on a software-as-a-service (SaaS) basis, cloud providers need to make their provisioning systems agile enough to meet service level agreements (SLAs). At the same time, they should guard against over-provisioning, which limits their capacity to accommodate more tenants. To this end, we propose Short-term memory Q-Learning pRovisioning (SQLR, pronounced as "scaler"), a system employing a customized variant of the model-free reinforcement learning algorithm. It can reuse contextual knowledge learned from one workload to optimize the number of virtual machines (resources) allocated to serve other workload patterns. With minimal overhead, SQLR achieves comparable results to systems where resources are unconstrained. Our experiments show that we can reduce the amount of provisioned resources by about 20% with less than 1% overall service unavailability (due to blocking), while delivering similar response times to those of an over-provisioned system.
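The abstract describes a Q-learning agent that picks how many virtual machines to allocate against a varying workload. As a minimal sketch of that idea only, here is a tabular Q-learning loop for VM scaling; the state encoding, action set, reward shape, and hyperparameters are illustrative assumptions and are not taken from the SQLR paper.

```python
import random

# Hypothetical setup: the agent observes (current VMs, load level)
# and chooses to scale down, hold, or scale up.
MAX_VMS = 10
ACTIONS = (-1, 0, +1)
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

# Q-table: Q[(vms, load)] -> list of action values.
Q = {}

def q_values(state):
    return Q.setdefault(state, [0.0] * len(ACTIONS))

def choose_action(state):
    if random.random() < EPSILON:          # explore
        return random.randrange(len(ACTIONS))
    qs = q_values(state)
    return qs.index(max(qs))               # exploit best known action

def reward(vms, load):
    # Assumed reward: penalize SLA violations (under-provisioning)
    # more heavily than the cost of idle VMs (over-provisioning).
    if load > vms:
        return -10.0 * (load - vms)
    return -1.0 * (vms - load)

def step(vms, action_idx, load):
    vms = min(MAX_VMS, max(1, vms + ACTIONS[action_idx]))
    return vms, reward(vms, load)

def train(workload, episodes=200):
    for _ in range(episodes):
        vms = 1
        for t, load in enumerate(workload):
            state = (vms, load)
            a = choose_action(state)
            next_vms, r = step(vms, a, load)
            next_state = (next_vms, workload[(t + 1) % len(workload)])
            best_next = max(q_values(next_state))
            qs = q_values(state)
            # Standard Q-learning update rule.
            qs[a] += ALPHA * (r + GAMMA * best_next - qs[a])
            vms = next_vms

random.seed(0)
demand = [2, 3, 5, 6, 5, 3, 2, 1]          # synthetic diurnal load pattern
train(demand)
```

SQLR's "short-term memory" contribution for transferring knowledge across workloads goes beyond this plain tabular loop; the sketch only shows the underlying model-free Q-learning mechanism.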
dc.language.iso  eng
dc.publisher  IEEE Communications Society
dc.title  SQLR: Short-Term Memory Q-Learning for Elastic Provisioning  en
dc.type  journal article
dc.journal.title  IEEE Transactions on Network and Service Management
dc.rights.accessRights  open access
dc.volume.number  18
dc.issue.number  2
dc.page.final  1869
dc.page.initial  1850
dc.description.status  pub
dc.eprint.id  http://eprints.networks.imdea.org/id/eprint/2326


Files in this item

This item appears in the following collection(s)
