Lyricist.ai uses artificial intelligence to accelerate songwriting, aiming to become a helpful, productive tool for creators of all kinds. It can generate unlimited lyric inspiration through writing-style adjustment, multisyllabic rhyming, and keyword embedding. Lyricist.ai puts creativity back in creators’ hands and lets AI take care of the hassle, empowering lyricists of all levels to start writing and overcome writer’s block.
Customer Needs
As demand continues to grow, Lyricist.ai wants to move its product to a dedicated cluster. Sharing clusters with other projects would limit its future expansion.
Because clusters were shared with other projects, Lyricist.ai engineers had no management access to the platform, which made day-to-day operations inconvenient.
Since Lyricist.ai is already a production service, the migration had to proceed without affecting users.
The existing virtual machine options were limited, offering less flexibility at a higher price.
Migration Solution
After clarifying Lyricist.ai's requirements, propose a wider range of AWS virtual machine options suited to its workload.
Help Lyricist.ai grant distinct permissions to separate roles on AWS.
Use the rehost (lift-and-shift) strategy to migrate to the AWS Cloud.
Adopt Amazon Elastic Kubernetes Service (EKS) and Amazon Elastic Container Registry (ECR) to deploy the website and backend API.
Use Amazon Relational Database Service (RDS) for managed data storage.
Use Amazon ElastiCache for Redis to support in-memory data stores.
Adopt AWS Database Migration Service (DMS) to achieve zero-downtime database migration.
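To illustrate why an in-memory layer sits alongside the managed database in an architecture like the one above, here is a minimal Python sketch of the cache-aside pattern commonly used with ElastiCache for Redis in front of RDS. Plain dictionaries stand in for both stores, and the `get_lyric` function and key names are hypothetical, not Lyricist.ai's actual code.

```python
# Stand-ins for the two stores (illustrative only):
cache = {}                                          # ElastiCache for Redis
database = {"lyric:42": "unlimited inspiration"}    # RDS primary store


def get_lyric(key):
    """Cache-aside read: serve from the cache when possible,
    otherwise load from the database and populate the cache."""
    if key in cache:               # cache hit: no database round trip
        return cache[key]
    value = database.get(key)      # cache miss: query the primary store
    if value is not None:
        cache[key] = value         # store for subsequent reads
    return value
```

The first read of a key goes to the database; repeat reads are served from memory, which is the behavior the Redis layer provides at scale.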
Outcome
The Lyricist.ai development team gained full administrative control and no longer needs to delegate to a third party, cutting the time cost for the personnel involved by 20%.
The variety and flexibility of AWS virtual machine options allowed configurations better matched to the use cases, saving an average of 25% on machine costs.
With future expansion in mind, the dedicated cluster simplifies architecture planning and cluster management, saving 40% of the human resources previously spent on cross-team communication.