iCad’s Axxent Hub delivers innovative software and services designed to raise the bar for internet-assisted communication, data storage and collaboration.
For Axxent Hub, the migration to AWS was driven mainly by the need for greater agility, scalability, disaster readiness and cost savings. Through the process, we moved the application to larger, faster systems to boost performance and gained the option to scale resources up and down according to usage.
Migration to AWS – Resources & Architecture
SSD-based storage makes the application faster, while better CPUs and larger RAM let each instance handle more concurrent requests. All in all, the new infrastructure supports the Hub with higher cost efficiency and system performance.
Our objective was to build a secure, HIPAA-compliant infrastructure. We chose three VPCs: one for hosting the test and development systems, one for the production systems, and a shared VPC to host the VPN server and the SVN (source code) server.
We implemented network access control lists and kept the production VPC out of developers' reach. Amazon EC2 instances provided scalable compute capacity, backed by Amazon S3 for storage, and the network was configured so that only approved users connected through the VPN could reach the development or production networks.
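The VPN-only access rule described above can be sketched as a security group in CloudFormation. This is a minimal illustration, not the actual template used: the VPC logical name and the VPN subnet CIDR are assumptions.

```yaml
# Illustrative CloudFormation fragment -- not the actual production template.
# Allows SSH and HTTP only from the VPN subnet; 10.0.0.0/24 is an assumed CIDR.
DevAccessSecurityGroup:
  Type: AWS::EC2::SecurityGroup
  Properties:
    GroupDescription: Allow access only from the VPN subnet
    VpcId: !Ref DevVpc              # assumed logical name for the dev VPC
    SecurityGroupIngress:
      - IpProtocol: tcp
        FromPort: 22
        ToPort: 22
        CidrIp: 10.0.0.0/24         # assumed VPN subnet
      - IpProtocol: tcp
        FromPort: 80
        ToPort: 80
        CidrIp: 10.0.0.0/24
```

With a rule set like this, instances in the development VPC simply refuse traffic that does not originate from the VPN, which is what keeps the production VPC out of developers' direct reach.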
List of AWS and other Services used:
- Amazon Simple Storage Service (Amazon S3)
- Amazon Elastic Compute Cloud (Amazon EC2)
- Virtual Private Network (VPN)
- Identity and Access Management (IAM)
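Of the services listed, IAM is what enforces who may touch which resources. A minimal read-only policy might look like the sketch below; the bucket name is purely illustrative, not one used by the project.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowReadOnlyS3",
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::example-hub-bucket",
        "arn:aws:s3:::example-hub-bucket/*"
      ]
    }
  ]
}
```

Attaching narrowly scoped policies like this to users or roles keeps developer access limited to exactly the buckets and actions they need.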
One of the most convincing benefits of migrating to AWS was the ability to scale resources up or down on demand. Unlike the paid annual subscriptions of the previous setup, we could not only add resources but also release them when they were no longer needed.
To create this highly available, on-demand system, we set up two appropriately sized EC2 instances in an Auto Scaling group. The first ran SVN and Apache, making SVN available over HTTP; the second ran Apache and PHP, backed by an appropriately sized Amazon RDS instance, to power the dev node.
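Serving SVN over HTTP with Apache is typically done through mod_dav_svn. A minimal sketch of such a configuration follows; the file path, repository root and realm name are assumptions, not the project's actual values.

```apache
# /etc/httpd/conf.d/subversion.conf -- illustrative fragment only.
LoadModule dav_svn_module     modules/mod_dav_svn.so
LoadModule authz_svn_module   modules/mod_authz_svn.so

<Location /svn>
    DAV svn
    SVNParentPath /var/svn            # assumed repository root
    AuthType Basic
    AuthName "Axxent Hub SVN"         # assumed realm name
    AuthUserFile /etc/svn-auth-users  # assumed htpasswd file
    Require valid-user
</Location>
```

With this in place, developers reach repositories at URLs like `http://<server>/svn/<repo>` after authenticating, which is what makes check-in and check-out over HTTP possible.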
Both instances were placed behind an Elastic Load Balancer (ELB). Next, we copied all application files over to the new servers, updated the settings for server name, IP address and database connection, and applied the required changes to the configuration files.
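For a PHP application, updating the database connection settings usually amounts to editing a config file so it points at the new RDS endpoint. The fragment below is a hedged sketch: the endpoint, database name and user are placeholders, not the real values.

```php
<?php
// config.php -- illustrative fragment; endpoint and credentials are placeholders.
define('DB_HOST', 'example-db.abc123.us-east-1.rds.amazonaws.com'); // new RDS endpoint
define('DB_NAME', 'hub_app');                                       // assumed schema name
define('DB_USER', 'app_user');                                      // assumed app user
define('DB_PASS', getenv('DB_PASSWORD'));  // keep the secret out of the file

// The application then connects through PDO using these constants.
$pdo = new PDO(
    'mysql:host=' . DB_HOST . ';dbname=' . DB_NAME . ';charset=utf8',
    DB_USER,
    DB_PASS
);
```

Keeping these values in one file means the migration touches a single, well-known place rather than connection strings scattered through the codebase.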
What Improvements Did We Notice?
The new setup scored higher on performance, security, scalability, disaster readiness and more. Let’s take each aspect in detail:
- Performance: Better infrastructure lays the foundation for better performance. As part of the migration we moved the applications to larger, faster systems made available by AWS: better CPUs, larger RAM, bigger storage volumes and SSD-based storage wherever performance mattered. These upgrades helped the applications run better and faster.
- Scalability: Scalability is a built-in perk of on-demand cloud infrastructure. It let us match cost to actual requirements, quickly scale up any heavily used system when needed, and just as quickly add storage space. None of this was possible in the earlier hosted environment.
- Availability: We placed the production systems inside the Auto Scaling group mentioned above, which spanned two Availability Zones. This gave the system the ability to recover and restart automatically within minutes if something went wrong.
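The two-AZ Auto Scaling group described above can be sketched in CloudFormation roughly as follows. Every concrete value here — AMI, instance type, subnet IDs, group sizes and the ELB resource name — is an illustrative assumption, not the configuration actually deployed.

```yaml
# Illustrative Auto Scaling fragment spanning two Availability Zones.
WebLaunchConfig:
  Type: AWS::AutoScaling::LaunchConfiguration
  Properties:
    ImageId: ami-12345678            # assumed AMI with Apache/PHP baked in
    InstanceType: m3.large           # assumed instance size

WebAutoScalingGroup:
  Type: AWS::AutoScaling::AutoScalingGroup
  Properties:
    LaunchConfigurationName: !Ref WebLaunchConfig
    MinSize: "2"                     # one instance per Availability Zone
    MaxSize: "4"
    VPCZoneIdentifier:               # subnets in two different AZs (assumed IDs)
      - subnet-aaaa1111
      - subnet-bbbb2222
    LoadBalancerNames:
      - !Ref WebElb                  # assumed Classic ELB defined elsewhere
```

Because the group keeps a minimum of one instance per zone and the ELB health-checks them, the loss of an instance or a whole zone triggers an automatic replacement rather than an outage.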
- Disaster Readiness: To be effective, a disaster readiness platform must fulfill a few common objectives: speed, automation and safety. Our approach focused on the user experience of triggering the fail-over, and on keeping the systems on the disaster readiness platform exactly similar to the primary site.
- Clean-up: When systems have been running on-premise for some time, inefficiencies, short-cuts and workarounds creep in — applied in response to sudden short-term or permanent changes — creating deviations from the standard operating environment and best practices. We took this opportunity to fix such issues and make a clean start.
Once the setup was ready, we worked with the iCad staff to make sure they could connect to the new infrastructure over the VPN and access the SVN server to check code in and out. This validated access to the new dev node and the application.