
Multicloud Migration of Application and Database to AWS and GCP

In this project, we are going to seamlessly migrate an application and its database to a multi-cloud environment, strategically leveraging the strengths of both Amazon Web Services (AWS) and Google Cloud Platform (GCP). At the heart of the project lies a robust, versatile application that generates and manages valuable data in PDF format.

Key highlights of our project:

Data Storage: We are set to store the application’s data in the form of PDF files. These files will find their home in secure AWS S3 buckets, harnessing the robust storage capabilities of AWS.

Containerization: Our application will undergo a transformative journey, being containerized and residing within the Google Container Registry. This approach not only enhances portability but also streamlines deployment, ensuring rapid and reliable scaling.

Deployment: Google Kubernetes Engine (GKE) emerges as our deployment platform of choice, offering the orchestration and management needed to facilitate the seamless operation of containerized applications.

Database Residency: The database underpinning our application will take root in the Google Cloud SQL environment. Google Cloud SQL, with its high availability and automatic backups, promises a robust foundation for our data.

Automating Infrastructure Provisioning with Terraform: Terraform will enable us to define, deploy, and manage our multi-cloud infrastructure efficiently, reducing manual intervention and enhancing consistency.

STEPS TO IMPLEMENT THE PROJECT

Amazon Web Services

In the AWS console, create an IAM user for Terraform (for example, terraform), attach the AmazonS3FullAccess permission policy, review and create the user, then create an access key and download the resulting accessKeys.csv file; it is needed in the next step.
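If you prefer working from a terminal, the same IAM setup can be sketched with the AWS CLI; the user name terraform is only an illustrative assumption, and note that the script used later expects the console-downloaded accessKeys.csv format:

# Create an IAM user for Terraform (the user name here is only an example)
aws iam create-user --user-name terraform

# Attach the AmazonS3FullAccess managed policy to the user
aws iam attach-user-policy --user-name terraform --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess

# Create an access key pair (printed as JSON; the console download is a CSV)
aws iam create-access-key --user-name terraform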

Google Cloud Platform (GCP)

# Stage and extract the project files
mkdir mission1_en
mv mission1.zip mission1_en
cd mission1_en
unzip mission1.zip
mv ~/accessKeys.csv mission1/en
cd mission1/en
chmod +x *.sh
# Import the AWS access key downloaded in the previous step
./aws_set_credentials.sh accessKeys.csv
# Point gcloud at your GCP project
gcloud config set project <project_id>
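Optionally, confirm that both sets of credentials are in place before continuing; these are standard verification commands and not part of the project scripts:

# Should print the AWS account and user tied to the imported access key
aws sts get-caller-identity

# Should print the project id you just configured
gcloud config get-value project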

Execute the commands below to set the project and enable the required APIs:

./gcp_set_project.sh
gcloud services enable containerregistry.googleapis.com 
gcloud services enable container.googleapis.com 
gcloud services enable sqladmin.googleapis.com
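You can optionally confirm that the three APIs are now enabled:

# List enabled services and filter for the ones this project needs
gcloud services list --enabled | grep -E 'containerregistry|container.googleapis|sqladmin'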

Ensure the bucket name you define in Terraform is globally unique.

cd ~/mission1_en/mission1/en/terraform/

terraform init
terraform plan
terraform apply
When prompted to confirm, type yes and proceed.
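Once the apply finishes, an optional quick check that the main resources exist (names depend on what the Terraform code defines):

# The S3 bucket created on the AWS side
aws s3 ls

# The GKE cluster and Cloud SQL instance created on the GCP side
gcloud container clusters list
gcloud sql instances list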

SQL Network Configuration

In the GCP console, open the Cloud SQL instance created by Terraform and configure its network connections so that the application and Cloud Shell can reach it, enabling the relevant API if prompted.
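The same network step can also be sketched from Cloud Shell; the instance name and network range below are placeholders you must replace (a wide-open range such as 0.0.0.0/0 is convenient for a lab exercise but not recommended in production):

# Allow the given network range to connect to the Cloud SQL instance
gcloud sql instances patch <INSTANCE_NAME> --authorized-networks=<IP_RANGE>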

Google Cloud Platform - Application Migration

In the Cloud SQL instance, create the database user app that the application uses to connect.
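If you prefer the CLI over the console for this step, a minimal sketch is shown below; the instance name and password are placeholders:

# Create the application database user on the Cloud SQL instance
gcloud sql users create app --instance=<INSTANCE_NAME> --password=<STRONG_PASSWORD>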

cd ~
mkdir app_en
cd app_en
wget https://tcb-public-events.s3.amazonaws.com/icp/app.zip
unzip app.zip
mysql --host=<public_ip_cloudsql> --port=3306 -u app -p
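As an optional sanity check, the same connection can be exercised non-interactively to confirm that the app user can reach the instance:

# Non-interactive connectivity test against the Cloud SQL public IP
mysql --host=<public_ip_cloudsql> --port=3306 -u app -p -e "SHOW DATABASES;"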
# Enable the Cloud Build API
gcloud services enable cloudbuild.googleapis.com
Note: the gcloud builds submit command in the next step may fail with the following error:

ERROR: (gcloud.builds.submit) INVALID_ARGUMENT: could not resolve source: googleapi: Error 403: 989404026119@cloudbuild.gserviceaccount.com does not have storage.objects.get access to the Google Cloud Storage object., forbidden

To solve it:

1. Access IAM & Admin.
2. Check the box Include Google-provided role grants.
3. Select your Cloud Build service account, for example 989404026119@cloudbuild.gserviceaccount.com.
4. On the right side of the Cloud Build service account row, click Edit principal.
5. Click Add another role.
6. Click Select a role and filter by Storage Admin or gcs. Select Storage Admin (Full control of GCS resources).
7. Click Save and return to Cloud Shell.
cd ~/app_en/app/en/app
gcloud builds submit --tag gcr.io/<PROJECT_ID>/app-en
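After the build completes, you can optionally confirm that the image was pushed to the Container Registry:

# List images stored under your project's gcr.io repository
gcloud container images list --repository=gcr.io/<PROJECT_ID>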
cd ~/app_en/app/en/kubernetes
kubectl apply -f app.yaml
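An optional check that the workload came up and that the service received an external IP (resource names depend on what app.yaml defines):

# Watch the pods reach the Running state
kubectl get pods

# The EXTERNAL-IP of the exposed service is where the application is reachable
kubectl get services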

Google Cloud Platform - Database Migration

cd ~
mkdir database_en
cd database_en
wget https://tcb-public-events.s3.amazonaws.com/icp/database.zip
unzip database.zip
mysql --host=<public_ip_address> --port=3306 -u app -p
use dbapp;
source ~/database_en/database/en/db/db_dump.sql
select * from records;
exit;
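If you want to re-check the import without opening an interactive session, the same verification can be run from Cloud Shell (assuming the records table loaded by the dump above):

# Count the migrated rows directly from the shell
mysql --host=<public_ip_address> --port=3306 -u app -p -e "SELECT COUNT(*) FROM dbapp.records;"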

Amazon Web Services - PDF Files Migration

Connect to AWS CloudShell and download the PDF files:

mkdir database_en
cd database_en
wget https://tcb-public-events.s3.amazonaws.com/icp/database.zip
unzip database.zip

Sync the PDF files to your AWS S3 bucket, replacing the bucket name with yours:

cd database/en/pdf_files
aws s3 sync . s3://<bucket_name>
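Optionally, list the bucket contents to confirm the PDF files were uploaded:

# List the synced objects in the destination bucket
aws s3 ls s3://<bucket_name> --recursive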

Test the application. After migrating the data and the files, you should be able to see the entries in the application.

CONGRATULATIONS! YOU HAVE SUCCESSFULLY USED A MULTI-CLOUD SETUP TO MIGRATE THE APPLICATION AND ITS DATA.
