Hi everyone, my name's Yagr. I'm a Solutions Architect on the Alibaba Cloud Germany team. I work with customers on cloud migration, data analytics, machine learning, and many other topics. The most fundamental and critical task for customers on-boarding to the cloud is to automate the CI/CD process, that is, continuous integration and continuous delivery. Today I would like to give you a short introduction to two very well-known CI/CD components: Terraform and Helm.

Let's first take a look at the agenda. I will briefly introduce Terraform and Helm, and afterwards I will give you some insight into what a typical cloud architecture looks like on the customer side. Then we will have a closer look at the challenges and potential solutions. Finally, I will present a demo to show how the tools work together.

I bet most of you have already heard of Terraform and Helm, and some of you may already use them in your daily work. Terraform is a tool that supports you with Infrastructure as Code. You write declarative configuration files for your cloud resources, and you can plan and predict changes before you apply them to the cloud. When you are ready with one environment, you can easily reproduce it in another environment. Helm, on the other hand, is a product lifecycle management tool that helps you manage the complexity in your development environment. Especially with a microservices architecture, you have multiple services and components that you want to combine into one product. With Helm, you can easily update your complete solution and share it between different environments. Finally, Helm enables you to roll back the solution when it doesn't work as expected.

So let's take a look: what happens when you use these tools with the cloud? When you run Terraform, you first need to select a provider. In our case, we take Alibaba Cloud as one provider, and Kubernetes and Helm as the other providers, running against Alibaba Cloud Container Service for Kubernetes (ACK). You need a target region, for which we will use eu-central-1, and you need to declare your resources. Let's say you have A, B, and C, and additionally you want a Kubernetes cluster. Afterwards, you initialize your environment against the cloud, and terraform plan prints out exactly what will happen. Finally, after reviewing the plan, you apply your resources to the cloud. After the apply command finishes, you will see that resources A, B, and C are deployed together with the Alibaba Cloud Kubernetes cluster. When you run the Kubernetes and Helm providers, you deploy directly to the target cluster, in our case ACK: all your dependencies are deployed there as Kubernetes resources. Afterwards, you hand over the ready environment to the development team. They package their solution, including ingress, services, deployments, and cron jobs, in a Helm package and deploy it to the same ACK cluster. After that, your solution is good to run.
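To make that workflow concrete, here is a minimal sketch of what such a configuration could look like. The resource names, CIDR, and cluster settings are illustrative, not the exact code from the demo, and attribute names vary across provider versions:

```hcl
# Alibaba Cloud provider for the infrastructure resources.
provider "alicloud" {
  region = "eu-central-1"
}

# A declarative resource: Terraform plans and predicts changes before applying.
resource "alicloud_vpc" "demo" {
  vpc_name   = "demo-vpc" # illustrative name
  cidr_block = "10.0.0.0/8"
}

# A managed Kubernetes cluster (ACK); required networking and node-pool
# arguments are omitted here for brevity.
resource "alicloud_cs_managed_kubernetes" "demo" {
  name_prefix = "demo-ack"
  # ... vswitches, worker configuration, etc.
}

# The Kubernetes and Helm providers are then pointed at the cluster created
# above, so dependencies and charts can be deployed against ACK in the same
# pipeline:
#   terraform init   -> initialize modules, backend, and providers
#   terraform plan   -> print exactly what will happen
#   terraform apply  -> create resources A, B, C and the cluster
```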
Let's take a look at the typical cloud architecture design on the customer side. This is a typical development landscape, and it reflects most customers' daily work. They first have a development environment; after some basic tests, they deploy to staging, and when they're satisfied, they move to production. At the same time, the development and staging environments are usually only accessible internally and verified by internal users. Production usually has an external entry point and is used by external end-users.

If we take a closer look at the development environment, in most cases there is a running platform like Alibaba Cloud Container Service for Kubernetes hosting their applications, services, and dependencies. They need a database; most likely they will use RDS MySQL on Alibaba Cloud as the managed database service to store their data. Additionally, they need to manage the domain service, certificate management, the Key Management Service (KMS), monitoring, and logging for all the required resources. This doesn't sound very complicated, but let's take a closer look at the challenges they usually see and the solutions that can be proposed with DevOps. The most commonly asked questions are the following:

- How do we make sure the development, staging, and production environments are exactly the same? This is a challenge for every development team: what they have tested and verified should be exactly what they release to their customers.
- How do we review changes before applying them to the cloud? Most customers want to know the consequences when they create or modify resources: what the impact will be, and which other resources will be involved in the change.
- Beyond resource changes, how do we successfully upgrade an environment, promoting the solution from development to staging and finally launching it to production?
- On solution lifecycle management: how do we ensure the completeness of the solution delivery?
- And finally, the most critical part: how do we automate the DevOps process for all of the points above?

Our answer is Infrastructure as Code plus product lifecycle management, which brings us back to today's topics: Terraform and Helm.

What does the solution look like? Let's assume you have a dev environment, which is an Alibaba Cloud account. The first thing you need to do is make sure your account is valid for the enterprise. Additionally, you need to make sure all the services you need are already activated manually. After that, we can start our automation journey. The most important part at the very beginning is to set up your Resource Access Management (RAM): the user setup, the role setup, and the policy setup. Afterwards, you use the user or role created in that first step to set up your managed cloud services. In our example, you will see managed Kubernetes, KMS, RDS MySQL, and the Log Service deployed using Terraform. After the managed services are deployed, you configure the dependencies inside the Kubernetes environment: the dependencies that handle the domain service, the certificates, and the monitoring. For the monitoring, we will mainly talk about Prometheus and Grafana. Last, you hand over the infrastructure as a ready-to-use environment to your development team, who deploy their applications using Helm. The solution is then ready for use.
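To give a feeling for the RAM step, here is a minimal sketch in Terraform. The user name, policy name, and policy content are purely illustrative, and argument names vary across alicloud provider versions (older releases used name/document instead of policy_name/policy_document):

```hcl
# A RAM user for the automation.
resource "alicloud_ram_user" "deployer" {
  name = "cicd-deployer" # illustrative
}

# A custom policy; here a broad DNS policy just for the sketch.
resource "alicloud_ram_policy" "dns" {
  policy_name = "allow-dns-records" # illustrative
  policy_document = jsonencode({
    Version = "1"
    Statement = [{
      Effect   = "Allow"
      Action   = ["alidns:*"] # scope this down in practice
      Resource = ["*"]
    }]
  })
}

# Attach the policy to the user.
resource "alicloud_ram_user_policy_attachment" "dns" {
  user_name   = alicloud_ram_user.deployer.name
  policy_name = alicloud_ram_policy.dns.policy_name
  policy_type = "Custom"
}
```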
How are we able to verify it? First of all, we will check that the domain name is automatically generated and enabled. We will see the certificate being automatically managed. We will see that the Key Management Service synchronizes with your Kubernetes secrets so the application is able to access the database. We will see that the logging is already visible and the monitoring is already configured. With that, we will see happy users.

Let's revisit the DevOps steps. Step one is manual preparation: you need to make sure your account is ready to be used, and that all the services you plan to use are activated. Optionally, if you plan to integrate with Active Directory or Entra ID, you need to have finished that integration. In step two, we use Terraform to automate the RAM users, RAM roles, and RAM policies. In step three, we deploy the managed services using those RAM users or roles; this includes the Kubernetes cluster, the RDS MySQL database, the Log Service, and KMS. In step four, we deploy the dependencies inside the Kubernetes cluster using Terraform with the Helm and Kubernetes providers; the dependencies include cert-manager, External DNS, External Secrets, the Prometheus Operator, and the log exporter. Finally, in step five, you are ready to go with the application deployment: we use Helm to package the solution, including deployments, secrets, services, and ingress, deploy it to your Kubernetes cluster, and make sure it's working.

Before we start the demo, I would like you to know that my demo is publicly accessible on GitHub. I'm using Travis CI as the CI/CD pipeline to deploy to Alibaba Cloud for testing purposes. If you want more detail, just visit my GitHub.

Let's start the demo. This is the folder I have already shared on GitHub; let's take a look at what is inside. There are three folders. In the demo folder, I will demonstrate how to deploy applications using Helm. There is a script I wrote to automate the pipelines for Travis CI. And in the Terraform folder, all the other resources are deployed via Terraform. I also have a local bash script prepared for my environment variables, and I have already exported all of them.

Within the Terraform folder, there are three folders in which we will run Terraform, plus a shared folder called modules, which provides resources that can be shared across the different folders. In account_setup, we deploy roles, users, and policies. In cloud_services, we deploy the managed services in the cloud, including Kubernetes, KMS, the Log Service, and RDS MySQL. The last step of the setup is installing the dependencies into Kubernetes, which is done in the kubernetes_services folder.

Let's start with the account setup. If I run terraform init, it initializes the modules, the backend, and the providers. In my case, I have done this several times, so it's already initialized, and we can run terraform plan to see what will be deployed. Let's take a look from the beginning. There is the DNS policy that will be used for the DNS automation; then the policy for External Secrets to synchronize KMS with Kubernetes secrets; and the STS policy for kube2ram to assume roles. These are all the different policies, and afterwards they are attached to a user or a role. Let's run it. I will run terraform apply with -auto-approve to avoid the interactive approval. You can see all the resources are created already; the result appears in green: apply complete, 9 resources added.
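For illustration, the kube2ram piece relies on a role that the worker nodes can assume and hand out to pods. A hedged sketch of what such a role might look like follows; the role name and trust policy here are assumptions, not the repository's exact code, and newer provider versions rename these arguments (role_name/assume_role_policy_document):

```hcl
# A role assumable from ECS instances (the Kubernetes worker nodes),
# which kube2ram can then expose to individual pods via STS.
resource "alicloud_ram_role" "worker_assume" {
  name = "k8s-worker-assume-role" # illustrative
  document = jsonencode({
    Version = "1"
    Statement = [{
      Effect    = "Allow"
      Action    = "sts:AssumeRole"
      Principal = { Service = ["ecs.aliyuncs.com"] }
    }]
  })
  description = "Assumable by worker nodes via STS for kube2ram"
}
```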
Now we can move to the next folder, which is cloud_services. The same as before: we run terraform init first to make sure all the modules are loaded, the backend state is synchronized with the cloud, and all the plugins are initialized. Then we run terraform plan to see what will happen. If you already have resources deployed, the plan refreshes their status and tells you what will be added and what will be deleted; this gives you an overview of what the action will trigger.

Let's take a look. In this case, we will create the database account, which of course comes after the database instance, the privileges for the database, and the database schema; the database instance is triggered first, and of course we store the database secrets in KMS. On top, a VPC is created, plus the vSwitches for the cloud resources in which our managed Kubernetes cluster will sit. We also create the role that can be assumed by the managed Kubernetes cluster. We create the Log Service for Kubernetes, including the logstore and the index. We also attach the policies to the managed Kubernetes default assume role, and we allow SSH and RDS access in the security group.

Let's start. I will again run terraform apply with -auto-approve. You will see that Terraform first calculates the dependencies and the sequence, and then runs the plan: resources that can run in parallel run together, and resources with dependency relationships run sequentially. The automation takes around 10 minutes, so during this time, let's take a look at the script that runs the complete installation. You see there is account_setup, which we have already done. What we are running right now deploys the services in the cloud_services folder. Afterwards, we use the terraform output functionality to export two values into our terminal. We update the kubeconfig to feed it into the kube contexts. We create a secret for using the Alibaba Cloud Container Registry afterwards; we cannot fully automate this part in the pipeline because two environment variables also need to be injected here. The Terraform provider right now does not support custom resource definitions, so we deploy the Log Service CRD via kubectl apply, which links the Log Service and the Kubernetes cluster. Afterwards, we make a small tweak in the Ingress. After this glue code, we are able to run through the kubernetes_services folder.

On the left side, let's take a look at the modules. There are modules for ram, vpc, secrets (which stands for the External Secrets synchronization between KMS and Kubernetes secrets), ram_role, kms, certificate (which is the cert-manager deployment), the Application Real-Time Monitoring Service (ARMS), and Alibaba Cloud Container Service for Kubernetes. Within the code, if you want to refer to a module, you simply write a module block, and the module is deployed with the values you provide there.

Let's look back at the status. You can see the database_instance and the Kubernetes cluster are running in parallel: they have no dependency relationship, so they are created at the same time. Once everything is finished, you can see we have two outputs: the cluster_id and the database connection_string. Remember the glue code we run in between; I copy the code here and paste it there. You can see the secret is created, the log CRD is deployed, and the Ingress index is updated.
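As a reference back to the module pattern and the outputs the glue script consumes, a rough sketch follows. The module paths, input names, and output names are made up for illustration and depend on what the shared modules actually expose:

```hcl
# Reuse shared modules from the modules folder; Terraform deploys them
# with the values provided in these blocks.
module "kubernetes" {
  source = "../modules/kubernetes" # illustrative path
  name   = "demo-cluster"
  # ... vpc, vswitch, and node inputs omitted for brevity
}

module "rds" {
  source = "../modules/rds" # illustrative path
  name   = "demo-db"
}

# Outputs consumed later by the glue script via `terraform output`.
output "cluster_id" {
  value = module.kubernetes.cluster_id
}

output "database_connection_string" {
  value     = module.rds.connection_string
  sensitive = true
}
```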
Now we can continue with the Kubernetes dependency services. We move back to the Terraform folder again, and then into kubernetes_services. The same as before: we run terraform init and terraform plan. Let's take a look. The Application Real-Time Monitoring Service, which is a managed Prometheus, will be deployed; we provide the cluster_id, the region_id, and also the account id, and a namespace is created for it. We deploy cert-manager using the Helm provider, and we also generate a namespace for it. We create the cluster roles for External DNS and for kube2ram. Here we deploy kube2ram, external-dns, and the necessary privileged service accounts for them. And here we deploy the secrets manager, which is the Alibaba Cloud version of External Secrets.

Now let's run it. In this case, the Terraform Kubernetes provider collects the information in the cluster and then upgrades or deploys the Helm charts, the deployments, and the privilege-related service accounts, roles, and role bindings directly in the cluster. This takes a moment; after two to three minutes, all the dependencies have been deployed, and we can continue with the demo.
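Before the demo, as an illustration of that pattern: deploying one of these dependencies through the Terraform Helm provider could look roughly like this. The chart version handling and the installCRDs flag are assumptions that vary by chart version (newer cert-manager charts use crds.enabled):

```hcl
resource "kubernetes_namespace" "cert_manager" {
  metadata {
    name = "cert-manager"
  }
}

# Deploy cert-manager as a Helm release managed by Terraform, so the
# dependency lives in the same plan/apply lifecycle as the infrastructure.
resource "helm_release" "cert_manager" {
  name       = "cert-manager"
  repository = "https://charts.jetstack.io"
  chart      = "cert-manager"
  namespace  = kubernetes_namespace.cert_manager.metadata[0].name

  set {
    name  = "installCRDs"
    value = "true"
  }
}
```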
We have prepared two demos: one verifies the basic functionality of the cluster, and the other uses a Helm chart to deploy an application. Let's first go to demo01. There is a very simple .yaml file; in it there is an ingress, a service, and a deployment. The deployment is a very basic nginx image. How do we evaluate it? First of all, we need to make sure all the selectors work together. A LoadBalancer will be created for the service, exposing the application on port 80. Additionally, I have specified in the file that external-dns shall generate a new DNS record. The same for the ingress: in case the customer wants to go through the ingress, another record is generated for it, routed to the service demo01 on port 80.

Let's deploy it. All of the objects are created. We can take a look at what is happening right now in the DNS. We can get the pods from kube-system; you can see this is the external-dns pod, and we can check its logs. You see it retrieves the Alibaba Cloud DNS domain records, and after a while it does the synchronization once more. Now you can see there is a demo01 service record generated here, and a demo01 ingress record generated here. With that, I will simply test whether it works. This already gives me the "Welcome to nginx" page, which means it's working. I will also test the URL going through the ingress; it also works. I think the basic functionality is already there in this cluster.

Now I will continue with demo02. For demo02, I have prepared a folder called hello-helm, which is a Helm chart I generated in this folder. This is my IDE again; I open the hello-helm folder in it. You can see that in the templates there is a deployment, an ingress, an issuer (which supports me in issuing the certificate), and then the secret. The secret helps me synchronize a value from KMS into a Kubernetes secret. On top, I have a service for my deployment. You define your variables in values.yaml: you can see I have a service of type LoadBalancer, from port 80 pointing to my deployment on 8080. I have enabled the ingress with cert-manager: external-dns will generate the record for demo02, and cert-manager will then issue the certificate for SSL. I define the TLS secret name and the host it is for.

Now we can run it with the helm install command. We give it the name test, point it to the current folder, and use debug mode to see what will be generated. We also provide the connection string for RDS MySQL, which we got from the terraform outputs. It looks like it's already deployed. You can see that during deployment, all the values are automatically inserted from values.yaml into the rendered YAML files.

Let's take a look at what is currently deployed. You can see demo01 from the last demo in the Running state, and demo02, which is the one that connects to the database. I have also generated the cert-manager issuer, which is of the ACME type. Let's first look at what is happening in my demo02 deployment. You see, this is a Java application. It has already booted, and it uses Liquibase to generate a table directly. You can see the table is already generated, using a secret that was synchronized directly from KMS. Once that's finished, the table is created and the server is ready to go.

Now we can verify that everything works. I call the API, and you see there is no person there yet. I can also add a value: in my case, the application takes a body with a name and an address, so I post the name Jaeger and the address Frankfurt am Main to the add endpoint. It looks like it's working. If I call the API again, you can see the entry now exists in the database. The functionality basically works. Now we can use a browser to validate the HTTPS certificate. It works: the icon shows that the certificate is valid, and if we take a closer look, it also tells us the certificate is for the demo02 ingress URL and is issued by Let's Encrypt. Everything works as a website.

Now let's take a look at the additional functions. I log in to my web console and go to the Log Service, and you can see the project is already created. When I click on the project, all the logstores show up here. When you open a logstore, you can see all the pod logs are already ingested into the Log Service. In the detail view, you can see this one is from the namespace kube-system, for the metrics-server container. You can also click on the namespace to filter, and then you see all the logs coming from kube-system. There are more functionalities in the Log Service for searching and visualization. You can also check the automatically generated ingress logstore: search functionality is provided by default, and additionally there are visualization dashboards; you can see dashboards already generated for the ingress. You can always create your own dashboards here.

That was the logging; now let's take a look at the monitoring service. I log in to the monitoring service, and you can see it is already linked to my Kubernetes cluster with the type ManagedKubernetes. It has already generated several dashboards using Grafana. Let's take a look at the one for the ingress; you can see the ingress dashboard is already loaded. This is standard Grafana, so you can create your own dashboards, and you can also configure the components here. Everything is the same as in the community version.
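One closing sketch: if you preferred to fold the application deployment into the same Terraform pipeline instead of running helm install by hand, the demo02 release could in principle be declared through the Helm provider as well. The chart path and the value key below are assumptions, not the chart's actual names:

```hcl
# The rough equivalent of `helm install test ./hello-helm --set ...`,
# expressed as a Terraform-managed Helm release.
resource "helm_release" "hello" {
  name  = "test"
  chart = "./hello-helm" # local chart folder from the demo

  set {
    name  = "db.connectionString" # hypothetical value key
    value = var.rds_connection_string
  }
}

variable "rds_connection_string" {
  type        = string
  sensitive   = true
  description = "Taken from the cloud_services terraform output"
}
```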
That's my demo for today; I hope you enjoyed it. Everything is on GitHub, where I share all my code, and you can find helpful information there. If you have any requests, please contact me via e-mail or GitHub. Thank you very much for watching.