Amazon Web Services (AWS) - Set #8

Powered by Techhyme.com

You have a total of 130 minutes to finish this AWS Certified SysOps Administrator practice test and check your knowledge.


1. You need to implement an in-memory cache environment. Which of the following are available within Amazon ElastiCache for use? (Choose two.)
  • A. Hazelcast
  • B. Couchbase
  • C. Aerospike
  • D. Memcached
  • E. Redis
Answer - D, E
Explanation - When using Amazon ElastiCache, you have the choice of either Memcached or Redis. The other options are in-memory caching products, but they are not offered within Amazon ElastiCache.
2. You want to be able to run containers in your AWS environment, but you would like to use a managed service to make managing your container fleet a little simpler for you. Which service should you choose?
  • A. Elastic Kubernetes Service (EKS)
  • B. Elastic Container Service (ECS)
  • C. Windows EC2 instance with virtualization capability
  • D. Linux EC2 instance with Docker installed
Answer - B
Explanation - The Elastic Container Service (ECS) is a managed service that is purpose-built to run a cluster of Docker containers. Elastic Kubernetes Service (EKS) is a managed Kubernetes cluster service. While you could make this work with either Windows or Linux EC2 instances, the best managed service solution is ECS.
3. You have multiple container images stored on Docker Hub and you would like to use them once you migrate to using Amazon ECS. Will you still be able to use Docker Hub as your container registry?
  • A. Yes, although Docker Hub is the only supported external registry.
  • B. Yes, you can use container registries outside of AWS.
  • C. No, you can only use Amazon Elastic Container Registry (ECR).
  • D. No, you can’t use external container registries.
Answer - B
Explanation - When using Amazon ECS, you can use container registries inside of AWS and outside of AWS. So you could use your existing container registry in Docker Hub, and in fact Docker Hub is used by default.
4. You have multiple images currently stored in Docker Hub, but you would like to move these container images into AWS. How would you accomplish this while still being able to use the familiar Docker commands?
  • A. Create new container images and use the Amazon Elastic Container Registry (ECR).
  • B. Create an AMI and launch the containers as you would a normal EC2 instance.
  • C. Use the Amazon Elastic Container Registry (ECR) to store the images from Docker Hub.
  • D. Continue to use Docker Hub; there’s no way to store the images in AWS.
Answer - C
Explanation - The best solution is to use the Amazon Elastic Container Registry (ECR) to store the images that you previously stored on Docker Hub. This allows you to create a private repository while still using the familiar Docker commands.
5. You need to customize an Amazon EC2 instance when it is launched from an AMI. What is the simplest way to customize the instance?
  • A. Log in manually and make the changes that are needed.
  • B. Use SSM to push a script to the instance once it’s available.
  • C. Run a script once the instance is up.
  • D. Use the User data field to customize the instance.
Answer - D
Explanation - Since the question seems to be asking about one instance, the User data field would be the simplest way to customize the instance. Logging in and manually making changes or logging in and running a script will require more effort than the User data field. Using SSM to push a script is certainly doable, but there is setup work involved to get it to work properly.
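As a minimal sketch of the idea: the contents of the User data field can be a script that runs once, as root, at first boot. The file path and message below are made up for illustration; a real script would typically install packages or pull configuration.

```shell
#!/bin/bash
# Hypothetical user-data script. On a real Amazon EC2 instance this
# runs once, as root, at first boot; a real script would install
# packages or pull configuration. Here it only records a marker file
# (the path and message are made up for illustration).
echo "instance customized at $(date -u +%Y-%m-%dT%H:%M:%SZ)" > /tmp/launch-marker.txt
```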
6. How large can the data in the User data field be when provisioning an Amazon EC2 instance?
  • A. 16 KB
  • B. 32 KB
  • C. 64 KB
  • D. 128 KB
Answer - A
Explanation - The User data field is limited to 16 KB. If you need to run something larger than that, you can link to a larger script from within the User data field. This keeps the field under 16 KB while still allowing you to call a larger script when you need it.
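The linking pattern described above can be sketched as follows. To keep the snippet runnable anywhere, the "download" of the larger script is simulated with a local here-document; on a real instance the wrapper would instead fetch the script from a location you control, such as an S3 bucket.

```shell
#!/bin/bash
# Hypothetical user-data wrapper: the field itself stays far below the
# 16 KB limit because the real work lives in a larger script fetched
# at boot. The fetch is simulated here with a here-document; on a real
# instance you might instead run something like:
#   aws s3 cp s3://<your-bucket>/bootstrap.sh /tmp/bootstrap.sh
cat > /tmp/bootstrap.sh <<'EOF'
# ...imagine many kilobytes of bootstrap logic here...
echo "large bootstrap script ran" > /tmp/bootstrap-done.txt
EOF

bash /tmp/bootstrap.sh
```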
7. You have a system that will need storage added to support caching. Which type of storage is good for caching?
  • A. Instance storage
  • B. Elastic Block Storage (EBS)
  • C. Elastic File System (EFS)
  • D. Simple Storage Service (S3)
Answer - A
Explanation - Data on instance storage does not persist after an instance is stopped or terminated; however, it works very well as a temporary drive or as a caching drive.
8. You need storage that will work really well with APIs as you begin to provision more and more resources using tools like Chef and Puppet. Which type of storage works best when API usage is a determining factor?
  • A. Instance storage
  • B. Elastic Block Storage (EBS)
  • C. Elastic File System (EFS)
  • D. Simple Storage Service (S3)
Answer - D
Explanation - S3 is an object storage service within AWS and is a clear choice when you need a storage service that will work especially well with APIs.
9. You want to create a network file share to replace the file server you currently use in your on-premises datacenter. However, you do not want to provision a server specifically for this use case. Which storage type would be the best fit?
  • A. Instance storage
  • B. Elastic Block Storage (EBS)
  • C. Elastic File System (EFS)
  • D. Simple Storage Service (S3)
Answer - C
Explanation - EFS can be set up to work much like a normal file share would and removes the need to maintain a separate file server to support network file shares.
10. You have a server that has an application that requires a high amount of IOPS, approx. 25,000 IOPS. Which type of storage would be the best fit?
  • A. Instance storage
  • B. Elastic Block Storage (EBS)
  • C. Elastic File System (EFS)
  • D. Simple Storage Service (S3)
Answer - B
Explanation - When you need to add storage to a server that requires high IOPS, you are going to want Provisioned IOPS SSD, which is a type of Elastic Block Storage (EBS).
11. You have been called to troubleshoot issues with some of your instances that use a routing policy in Amazon Route 53. They worked previously but now are not working. You suspect there may be a service issue with Amazon Route 53. Where can you go to verify if the service is experiencing issues in your region? (Choose two.)
  • A. Amazon Inspector
  • B. Amazon CloudWatch
  • C. AWS Service Health Dashboard
  • D. AWS Trusted Advisor
  • E. AWS Personal Health Dashboard
Answer - C, E
Explanation - The AWS Service Health Dashboard gives you a window into the health of various AWS services. You can look to see if Amazon Route 53 is degraded in your regions as well as subscribe to the RSS feed for a service. The AWS Personal Health Dashboard lists issues that may impact services that you use, including what time the issues started. Amazon Inspector is used to conduct security assessments. Amazon CloudWatch is used to monitor the resources within your environment. AWS Trusted Advisor makes recommendations based on best practices in accordance with the five pillars identified in the AWS Well-Architected Framework.
12. You need to ensure that your Amazon EC2 instances are being monitored, with metrics being collected at least once every 5 minutes. With which tool would you be able to meet this goal?
  • A. Amazon CloudWatch, basic monitoring
  • B. Amazon CloudWatch, detailed monitoring
  • C. AWS CloudTrail, basic monitoring
  • D. AWS CloudTrail, detailed monitoring
Answer - A
Explanation - Amazon CloudWatch with basic monitoring collects metrics every 5 minutes and would fulfill your requirement. Detailed monitoring collects metrics every 1 minute. AWS CloudTrail is used to record API activity and does not have basic or detailed monitoring levels as Amazon CloudWatch does.
13. You have been using a WSUS server on-premises but would like a more scalable solution that can handle patching in AWS and on-premises and will patch both Windows and Linux operating systems. What should you choose?
  • A. Amazon GuardDuty
  • B. Amazon Inspector
  • C. AWS Trusted Advisor
  • D. AWS Systems Manager
Answer - D
Explanation - AWS Systems Manager utilizes agents to manage systems in AWS and in an on-premises datacenter. Patch Manager, a component of AWS Systems Manager, can be used to patch both Windows and Linux systems. Amazon GuardDuty functions like an intrusion detection system, Amazon Inspector does automated security assessments, and AWS Trusted Advisor makes recommendations based on best practices.
14. You have been asked to make a recommendation on the kind of instance to use for large amounts of batch processing at night. It needs to be low cost, and it can tolerate being stopped at any time. Which type of instance would you recommend?
  • A. On-demand
  • B. Reserved
  • C. Spot
  • D. Dedicated instance
Answer - C
Explanation - Spot instances are great, low-cost options for running jobs that are tolerant of being stopped at any point in time. On-demand instances are cost-effective as you pay by the hour, but they are more expensive than spot instances. Reserved instances offer significant cost savings but are better suited for long-running workloads. Dedicated instances are the most expensive of the group and are instances that run on hardware dedicated to a specific customer.
15. You need to supply your application owners with Amazon EC2 instances that can be shut down at any time. They need to be cost-effective, but they also need to complete their work before they are destroyed. What is the best kind of instance to meet this need?
  • A. On-demand
  • B. Reserved
  • C. Spot
  • D. Dedicated instance
Answer - A
Explanation - On-demand instances are cost-effective as you pay by the hour, and they can be shut down at any time; you are only billed for the time that they are active. Spot instances are low-cost options for jobs that can tolerate being stopped at any point in time, but they may be interrupted before their work completes, which rules them out here. Reserved instances offer significant cost savings but are better suited for long-running workloads. Dedicated instances are the most expensive of the group and are instances that run on hardware dedicated to a specific customer.
16. You work in a highly regulated environment and you must keep the security and the privacy of your Amazon EC2 instances in mind. Your supervisor has shared that they don’t want to share a physical hypervisor with any other customers. Which type of instance would meet this requirement?
  • A. On-demand
  • B. Reserved
  • C. Spot
  • D. Dedicated instance
Answer - D
Explanation - Dedicated instances are instances that run on hardware dedicated to a specific customer, which makes them a perfect fit for organizations that have strict security and/or privacy requirements, though they are the most expensive of the group. On-demand instances are cost-effective as you pay by the hour and can be shut down at any time; you are only billed for the time that they are active. Spot instances are low-cost options for jobs that can tolerate being stopped at any point in time. Reserved instances offer significant cost savings but are better suited for long-running workloads.
17. You are getting ready to move some of your production systems to the cloud. These will be running 24x7 and they are your primary business systems. Which type of instance will be the most cost-effective in this scenario?
  • A. On-demand
  • B. Reserved
  • C. Spot
  • D. Dedicated instance
Answer - B
Explanation - Reserved instances offer significant cost savings over on-demand when you know that you are going to use your instances for a period of time; a reservation for a three-year term, for example, can result in significant savings. On-demand instances are cost-effective as you pay by the hour and can be shut down at any time, but they are more expensive than reserved instances. Spot instances are low-cost options for jobs that can tolerate being stopped at any point in time. Dedicated instances are the most expensive of the group and are instances that run on hardware dedicated to a specific customer.
18. You want to begin to use Elastic Container Service (ECS). You have created some Amazon EC2 instances from your organization’s standard AMI. What should be your next step?
  • A. Install the AWS SSM agent.
  • B. Install antivirus software.
  • C. Install the Amazon ECS agent.
  • D. Configure the host firewall.
Answer - C
Explanation - Since you used your organization’s standard AMI, you will need to install the Amazon ECS agent first. You will also need an IAM role for authentication with the Amazon ECS service endpoint and network access to that endpoint. The AWS SSM agent is utilized by AWS Systems Manager to manage systems. Installing antivirus and configuring the host firewall are both good things to do, but they are quite often baked into a standard image or managed in some other way; they are not required to start using ECS.
19. You are moving your systems to the cloud and are looking to save money whenever possible. You have a system right now that runs a small application coded in C# that sends a notification when a file is uploaded and then moves that file to another system for processing. What would be the most cost-effective way to move this to the cloud?
  • A. Amazon EC2 instance
  • B. AWS Lambda
  • C. AWS Elastic Beanstalk
  • D. AWS CloudFormation
Answer - B
Explanation - Since this application is written in C# and is event driven, AWS Lambda is a perfect fit. Consider an upload to Amazon S3. AWS Lambda could be invoked when an object is uploaded and can start its workflow. An Amazon EC2 instance would work; however, it would not be as cost effective as AWS Lambda would be. AWS Elastic Beanstalk is meant to run full web applications, not event-triggered applications, and AWS CloudFormation isn’t designed to run applications at all.
20. Your developers have requested the ability to be able to spin up single servers that have the OS and their development stack of choice. Which product will give them that capability with the least amount of administrative overhead?
  • A. AWS CloudFormation
  • B. AWS Elastic Beanstalk
  • C. Amazon Lightsail
  • D. Amazon EC2
Answer - C
Explanation - Amazon Lightsail is a perfect fit for this type of scenario, where a single server with an OS and a development stack is all that is desired. Developers can be granted permissions in Amazon Lightsail so that administrators don’t need to do anything at all. AWS CloudFormation requires administrative overhead to create templates and deploy stacks. AWS Elastic Beanstalk requires some administrative overhead to configure everything properly. Amazon EC2 would require a great deal of administrative overhead, as you would need to pick the OS and manually install the development stack.
21. Which operating systems can you choose from when you use Amazon Lightsail? (Choose two.)
  • A. Amazon Linux
  • B. Unix
  • C. Ubuntu
  • D. Red Hat
  • E. Windows
Answer - A, C
Explanation - When using Lightsail, you have the choice between Amazon Linux and Ubuntu for the operating system.
22. Your developers want to use WordPress and would like to be able to seamlessly build the infrastructure to support the WordPress deployment with a minimal amount of administrative overhead. What is the best way to support this effort?
  • A. AWS Elastic Beanstalk
  • B. Amazon EC2 instance with WordPress installed
  • C. Use the Elastic WordPress Service
  • D. Amazon Lightsail
Answer - D
Explanation - Since Amazon Lightsail supports WordPress, it presents the best option for the developers and can be used with minimal administrative overhead. Elastic Beanstalk requires more administrative effort than Lightsail does. An Amazon EC2 instance would allow you to install WordPress but would require a significant amount of administrative effort. The Elastic WordPress service sounds cool, but it doesn’t actually exist.
23. You need to remotely manage an Amazon Lightsail instance but you don’t want to install an SSH client on your system. How would you connect to the instance with the least amount of administrative effort?
  • A. Use the Connect option in the Amazon Lightsail Console.
  • B. Connect through Session Manager in AWS Systems Manager.
  • C. Use Remote Desktop to connect.
  • D. You will need to install an SSH client.
Answer - A
Explanation - The Connect option in the Amazon Lightsail Console gives you access to the console of the guest operating system. While Session Manager can be used to access an instance without the need for SSH software, it requires a little work to set up. Remote Desktop will not work. Amazon Lightsail only supports Linux operating systems, and they do not have an X server running. You don’t need to install an SSH client.
24. You need to schedule batch jobs and account for dependencies between jobs. This has historically been accomplished with static timed jobs on virtual machines on-premises. What would be the best way to accomplish this in AWS?
  • A. AWS Lambda
  • B. Amazon EC2 instances
  • C. AWS Batch
  • D. Amazon Simple Queue Service (SQS)
Answer - C
Explanation - AWS Batch is purpose-built to schedule and run batch jobs and to account for dependencies between jobs. AWS Lambda is event driven and is not a good fit for statically timed jobs with dependencies between them. Amazon EC2 instances could be used in place of the virtual machines you have on-premises, but there is no gain in doing so. SQS could help with communication between the jobs but could not do the needed work by itself.
25. In AWS Batch, how do you specify how a job should be run, including resource requirements?
  • A. Job
  • B. Job definition
  • C. Job queue
  • D. Scheduler
Answer - B
Explanation - Job definitions specify how your job should be run, including identifying resources needed like CPU and memory, storage, etc. A job is used to define a single unit of work in AWS Batch. A job queue is used to store jobs until they are scheduled to run. The scheduler is the brains of the outfit and determines when jobs should be run.
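As a concrete illustration, a job definition is registered as a JSON document; the fragment below is a hypothetical sketch (the name, image, command, and resource sizes are all made up) showing where the CPU and memory requirements live:

```json
{
  "jobDefinitionName": "nightly-report",
  "type": "container",
  "containerProperties": {
    "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/report:latest",
    "vcpus": 2,
    "memory": 2048,
    "command": ["python", "run_report.py"]
  }
}
```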
26. Which component of AWS Batch is responsible for storing jobs until they are ready to be executed?
  • A. Job
  • B. Job definition
  • C. Job queue
  • D. Scheduler
Answer - C
Explanation - A job queue is used to store jobs until they are scheduled to run. A job is used to define a single unit of work in AWS Batch. Job definitions specify how your job should be run, including identifying resources needed like CPU and memory, storage, etc. The scheduler is the brains of the outfit and determines when jobs should be run.
27. Which component of AWS Batch is responsible for determining when jobs should be run?
  • A. Job
  • B. Job definition
  • C. Job queue
  • D. Scheduler
Answer - D
Explanation - The scheduler is the brains of the outfit and determines when jobs should be run. A job is used to define a single unit of work in AWS Batch. Job definitions specify how your job should be run, including identifying resources needed like CPU and memory, storage, etc. A job queue is used to store jobs until they are scheduled to run.
28. You have been working with other system administrators to move your batch jobs to AWS. You need to ensure that you are all using the same terminology so there is no confusion. What is the appropriate word to describe a single unit of work in AWS Batch?
  • A. Job
  • B. Job definition
  • C. Job queue
  • D. Scheduler
Answer - A
Explanation - A job is used to define a single unit of work in AWS Batch. Job definitions specify how your job should be run, including identifying resources needed like CPU and memory, storage, etc. A job queue is used to store jobs until they are scheduled to run. The scheduler is the brains of the outfit and determines when jobs should be run.
29. You need to know how the items in an AWS Batch job queue are processed so that you can make the decision as to whether you need to adjust the algorithm used for processing. In what order are items processed from the job queue by default?
  • A. First In, Last Out (FILO)
  • B. First In, First Out (FIFO)
  • C. Last In, First Out (LIFO)
  • D. Last In, Last Out (LILO)
Answer - B
Explanation - By default, the AWS Batch job queue uses the FIFO algorithm to process jobs.
30. You have a job that you want to run with AWS Batch. The job size would be 25 KB. Would this be a good fit for AWS Batch?
  • A. Yes, the job size is unlimited.
  • B. Yes, the job size is 20 KB by default but can be adjusted.
  • C. No, the job size limit is 10 KB and can’t be changed.
  • D. No, the job size limit is 20 KB and can’t be changed.
Answer - D
Explanation - With AWS Batch, the maximum job size is 20 KB. This is a hard limit and can’t be changed.
31. There are two types of compute environments available for use with AWS Batch. You want AWS to manage the infrastructure for your batch jobs. Which type of compute environment should you choose?
  • A. Managed Compute Environment
  • B. Unmanaged Compute Environment
  • C. Elastic Compute Environment
  • D. Scheduled Compute Environment
Answer - A
Explanation - The Managed Compute Environment is managed by AWS, and provisioning of instances and management of said instances is done by AWS. An Unmanaged Compute Environment is managed by the customer. Neither Elastic Compute Environment nor Scheduled Compute Environment are real compute environments available in AWS.
32. There are two types of compute environments available for use with AWS Batch. You want to manage the infrastructure for your batch jobs. Which type of compute environment should you choose?
  • A. Managed Compute Environment
  • B. Unmanaged Compute Environment
  • C. Elastic Compute Environment
  • D. Scheduled Compute Environment
Answer - B
Explanation - An Unmanaged Compute Environment is managed by the customer; provisioning and management of the instances is done by the customer and not by AWS. The Managed Compute Environment is managed by AWS, and provisioning of instances and management of said instances is done by AWS. Neither Elastic Compute Environment nor Scheduled Compute Environment are real compute environments available in AWS.
33. What must you install on your compute resources to utilize them with AWS Batch?
  • A. AWS Batch Agent
  • B. Amazon Inspector Agent
  • C. Amazon ECS Agent
  • D. AWS Systems Manager Agent
Answer - C
Explanation - AWS Batch uses containers to execute batch jobs. To take advantage of AWS Batch, you must install the Amazon ECS (Elastic Container Service) Agent on your compute resources. There is no such thing as an AWS Batch Agent. While Amazon Inspector Agent and AWS Systems Manager Agent are actual things, they are not required to be installed to support AWS Batch.
34. Which of these are valid launch types for Elastic Container Service (ECS)? (Choose two.)
  • A. Elastic Beanstalk launch type
  • B. Lightsail launch type
  • C. Fargate launch type
  • D. EC2 launch type
  • E. RDS launch type
Answer - C, D
Explanation - The Fargate launch type removes the need for you to provision the support infrastructure for your containers. The EC2 launch type creates a cluster of Amazon EC2 instances that are used to run your containers. The other launch types in the question are not valid.
35. You are trying to set up your application to run on ECS. You have a task definition file that is using the EC2 launch type, but you want to use the Fargate launch type. Where do you need to define the launch type in the task definition file so that your ECS containers will use Fargate instead of EC2?
  • A. "image": "FARGATE"
  • B. "requiresCompatibilities": ["FARGATE"]
  • C. "executionRoleArn": "FARGATE"
  • D. This is not defined in the task definition file.
Answer - B
Explanation - By specifying FARGATE in the requiresCompatibilities parameter, you can set the launch type to Fargate. The “image” parameter is used to specify the image that you want the container to be built from. executionRoleArn is used to specify the role that should be used to execute the task.
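For reference, a trimmed, hypothetical task definition using the Fargate launch type might look like the following (the family, container name, image, and sizes are made up; note that Fargate also requires the awsvpc network mode and task-level cpu and memory values):

```json
{
  "family": "web-app",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "256",
  "memory": "512",
  "containerDefinitions": [
    { "name": "web", "image": "nginx:latest" }
  ]
}
```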
36. Which of these options is not a valid method to provision and manage Amazon ECS?
  • A. Amazon ECS CLI
  • B. AWS SDK
  • C. Amazon EB CLI
  • D. AWS Management Console
  • E. AWS CLI
Answer - C
Explanation - You can use the Amazon ECS CLI, AWS Management Console, AWS CLI, or the AWS SDK to manage and provision resources in Amazon ECS. The Amazon EB CLI is used to manage Elastic Beanstalk but is not used to directly manage ECS.
37. You have chosen to use an Amazon EC2 instance to host Docker. You have launched the initial Amazon EC2 instance from an Amazon Linux 2 AMI. What do you need to type to install Docker?
  • A. amazon-linux-extras install docker
  • B. sudo amazon-linux-extras install docker
  • C. sudo install Docker
  • D. sudo install docker
Answer - B
Explanation - The command to install Docker on Amazon Linux 2 is sudo amazon-linux-extras install docker. Without sudo, you will run into permissions issues trying to install. sudo install docker will not work, and Linux is case sensitive, so it is important to understand that docker and Docker would not be treated the same.
38. You have installed Docker and started the Docker service. Every time you want to run a docker command, you have to add sudo to the beginning of the command. What can you do to remove the need to do this with every command?
  • A. There is nothing you can do. You must use sudo.
  • B. Add ec2-user to the root group on the EC2 instance.
  • C. Change ownership of the Docker folders to ec2-user.
  • D. Add ec2-user to the docker group on the EC2 instance.
Answer - D
Explanation - If the user account is added to the docker group on the Amazon EC2 instance, then that user account will no longer need to use sudo in front of all Docker commands. In this case, the default user ec2-user needs to be added to the docker group; however, any user account could be added to get this effect. Adding the ec2-user to the root group would create an overly permissive account and is certainly not a best practice. Changing the ownership of the Docker files could actually bring Docker down.
39. You have added the user accounts of your administrators to the docker group so that they no longer have to use sudo in front of docker commands. However, when they try to use a simple docker command, they get the error “Cannot connect to the Docker daemon. Is the docker daemon running on this host?” What should you do?
  • A. Reboot the host.
  • B. Add them to the root group instead.
  • C. Restart the Docker service.
  • D. Reinstall Docker.
Answer - A
Explanation - Occasionally a reboot is needed after granting permissions to the user accounts so that they can access the Docker daemon without sudo. Adding your administrators’ accounts to the root group gives them far more access than what is needed. Restarting the Docker service will not fix the permission issue. Reinstalling Docker will not resolve the issue.
40. You are using a dockerfile to build your containers. Your security team finds that you are opening port 80 on your containers and has asked that you change that to 443 as port 80 is insecure. How would you update the dockerfile to meet the request from your security team?
  • A. Add EXPOSE 443 under EXPOSE 80.
  • B. Change EXPOSE 80 to EXPOSE 443.
  • C. Add EXPOSE 443 above EXPOSE 80.
  • D. Remove the EXPOSE 80 line from your dockerfile.
Answer - B
Explanation - The EXPOSE line in the dockerfile tells Docker which ports you want the containers to listen on. By changing the 80 to 443, you have met the request of your security team. Adding EXPOSE 443 above or below the existing EXPOSE 80 line would still leave the insecure port 80 exposed, so the ordering makes no difference. And while removing the EXPOSE 80 line altogether would make your security team happy, it would not result in a very usable container, since no port would be exposed at all.
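As a sketch, the change amounts to a one-line edit in the dockerfile; everything in this fragment other than the EXPOSE line is a made-up placeholder:

```dockerfile
# Hypothetical dockerfile fragment; the base image and COPY line are
# placeholders for illustration. The EXPOSE line previously read:
#   EXPOSE 80
FROM nginx:latest
COPY site.conf /etc/nginx/conf.d/default.conf
EXPOSE 443
```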
41. During a penetration test on your container, the tester was able to get your authentication string for your Elastic Container Registry (ECR). You were logged in on the container and were pushing an image to ECR. What was the likeliest method used to get your credentials?
  • A. The pen tester installed a key logger on the container.
  • B. The pen tester was sniffing the network traffic leaving the system and they captured your username and password.
  • C. You used an authentication string interactively with docker login using -p, and they ran ps -e.
  • D. You used an authentication string with docker login and you were prompted for your password, then they ran ps -e.
Answer - C
Explanation - When you log in interactively with docker login, you use an authentication string that is visible in the process list; the process list is displayed using ps -e. The pen tester could install a key logger, but that is not very likely, as most organizations do not allow the installation of malicious software. While the pen tester may have been sniffing the network, you are authenticating with an authentication string, not a username and password. If you leave off the -p and are prompted for your password, the authentication string would not have shown up in the ps -e output.
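The underlying exposure can be demonstrated on any Linux or macOS machine, with no Docker involved; in this sketch the string hunter2 stands in for the real authorization string and a short-lived sh process stands in for docker login -p:

```shell
# Any command-line argument is visible to every local user through the
# process list. Here "hunter2" stands in for the ECR authorization
# string and the background sh process stands in for docker login -p.
sh -c 'sleep 2' placeholder hunter2 &
BG=$!

# Another (unprivileged) user on the host can now recover the "secret":
LEAKED=$(ps -ef | grep hunter2 | grep -v grep)
echo "$LEAKED"

wait "$BG"
```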
42. During a penetration test on your container, the tester was able to get your authentication string for your Elastic Container Registry (ECR). You were logged in on the container and pushing an image to ECR. You determined that they got your login by running ps -e, which exposed the authentication string. What can you do to prevent this from happening again?
  • A. Run the docker login command as the root user.
  • B. Drop the -p from the docker login command so that it prompts you for the password.
  • C. Run the docker login command with sudo in front of it.
  • D. Restart the container.
Answer - B
Explanation - By dropping the -p from the docker login command, you will be prompted for the password to connect. Since you are prompted for the password, it will not show up in the process list. Running the docker login command as root user is too permissive and will still show up in the process list. With sudo in front of the docker login command, it will still show up in the process list. Restarting the container might clear the process list, but the authentication string is good for 12 hours, so an attacker could simply reconnect, or would watch the process list for when you reconnect.
43. You need a place to store the container images that you have created. Where are container images stored?
  • A. Registry
  • B. Repository
  • C. Authorization token
  • D. Image
Answer - B
Explanation - Images are stored in repositories, and repositories are kept within a registry. Authorization tokens are used to authenticate and can be retrieved with the get-login command. An image is used to provide the operating system and dependencies for your application to run.
44. You need a place to store the container images that you have created. You know that your container images will be saved in a repository, but before you can create a repository, what do you need to create first?
  • A. Registry
  • B. Repository
  • C. Authorization token
  • D. Image
Answer - A
Explanation - Images are stored in repositories, and repositories are kept within a registry. Authorization tokens are used to authenticate and can be retrieved with the get-login command. An image is used to provide the operating system and dependencies for your application to run.
45. You want to launch your containers with the base operating system and application dependencies already taken care of so that your application can be run without any further configuration. What should you create to accomplish this goal?
  • A. Registry
  • B. Repository
  • C. Authorization token
  • D. Image
Answer - D
Explanation - An image is used to provide the operating system and dependencies for your application to run. Images are stored in repositories, and repositories are kept within a registry. Authorization tokens are used to authenticate and can be retrieved with the get-login command.
46. You have created a container image and are ready to push to your Amazon ECR repository. You run the get-login command so that you can log into the repository. What does the get-login command return?
  • A. Registry
  • B. Repository
  • C. Authorization token
  • D. Image
Answer - C
Explanation - Authorization tokens are used to authenticate and can be retrieved with the get-login command. Images are stored in repositories, and repositories are kept within a registry. An image is used to provide the operating system and dependencies for your application to run.
47. You have created a server using Amazon Lightsail. You want to connect it to an RDS instance in your default VPC. How should you configure communication to work between Lightsail and the RDS instance?
  • A. Direct Connect
  • B. VPN gateway
  • C. VPC endpoint
  • D. VPC peering
Answer - D
Explanation - You would need to enable VPC peering on the Lightsail account page, and from there Lightsail will configure everything for you. Direct Connect and VPN gateways are for external connections to your AWS resources and are not applicable here. VPC endpoints are available for specific services but would not work in this case.
48. You have a virtual private server in Lightsail and you need to add a storage volume to it. What kind of storage drives can you use?
  • A. Solid-state drives (SSDs)
  • B. Magnetic
  • C. Throughput optimized
  • D. Cold hard disk drive (HDD)
Answer - A
Explanation - Amazon Lightsail supports solid-state drives only. The other drive types in the question are valid EBS volume types, but they are not supported in Amazon Lightsail.
49. You need to add more storage disks to a system in Amazon Lightsail. How many disks can you add?
  • A. Up to 10 disks
  • B. Up to 15 disks
  • C. Up to 20 disks
  • D. Up to 25 disks
Answer - B
Explanation - Each Amazon Lightsail instance can support up to 15 disks.
50. You are hosting a system in Amazon Lightsail that needs a large amount of storage. What is the maximum amount of storage that you can attach to Amazon Lightsail with a single disk?
  • A. 4 TB
  • B. 8 TB
  • C. 16 TB
  • D. 32 TB
Answer - C
Explanation - Each disk attached to an Amazon Lightsail instance can be a maximum of 16 TB in size.