Amazon Web Services (AWS) - Set #9

Powered by Techhyme.com

You have a total of 130 minutes to finish this AWS Certified SysOps Administrator practice test and gauge your knowledge.


1. You need to increase the size of a disk that your Amazon Lightsail instance is using. What are the steps involved to increase the disk size? (Choose two.)
  • A. Take a snapshot of all of the disks.
  • B. Take a snapshot of the disk you want to enlarge.
  • C. Enlarge the disk directly from the Amazon Lightsail Console.
  • D. Create a disk of the same size, then restore the snapshot, then enlarge the disk using the AWS CLI.
  • E. Create a larger disk from the snapshot.
Answer - B, E
Explanation - To enlarge your disk, you need to take a snapshot of the disk you want to make larger, then create a larger disk from that snapshot. You don’t need snapshots of all the disks, just the disk that you want to enlarge. You can’t enlarge a disk directly from either the Amazon Lightsail Console or the AWS CLI.
2. You have a system in Amazon Lightsail that is running your website. You want to build another system in another availability zone to make the site highly available. How will you route traffic between the two instances with the least amount of administrative overhead?
  • A. Network Load Balancer
  • B. Lightsail load balancer
  • C. Application Load Balancer
  • D. Route 53
Answer - B
Explanation - The Lightsail load balancer is able to balance traffic across instances in other availability zones and will automatically scale when traffic loads increase. Network Load Balancers and Application Load Balancers require more administrative overhead to get them to work, as does Route 53.
3. Your website in Amazon Lightsail is highly available and is using an Amazon Lightsail load balancer. Your security team has noticed that you are using HTTP and has requested you switch to HTTPS. What is the simplest way to take care of their request?
  • A. Manually upload a certificate you have created on-prem and attach it to the load balancer.
  • B. Use AWS KMS to create and manage SSL certificates.
  • C. Use Amazon Lightsail’s built-in certificate management utilities.
  • D. Ignore your security team’s request and continue to use HTTP.
Answer - C
Explanation - Using Amazon Lightsail’s built-in certificate management utilities makes the most sense as they will request and renew certificates for you as well as add the certificate to your load balancer. The other two certificate options in this question require more manual effort to make them work. Ignoring your security team is never a good thing to do, so I wouldn’t recommend it.
4. You have built an e-commerce application and you are running it in Amazon Lightsail. You are using a Lightsail load balancer. It is important that users who visit the site are directed to the same server throughout their visit so that they don’t have to log in multiple times. How can you accommodate this request?
  • A. Enable session persistence on the Lightsail load balancer.
  • B. Enable session persistence on the Lightsail instance.
  • C. Enable sticky cookies on the Lightsail load balancer.
  • D. Enable sticky cookies on the Lightsail instance.
Answer - A
Explanation - You need to enable session persistence on the Lightsail load balancer so that it will direct the user to the correct instance. While you may hear the term sticky cookies in reference to load balancers and persistent user connections, the appropriate AWS terminology is session persistence.
5. You want to use a Lightsail-provisioned SSL certificate for your site. What will you need to do to validate that you are the owner of the domain that you are trying to issue the certificate from?
  • A. Add a TXT record to your zone at your DNS hosting provider.
  • B. Add an ALIAS record to your zone at your DNS hosting provider.
  • C. Add a SRV record to your zone at your DNS hosting provider.
  • D. Add a CNAME record to your zone at your DNS hosting provider.
Answer - D
Explanation - You will need to add a CNAME record to your DNS zone at your DNS hosting provider to validate that you own the domain for which you are trying to issue certificates.
6. Your Amazon Lightsail instance is running multiple websites. When you hosted these sites on-premises, you used a wildcard certificate. You have 5 sites total, and you will add 2 more in the next year. What type of certificate should you use in AWS?
  • A. Use a Lightsail certificate since it can support up to 10 domains.
  • B. Use a Lightsail certificate since it can support up to 15 domains.
  • C. Use a Lightsail certificate since it can support up to 20 domains.
  • D. Use a wildcard certificate as you have been on-premises.
Answer - A
Explanation - Lightsail certificates can support up to 10 domains or subdomains, and since Lightsail does not support the use of wildcard certificates, this is the best option.
7. You have been running your web application in Amazon Lightsail for a year now. You want to have more fine-grained control over configuration items for your instances as well as support for larger instance sizes. What is your best conversion option that reduces administrative overhead?
  • A. Create an EC2 instance, and install/configure your web application there.
  • B. Use the Upgrade to EC2 feature to convert your Lightsail instance to an EC2 instance.
  • C. Create a CloudFormation template and deploy the new EC2 instance with your web application already installed and configured.
  • D. There is no conversion option available.
Answer - B
Explanation - The Upgrade to EC2 feature is the simplest way to convert from Lightsail to an EC2 instance. You simply export a Lightsail snapshot and use the Upgrade to EC2 wizard to convert it. You do need to set up the networking infrastructure to support your new EC2 instance. Manually creating an EC2 instance and installing your application results in higher administrative overhead, as does creating a CloudFormation template from which to create an EC2 instance.
8. When using Elastic Beanstalk, which of the following is not a platform update type used?
  • A. Patch update
  • B. Major update
  • C. Hotfix update
  • D. Minor update
Answer - C
Explanation - Hotfix updates are not a valid type of platform update. Major, minor, and patch updates are all valid types of platform updates.
9. You are using Elastic Beanstalk for application deployments. You have chosen to use application lifecycle settings and have opted to keep the five most recent versions. You notice that the applications are being removed if they are older than the most recent five, but you notice that the application bundles are not being removed from S3. What can you do to ensure that the source bundle is removed when the application version is removed automatically?
  • A. In the Application Versions settings, set it to delete the source bundle from S3 whenever the application version is deleted.
  • B. Manually remove the application source bundle from S3.
  • C. Create a function in AWS Lambda that will delete the S3 source bundle when the application is deleted.
  • D. You can’t remove the S3 source bundles automatically.
Answer - A
Explanation - To take care of the source bundle automatically, change the Application Versions settings to delete the source bundle from S3 when the application version is removed. While an AWS Lambda function could work in this way, it is simpler to use the built-in versioning tools.
10. You are using Elastic Beanstalk for application deployments. You have chosen to use application lifecycle settings and have opted to keep the 5 most recent versions. You need to keep the 10 most recent versions in S3 in case you need to roll back. What is the best method to remove the old source bundles from S3 while ensuring that you have no fewer than 10 versions of the source bundle?
  • A. In the Application Versions settings, set it to delete the source bundle from S3 whenever the application version is deleted.
  • B. Manually remove the application source bundles from S3.
  • C. Create a function in AWS Lambda that will delete the oldest S3 source bundle when an application is deleted.
  • D. You can’t remove the source bundles from S3.
Answer - B
Explanation - Since the lifecycle settings are set to 5 versions and you want to keep 10 versions in S3, your best choice is to manually remove the source bundles from S3. The AWS Lambda function would potentially remove too many bundles, leaving you with fewer than 10.
11. You are on an Amazon Linux instance in Elastic Beanstalk. You want to upgrade the platform to Amazon Linux 2. What is the recommended deployment type to use for this type of upgrade?
  • A. Fresh install.
  • B. Manually upgrade all instances.
  • C. Blue/green deployment.
  • D. There is no upgrade path from Amazon Linux to Amazon Linux 2.
Answer - C
Explanation - AWS recommends using a blue/green deployment when upgrading from Amazon Linux to Amazon Linux 2 as there may be certain things that are not backward-compatible. Manually upgrading all of the instances is not recommended, as you may discover that the upgrade has caused your application to crash.
12. You have a stand-alone instance in Elastic Beanstalk serving out your web application. You want to ensure that it will only accept traffic over HTTPS. What should you do?
  • A. Modify the instance security group to only accept TCP/443.
  • B. Modify the instance security group to only accept TCP/80.
  • C. Modify the instance security group to only accept UDP/443.
  • D. Modify the instance security group to only accept UDP/80.
Answer - A
Explanation - The instance security group serves as a stateful firewall; by ensuring it only allows TCP/443, you have locked down the instance to HTTPS. HTTP uses TCP/80, which is what you want to avoid. HTTPS does not use UDP/443 or UDP/80.
13. You have a highly available setup with four instances in Elastic Beanstalk serving out your web application. You want to ensure that they will only accept traffic over HTTPS. What should you do?
  • A. Use the instance security group and set it to TCP/443.
  • B. Use the load balancer security group and set it to TCP/443.
  • C. Use the instance security group and set it to TCP/80.
  • D. Use the load balancer security group and set it to TCP/80.
Answer - B
Explanation - The best response is to use a load balancer security group to lock down traffic to only TCP/443. HTTP uses TCP/80, so you will want to use TCP/443 to support HTTPS.
14. Which of the following is a good example of block storage?
  • A. Amazon Simple Storage Service (S3)
  • B. Amazon Elastic File System (EFS)
  • C. Amazon Elastic Block Store (EBS)
  • D. None of these
Answer - C
Explanation - Amazon Elastic Block Store (EBS) is a block storage service. Changes made to an existing file will only change the blocks containing data that has changed.
15. Which of the following is a good example of object storage?
  • A. Amazon Simple Storage Service (S3)
  • B. Amazon Elastic File System (EFS)
  • C. Amazon Elastic Block Store (EBS)
  • D. None of these
Answer - A
Explanation - Amazon Simple Storage Service (S3) is an object storage service. Changes to a file require the upload of an entirely new file (object).
16. Which of the following is a good example of a managed file storage service?
  • A. Amazon Simple Storage Service (S3)
  • B. Amazon Elastic File System (EFS)
  • C. Amazon Elastic Block Store (EBS)
  • D. None of these
Answer - B
Explanation - Amazon Elastic File System (EFS) is a managed file storage service. It allows you to create filesystems that can be attached to Amazon EC2 instances.
17. What type of storage is destroyed when the instance it is attached to is stopped or terminated?
  • A. Amazon S3
  • B. Instance store
  • C. Amazon EBS
  • D. Amazon EFS
Answer - B
Explanation - The instance store used with Amazon EC2 instances is ephemeral. It is destroyed whenever the instance is stopped or terminated.
18. You have an application that requires a minimum of 25,000 IOPS. Which type of storage should you use to ensure that the application gets what it requires?
  • A. General Purpose SSD
  • B. Provisioned IOPS SSD
  • C. Throughput Optimized HDD
  • D. Cold HDD
Answer - B
Explanation - The only type of storage that can deliver the IOPS needed by the application is Provisioned IOPS SSD, which can support 64,000 max IOPS per volume.
19. You need to save the data from an application that is used infrequently, and you have been asked to identify the lowest-cost storage. Which storage type would be the best fit for this use case?
  • A. General Purpose SSD
  • B. Provisioned IOPS SSD
  • C. Throughput Optimized HDD
  • D. Cold HDD
Answer - D
Explanation - Cold HDD is the lowest-cost storage available in AWS and is a great choice for workloads that are not accessed frequently.
20. You have an application that doesn’t need high speed but is frequently accessed, and many of the workloads are throughput intensive. Which storage type would be the best fit for this use case?
  • A. General Purpose SSD
  • B. Provisioned IOPS SSD
  • C. Throughput Optimized HDD
  • D. Cold HDD
Answer - C
Explanation - For workloads with high throughput needs such as data warehouses and log processing systems, the Throughput Optimized HDD is a great fit.
21. You need to choose a storage type for your developers that will minimize cost but still give them the performance they need to test applications and code against. Which storage type would be the best fit for this use case?
  • A. General Purpose SSD
  • B. Provisioned IOPS SSD
  • C. Throughput Optimized HDD
  • D. Cold HDD
Answer - A
Explanation - The General Purpose SSD is a good fit for this use case. It’s a great balance between performance and cost and is good for dev and test instances.
22. Which of these is not an important consideration when choosing the right storage tier for S3?
  • A. Cost efficiency
  • B. Durability
  • C. Retrieval times
  • D. Granular level of control
Answer - B
Explanation - All tiers of storage in S3 offer 99.999999999% durability (11 9s). So durability would not be a factor that you would need to look at when deciding on which storage tier to use.
23. Which of these storage types is not tied to a specific region?
  • A. Amazon S3
  • B. Amazon EBS
  • C. AWS Snowball
  • D. Amazon Glacier
Answer - C
Explanation - All of the storage types listed are tied to a region except AWS Snowball. AWS Snowball is designed to transport large amounts of data (think petabytes) from your datacenter to AWS.
24. You have created three Amazon EC2 instances and you want to attach EBS volumes for additional data storage. You have created the three EBS volumes, but when you select Attach in the EBS Dashboard, only two of your Amazon EC2 instances are displayed. What is the most likely reason that the third EC2 instance is not displayed?
  • A. The third EC2 instance is in a public subnet, and the other two EC2 instances are in a private subnet.
  • B. The third EC2 instance is encrypted so the EBS volume has to be encrypted to be attached to it.
  • C. The third EC2 instance is an instance type that will not allow you to attach an EBS volume.
  • D. The Amazon EC2 instance that is not showing up is in a different availability zone than the EBS volume.
Answer - D
Explanation - EBS volumes can’t span availability zones. The volume can only be attached to an Amazon EC2 instance that is in the availability zone where the EBS volume resides. None of the other choices are correct.
25. What is the minimum size drive you would need to provision to be able to get to 7500 IOPS?
  • A. 150 GB
  • B. 200 GB
  • C. 350 GB
  • D. 400 GB
Answer - A
Explanation - The formula for this calculation is 50 IOPS per provisioned GB. You would need a 150 GB drive to support 7500 IOPS. A drive of 400 GB would allow you to provision the maximum amount of IOPS.
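The sizing rule in this explanation can be expressed as a quick calculation. A minimal sketch, assuming the 50 IOPS per provisioned GB ratio quoted above; the function name is illustrative:

```python
import math

# Sizing rule quoted in the explanation: up to 50 IOPS per provisioned GB.
IOPS_PER_GB = 50

def min_volume_gb(target_iops: int) -> int:
    """Smallest volume size (in GB) that can deliver the target IOPS."""
    return math.ceil(target_iops / IOPS_PER_GB)

print(min_volume_gb(7500))   # 150 GB, matching the answer above
print(min_volume_gb(25000))  # 500 GB
```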
26. Which command would you use to mount an EBS volume?
  • A. aws ec2 mount-volume --volume-id <volumeid> --instance-id <instanceid> --device /dev/<drivename>
  • B. aws ebs mount-volume --volume-id <volumeid> --instance-id <instanceid> --device /dev/<drivename>
  • C. aws ec2 attach-volume --volume-id <volumeid> --instance-id <instanceid> --device /dev/<drivename>
  • D. aws ebs attach-volume --volume-id <volumeid> --instance-id <instanceid> --device /dev/<drivename>
Answer - C
Explanation - To mount an EBS volume, you would use the command aws ec2 attach-volume --volume-id <volumeid> --instance-id <instanceid> --device /dev/<drivename>.
27. You have an instance with an EBS volume that you want to copy to an instance in another availability zone within the same region. What is the best way to make the data available to the other instance?
  • A. Copy the drive to the other availability zone.
  • B. Download the drive and then upload it into the other availability zone.
  • C. Create an AMI from the instance and then use the AMI to build an instance in the other availability zone.
  • D. Create a snapshot and restore the snapshot to a volume in the other availability zone.
Answer - D
Explanation - The best way to make the data available to the other instance is to create a snapshot of the EBS volume and then use the snapshot to create an EBS volume in the other availability zone and attach it to the other EC2 instance. Snapshots are stored regionally, so they work great for crossing availability zones. You can’t copy the volume to another availability zone, nor can you download an EBS volume from the console. While you could create an AMI and use that since AMIs are stored regionally, this would only work if you wanted to copy the whole instance. In this case, you only wanted to copy the EBS volume.
28. Which of these types of encryption can you use with your EBS volumes? (Choose two.)
  • A. Client level
  • B. Server level
  • C. Instance level
  • D. Volume level
Answer - A, D
Explanation - With EBS volumes, you have a choice between either client-level encryption, which is done by the operating system, or volume-level encryption, which is managed by AWS.
29. Your security team has mandated that you keep control of your encryption keys. Which type of encryption should you use for your EBS volumes?
  • A. Client level
  • B. Server level
  • C. Instance level
  • D. Volume level
Answer - A
Explanation - Client-level encryption is done by the operating system and requires you to manage your encryption keys. For organizations with strict security requirements, client-level encryption is the best fit.
30. You want to simplify the administration of your encryption keys while still ensuring strong encryption. What type of encryption should you use?
  • A. Client level
  • B. Server level
  • C. Instance level
  • D. Volume level
Answer - D
Explanation - Volume-level encryption is managed by AWS, which makes it the simplest from an administrative standpoint. Since each volume gets its own encryption key, and those keys use the AES-256 algorithm to perform encryption, you can be sure that you are using strong encryption.
31. You want to ensure that you are using strong encryption and that key use is audited. Which service would meet this need?
  • A. AWS IAM
  • B. AWS KMS
  • C. AWS CloudTrail
  • D. Amazon CloudWatch
Answer - B
Explanation - AWS KMS offers key management capability and should be used when you want to audit key use. AWS KMS works in concert with AWS IAM and AWS CloudTrail to provide greater visibility into key use and key protection.
32. You need a storage service that can be shared among multiple EC2 instances. You also want it to act like a filesystem as that is what your users are used to. Which storage service would be the best fit for this role?
  • A. Amazon EBS
  • B. Amazon EFS
  • C. Amazon S3
  • D. Instance store
Answer - B
Explanation - Amazon EFS acts like a filesystem and can be attached to multiple EC2 instances, so it is the best choice. Amazon EBS can only be attached to one EC2 instance at a time. Amazon S3 is an object storage solution and does not act like a filesystem. Instance stores are ephemeral storage available on some EC2 instance types; they can’t be moved to another instance or attached to multiple instances.
33. You need to create an S3 bucket. You know that the bucket name must be globally unique, but you are worried that the 50-character name you came up with is too long. What is the maximum allowed length of an S3 bucket name?
  • A. 24
  • B. 56
  • C. 63
  • D. 42
Answer - C
Explanation - Amazon S3 bucket names can be up to 63 characters long, so the 50-character name you want to use will be allowed so long as it is unique.
34. Which of the following is not a legal character to use in an Amazon S3 bucket name?
  • A. Underscores
  • B. Lowercase characters
  • C. Numbers
  • D. Dashes
Answer - A
Explanation - Amazon S3 bucket names can’t contain underscores. They can contain lowercase characters, numbers, periods, and dashes.
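The naming rules from the last two questions (3 to 63 characters; lowercase letters, numbers, periods, and dashes; no underscores) can be sketched as a simple check. This is a simplified validator based only on the rules stated above; the real service enforces additional edge cases (for example, names formatted like IP addresses are rejected):

```python
import re

# Simplified S3 bucket-name check: 3-63 characters; lowercase letters,
# numbers, periods, and dashes; must start and end with a letter or number.
_BUCKET_NAME = re.compile(r"[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]")

def is_valid_bucket_name(name: str) -> bool:
    return _BUCKET_NAME.fullmatch(name) is not None

print(is_valid_bucket_name("my-bucket-01"))  # True
print(is_valid_bucket_name("my_bucket"))     # False: underscores not allowed
print(is_valid_bucket_name("a" * 64))        # False: longer than 63 characters
```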
35. Which of the following are valid access methods for Amazon S3 buckets? (Choose two.)
  • A. Bucket-style
  • B. Virtual-hosted-style
  • C. URL-style
  • D. Path-style
Answer - B, D
Explanation - Amazon S3 buckets can be accessed with either virtual-hosted-style or path-style URLs. Virtual-hosted-style includes the bucket name as part of the domain name in the URL; with path-style, the bucket name appears in the URL path rather than in the domain name. The other two options were made up for this question.
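The difference between the two styles is easiest to see in the URLs themselves. A sketch with a hypothetical bucket and key, using the global-style endpoint for brevity (region-specific endpoints follow the same pattern):

```python
def virtual_hosted_url(bucket: str, key: str) -> str:
    # Virtual-hosted-style: bucket name is part of the domain name.
    return f"https://{bucket}.s3.amazonaws.com/{key}"

def path_style_url(bucket: str, key: str) -> str:
    # Path-style: bucket name is part of the URL path.
    return f"https://s3.amazonaws.com/{bucket}/{key}"

print(virtual_hosted_url("example-bucket", "img/logo.png"))
# https://example-bucket.s3.amazonaws.com/img/logo.png
print(path_style_url("example-bucket", "img/logo.png"))
# https://s3.amazonaws.com/example-bucket/img/logo.png
```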
36. You need to ensure that you choose the storage class in Amazon S3 with the highest amount of durability. Which storage class should you choose?
  • A. S3 Standard
  • B. S3 Standard-IA
  • C. S3 One Zone-IA
  • D. They all have the same durability.
Answer - D
Explanation - All tiers of Amazon S3 have the same durability. Amazon guarantees 99.999999999% (11 9s) durability for all Amazon S3 storage classes.
37. You need to ensure that you choose the storage class in Amazon S3 with the highest amount of availability. The data is crucial to your business and is accessed 24x7x365. Which storage class should you choose?
  • A. S3 Standard
  • B. S3 Standard-IA
  • C. S3 One Zone-IA
  • D. Amazon Glacier
Answer - A
Explanation - S3 Standard and Amazon Glacier both offer 99.99% availability; however, S3 Standard is the better choice since the data is accessed regularly.
38. You need to choose the right storage class for your data, which is being moved to Amazon S3. You want to save money, and the data isn’t accessed frequently, but it needs to be available immediately when needed. It also needs to be safe from the failure of the availability zone in which it resides. Which S3 storage class would you choose?
  • A. S3 Standard
  • B. S3 Standard-IA
  • C. S3 One Zone-IA
  • D. Amazon Glacier
Answer - B
Explanation - Choosing S3 Standard-IA makes the most sense here. It is less expensive than S3 Standard, is still highly available as your data is replicated across three or more availability zones, and is a great fit for data that is not accessed frequently.
39. You need to choose the right storage class for your data, which is being moved to Amazon S3. You want to save money, and the data isn’t accessed frequently (once per year). However, it needs to be available within five hours of a request. It also needs to be safe from the failure of the availability zone in which it resides. Which S3 storage class would you choose?
  • A. S3 Standard
  • B. S3 Standard-IA
  • C. S3 One Zone-IA
  • D. S3 Glacier
Answer - D
Explanation - Amazon S3 Glacier is an excellent fit for archiving data. It is less expensive than the other S3 tiers and will allow retrieval within the needed time span. Its data is replicated across three or more availability zones, so it is highly available.
40. You need to choose the right storage class for your data, which is being moved to Amazon S3. You want to save money, and the data isn’t accessed frequently, but it should be available immediately when needed. If the data is lost, it can be re-created fairly easily, so it does not need to be highly available. Which S3 storage class would you choose?
  • A. S3 Standard
  • B. S3 Standard-IA
  • C. S3 One Zone-IA
  • D. Amazon Glacier
Answer - C
Explanation - The best fit in this case would be S3 One Zone-IA. The data is stored in one availability zone only but can be accessed immediately and is the least expensive option that meets the requirements.
41. Which S3 storage class offers 99.99% availability?
  • A. S3 Standard
  • B. S3 Standard-IA
  • C. S3 One Zone-IA
  • D. None of them offer 99.99%.
Answer - A
Explanation - S3 Standard offers 99.99% availability; none of these other options do.
42. Which S3 storage class offers 99.9% availability?
  • A. S3 Standard
  • B. S3 Standard-IA
  • C. S3 One Zone-IA
  • D. None of them offer 99.9%.
Answer - B
Explanation - S3 Standard-IA offers 99.9% availability; none of these other options do.
43. Which S3 storage class offers 99.5% availability?
  • A. S3 Standard
  • B. S3 Standard-IA
  • C. S3 One Zone-IA
  • D. None of them offer 99.5%.
Answer - C
Explanation - S3 One Zone-IA offers 99.5% availability; none of these other options do.
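The availability figures from questions 41-43 can be collected into one lookup table, which is a handy way to memorize them:

```python
# Availability figures for the S3 storage classes covered in questions 41-43.
S3_AVAILABILITY = {
    "S3 Standard":    "99.99%",
    "S3 Standard-IA": "99.9%",
    "S3 One Zone-IA": "99.5%",
}

for storage_class, availability in S3_AVAILABILITY.items():
    print(f"{storage_class}: {availability}")
```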
44. Which of these options is the lowest cost and best for long-term archival, where recovery times measured in hours are acceptable?
  • A. S3 Standard
  • B. S3 Standard-IA
  • C. S3 Glacier
  • D. S3 Glacier Deep Archive
Answer - D
Explanation - S3 Glacier Deep Archive is the best fit. Its retrieval times run into the hours; however, it is the least expensive option for long-term archival.
45. You need to prevent accidental deletions in your S3 bucket from authorized users. What is the best method to protect your data from accidental deletions?
  • A. IAM group
  • B. IAM role
  • C. Enable MFA Delete
  • D. Enable versioning
Answer - C
Explanation - By enabling MFA Delete, you are able to protect against accidental deletions. Since the question asked about authorized users specifically, you don’t need to make changes in IAM. Versioning doesn’t protect against accidental deletion; it just provides a mechanism to recover a file if it were deleted.
46. You need to ensure that your data is retained for a year, but it is often not accessed much after 30 days and not at all after 90 days. You’ve decided that you would like to take advantage of a few of the storage classes that S3 offers to save money. Which of these should you do? (Choose two.)
  • A. After 30 days, move the data to S3 Standard-IA.
  • B. After 30 days, move the data to S3 Glacier.
  • C. After 90 days, move the data to S3 Standard-IA.
  • D. After 90 days, move the data to S3 Glacier.
Answer - A, D
Explanation - By moving data that is 30 days old to S3 Standard-IA, and then to S3 Glacier after 90 days, you are saving money while still meeting the requirements set.
47. You want to automatically move data through the various S3 storage classes. You want the data to be moved to S3 Standard-IA after 90 days and then to S3 Glacier after 180 days. You want the data to be retained for five years, at which point it is deleted. What is the best way to automate this process?
  • A. Scheduled job in AWS Lambda
  • B. Run a script each night in the AWS CLI.
  • C. Lifecycle policy in S3
  • D. Manually move the data as there is no automated process.
Answer - C
Explanation - This is a perfect example of when you would want lifecycle policies. A lifecycle policy in S3 can automatically move data between the various storage classes and then finally delete data as well.
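The policy described in this question can be written as an S3 lifecycle configuration document. A sketch of the JSON you would pass to `aws s3api put-bucket-lifecycle-configuration`; the rule ID is hypothetical:

```python
import json

# Lifecycle rule matching the question: Standard-IA at 90 days,
# Glacier at 180 days, delete after five years (1,825 days).
lifecycle = {
    "Rules": [
        {
            "ID": "archive-then-expire",   # hypothetical rule name
            "Status": "Enabled",
            "Filter": {"Prefix": ""},      # apply to every object in the bucket
            "Transitions": [
                {"Days": 90,  "StorageClass": "STANDARD_IA"},
                {"Days": 180, "StorageClass": "GLACIER"},
            ],
            "Expiration": {"Days": 1825},
        }
    ]
}

print(json.dumps(lifecycle, indent=2))
```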
48. You have a website being served out from S3. You would like to store copies of your site closer to your customers around the world. What is the best option?
  • A. Store copies of the website on EC2 instances and use Auto Scaling groups to meet demand.
  • B. Store copies of the website on EC2 instances running in the appropriate regions for your customers.
  • C. Cache copies of the website in Amazon CloudFront.
  • D. Store copies in S3 in the appropriate regions for your customers.
Answer - C
Explanation - Amazon CloudFront caches your content closer to your customers. You can have your website in S3 in one region, and CloudFront can serve it out worldwide.
49. You currently have snapshots of your EBS volumes going to S3. You need to access the snapshots. How would you access them?
  • A. Amazon S3 API
  • B. Amazon EC2 API
  • C. Amazon EBS API
  • D. The AWS Management Console
Answer - B
Explanation - While the snapshots are stored in Amazon S3, they are not directly accessible. You must use the Amazon EC2 API to work with the snapshots.
50. You are moving your on-premises datacenter to AWS. You need a solution that will allow your Linux EC2 instances to access a file share that acts similarly to the file servers you have now. You want to avoid creating file servers in AWS, and you want it to grow automatically as more data is stored. What is the best solution to replace your file servers in AWS?
  • A. Amazon S3
  • B. Amazon EBS
  • C. Amazon EFS
  • D. Amazon EC2 file server
Answer - C
Explanation - Amazon EFS offers many of the same features you would expect from your file servers and can grow dynamically as you accumulate more data. Amazon S3 doesn’t act like the file servers you currently have. Amazon EBS can only be used by one Amazon EC2 instance at a time. The question stated that you want to avoid creating file servers, so that removes option D.