Amazon Web Services (AWS) - Set #10

Powered by Techhyme.com

You have a total of 130 minutes to finish this AWS Certified SysOps Administrator practice test and check your knowledge.


1. You need to move the data from your on-premises file servers to Amazon EFS. What is the simplest method for copying data from your file servers to Amazon EFS?
  • A. Restore from a backup.
  • B. Robocopy.
  • C. AWS DataSync.
  • D. Manually upload the files.
Answer - C
Explanation - AWS DataSync is built for this use case. It allows you to sync your existing filesystems with your Amazon EFS filesystem and can work over the Internet or via an AWS Direct Connect/AWS VPN connection. The other options require more effort or are simply not feasible, and AWS will never have a third-party product like Robocopy as an answer on one of its exams.
2. Which of the following are storage classes you can choose from for Amazon EFS? (Choose two.)
  • A. Amazon EFS Glacier
  • B. Amazon EFS Infrequent Access One Zone
  • C. Amazon EFS Standard
  • D. Amazon EFS Infrequent Access
Answer - C, D
Explanation - For Amazon EFS, you can choose either Standard or Infrequent Access. The other two options don’t belong to Amazon EFS.
3. You are moving files to Amazon EFS from your on-prem file servers. You want to save the company money, and you know that some of the data is stale, but you don’t know if it’s safe to delete the data. What should you do?
  • A. Create an age-off policy to move stale data to EFS IA.
  • B. Create an expiration policy to move stale data to EFS IA.
  • C. Create an age-off policy to move stale data to Amazon S3 IA.
  • D. Create an expiration policy to move stale data to Amazon S3 IA.
Answer - A
Explanation - You can use lifecycle policies in Amazon EFS to move data from Amazon EFS Standard to Amazon EFS IA. In Amazon EFS, these are called age-off policies; expiration policies are used in Amazon S3.
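As a rough sketch of how such an age-off policy might be applied with boto3 (the filesystem ID is a placeholder), a 30-day policy could look like this:

```python
import boto3

efs = boto3.client("efs")

# Move files that have not been accessed for 30 days from EFS Standard to EFS IA.
# The filesystem ID below is a placeholder.
efs.put_lifecycle_configuration(
    FileSystemId="fs-0123456789abcdef0",
    LifecyclePolicies=[{"TransitionToIA": "AFTER_30_DAYS"}],
)
```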
4. You have some small text files that are around 100 KB in size. You have enabled Amazon EFS Lifecycle Management and have noticed that these files have not been moved to Amazon EFS IA even though they have not been accessed for a long time. What is the most likely reason these files have not been moved?
  • A. The files are smaller than 64 KB.
  • B. The files are smaller than 128 KB.
  • C. The files are smaller than 256 KB.
  • D. The files are smaller than 512 KB.
Answer - B
Explanation - Files smaller than 128 KB in size will not be moved by Amazon EFS Lifecycle Management.
5. How can you secure your Amazon EFS deployment so that only authorized Amazon EC2 instances can access the file share with the least amount of administrative effort? (Choose two.)
  • A. Network access control lists
  • B. Security groups
  • C. IAM policies
  • D. IAM groups
Answer - B, C
Explanation - VPC security groups can be used to specify which systems or IP ranges are allowed to access your file shares. IAM policies can be applied to the filesystem.
6. You would like to create a shared directory in Amazon EFS and ensure through the operating system that certain users will see the shared directory as their root directory. How can this be accomplished with Amazon EFS?
  • A. Amazon EFS Peering
  • B. AWS IAM
  • C. Amazon EFS Access Point
  • D. Amazon EFS Endpoint
Answer - C
Explanation - Amazon EFS Access Points allow you to present a particular shared directory to an operating system user or group as their root directory. You can further enforce this by attaching an IAM policy to the access point.
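A minimal sketch of creating such an access point with boto3 follows; the filesystem ID, POSIX IDs, and path are assumptions used for illustration only:

```python
import boto3
import uuid

efs = boto3.client("efs")

# Create an access point that presents /shared as the root directory and
# enforces a specific POSIX user/group. All IDs and the path are placeholders.
resp = efs.create_access_point(
    ClientToken=str(uuid.uuid4()),
    FileSystemId="fs-0123456789abcdef0",
    PosixUser={"Uid": 1001, "Gid": 1001},
    RootDirectory={
        "Path": "/shared",
        "CreationInfo": {"OwnerUid": 1001, "OwnerGid": 1001, "Permissions": "750"},
    },
)
print(resp["AccessPointArn"])
```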
7. Your security team has required that data be encrypted while in Amazon S3 and that you maintain control over the keys at all times. As a SysOps administrator, you don’t want to implement a client-side encryption library; you want something that will not have a high degree of administrative effort. What should you choose?
  • A. SSE-S3
  • B. SSE-C
  • C. SSE-KMS
  • D. Amazon S3 Encryption Client
Answer - B
Explanation - SSE-C allows you to maintain control over your keys while still allowing Amazon S3 to handle the actual encryption process. This simplifies administration as you don’t need to implement a client-side encryption library; you can instead leverage the tools provided by AWS.
8. Your security team has required that data be encrypted while in Amazon S3. As a SysOps administrator, you don’t want to implement a client-side encryption library; you want something that will not have a high degree of administrative effort. What should you choose?
  • A. SSE-S3
  • B. SSE-C
  • C. SSE-KMS
  • D. Amazon S3 Encryption Client
Answer - A
Explanation - Since there is no requirement to keep control over the keys, SSE-S3 is going to be the best fit in this case. The level of administrative effort is kept low as well, since AWS manages the keys for you.
9. Your security team has required that data be encrypted while in Amazon S3 and that you be able to audit key access. As a SysOps administrator, you don’t want to implement a client-side encryption library; you want something that will not have a high degree of administrative effort. What should you choose?
  • A. SSE-S3
  • B. SSE-C
  • C. SSE-KMS
  • D. Amazon S3 Encryption Client
Answer - C
Explanation - SSE-KMS uses the AWS KMS service to manage the encryption of your keys and provides an audit trail for who has accessed your key and which object or objects were accessed by the key. Whenever there is an encryption question and auditability is a requirement, you will want SSE-KMS.
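The three server-side options from questions 7–9 can be illustrated roughly with boto3 as follows; the bucket name, object keys, and KMS key ID are placeholders:

```python
import boto3
import os

s3 = boto3.client("s3")
BUCKET = "example-bucket"  # placeholder

# SSE-S3: Amazon S3 manages the encryption keys for you.
s3.put_object(Bucket=BUCKET, Key="doc-sse-s3.txt", Body=b"data",
              ServerSideEncryption="AES256")

# SSE-KMS: AWS KMS manages the key and records key usage for auditing.
s3.put_object(Bucket=BUCKET, Key="doc-sse-kms.txt", Body=b"data",
              ServerSideEncryption="aws:kms",
              SSEKMSKeyId="1234abcd-12ab-34cd-56ef-1234567890ab")  # placeholder key ID

# SSE-C: you supply (and retain control of) the 256-bit key on every request.
customer_key = os.urandom(32)
s3.put_object(Bucket=BUCKET, Key="doc-sse-c.txt", Body=b"data",
              SSECustomerAlgorithm="AES256",
              SSECustomerKey=customer_key)
```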
10. Your security team has required that data be encrypted while in Amazon S3 and that you maintain control of the keys. You want to reduce the overhead of encryption on the server side, so you would like to use a client-side encryption library. What should you choose?
  • A. SSE-S3
  • B. SSE-C
  • C. SSE-KMS
  • D. Amazon S3 Encryption Client
Answer - D
Explanation - The Amazon S3 Encryption client allows you to maintain control of your keys and take advantage of client-side encryption libraries.
11. Your security team wants to be able to discover sensitive data within your S3 buckets and to get alerts if there is suspected unauthorized access. Is there a tool within AWS that can provide this functionality?
  • A. Yes, there is Amazon Inspector.
  • B. Yes, there is Amazon GuardDuty.
  • C. Yes, there is Amazon Macie.
  • D. No, there is no native tool that provides this function.
Answer - C
Explanation - Amazon Macie can be used to discover sensitive data and will send alerts if there is reason to believe unauthorized access or data leakage has occurred.
12. Your security team would like to be able to audit who has access to S3 buckets and remediate excessive permissions. Is there a tool in AWS that will allow them to audit permissions?
  • A. No, there is no tool built into AWS that provides this function.
  • B. Yes, there is Amazon Macie.
  • C. Yes, there is Amazon Inspector.
  • D. Yes, there is Access Analyzer for S3.
Answer - D
Explanation - Access Analyzer for S3 is able to examine your bucket policies and remediate buckets that are overly permissive.
13. You want to use Amazon S3 to store data; however, you don’t want to set up lifecycle policies. Your management has requested that you save money on storage whenever possible. What is the best solution?
  • A. Use lifecycle policies.
  • B. Use S3 Intelligent-Tiering.
  • C. Use a Lambda function.
  • D. Move objects manually.
Answer - B
Explanation - By using S3 Intelligent-Tiering, you get the best of both worlds: you don’t have to set up lifecycle policies, and you save money on storage because stale data is automatically moved to less expensive storage. S3 Intelligent-Tiering has two storage tiers, one set up for frequent access and one set up for infrequent access, and it intelligently moves data between the tiers automatically.
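As a small illustration (the bucket and key are placeholders), an object can be written straight into the Intelligent-Tiering class with boto3:

```python
import boto3

s3 = boto3.client("s3")

# Store the object in S3 Intelligent-Tiering; S3 then moves it between the
# frequent and infrequent access tiers based on its access pattern.
s3.put_object(
    Bucket="example-bucket",        # placeholder
    Key="reports/2023-q1.csv",      # placeholder
    Body=b"...",
    StorageClass="INTELLIGENT_TIERING",
)
```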
14. You have decided to use S3 Intelligent-Tiering to store your data. You notice that some small text files that are around 50 KB have not been transitioned to the infrequent access tier even though they have not been accessed for over 90 days. Why is this happening?
  • A. The file is under 64 KB.
  • B. The file is under 128 KB.
  • C. The file is under 256 KB.
  • D. The file is under 512 KB.
Answer - B
Explanation - As with lifecycle policies, files under 128 KB remain in the frequent access tier rather than being moved.
15. You put some data into S3 Standard-IA but then decide to delete it 10 days later. When you receive the bill, you see that you were charged for 30 days. Why did this occur?
  • A. The minimum storage duration is 20 days.
  • B. The minimum storage duration is 15 days.
  • C. The minimum storage duration is 30 days.
  • D. It was a billing error on the AWS side.
Answer - C
Explanation - S3 Standard-IA is designed for long-term storage and has a minimum storage duration of 30 days. While your data was only stored for 10 days before being deleted, you were charged for the full 30 days as that is the minimum duration.
16. You have some data that has been archived in Amazon S3 Glacier. You need to retrieve it within 5 hours. Which retrieval speed should you use to keep the costs down?
  • A. Expedited
  • B. Standard
  • C. Bulk
  • D. You can’t retrieve data that quickly.
Answer - B
Explanation - Standard retrieval in Amazon S3 Glacier will usually be complete within 3–5 hours.
17. You have some data that has been archived in Amazon S3 Glacier. You need to retrieve it as soon as possible. Which retrieval speed should you use to keep the costs down?
  • A. Expedited
  • B. Standard
  • C. Bulk
  • D. You can’t retrieve data that quickly.
Answer - A
Explanation - Expedited retrieval in Amazon S3 Glacier will usually be complete within 1–5 minutes. While you have been asked to keep the costs down, if the data is truly needed as soon as possible, then this is the way to go.
18. You have some data that has been archived in Amazon S3 Glacier. You need to retrieve it within 12 hours. Which retrieval speed should you use to keep the costs down?
  • A. Expedited
  • B. Standard
  • C. Bulk
  • D. You can’t retrieve data that quickly.
Answer - C
Explanation - Bulk retrieval in Amazon S3 Glacier will usually be complete within 5–12 hours.
19. You have some data that has been archived in Amazon S3 Glacier. You were told that you needed to retrieve the data within 4 hours so you chose a standard retrieval speed. Your management has just informed you that they now need it as soon as possible. What should you do?
  • A. Cancel the current restore and make an expedited retrieval request instead.
  • B. Cancel the current restore and make a bulk retrieval request instead.
  • C. Keep the current restore going as you can’t change or cancel it once it is requested.
  • D. Use S3 Restore Speed Upgrade to request the expedited retrieval speed.
Answer - D
Explanation - If you need to get a restore from Amazon S3 Glacier faster than initially requested, you can use the S3 Restore Speed Upgrade. This will require you to choose a faster restore speed, and you will be charged for both retrieval requests.
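A minimal sketch of the upgrade with boto3, assuming placeholder bucket and key names, is simply a second restore request for the same object at the faster tier:

```python
import boto3

s3 = boto3.client("s3")

# The original restore used the Standard tier; issuing another restore request
# for the same archived object at the Expedited tier upgrades the restore speed.
# You are billed for both retrieval requests.
s3.restore_object(
    Bucket="example-archive-bucket",   # placeholder
    Key="backups/db-dump.tar.gz",      # placeholder
    RestoreRequest={
        "Days": 1,
        "GlacierJobParameters": {"Tier": "Expedited"},
    },
)
```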
20. You put some data into S3 Glacier but then decide to delete it 30 days later. When you receive the bill, you see that you were charged an additional 60 days for an early deletion fee. Why did this occur?
  • A. The minimum storage duration is 60 days.
  • B. The minimum storage duration is 90 days.
  • C. The minimum storage duration is 30 days.
  • D. It was a billing error on the AWS side.
Answer - B
Explanation - S3 Glacier is designed for long-term storage and has a minimum storage duration of 90 days. Your data was stored for 30 days, which you would have been charged for, and then you were charged a pro-rated 60 days for the early deletion fee.
21. You have some data that has been archived in Amazon S3 Glacier Deep Archive. You need to retrieve it within 12 hours. Which retrieval speed should you use to keep the costs down?
  • A. Expedited
  • B. Standard
  • C. Bulk
  • D. You can’t retrieve data that quickly.
Answer - B
Explanation - Standard retrieval in Amazon S3 Glacier Deep Archive will usually be complete within 12 hours.
22. You have some data that has been archived in Amazon S3 Glacier Deep Archive. You need to retrieve it within 48 hours. Which retrieval speed should you use to keep the costs down?
  • A. Expedited
  • B. Standard
  • C. Bulk
  • D. You can’t retrieve data that quickly.
Answer - C
Explanation - Bulk retrieval in Amazon S3 Glacier Deep Archive will usually be complete within 48 hours.
23. Which of these is not a recommended method for migrating data from magnetic tape to Amazon S3 Glacier Deep Archive?
  • A. AWS Tape Gateway
  • B. AWS Snowball
  • C. Transfer over the Internet
  • D. Transfer over AWS Direct Connect
Answer - C
Explanation - Transferring over the Internet is not a recommended method for migrating from tape backup to Amazon S3 Glacier Deep Archive. Using AWS Tape Gateway, using AWS Snowball, or transferring over AWS Direct Connect are all recommended solutions for data migration from tape.
24. You put some data into S3 Glacier Deep Archive but then decide to delete it 90 days later. When you receive the bill, you see that you were charged an additional 90 days. Why did this occur?
  • A. The minimum storage duration is 90 days.
  • B. The minimum storage duration is 120 days.
  • C. The minimum storage duration is 180 days.
  • D. It was a billing error on the AWS side.
Answer - C
Explanation - S3 Glacier Deep Archive is designed for long-term storage and has a minimum storage duration of 180 days. Your data was stored for 90 days, which you would have been charged for, and then you were charged a pro-rated 90 days for the remainder of the minimum storage duration.
25. You need to query S3 for data and would like to use SQL queries as that is what you are most familiar with. How would you do this in AWS against S3?
  • A. Amazon Athena
  • B. Amazon Macie
  • C. AWS Lambda
  • D. You can’t query using SQL against Amazon S3.
Answer - A
Explanation - Amazon Athena makes it possible to query data in Amazon S3 using standard SQL queries.
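A hedged sketch of running such a query through boto3 follows; the database, table, and results bucket names are assumptions:

```python
import boto3

athena = boto3.client("athena")

# Run a standard SQL query against data catalogued from S3.
# Database, table, and output bucket are placeholders.
resp = athena.start_query_execution(
    QueryString="SELECT status, COUNT(*) AS hits FROM access_logs GROUP BY status",
    QueryExecutionContext={"Database": "weblogs"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
print(resp["QueryExecutionId"])
```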
26. You have an S3 bucket that has sensitive information in it that should not change. You want to be notified anytime there is a potential change to the data. Which of these is not a method that will work for sending notifications when events like this occur?
  • A. Amazon SNS
  • B. Amazon SQS
  • C. AWS Lambda
  • D. Amazon CloudWatch
Answer - D
Explanation - Amazon S3 notifications can be tied into Amazon SNS, Amazon SQS, or AWS Lambda. Amazon CloudWatch does not have the same level of integration into S3 that the other three do.
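As a rough example (the bucket name and SNS topic ARN are placeholders, and the topic's policy must already allow S3 to publish to it), object-change notifications might be wired up like this:

```python
import boto3

s3 = boto3.client("s3")

# Notify an SNS topic whenever an object is created/overwritten or removed.
s3.put_bucket_notification_configuration(
    Bucket="example-sensitive-bucket",   # placeholder
    NotificationConfiguration={
        "TopicConfigurations": [
            {
                "TopicArn": "arn:aws:sns:us-east-1:123456789012:s3-change-alerts",
                "Events": ["s3:ObjectCreated:*", "s3:ObjectRemoved:*"],
            }
        ]
    },
)
```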
27. You need to ensure that offices around the globe can upload data to your Amazon S3 bucket with the least amount of latency and administrative overhead possible. What is the best way to achieve this goal?
  • A. Upgrade the connections at your offices.
  • B. Enable S3 Transfer Acceleration.
  • C. Use geolocation routing in Amazon Route 53.
  • D. Enable Multipart upload.
Answer - B
Explanation - The best answer here is to enable S3 Transfer Acceleration. When this is enabled on your S3 bucket, your offices connect to the Amazon CloudFront edge location nearest them, which routes the traffic to your Amazon S3 bucket.
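A minimal sketch with boto3, assuming a placeholder bucket name, shows both enabling acceleration and uploading through the accelerate endpoint:

```python
import boto3
from botocore.config import Config

s3 = boto3.client("s3")

# Enable Transfer Acceleration on the bucket (bucket name is a placeholder).
s3.put_bucket_accelerate_configuration(
    Bucket="example-global-uploads",
    AccelerateConfiguration={"Status": "Enabled"},
)

# Offices then upload through the accelerate endpoint
# (<bucket>.s3-accelerate.amazonaws.com) by enabling it in the client config.
s3_accel = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))
s3_accel.upload_file("big-file.bin", "example-global-uploads", "uploads/big-file.bin")
```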
28. You would like to do an analysis of the data that you have in Amazon S3 to see if the lifecycle policies you have in place are adequate or if you should modify them. What is the simplest way to perform this type of analysis?
  • A. Check AWS CloudTrail for the last access dates for each object.
  • B. Use Amazon Athena to perform a SQL query for the last access dates greater than 90 days.
  • C. Perform a Storage Class Analysis.
  • D. Manually review the last access date.
Answer - C
Explanation - A Storage Class Analysis is able to look at access patterns to determine if there is data that should be moved to a less frequent access tier of storage. It can look at whole buckets, or more specifically at prefixes and/or object tags.
29. You need to replace the tag sets on over 500 Amazon S3 objects. How can you accomplish this with the least amount of administrative effort?
  • A. Use S3 Batch Operations.
  • B. Use Amazon Athena.
  • C. Create a function in AWS Lambda.
  • D. Manually replace the tag sets.
Answer - A
Explanation - Using S3 Batch Operations will allow you to automate this process easily with very little administrative effort.
30. You have several buckets in Amazon S3 and you have a regulatory requirement to store the data in two locations that are at least 350 miles apart. How can you meet this requirement with the least amount of administrative effort?
  • A. Use an AWS Lambda function to copy the data over.
  • B. Use Amazon Athena to copy the data over.
  • C. Enable Amazon S3 Same-Region Replication.
  • D. Enable Amazon S3 Cross-Region Replication.
Answer - D
Explanation - When you enable Amazon S3 Cross-Region Replication, you can replicate the contents of your Amazon S3 bucket to another region. You can choose a region that guarantees at least 350 miles between each copy of the data; for instance, you could replicate between us-west-1 and us-east-1.
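A hedged sketch of enabling replication with boto3 follows; the bucket names and IAM role ARN are placeholders, and versioning must already be enabled on both buckets:

```python
import boto3

s3 = boto3.client("s3")

# Replicate every object from the source bucket to a bucket in another region.
s3.put_bucket_replication(
    Bucket="example-source-us-west-1",     # placeholder source bucket
    ReplicationConfiguration={
        "Role": "arn:aws:iam::123456789012:role/s3-crr-role",   # placeholder role
        "Rules": [
            {
                "ID": "replicate-everything",
                "Prefix": "",
                "Status": "Enabled",
                "Destination": {"Bucket": "arn:aws:s3:::example-dest-us-east-1"},
            }
        ],
    },
)
```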
31. You want to use Amazon CloudFront with your website and you are trying to decide where you want to host your static files. What is the best origin server to host static files?
  • A. Amazon S3 bucket
  • B. AWS Lambda
  • C. Amazon EC2 instance
  • D. You can’t use Amazon CloudFront with static files.
Answer - A
Explanation - For static files, Amazon S3 is the best choice for an origin server.
32. You want to use Amazon CloudFront with your website and you are trying to decide where you want to host your dynamic files. What is the best origin server to host dynamic files?
  • A. Amazon S3 bucket
  • B. AWS Lambda
  • C. Amazon EC2 instance
  • D. You can’t use Amazon CloudFront with dynamic files.
Answer - C
Explanation - For dynamic files, an Amazon EC2 instance running some kind of web server is the best choice for an origin server.
33. You want to ensure that people trying to access your website get access through Amazon CloudFront, but you want to ensure that they can type the web address that you have advertised. What is the best way to accommodate this need?
  • A. Set up an A record for the domain name.
  • B. Set up a PTR record for the IP address of CloudFront.
  • C. Set up a CNAME record for your domain name.
  • D. Set up an ALIAS record for your domain name.
Answer - C
Explanation - You should set up a CNAME record with your domain name and point it to the CloudFront distribution address.
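As an illustration with boto3 (the hosted zone ID, hostname, and distribution domain are placeholders), the record could be created like this:

```python
import boto3

route53 = boto3.client("route53")

# Point the advertised hostname at the CloudFront distribution's domain name.
route53.change_resource_record_sets(
    HostedZoneId="Z0123456789EXAMPLE",      # placeholder hosted zone
    ChangeBatch={
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "www.example.com",
                    "Type": "CNAME",
                    "TTL": 300,
                    "ResourceRecords": [
                        {"Value": "d111111abcdef8.cloudfront.net"}
                    ],
                },
            }
        ]
    },
)
```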
34. How does Amazon CloudFront improve performance for your end users in another country?
  • A. Caches content closer to end users in an S3 bucket in their region.
  • B. Caches content closer to end users at an Amazon CloudFront edge location.
  • C. Caches content into a central location that everyone accesses.
  • D. It does not improve performance; it just improves cost.
Answer - B
Explanation - Amazon CloudFront improves performance by caching the content of your site closer to your end users. It intelligently routes each user to the edge location nearest them.
35. Which of the following is not something that would benefit from being cached on Amazon CloudFront?
  • A. Image downloads
  • B. Dynamic PHP page
  • C. Video downloads
  • D. Software downloads
Answer - B
Explanation - Amazon CloudFront is at its best when it is being used to cache static content like images, videos, software downloads, etc. Dynamic content that changes often does not benefit as much from caching. In fact, it can cause a problem if the old version is cached and you need the new version to show up.
36. You have been asked to ensure that the origin server you are using for Amazon CloudFront is highly available. What is the best solution to meet this requirement?
  • A. Enable origin redundancy in Amazon CloudFront.
  • B. Create an AWS Lambda function that will point to a different origin server should the primary fail.
  • C. Manually point Amazon CloudFront to a new origin server.
  • D. Point Amazon CloudFront to an application load balancer.
Answer - A
Explanation - Origin redundancy in Amazon CloudFront allows you to add a “backup origin.” You can specify what should trigger failover to the backup origin by choosing a combination of HTTP 4xx and/or HTTP 5xx response codes.
37. You are using Amazon CloudFront, and you want to ensure that people from certain countries can’t access your website as you don’t do business in those countries. How can you block the desired countries while making sure that your valid customers can still access your site?
  • A. Whitelist the desired countries in Geolocation.
  • B. Blacklist the desired countries in Geolocation.
  • C. Block the IP ranges of the desired countries in the security group.
  • D. You can’t block countries from accessing your Amazon CloudFront distribution.
Answer - B
Explanation - In the Amazon CloudFront Console, you can choose to whitelist allowed countries or blacklist countries you want to block. In the scenario posed in the question, you would want to blacklist the desired countries in the Geolocation tab.
38. You want to ensure that your customers get a customized error page that gives them numbers they can call if they encounter an error on your web page. You had a custom page when the website was on-premises, and you would like to create a similar page now that you have moved your website to S3 and created a distribution in Amazon CloudFront. How could you present a customized error page in AWS?
  • A. Configure a customized error page in Amazon CloudWatch.
  • B. Configure a customized error page in Amazon S3.
  • C. Configure a customized error page in Amazon CloudFront.
  • D. You can’t use customized error pages in Amazon CloudFront.
Answer - C
Explanation - You can create customized error pages in Amazon CloudFront that include your logo and different messages (if desired) for different types of HTTP 400 or HTTP 500 errors.
39. You changed a file in Amazon S3 that you are using as an origin server. The new file wasn’t displayed until the next day. Why did it take so long for the changed file to show up on your website?
  • A. Amazon CloudFront checks for new versions every 6 hours.
  • B. Amazon CloudFront checks for new versions every 12 hours.
  • C. Amazon CloudFront checks for new versions every 18 hours.
  • D. Amazon CloudFront checks for new versions every 24 hours.
Answer - D
Explanation - Amazon CloudFront looks for new versions of files 24 hours after the last time the file was checked. You can set the expiration to 0 to make it check the file right away. However, it is important to change it back to 24 hours (or whatever your setting is) as the 0 will cause it to request a new version of the file from your origin server every time there is a request. This is a popular exam topic.
40. You are hosting your site in Amazon S3 and are using Amazon CloudFront to cache content. You need to remove a new advertisement from your site immediately as it was unintentionally offensive to some of your customers. How can you remove the new advertisement immediately without impacting the site?
  • A. Delete the file in Amazon S3.
  • B. Invalidate the file in Amazon CloudFront.
  • C. Change the file expiration to 1 hour.
  • D. Contact AWS to have it removed from Amazon CloudFront.
Answer - B
Explanation - Invalidating the file in Amazon CloudFront will ensure that the ad is removed from future requests for the page that the ad was on. This is the closest thing to an immediate removal you have in Amazon CloudFront. Deleting the file in Amazon S3 will not result in immediate removal; you will have to wait for the ad’s file to expire on Amazon CloudFront.
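A small sketch of the invalidation with boto3 (the distribution ID and object path are placeholders):

```python
import boto3
import time

cloudfront = boto3.client("cloudfront")

# Invalidate the cached advertisement so edge locations stop serving it.
cloudfront.create_invalidation(
    DistributionId="E1ABCDEFGHIJKL",        # placeholder distribution ID
    InvalidationBatch={
        "Paths": {"Quantity": 1, "Items": ["/ads/new-banner.png"]},
        "CallerReference": str(time.time()),   # any unique string
    },
)
```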
41. You are using Amazon CloudFront in front of your origin servers to cache content closer to your customers. While you are using HTTPS to encrypt web traffic from your customers to your end systems, your security team has requested that you secure the sensitive fields in your application that are asking for credit card information. They want to ensure that only a request from an authorized application can access the credit card number. What is the best way to accomplish what your security team has asked you to do?
  • A. Enable TLS 1.2, but disable the weaker TLS 1.0 and 1.1.
  • B. Encrypt the database with transparent data encryption (TDE).
  • C. Use field-level encryption.
  • D. HTTPS is the only option to secure the web traffic.
Answer - C
Explanation - Field-level encryption can be used to further secure sensitive fields like those asking for credit card numbers. As the name suggests, it must be enabled for individual fields. The input is encrypted with a public key and only authorized applications have access to the private key needed to decrypt the data.
42. You are using Amazon CloudFront to cache your website, and are currently using a third-party certificate for HTTPS connections. You would like to make certificate management more automated rather than requiring you to provision a new certificate or renew a certificate from your third-party certificate authority. What would be the best way to automate the process so that administrative overhead is reduced?
  • A. Create a script to renew certificates for you.
  • B. Use AWS Certificate Manager.
  • C. Use AWS KMS.
  • D. You have to use third-party certificates.
Answer - B
Explanation - AWS Certificate Manager integrates with CloudFront, so it is the best choice. AWS Certificate Manager can take care of certificate renewals for you as well.
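As a hedged example (the domain names are placeholders), a certificate for use with CloudFront could be requested like this; note that certificates attached to CloudFront distributions must be requested in us-east-1:

```python
import boto3

# Certificates used with CloudFront must live in us-east-1.
acm = boto3.client("acm", region_name="us-east-1")

# Request a DNS-validated certificate; ACM renews it automatically while it
# remains validated and in use. Domain names are placeholders.
resp = acm.request_certificate(
    DomainName="www.example.com",
    ValidationMethod="DNS",
    SubjectAlternativeNames=["example.com"],
)
print(resp["CertificateArn"])
```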
43. Your security team has requested that you choose controls to provide a greater deal of protection than you have currently through Amazon CloudFront. What protection do you have by default in your AWS account?
  • A. AWS Shield Standard
  • B. AWS Shield Advanced
  • C. AWS WAF
  • D. Amazon GuardDuty
Answer - A
Explanation - By default, you have access to AWS Shield Standard. You can pay to upgrade to AWS Shield Advanced if desired. AWS Shield provides protection from DDoS attacks.
44. Your security team has requested that you choose controls to provide a greater deal of protection than you have currently through Amazon CloudFront. What control can you use to protect against web application layer attacks?
  • A. AWS Shield Standard
  • B. AWS Shield Advanced
  • C. AWS WAF
  • D. Amazon GuardDuty
Answer - C
Explanation - AWS WAF is purpose-built to detect web application attacks. It can be put in place to protect your web applications that use Amazon CloudFront.
45. You want to ensure that requests sent to your origin servers are from Amazon CloudFront. How can you provide this assurance with the least amount of administrative effort and no additional cost?
  • A. There is no way to assure that requests for the origin servers came from Amazon CloudFront.
  • B. Check all requests to the origin servers for Amazon CloudFront’s IP address.
  • C. Use the request headers to prove traffic came from Amazon CloudFront.
  • D. Have Amazon CloudWatch monitor for source addresses that don’t belong to Amazon CloudFront.
Answer - C
Explanation - You can modify the request headers to prove that traffic bound for your origin servers came from Amazon CloudFront.
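One common pattern, sketched below under the assumption that the CloudFront distribution adds a secret custom origin header to every forwarded request (both the header name and value here are hypothetical), is to have the origin reject any request that does not carry that header:

```python
# Hypothetical header name and value, configured on the CloudFront origin so
# that only requests forwarded by CloudFront carry them.
EXPECTED_HEADER = "X-Origin-Verify"
EXPECTED_VALUE = "shared-secret-value"

def is_from_cloudfront(headers: dict) -> bool:
    """Return True only if the request carries the secret header CloudFront adds."""
    return headers.get(EXPECTED_HEADER) == EXPECTED_VALUE
```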
46. Your website contains dynamic content as it is customized for every customer that visits with sales recommendations. You want to use Amazon CloudFront for the performance gains your customers would notice, but you have to make sure that the cookies will still work. How can you make sure that the cookies still work after moving to Amazon CloudFront?
  • A. Allow the cookies through AWS WAF.
  • B. Use a DynamoDB database to track cookies instead of tracking them in Amazon CloudFront.
  • C. Allow your origin server to use cookies.
  • D. Allow Amazon CloudFront to forward cookies to your origin server.
Answer - D
Explanation - All you need to do here is allow CloudFront to forward cookies to your origin servers. So long as this is allowed, you can continue to use the cookies for dynamic content on the website.
47. You want to embed a URI query parameter into your Amazon CloudFront address. How do you indicate the start and stop of the URI query?
  • A. URI query starts after a & and ends with a ? character.
  • B. URI query starts after a ? and ends with a & character.
  • C. URI query starts after a ? and ends with a $ character.
  • D. URI query starts after a $ and ends with a ? character.
Answer - B
Explanation - When you use a URI query parameter in an HTTP GET request, the parameter starts after a “?” and ends with a “&” character.
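A tiny illustration (the distribution domain and parameters are placeholders) shows where the delimiters fall:

```python
from urllib.parse import urlencode

# The query string begins after "?" and individual parameters are joined by "&".
base = "https://d111111abcdef8.cloudfront.net/images/photo.jpg"   # placeholder
url = f"{base}?{urlencode({'color': 'red', 'size': 'large'})}"
print(url)  # .../photo.jpg?color=red&size=large
```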
48. You are trying to deliver a 30 GB virtual appliance file through Amazon CloudFront for a customer who needs the latest version. While the file uploads to Amazon S3 with no issues, it is not getting cached by Amazon CloudFront. What is the most likely cause?
  • A. The limit for a single file delivery in Amazon CloudFront is 10 GB.
  • B. The limit for a single file delivery in Amazon CloudFront is 15 GB.
  • C. The limit for a single file delivery in Amazon CloudFront is 20 GB.
  • D. The limit for a single file delivery in Amazon CloudFront is 25 GB.
Answer - C
Explanation - Amazon S3 easily accommodates objects of this size (objects can be up to 5 TB), but Amazon CloudFront has a limit of 20 GB for a single file. That would explain why you can upload the file to Amazon S3 with no issue and why it is not being delivered by Amazon CloudFront.
49. You want to make the move from your on-premises datacenter to AWS. You currently have 50 TB of data and are trying to figure out the best way to move the data to AWS. What should you recommend?
  • A. AWS DataSync
  • B. Manual transfer over AWS Direct Connect
  • C. Amazon S3 multipart upload
  • D. AWS Snowball
Answer - D
Explanation - The best answer here is going to be AWS Snowball. It is designed to transfer petabytes of data and is an expedient solution.
50. You are just starting to work with AWS Snowball. You have ordered the 80 TB unit and you need to start transferring data to it. What do you need to do first to prepare the source host for data transfer?
  • A. Install the file server role on the source host.
  • B. Install the AWS Snowball client.
  • C. Compress the directories that you want to move.
  • D. Deduplicate the files on the source host.
Answer - B
Explanation - To prepare your source host to transfer data to the AWS Snowball device, you will need to install the AWS Snowball client. This handles the encryption and compression of the data as well as the transfer to the AWS Snowball device.