Amazon Web Services (AWS) - Set #5

Powered by Techhyme.com

You have a total of 130 minutes to finish this AWS Certified SysOps Administrator practice test and check your knowledge.


1. Which product provides the fastest performance when you need to run a large report that includes complex queries?
  • A. Amazon EMR
  • B. Amazon Redshift
  • C. Amazon Athena
  • D. Amazon RDS
Answer - B
Explanation - Amazon Redshift provides the fastest performance when you need to run a large report that includes complex queries.
2. Which AWS product is best suited to replace an on-premises data lake using Hadoop?
  • A. Amazon EMR
  • B. Amazon Redshift
  • C. Amazon Athena
  • D. Amazon RDS
Answer - A
Explanation - Amazon Elastic MapReduce (EMR) is best suited to replace an on-premises data lake built on Hadoop. You can define the requirements of the workload you want to run to support whatever analysis you need to perform.
3. You need to be able to run ad hoc queries against data in Amazon S3. Which product is best suited for this task?
  • A. Amazon EMR
  • B. Amazon Redshift
  • C. Amazon Athena
  • D. Amazon RDS
Answer - C
Explanation - Amazon Athena is designed to run ad hoc queries against data contained in Amazon S3.
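As a rough boto3 sketch of such an ad hoc query (the database name, table, and results bucket below are hypothetical placeholders), Athena queries data in place in S3 and writes the results back to a bucket:

```python
import boto3

athena = boto3.client("athena")

# Hypothetical database, table, and results bucket for illustration.
response = athena.start_query_execution(
    QueryString="SELECT status, COUNT(*) AS hits FROM web_logs GROUP BY status",
    QueryExecutionContext={"Database": "example_logs_db"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
print("Query submitted:", response["QueryExecutionId"])
```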
4. You are using Amazon Kinesis Data Firehose to collect a large amount of data in real time and store it in Amazon S3. How can you analyze the data in a cost-effective manner?
  • A. Amazon RDS
  • B. Amazon CloudWatch
  • C. AWS Lambda
  • D. Amazon Athena
Answer - D
Explanation - Amazon Athena is the most cost-effective option to use to query a large dataset in Amazon S3. Amazon RDS is a relational database, Amazon CloudWatch is used to monitor logs, not analyze datasets, and AWS Lambda works off of triggers. Since you get charged each time AWS Lambda is run, it would not be a cost-effective option for this use case.
5. What is a benefit provided by Amazon Macie?
  • A. Performing security assessments in AWS
  • B. Visibility into the locations where you store data
  • C. Running ad hoc queries against Amazon S3
  • D. Management of storage encryption keys
Answer - B
Explanation - Amazon Macie can provide visibility into the management functions that are used to work with storage locations in AWS. At the time of this writing, that is limited to Amazon S3, but it will be expanded in the future. Security assessments are performed by Amazon Inspector, ad hoc queries against S3 are performed by Amazon Athena, and the management of encryption keys for storage is performed by AWS KMS.
6. What is a benefit provided by Amazon Macie?
  • A. Monitor API usage for storage access.
  • B. Manage storage versioning in S3.
  • C. Integration with Amazon CloudWatch Events
  • D. Manage the storage lifecycle in S3.
Answer - C
Explanation - Amazon Macie integrates with Amazon CloudWatch Events, which allows you to build custom alerts off of events identified by Amazon Macie and to perform automatic remediation if desired. API usage is monitored by AWS CloudTrail, and versioning is provided within S3 and can be managed from the S3 Dashboard. Lifecycle events in S3 are managed from within S3.
7. Your website has been suffering performance issues, and you have been able to determine that this is due to a spike in traffic to your servers. The servers are behind an ELB, and the CPU on both Amazon EC2 instances hovers around 95% during this time frame. Your boss has asked you to find a way to improve performance without impacting cost any more than is absolutely necessary. What should you do?
  • A. Create an EC2 Auto Scaling group and have Amazon CloudTrail trigger an autoscale event to scale up when the CPU reaches 80% and scale down when the CPU drops to 40%.
  • B. Create an EC2 Auto Scaling group and have Amazon CloudWatch trigger an autoscale event to scale up when the CPU reaches 80% and scale down when the CPU drops to 40%.
  • C. Create an EC2 Auto Scaling group and have Amazon CloudWatch trigger an autoscale event to scale up when the CPU reaches 95% and scale down when the CPU drops to 40%.
  • D. Create an EC2 Auto Scaling group and have Amazon CloudWatch trigger an autoscale event to scale up when the CPU reaches 80% and scale down when the CPU drops to 75%.
Answer - B
Explanation - By using Amazon CloudWatch to trigger an autoscale event, you can provision new servers before performance is impacted. You can create another Amazon CloudWatch trigger to scale down when CPU usage drops again. You would not use AWS CloudTrail to trigger an autoscaling event; it is used to log API calls. You know that performance is impacted when the CPU is at 95% utilization, so you would want to scale before it reaches that point. Your boss wants a cost-efficient solution; by having scale-up happen at 80% and scale-down happen at 75%, it is likely that you will have constant scaling events, which will become expensive and will not help performance.
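A minimal boto3 sketch of this pattern (the Auto Scaling group name and thresholds are illustrative) wires a simple scaling policy to a CloudWatch CPU alarm; a matching scale-down policy with a 40% alarm would be created the same way:

```python
import boto3

autoscaling = boto3.client("autoscaling")
cloudwatch = boto3.client("cloudwatch")

asg_name = "web-asg"  # hypothetical Auto Scaling group

# Simple scaling policy: add one instance when the alarm fires.
policy = autoscaling.put_scaling_policy(
    AutoScalingGroupName=asg_name,
    PolicyName="scale-up-on-cpu",
    PolicyType="SimpleScaling",
    AdjustmentType="ChangeInCapacity",
    ScalingAdjustment=1,
    Cooldown=300,
)

# CloudWatch alarm that triggers the policy at 80% average CPU.
cloudwatch.put_metric_alarm(
    AlarmName="web-asg-cpu-high",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "AutoScalingGroupName", "Value": asg_name}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=80.0,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    AlarmActions=[policy["PolicyARN"]],
)
```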
8. You are trying to set up EC2 Auto Scaling groups. What must you do before you can set up an Auto Scaling group?
  • A. Create an ELB.
  • B. Create an Amazon EC2 instance.
  • C. Create a launch configuration.
  • D. Set up monitoring in Amazon CloudWatch.
Answer - C
Explanation - When you want to set up an EC2 Auto Scaling group, you must first create a launch configuration to define how the Amazon EC2 instances being launched by autoscaling should be configured. While an ELB does aid in high availability, you are not required to make one to create an Auto Scaling group. Launch configurations can be created by copying an existing EC2 instance, but you only need an AMI to create the launch configuration. Setting up monitoring in Amazon CloudWatch is a great idea if you want to use Amazon CloudWatch to kick off autoscaling events but is not required to create an Auto Scaling group.
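To make the dependency concrete, here is a hedged boto3 sketch that creates the launch configuration first and then the Auto Scaling group that references it (the AMI, key pair, security group, and subnet IDs are hypothetical):

```python
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.create_launch_configuration(
    LaunchConfigurationName="web-lc-v1",
    ImageId="ami-0123456789abcdef0",          # hypothetical AMI
    InstanceType="t3.micro",
    KeyName="example-keypair",
    SecurityGroups=["sg-0123456789abcdef0"],
)

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-asg",
    LaunchConfigurationName="web-lc-v1",
    MinSize=2,
    MaxSize=6,
    DesiredCapacity=2,
    VPCZoneIdentifier="subnet-aaa111,subnet-bbb222",  # subnets in two AZs
)
```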
9. Your EC2 Auto Scaling group is being monitored by Amazon CloudWatch. When the CPU goes over 80%, Amazon CloudWatch enters an alarm state. Amazon CloudWatch sends a message to the Auto Scaling group containing instructions on what to do. This set of instructions is referred to as what?
  • A. Amazon Machine Image
  • B. Config file
  • C. Launch configuration
  • D. Policy
Answer - D
Explanation - The set of instructions sent by Amazon CloudWatch to the Auto Scaling group is referred to as a policy. The policy defines what the Auto Scaling group should do with the alarm it receives from Amazon CloudWatch. An Amazon Machine Image (AMI) can be used by the Auto Scaling group to create an Amazon EC2 instance when needed. The “config file” option was created for this question. The launch configuration is needed by the Auto Scaling group to define how it should launch new instances.
10. Your boss has heard that EC2 Auto Scaling groups can scale based on metrics monitored in Amazon CloudWatch. However, the traffic to your web servers follows very predictable patterns, so your boss would like to know if you can schedule a scaling event instead. What should your response be?
  • A. Yes, scaling events can be triggered on a schedule.
  • B. No, scaling events can’t be triggered on a schedule.
  • C. Yes, you can schedule scaling events through Amazon CloudWatch.
  • D. No, scaling events can only be triggered based on Amazon CloudWatch metrics.
Answer - A
Explanation - Scaling events can be triggered by schedule. The schedule is not created in Amazon CloudWatch, and you don’t need an alarm from Amazon CloudWatch to scale.
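For predictable traffic, scheduled actions are attached directly to the Auto Scaling group; a possible boto3 sketch (group name, times, and sizes are illustrative, and cron times are in UTC) looks like this:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Scale out weekday mornings and back in during the evening.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="web-asg",
    ScheduledActionName="business-hours-scale-out",
    Recurrence="0 8 * * MON-FRI",
    MinSize=4,
    MaxSize=10,
    DesiredCapacity=6,
)
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="web-asg",
    ScheduledActionName="evening-scale-in",
    Recurrence="0 20 * * MON-FRI",
    MinSize=2,
    MaxSize=10,
    DesiredCapacity=2,
)
```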
11. You want to be able to automatically scale your resources but you don’t want to have to specify scaling policies or schedule scaling. What should you do?
  • A. Use analytics scaling.
  • B. Use behavioral scaling.
  • C. Use predictive scaling.
  • D. There are no other options.
Answer - C
Explanation - Predictive scaling is a feature of AWS Auto Scaling that can look back at previous activity and use that to schedule the needed scaling changes based on both daily and weekly patterns. There is no such thing as analytics scaling or behavioral scaling in this context.
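One way to express predictive scaling against an Auto Scaling group is sketched below; the PredictiveScalingConfiguration shape is assumed from the current EC2 Auto Scaling API, and the group name and target value are illustrative:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Predictive scaling policy that forecasts load and targets ~40% average CPU.
# The configuration structure is an assumption based on the current SDK.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",
    PolicyName="predictive-cpu",
    PolicyType="PredictiveScaling",
    PredictiveScalingConfiguration={
        "MetricSpecifications": [
            {
                "TargetValue": 40.0,
                "PredefinedMetricPairSpecification": {
                    "PredefinedMetricType": "ASGCPUUtilization"
                },
            }
        ],
        "Mode": "ForecastAndScale",
    },
)
```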
12. You have just set up your EC2 instances and have configured an Auto Scaling group that uses predictive scaling. Scaling events are not occurring. What is the most likely reason why the scaling events are not occurring?
  • A. Predictive scaling needs at least two weeks of data.
  • B. Predictive scaling needs at least one month of data.
  • C. Predictive scaling needs at least one week of data.
  • D. Predictive scaling doesn’t support EC2 instances.
Answer - A
Explanation - Predictive scaling needs at least two weeks of data before it can generate a scaling schedule based on your normal activity. Predictive scaling does support EC2 instances; in fact, EC2 instances are the only supported type at the time of this writing.
13. Which of these cannot take advantage of Auto Scaling groups?
  • A. Amazon Elastic Container Service (ECS)
  • B. Amazon DynamoDB
  • C. Aurora replica in Amazon Aurora
  • D. Oracle database in Amazon RDS
Answer - D
Explanation - Amazon Relational Database Service (RDS) as a general rule is not one of the services that can take advantage of Auto Scaling groups. Amazon ECS, DynamoDB, and Aurora replicas within Amazon Aurora can take advantage of Auto Scaling groups using either Auto Scaling groups or the Auto Scaling API.
14. You have created your Auto Scaling group but you notice that you have no EC2 instances within the group. What is the most likely cause?
  • A. You didn’t set a desired capacity.
  • B. Minimum capacity is set to 0, and there is no load.
  • C. Maximum capacity is set to 1.
  • D. Autoscaling is not available in your region.
Answer - B
Explanation - If minimum capacity is set to 0 and there is no load, then it is entirely possible for you to have 0 instances. If you leave desired capacity blank, then the minimum capacity is used. If maximum capacity is set to 1, then your Auto Scaling group can have up to one EC2 instance. If autoscaling wasn’t available in your region, you wouldn’t have been able to set up your ASG in the first place.
15. Which of these options are valid states for an EC2 instance in an Auto Scaling group? (Choose two.)
  • A. Active
  • B. Online
  • C. Pending
  • D. Starting
  • E. InService
Answer - C, E
Explanation - The Pending state is used when an EC2 instance is starting up, and InService is used when an EC2 instance is ready to service requests. Active, online, and starting are not valid states.
16. Which of these is a valid reason for an Amazon EC2 instance in an Auto Scaling group to be terminated?
  • A. It is getting hit by too many requests and the CPU is nearing 90% utilization.
  • B. It is getting hit by too many requests and the memory is nearing 85% utilization.
  • C. It is exceeding the threshold set in AWS Budgets.
  • D. The instance has failed a defined number of health checks.
Answer - D
Explanation - If an Amazon EC2 instance fails health checks, the Auto Scaling group will terminate it and launch a new instance to take its place. If the EC2 instance is getting hit by too many requests, that is a prime opportunity for autoscaling to scale out, not to terminate. If the threshold set in AWS Budgets is met or exceeded, instances are not terminated. You will simply get an alert if you have it set to send one.
17. To create an Auto Scaling group, you must first create a non-versioned template that defines what the EC2 instances should be configured to do. That template is called what?
  • A. Launch configuration
  • B. Launch template
  • C. Autoscaling template
  • D. EC2 template
Answer - A
Explanation - The launch configuration defines the AMI ID to use, the instance type, the keypair for connecting to the instances, security groups, and storage that the EC2 instances will need. A launch template is similar to a launch configuration, but it is versioned. The other options were made up for this question.
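To contrast the two, a launch template is created once and then revised through numbered versions; a rough boto3 sketch with hypothetical names and IDs:

```python
import boto3

ec2 = boto3.client("ec2")

ec2.create_launch_template(
    LaunchTemplateName="web-template",
    LaunchTemplateData={
        "ImageId": "ami-0123456789abcdef0",   # hypothetical AMI
        "InstanceType": "t3.micro",
        "KeyName": "example-keypair",
        "SecurityGroupIds": ["sg-0123456789abcdef0"],
    },
)

# Version 2 only specifies what changed; version 1 remains available.
ec2.create_launch_template_version(
    LaunchTemplateName="web-template",
    SourceVersion="1",
    LaunchTemplateData={"InstanceType": "t3.small"},
)
```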
18. You have an EC2 instance that has been in your environment for six months. You would like to create an Auto Scaling group to put it in, and you want to ensure that the other instances are configured exactly the same. What should you do that will involve the least amount of administrative effort?
  • A. Attach the EC2 instance to the Auto Scaling group after manually creating the Auto Scaling group.
  • B. Attach the EC2 instance to a security group.
  • C. Manually create a launch configuration with the same settings as the EC2 instance.
  • D. Create a launch configuration using the EC2 instance as a template.
Answer - D
Explanation - Creating a launch configuration using the EC2 instance as a template is the simplest way to create a matching launch configuration. Manually creating a launch configuration would work but requires more administrative effort and the question asked for the least amount of administrative effort. You can attach the EC2 instance to the Auto Scaling group, but if the Auto Scaling group was created manually and the launch configuration was created manually, this is more administrative work than using the EC2 instance as a template. Attaching the EC2 instance to a security group is a best practice but is not the correct answer to this question.
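A hedged sketch of that shortcut in boto3: the existing instance's ID (hypothetical below) is passed directly to the launch configuration call, which copies its settings.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Build the launch configuration from an existing instance's configuration.
autoscaling.create_launch_configuration(
    LaunchConfigurationName="web-lc-from-instance",
    InstanceId="i-0123456789abcdef0",  # hypothetical instance ID
)
```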
19. You have an EC2 instance that you have used as a template for a launch configuration. After creating the launch configuration for the Auto Scaling groups and deploying them, you decide that you need another EBS volume added to the instances. You add the volume. What do you need to do next? (Choose two.)
  • A. You need to manually create a new launch configuration.
  • B. Change the Auto Scaling group to use the new launch configuration.
  • C. You don’t need to do anything…when you update the EC2 instance, the launch configuration is automatically updated.
  • D. Use the EC2 instance to create another launch configuration.
Answer - B, D
Explanation - First, you will use the EC2 instance to create another launch configuration, and then you will change the Auto Scaling group to use the new launch configuration. Making changes to the EC2 instance that was used as a template will not make the changes to the launch configuration.
20. You have assigned a new launch configuration to your Auto Scaling group. You need to refresh all of your instances, but you can’t have downtime. What is the best option?
  • A. Set the desired capacity to 0, then once they are all terminated, set it back to its previous setting.
  • B. Manually terminate the old instances so they are relaunched using the new configuration.
  • C. Choose each instance and assign the new launch configuration.
  • D. Let the instances age out over time.
Answer - B
Explanation - Since you need to avoid downtime, your best option is to manually terminate the old instances so they are relaunched using the new launch configuration. This allows you to control how many instances are offline and avoid downtime. If you set the desired capacity to zero, you may cause an outage, so this would not be a great solution if the most important factor is to avoid an outage. You can’t set the launch configuration per instance. Since you need to refresh your instances now, waiting for them to age out is not a good solution.
21. You’ve been using launch configurations, but as part of a DevOps model, you want to begin using versioning to track changes to your launch configurations. How can you enable versioning for launch configurations?
  • A. Create a launch template from your launch configurations.
  • B. Enable versioning on your launch configurations.
  • C. Manually name your launch configurations with a version number.
  • D. There is no way to set up versioning for launch configurations.
Answer - A
Explanation - Launch templates use versioning to track changes. You can’t enable versioning on launch configurations directly. Manually numbering launch configurations is not easily scalable and would be error prone.
22. You currently have a MySQL database running in RDS. You want to ensure that the database is highly available. How can you accomplish this?
  • A. Take frequent snapshots of your database.
  • B. Create a read replica.
  • C. Create multiple read replicas.
  • D. Set the database to be multi-AZ.
Answer - D
Explanation - To get true high availability, you will want to set the database to be multi-AZ. Taking frequent snapshots is great if your main goal is to have recent backups but will not make the database highly available. Read replicas are meant to boost performance. While they can be promoted to be the primary database, they are not designed for high availability because the replication is asynchronous.
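If the database already exists, Multi-AZ can be enabled with a single modification call; a rough boto3 sketch with a hypothetical instance identifier:

```python
import boto3

rds = boto3.client("rds")

rds.modify_db_instance(
    DBInstanceIdentifier="example-mysql",
    MultiAZ=True,
    ApplyImmediately=True,  # otherwise the change waits for the maintenance window
)
```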
23. You notice that your multi-AZ deployment is running out of a different availability zone than normal. Looking through the logs, you notice that a failover occurred two days prior. What could be the cause of the failover?
  • A. Your database became corrupt.
  • B. Your database became too big.
  • C. The primary availability zone became unavailable.
  • D. The database in the primary AZ ran out of memory.
Answer - C
Explanation - The most likely cause of the failover is that the primary availability zone became unavailable. If your database becomes corrupt, it will impact your customers but will not cause a failover to another AZ. RDS is a managed service, but managing disk space is still your responsibility; if your database grows too big and runs out of space, it will impact your customers but will not cause a failover to the other availability zone. Running out of memory would also impact customers but would not cause a failover.
24. You notice that your multi-AZ deployment is running out of a different availability zone than normal. Looking through the logs, you notice that a failover occurred two days prior. What could be the cause of the failover?
  • A. Network connectivity in the primary AZ was interrupted.
  • B. Network connectivity was slow but online.
  • C. The IP address of your RDS instance changed.
  • D. Your database became unresponsive due to resource constraints.
Answer - A
Explanation - Your database may have failed over because network connectivity to the primary AZ was interrupted. If network connectivity is slow but remained online, then a failover wouldn’t have occurred. You don’t really have control of the IP address of your RDS instance, and the IP changing would not cause it to fail over. Your database becoming unresponsive is bad…but would not initiate a failover event.
25. You notice that your multi-AZ deployment is running out of a different availability zone than normal. Looking through the logs, you notice that a failover occurred two days prior. What could be the cause of the failover?
  • A. Your database became corrupt.
  • B. Your RDS instance ran out of disk space.
  • C. Your RDS instance ran out of memory.
  • D. The host your RDS was running on suffered a hardware failure.
Answer - D
Explanation - If the host your RDS instance was running on suffered a hardware failure, it would fail over to the other AZ. The other reasons (the database becoming corrupt, or the RDS instance running out of space or memory) would be customer impacting but would not cause a failover to another AZ.
26. You notice that your multi-AZ deployment is running out of a different availability zone than normal. Looking through the logs, you notice that a failover occurred two days prior. What could be the cause of the failover?
  • A. Your RDS instance ran out of space.
  • B. Your RDS instance ran out of memory.
  • C. The storage your RDS instance was using suffered a failure.
  • D. You encrypted the storage for your RDS instance.
Answer - C
Explanation - If the storage your RDS instance was running on suffered a failure, then that would cause a failover to the other availability zone. Your RDS instance running out of space or memory would not initiate a failover, and encrypting your storage would certainly not cause a failover event.
27. What is a region?
  • A. AWS uses regions to create datacenters in countries.
  • B. AWS uses regions to create datacenters in whole continents.
  • C. A region is a geographic area, not necessarily bound to country boundaries.
  • D. A region is a geographic area bound by country boundaries.
Answer - C
Explanation - A region is a geographic area that is not bound by country boundaries.
28. What is an availability zone?
  • A. One or more isolated datacenters connected with low-latency links
  • B. A single isolated datacenter connected with a low-latency link
  • C. A datacenter that spans multiple regions
  • D. A datacenter that spans multiple countries
Answer - A
Explanation - An availability zone consists of one or more isolated datacenters connected with low-latency links.
29. You need to ensure that your application is highly available. It runs on two EC2 instances that are part of an Auto Scaling group. What is the simplest way to make your application highly available while still using the same AMI?
  • A. Ensure that your EC2 instances are using a Windows AMI.
  • B. Ensure that your EC2 instances are using a Linux AMI.
  • C. Ensure that the EC2 instances are in different regions.
  • D. Ensure that the EC2 instances are in different availability zones.
Answer - D
Explanation - Ensure that your EC2 instances are in separate availability zones to make your application highly available and still able to use the same AMI. If you place your EC2 instances into different regions, you will need to use different AMIs per region. It doesn’t matter whether the EC2 instances are running on Windows or Linux in this case.
30. You have been asked to make your application more highly available. What would be a good recommendation to make?
  • A. Use loose coupling whenever possible.
  • B. Add more memory to your EC2 instances.
  • C. Add more CPU to your EC2 instances.
  • D. Create more frequent backups.
Answer - A
Explanation - By using loose coupling, you can take advantage of managed services that are naturally highly available like SWF, SQS, SNS, ELB, and Route 53. This will enable you to make your application more highly available than relying on servers alone. Adding more memory or CPU might improve your application’s performance but not its availability. Creating more frequent backups will assist in lowering your recovery time; however, it will not make your application more highly available.
31. You need a highly available solution to manage the passing of messages from your application to another application. The messages must be delivered at least once…your application can tolerate receiving messages more than once. What should you use to manage the messages between these systems?
  • A. Simple Notification Service (SNS)
  • B. Simple Workflow Service (SWF)
  • C. Simple Queue Service (SQS)
  • D. Email
Answer - C
Explanation - Simple Queue Service (SQS) guarantees delivery of a message at least once. Simple Notification Service (SNS) will send messages but doesn’t guarantee at-least-once delivery. Simple Workflow Service (SWF) is made to run jobs that have multiple steps, not to queue or deliver messages. Email doesn’t guarantee at-least-once delivery, nor is it designed to facilitate communication between applications.
32. What is Amazon Simple Queue Service?
  • A. Highly available message queueing service that offers FIFO at-least-once delivery
  • B. Highly available message queueing service that offers LIFO at-least-once delivery
  • C. Highly available message queueing service that offers FILO at-least-once delivery
  • D. Highly available message queueing service that offers FIFO only-once delivery
Answer - A
Explanation - Amazon Simple Queue Service (SQS) is best described as a highly available message queueing service that offers FIFO at-least-once delivery.
33. You have a backend system that has been having issues lately. You need to ensure that messages sent to this system will be saved if it goes offline. What is the best option?
  • A. Simple Notification Service (SNS)
  • B. Simple Queue Service (SQS)
  • C. Simple Workflow Service (SWF)
  • D. Simple Storage Service (S3)
Answer - B
Explanation - Simple Queue Service (SQS) will queue up the messages that need to be sent until the destination system is back up and able to process messages again. SNS, SWF, and S3 are not queuing solutions.
34. Your application is producing more data than your backend systems can handle. Your boss doesn’t want to add more backend systems, so what is the best choice to ensure that data isn’t lost and backend systems aren’t overwhelmed?
  • A. Simple Workflow Service (SWF)
  • B. Simple Notification Service (SNS)
  • C. Simple Storage Service (S3)
  • D. Simple Queue Service (SQS)
Answer - D
Explanation - Your best solution in this case will be Simple Queue Service (SQS). As messages come in for the backend system, they are queued, and the backend systems can pull messages from the queue when they are ready to process them. SWF, SNS, and S3 will not provide a mechanism like this that will help the backend systems cope with the amount of traffic coming through.
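A minimal producer/consumer sketch of that buffering pattern in boto3 (the queue name and message body are hypothetical, and process() is a stand-in for the real backend work):

```python
import boto3

sqs = boto3.client("sqs")
queue_url = sqs.create_queue(QueueName="backend-work-queue")["QueueUrl"]

def process(body: str) -> None:
    print("processing", body)  # placeholder for the real backend work

# Producer side: the front end drops work items onto the queue.
sqs.send_message(QueueUrl=queue_url, MessageBody='{"order_id": 42}')

# Consumer side: backend systems pull messages at their own pace.
messages = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=10)
for msg in messages.get("Messages", []):
    process(msg["Body"])
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```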
35. You have decided to implement Simple Queue Service (SQS) to support your application. Your boss wants to ensure that messages sent to the queue can be kept for 7 days before being discarded. What should you tell your boss?
  • A. Messages can be kept for up to 7 days, so you can meet their requirements.
  • B. Messages can be kept for up to 14 days, so you can meet their requirements.
  • C. Messages can be kept for up to 30 days, so you can meet their requirements.
  • D. Messages can’t be retained for more than 1 day.
Answer - B
Explanation - Messages sent to an SQS queue can be kept for up to 14 days, so you can meet their requirements.
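Retention is a queue attribute expressed in seconds; the 14-day maximum works out to 1,209,600 seconds. A short boto3 sketch with a hypothetical queue name:

```python
import boto3

sqs = boto3.client("sqs")
queue_url = sqs.create_queue(QueueName="orders-queue")["QueueUrl"]

# 1209600 seconds = 14 days, the maximum retention SQS allows.
sqs.set_queue_attributes(
    QueueUrl=queue_url,
    Attributes={"MessageRetentionPeriod": "1209600"},
)
```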
36. You have decided to implement Simple Queue Service (SQS) to support your application. Your boss wants to ensure that messages sent to the queue can be kept for 30 days before being discarded. What should you tell your boss?
  • A. Messages can be kept for up to 30 days, so you can meet their requirements.
  • B. Messages can’t be retained for more than 1 day.
  • C. Messages can be kept for up to 7 days, so you can’t meet their requirements.
  • D. Messages can be kept for up to 14 days, so you can’t meet their requirements.
Answer - D
Explanation - Messages can be kept for up to 14 days, so you can’t meet their requirements.
37. You have chosen to implement an SQS queuing chain. You notice that often the backend servers are only busy during certain times of the day. Other times they are sitting idle. What should you do with the backend servers that will still allow for high availability but that will also allow you to terminate idle instances?
  • A. Configure the backend servers to use an Elastic Load Balancer.
  • B. Configure the backend servers to use an Auto Scaling group.
  • C. Configure the backend servers to use Route 53.
  • D. Configure the backend servers to use a placement group.
Answer - B
Explanation - The best option is to configure an Auto Scaling group for the backend servers. Have them scale out when the SQS queues start getting busy, then have them scale in when the SQS queues are idle. An elastic load balancer could be handy for communications to the backend systems, but it will not automate the termination and spinning up of instances. Neither Route 53 nor a placement group will help with the spinning up and terminating of instances either.
38. You have an SQS queue that you haven’t used for over 45 days. When you start your application, you find that it is running into failures, and when you check, you find that your SQS queue is no longer there. What could be the cause?
  • A. The SQS queue expired and disappeared.
  • B. It was a glitch on the AWS side and they will need to restore it.
  • C. AWS deleted it as it had been inactive for over 30 consecutive days.
  • D. You have the wrong region selected in the AWS Management Console.
Answer - C
Explanation - The most likely cause is that AWS deleted it as it had been inactive for over 30 consecutive days. They have the right to delete it without any notification. SQS queues don’t expire, although the messages within them can. A glitch like this is highly unlikely as SQS is a highly available service. If you had the wrong region selected, you likely wouldn’t have been able to start your application.
39. You notice that messages being fed into your SQS queue are not coming out in the same order they went in. What is the likely cause?
  • A. You are using a standard queue rather than a FIFO queue.
  • B. You are using a standard queue rather than a LIFO queue.
  • C. You are using a standard queue rather than a FILO queue.
  • D. You are using a standard queue rather than a LILO queue.
Answer - A
Explanation - Standard queues do try to preserve the order of the messages, but they don’t guarantee the order of messages. If the order of the messages is important for your application to function properly, then you should define the queue as a FIFO queue rather than a standard queue. LIFO, FILO, and LILO are not options for SQS queues.
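A FIFO queue is declared at creation time; a brief boto3 sketch (queue and group names hypothetical) showing the ".fifo" suffix and the message group that preserves ordering:

```python
import boto3

sqs = boto3.client("sqs")

# FIFO queue names must end in ".fifo".
queue_url = sqs.create_queue(
    QueueName="orders.fifo",
    Attributes={"FifoQueue": "true", "ContentBasedDeduplication": "true"},
)["QueueUrl"]

# Messages sharing a MessageGroupId are delivered in the order they were sent.
sqs.send_message(
    QueueUrl=queue_url,
    MessageBody='{"step": 1}',
    MessageGroupId="order-42",
)
```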
40. Which of these are valid polling methods used with Amazon Simple Queue Service? (Choose two.)
  • A. Small polling
  • B. Short polling
  • C. Tall polling
  • D. Fast polling
  • E. Long polling
Answer - B, E
Explanation - Short polling and long polling are the two methods used with Amazon SQS. Short polling samples the queue and returns a response immediately even if the queue is empty, whereas long polling doesn’t send a response until there is a message in the queue or the wait time expires. Small, tall, and fast polling don’t really exist.
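Long polling is enabled per call with WaitTimeSeconds (or per queue with the ReceiveMessageWaitTimeSeconds attribute); a small boto3 sketch with a hypothetical queue name:

```python
import boto3

sqs = boto3.client("sqs")
queue_url = sqs.create_queue(QueueName="work-queue")["QueueUrl"]

# WaitTimeSeconds > 0 turns this into a long poll: the call returns as soon as
# a message arrives, or after 20 seconds if the queue stays empty.
response = sqs.receive_message(
    QueueUrl=queue_url,
    MaxNumberOfMessages=10,
    WaitTimeSeconds=20,
)
```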
41. How does an SQS queue prevent multiple systems from processing the same message from the queue?
  • A. Visibility lockout
  • B. Processing lockout
  • C. Visibility timeout
  • D. Processing timeout
Answer - C
Explanation - The visibility timeout is used by SQS to prevent multiple systems from processing the same message. While one consumer processes a message, the timeout hides it from other consumers; the consumer deletes the message once processing succeeds, and if it is not deleted before the timeout expires, the message becomes visible again. The other options were made up for this question.
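A small boto3 illustration of the visibility timeout (the queue name and timeout value are hypothetical):

```python
import boto3

sqs = boto3.client("sqs")
queue_url = sqs.create_queue(
    QueueName="tasks-queue",
    Attributes={"VisibilityTimeout": "60"},  # hide received messages for 60 seconds
)["QueueUrl"]

messages = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1)
for msg in messages.get("Messages", []):
    # While the timeout runs, no other consumer sees this message. Deleting it
    # after successful processing removes it; otherwise it becomes visible again.
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```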
42. You need to configure SQS to be able to route messages to a queue when they have failed to successfully process after a certain number of attempts has been reached. What kind of a queue do you need to create?
  • A. Normal
  • B. Standard
  • C. FIFO
  • D. Dead letter queue (DLQ)
Answer - D
Explanation - A dead letter queue is created to deal with messages that have failed to process successfully after a threshold of attempts has been reached. There is no such thing as a normal queue. Standard queues attempt to process messages in the order they are received, and FIFO queues do process messages in the order they are received.
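The dead letter queue itself is just another queue; the source queue points at it through a redrive policy. A hedged boto3 sketch with hypothetical queue names and a maxReceiveCount of 5:

```python
import json
import boto3

sqs = boto3.client("sqs")

# Create the dead letter queue and look up its ARN.
dlq_url = sqs.create_queue(QueueName="orders-dlq")["QueueUrl"]
dlq_arn = sqs.get_queue_attributes(
    QueueUrl=dlq_url, AttributeNames=["QueueArn"]
)["Attributes"]["QueueArn"]

# The source queue routes messages to the DLQ after 5 failed receives.
sqs.create_queue(
    QueueName="orders-main",
    Attributes={
        "RedrivePolicy": json.dumps(
            {"deadLetterTargetArn": dlq_arn, "maxReceiveCount": "5"}
        )
    },
)
```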
43. You are attempting to share an SQS queue across two regions, but you are unable to do so. Why is that?
  • A. SQS queues can only be shared within the same region.
  • B. You don’t have permissions to share an SQS queue.
  • C. SQS isn’t available in the region you are trying to share the queue with.
  • D. You’ve reached the max limit of SQS queues in your account.
Answer - A
Explanation - You can only share an SQS queue within the same region. If you have access to SQS and can set up queues normally, then sharing a queue would not be an issue. It is highly unlikely that SQS is unavailable in one of the regions you are attempting to use. There is no limit on the number of SQS queues you can create in your account.
44. You need a highly available messaging service that can send messages when systems and/or services go down to a select group of cell phone numbers via SMS text. Which AWS service would meet this need?
  • A. Amazon Simple Queue Service (SQS)
  • B. Amazon Simple Storage Service (S3)
  • C. Amazon Simple Notification Service (SNS)
  • D. Amazon Simple Texting Service (STS)
Answer - C
Explanation - Amazon Simple Notification Service (SNS) can push messages to those who have subscribed to a topic. In this case, availability alerts from Amazon CloudWatch could be sent through an SNS topic. SQS manages messages for applications but does not send notifications to cell phone numbers via SMS text. S3 is object storage and does not send notifications. Amazon Simple Texting Service is not a real product.
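A short boto3 sketch of the fan-out (the topic name and phone number are hypothetical): each on-call number subscribes to the topic over the sms protocol, and a single publish reaches every subscriber.

```python
import boto3

sns = boto3.client("sns")

topic_arn = sns.create_topic(Name="ops-alerts")["TopicArn"]
sns.subscribe(TopicArn=topic_arn, Protocol="sms", Endpoint="+15555550100")

# Publishing once fans the message out to all subscribers on the topic.
sns.publish(TopicArn=topic_arn, Message="Service check failed on web-01")
```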
45. Which of these can be used to subscribe to an SNS topic?
  • A. Amazon Simple Storage Service (S3)
  • B. AWS Lambda
  • C. Amazon EC2
  • D. Amazon Simple Workflow Service (SWF)
Answer - B
Explanation - There are five different ways to subscribe to an SNS topic. They are AWS Lambda, Amazon Simple Queue Service (SQS), HTTP and HTTPS, email, and SMS text.
46. Which of these can be used to subscribe to an SNS topic?
  • A. Amazon Elastic Beanstalk
  • B. Amazon IAM
  • C. HTTP and HTTPS
  • D. Amazon CloudFormation
Answer - C
Explanation - There are five different ways to subscribe to an SNS topic. They are AWS Lambda, Amazon Simple Queue Service (SQS), HTTP and HTTPS, email, and SMS text.
47. Which of these can be used to subscribe to an SNS topic?
  • A. Simple Queue Service (SQS)
  • B. Amazon CloudWatch
  • C. AWS CloudTrail
  • D. Amazon Elastic Beanstalk
Answer - A
Explanation - There are five different ways to subscribe to an SNS topic. They are AWS Lambda, Amazon Simple Queue Service (SQS), HTTP and HTTPS, email, and SMS text.
48. Which of these can be used to subscribe to an SNS topic?
  • A. Amazon EC2
  • B. Amazon Simple Storage Service (S3)
  • C. Amazon CloudFormation
  • D. Email
Answer - D
Explanation - There are five different ways to subscribe to an SNS topic. They are AWS Lambda, Amazon Simple Queue Service (SQS), HTTP and HTTPS, email, and SMS text.
49. Which of these can be used to subscribe to an SNS topic?
  • A. Amazon CloudWatch
  • B. AWS CloudTrail
  • C. Amazon Route 53
  • D. SMS text
Answer - D
Explanation - There are five different ways to subscribe to an SNS topic. They are AWS Lambda, Amazon Simple Queue Service (SQS), HTTP and HTTPS, email, and SMS text.
50. You need to ensure that systems can reach out for updates but that they are not accessible from the Internet. What is the simplest way to allow this while still remaining highly available?
  • A. AWS Direct Connect
  • B. Security groups
  • C. NAT instance
  • D. NAT gateway
Answer - D
Explanation - A NAT gateway is a highly available service and when deployed in each availability zone can create a highly available architecture. AWS Direct Connect provides a connection to another network but doesn’t meet the goals stated in the question. Security groups would not work since they would allow Internet accessibility for downloads. A NAT instance is not inherently highly available as it is a specialized Amazon EC2 instance.
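A rough boto3 sketch of that setup (the subnet and route table IDs are hypothetical): the NAT gateway lives in a public subnet in each availability zone, and each private route table sends Internet-bound traffic to the gateway in its own AZ.

```python
import boto3

ec2 = boto3.client("ec2")

# Allocate an Elastic IP and create the NAT gateway in a public subnet.
eip = ec2.allocate_address(Domain="vpc")
nat = ec2.create_nat_gateway(
    SubnetId="subnet-0aaa1111bbb22223c",      # hypothetical public subnet
    AllocationId=eip["AllocationId"],
)

# Point the private subnet's default route at the NAT gateway.
ec2.create_route(
    RouteTableId="rtb-0ddd4444eee55556f",     # hypothetical private route table
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat["NatGateway"]["NatGatewayId"],
)
```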