Amazon Web Services (AWS) - Set #11

Powered by Techhyme.com

You have a total of 130 minutes to complete this AWS Certified SysOps Administrator practice test and check your knowledge.


1. You need to continually move large amounts of data from your on-premises datacenter to AWS. What is the best way to accommodate large ongoing file transfers?
  • A. AWS DataSync
  • B. Transfer over AWS Direct Connect
  • C. Amazon S3 multipart upload
  • D. AWS Snowball
Answer - B
Explanation - For large ongoing transfers, AWS recommends AWS Direct Connect, as it is a dedicated high-speed connection. AWS Snowball is meant only for an initial data transfer; in fact, you can keep the device for only 90 days.
2. You need to transfer data from us-west-1 to us-east-1. What is the best way to facilitate the data transfer?
  • A. AWS Snowball
  • B. S3 Cross-Region Replication
  • C. S3 multipart uploads
  • D. Transfer the data through the AWS Management Console.
Answer - B
Explanation - Of the possible answers, the only one that will work is S3 Cross-Region Replication. AWS Snowball doesn't transfer data between regions, and the other options don't move data from one region to another.
3. You have decided to use AWS Snowball to do the initial transfer of data to AWS from your on-premises datacenter. Your security team wants assurances that the data on the AWS Snowball device is secure. What should you tell them?
  • A. AWS Snowball data is not encrypted but is password protected.
  • B. AWS Snowball data is encrypted with a key stored on the AWS Snowball device.
  • C. AWS Snowball data is encrypted with a key stored in AWS KMS.
  • D. AWS Snowball data is encrypted with a key stored in AWS Certificate Manager.
Answer - C
Explanation - The best answer to give them is that the data on the AWS Snowball device is encrypted with a 256-bit AES key and that the key is not stored on the AWS Snowball device; it is managed by AWS KMS.
4. How does AWS Snowball guarantee that your AWS Snowball device has not been tampered with before its arrival at an AWS datacenter? (Choose two.)
  • A. Tamper-resistant enclosure
  • B. Encryption
  • C. TPM chip
  • D. Inspection stickers
Answer - A, C
Explanation - The AWS Snowball device uses a combination of a tamper-resistant enclosure and a TPM chip to ensure that the hardware, software, and firmware have not been tampered with in any way. AWS also inspects the device when it arrives at its datacenter to ensure that nothing has been tampered with. Encryption protects the data but won't prevent tampering. Inspection stickers can show that someone may have opened a device but will not prevent tampering.
5. You are the system administrator for a small hospital and are trying to determine the best way to move your data into AWS. Is AWS Snowball a viable option?
  • A. Yes, it’s HIPAA compliant, so you just need a BAA with AWS.
  • B. Yes, it’s HIPAA compliant, so you don’t need a BAA with AWS.
  • C. No, AWS Snowball is not GLBA compliant.
  • D. No, AWS Snowball is not HIPAA compliant.
Answer - A
Explanation - You should have some knowledge of the basic regulatory situations. In this case, hospitals handle protected health information (PHI) and must comply with the Health Insurance Portability and Accountability Act (HIPAA). Since AWS Snowball is HIPAA-eligible, you would just need a Business Associate Agreement (BAA) with AWS. The Gramm-Leach-Bliley Act (GLBA) is a privacy law that applies to financial institutions and does not deal with medical information.
6. Which of these is not needed for AWS Snowball setup?
  • A. AWS Snowball client unlock code
  • B. Job manifest file
  • C. AWS Snowball client
  • D. Job manifest unlock code
Answer - A
Explanation - To set up AWS Snowball, you need the AWS Snowball client as well as the job manifest file and the job manifest unlock code. There is no unlock code for the AWS Snowball client.
7. You need to remove a large amount of data from Amazon S3 and bring it back to your on-premises datacenter. The data is approximately 75 TB. What is the best method to transfer the data back to your on-premises datacenter?
  • A. Scripted download
  • B. AWS Direct Connect
  • C. AWS Snowball
  • D. Manual download
Answer - C
Explanation - AWS Snowball can be used to export large amounts of data from an AWS datacenter, just as it can be used for import.
8. You have a large amount of data in Amazon S3 and Amazon S3 Glacier that you need to move back to your on-premises datacenter. You have decided that you are going to use AWS Snowball to do the export. How will you export the data in Amazon S3 Glacier?
  • A. Initiate the request for Amazon S3; Amazon S3 Glacier will be included.
  • B. Restore the data from Amazon S3 Glacier and then create the export request.
  • C. Initiate the request for Amazon S3 Glacier; it must be done separately from Amazon S3.
  • D. You can’t export data once it is in Amazon S3 Glacier.
Answer - B
Explanation - While you can export Amazon S3 data to the AWS Snowball device, you must first restore data from Amazon S3 Glacier to Amazon S3 before you can export it to the AWS Snowball.
9. Which of the following is not a lifecycle policy available for Amazon EFS?
  • A. AFTER_7_DAYS
  • B. AFTER_14_DAYS
  • C. AFTER_30_DAYS
  • D. AFTER_45_DAYS
Answer - D
Explanation - The valid lifecycle policies in Amazon EFS are AFTER_7_DAYS, AFTER_14_DAYS, AFTER_30_DAYS, AFTER_60_DAYS, and AFTER_90_DAYS.
10. Which command is used to enable lifecycle management for Amazon EFS via the AWS CLI?
  • A. aws ec2 put-lifecycle-configuration
  • B. aws efs enable-lifecycle-configuration
  • C. aws efs put-lifecycle-configuration
  • D. aws efs create-lifecycle-configuration
Answer - C
Explanation - To enable lifecycle management for Amazon EFS via the AWS CLI, you would use the command aws efs put-lifecycle-configuration.
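As a reference, a minimal CLI sketch of enabling lifecycle management (the filesystem ID is a placeholder, and the shorthand value assumes the AFTER_30_DAYS policy):

    # Transition files not accessed for 30 days to the Infrequent Access storage class
    aws efs put-lifecycle-configuration \
        --file-system-id fs-12345678 \
        --lifecycle-policies TransitionToIA=AFTER_30_DAYS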
11. You need to protect the data that is stored in your Amazon EFS implementation. Which of the following are methods that will allow you to safeguard the Amazon EFS data? (Choose two.)
  • A. Enabling lifecycle management
  • B. AWS Backup Service
  • C. EFS-to-EFS backup solution
  • D. EFS-to-S3 backup solution
Answer - B, C
Explanation - To safeguard your data in Amazon EFS, you can use the AWS Backup Service or the EFS-to-EFS backup solution. Enabling lifecycle management doesn’t safeguard data; it simply helps reduce cost. EFS-to-S3 backup doesn’t exist.
12. How do you protect your data in Amazon EFS when it is at rest?
  • A. Use AWS KMS.
  • B. Use Certificate Manager.
  • C. Password protect your data.
  • D. You don’t need to do anything; your data is automatically encrypted.
Answer - A
Explanation - Data at rest in Amazon EFS is protected by AWS KMS if you create the filesystem to use encryption.
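As an illustration, encryption at rest has to be chosen when the filesystem is created; a hedged sketch (the creation token and key ARN are placeholders):

    # Create an EFS filesystem encrypted with a customer-managed KMS key
    aws efs create-file-system \
        --creation-token my-encrypted-fs \
        --encrypted \
        --kms-key-id arn:aws:kms:us-east-1:123456789012:key/1234abcd-12ab-34cd-56ef-1234567890ab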
13. How do you protect your data in Amazon EFS when it is in transit?
  • A. Use AWS KMS.
  • B. Use Certificate Manager.
  • C. Password protect your data.
  • D. You don’t need to do anything; your data is automatically encrypted.
Answer - D
Explanation - Data in transit from and to Amazon EFS is automatically encrypted, and the keys are managed by Amazon EFS.
14. You have enabled encryption on a new Amazon EFS filesystem. Your users are complaining that they can’t access anything on Amazon EFS. What is a likely cause?
  • A. Their computers don’t support encryption.
  • B. Encryption wasn’t enabled properly on Amazon EFS.
  • C. Your users don’t understand how to decrypt data.
  • D. The CMK is not in an enabled state.
Answer - D
Explanation - The customer master key (CMK) must be in an enabled state or the users will not have access to the contents of the filesystem. The process is seamless to users; they don’t need to know how to decrypt data.
15. You have chosen to delete the CMK you were using for your Amazon EFS deployment. How can you immediately delete the CMK?
  • A. You will need root level permissions.
  • B. You will need full administrator permissions.
  • C. You will need to have permissions in AWS KMS.
  • D. You can’t immediately delete it.
Answer - D
Explanation - The deletion of a CMK is irreversible, so you can't do it immediately; you have to schedule the deletion, with a waiting period anywhere from 7 to 30 days. If you need to block use of the key right away, you can disable the key (or revoke its grants) in the meantime.
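A hedged CLI sketch of scheduling the deletion (the key ID is a placeholder):

    # Schedule deletion with the minimum 7-day waiting period
    aws kms schedule-key-deletion \
        --key-id 1234abcd-12ab-34cd-56ef-1234567890ab \
        --pending-window-in-days 7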
16. Which of the following is not a logging solution for performance for Amazon EFS?
  • A. AWS CloudTrail
  • B. Amazon CloudWatch
  • C. Amazon CloudWatch Logs
  • D. Amazon CloudWatch Events
Answer - A
Explanation - Remember that AWS CloudTrail is used to audit API activity, not performance activity. Amazon CloudWatch can be used to monitor performance activity.
17. You are using Amazon EFS, and you want to be able to identify which departments have the most data as you want to be able to do chargebacks. How would you accomplish this in Amazon EFS?
  • A. Name the folders with the department names.
  • B. Use tags to identify departments.
  • C. Use folder metadata to identify departments.
  • D. Use labels to identify departments.
Answer - B
Explanation - You can use tags with Amazon EFS. In this case, a tag named Department could be used to identify the owner of various folders and files.
18. You want to tag a folder in Amazon EFS for data sensitivity. How can you set the tag in the AWS CLI?
  • A. aws efs create-tags
  • B. aws ebs create-tags
  • C. aws ec2 create-tags
  • D. aws efs create-labels
Answer - A
Explanation - You would use the command aws efs create-tags to create tags in Amazon EFS.
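For example, a hedged sketch of tagging a filesystem for data sensitivity (the filesystem ID and tag values are illustrative):

    aws efs create-tags \
        --file-system-id fs-12345678 \
        --tags Key=Sensitivity,Value=Confidential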
19. You are curious which tags have been created in Amazon EFS. What is the simplest method to determine the tags that currently exist?
  • A. Using the AWS CLI, type aws ebs describe-tags.
  • B. Using the AWS CLI, type aws efs retrieve-tags.
  • C. Using the AWS CLI, type aws efs describe-tags.
  • D. Using the AWS CLI, type aws efs list-tags.
Answer - C
Explanation - The simplest way to retrieve the tags that have already been created in Amazon EFS is to use the AWS CLI and type aws efs describe-tags.
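A minimal sketch (the filesystem ID is a placeholder):

    # Returns the tags currently applied to the filesystem
    aws efs describe-tags --file-system-id fs-12345678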
20. You want to limit which hosts can access the Amazon EFS filesystem. What is the best way to do this that uses the least amount of administrative overhead?
  • A. Use a mount target NACL.
  • B. Use a mount target security group.
  • C. Block the unwanted hosts from access with IAM permissions.
  • D. You can’t limit which hosts can access the Amazon EFS filesystem.
Answer - B
Explanation - If you want to restrict which hosts can access a filesystem, the simplest way to do this is to create a mount target security group.
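A hedged sketch of attaching a restrictive security group when creating the mount target (all IDs are placeholders):

    aws efs create-mount-target \
        --file-system-id fs-12345678 \
        --subnet-id subnet-0abc12345def67890 \
        --security-groups sg-0123456789abcdef0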
21. How can you reduce costs for using Amazon EFS across multiple availability zones?
  • A. Create mount points in half of the availability zones.
  • B. Create mount points in two availability zones.
  • C. Create mount points in each availability zone.
  • D. Use one mount point in one availability zone.
Answer - C
Explanation - When you create mount points for Amazon EFS, it is recommended to create them in each availability zone as this will reduce the amount of cross-availability zone access, which incurs additional cost.
22. You want to review the list of mount targets to see if you should create additional mount targets for cost savings. How would you review the mount targets that exist currently using the AWS CLI?
  • A. aws efs list-mount-targets
  • B. aws efs retrieve-mount-targets
  • C. aws efs pull-mount-targets
  • D. aws efs describe-mount-targets
Answer - D
Explanation - You would use aws efs describe-mount-targets to retrieve the list of current mount targets.
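A minimal sketch (the filesystem ID is a placeholder):

    # Lists each mount target along with its subnet and IP address
    aws efs describe-mount-targets --file-system-id fs-12345678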
23. Your security team has requested that you rotate your customer master key at least once every 365 days. How do you enable key rotation using the AWS CLI?
  • A. aws kms enable-key-rotation
  • B. aws kms use-key-rotation
  • C. aws kms automatic-key-rotation
  • D. aws kms enable-key-management
Answer - A
Explanation - Using the AWS CLI, you can use the command aws kms enable-key-rotation to enable key rotation in AWS KMS.
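A minimal sketch (the key ID is a placeholder):

    # Enable automatic annual rotation for the CMK
    aws kms enable-key-rotation --key-id 1234abcd-12ab-34cd-56ef-1234567890ab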
24. You want to enable the automatic rotation of your CMK. Your security team requires that it be rotated at least once every 365 days. Will the automatic key rotation feature in AWS KMS meet the security team’s requirement?
  • A. No, key rotation automatically happens every 720 days.
  • B. Yes, key rotation automatically happens every 90 days.
  • C. Yes, key rotation automatically happens every 180 days.
  • D. Yes, key rotation automatically happens every 365 days.
Answer - D
Explanation - From the moment the automatic rotation of keys is enabled, the key will be rotated every 365 days.
25. You need to prove to an auditor that your CMK is automatically rotated. Which command in the AWS CLI could be used to prove to them that key rotation is enabled?
  • A. aws kms retrieve-key-rotation-status
  • B. aws kms list-rotation-status
  • C. aws kms get-key-rotation-status
  • D. aws kms describe-key-rotation-status
Answer - C
Explanation - To prove to an auditor that key rotation is enabled, you can use the command aws kms get-key-rotation-status followed by the key ID.
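A minimal sketch (the key ID is a placeholder):

    # Returns {"KeyRotationEnabled": true} when automatic rotation is enabled
    aws kms get-key-rotation-status --key-id 1234abcd-12ab-34cd-56ef-1234567890ab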
26. You believe that a CMK is no longer in use. You are cleaning up inactive resources in your AWS environment to reduce costs. What should you do with the CMK that you suspect is not being used?
  • A. Delete the CMK.
  • B. Disable the CMK.
  • C. Revoke the CMK.
  • D. Leave it there as it is not incurring additional cost.
Answer - B
Explanation - You should disable the CMK for a period of time to see if it really is not being used. If you delete it, there is no way to recover it, and you will have potentially lost access to your data.
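A hedged sketch of the disable step (the key ID is a placeholder):

    # Disable the key while you confirm nothing depends on it
    aws kms disable-key --key-id 1234abcd-12ab-34cd-56ef-1234567890ab
    # If something breaks, the key can be re-enabled
    aws kms enable-key --key-id 1234abcd-12ab-34cd-56ef-1234567890ab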
27. In an identity-based policy statement, which of the following values are allowed for the Effect element? (Choose two.)
  • A. *
  • B. Permit
  • C. Allow
  • D. Deny
  • E. Notify
Answer - C, D
Explanation - Deny and Allow are the only two options for the Effect element. A wildcard (*), Permit, and Notify aren’t valid options.
28. Which of the following AWS services allows using Microsoft Active Directory credentials to authenticate to AWS?
  • A. Cognito
  • B. AWS Single-Sign On (SSO)
  • C. SAML
  • D. AWS Organizations
Answer - B
Explanation - AWS SSO is a managed service that lets you grant users access to AWS accounts using their Active Directory credentials. Cognito doesn't support this. SAML is a markup language, not an AWS service. AWS Organizations allows you to centrally manage multiple AWS accounts.
29. Which of the following IAM policies can be applied to only one IAM principal?
  • A. Inline policy
  • B. Customer managed policy
  • C. AWS managed policy
  • D. Permissions policy
Answer - A
Explanation - An inline policy is embedded in a principal and thus applies to only that principal. The other policy types can apply to more than one principal.
30. How many versions of a customer managed policy will IAM retain?
  • A. One
  • B. Two
  • C. Three
  • D. Four
  • E. Five
  • F. Six
Answer - E
Explanation - IAM will retain five versions of every customer managed policy.
31. You need to terminate an EC2 Linux instance, but your IAM user doesn’t have the permissions to do so. Which of the following will allow you to terminate the instance while posing the lowest security risk?
  • A. Use the root user for the AWS account.
  • B. Use the aws ec2 terminate-instances CLI command.
  • C. Assume a role that can perform the TerminateInstances action.
  • D. Log into the instance and issue the shutdown -h now command.
Answer - C
Explanation - Assuming a role that has access to the TerminateInstances action allows you to terminate the instance with the least security risk. Logging in as the root user poses a greater security risk because the root user has access to all aspects of the AWS account. Using the CLI command won't work if the IAM user doesn't have permission to terminate the instance. Logging into the instance and shutting it down from the command line will stop the instance but won't terminate it.
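A hedged CLI sketch of the role approach (the role ARN and instance ID are placeholders):

    # Request temporary credentials for a role that allows ec2:TerminateInstances
    aws sts assume-role \
        --role-arn arn:aws:iam::123456789012:role/InstanceTerminator \
        --role-session-name terminate-session
    # Export the returned AccessKeyId, SecretAccessKey, and SessionToken as environment
    # variables, then terminate the instance using the assumed role's credentials
    aws ec2 terminate-instances --instance-ids i-0123456789abcdef0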
32. Which of the following are the patching responsibilities of AWS? (Choose two.)
  • A. Patching the hypervisors running a customer’s EC2 instances
  • B. Patching the operating systems on a customer’s EC2 instances
  • C. Patching any applications running on a customer’s EC2 instances
  • D. Patching the operating system on a customer’s RDS instance
Answer - A, D
Explanation - AWS is responsible for patching the hypervisors that run a customer’s EC2 instances. AWS is also responsible for patching the operating systems of RDS instances. Patching the operating systems and applications running on a customer’s EC2 instance is the customer’s responsibility.
33. You have multiple web servers behind an application load balancer in a single VPC. Each web server has a public IP address. You need to explicitly prevent traffic from a particular range of IP addresses from reaching these servers. Which of the following will allow you to accomplish this?
  • A. Create an outbound security group rule to deny the range.
  • B. Create an inbound security group rule to deny the range.
  • C. Create an outbound rule to deny the range using a network access control list.
  • D. Create an inbound rule to deny the range using a network access control list.
  • E. On each instance configure the operating system’s firewall to block the IP address range.
Answer - D
Explanation - Creating an inbound network access control list (NACL) rule to explicitly deny the traffic and then applying that NACL to the subnets the web servers are in will prevent traffic from the IP range from reaching the servers. Creating outbound rules to deny the traffic won’t prevent inbound traffic. It’s not possible to explicitly deny traffic either inbound or outbound using a security group. Using the operating system’s built-in firewall won’t prevent traffic from reaching the server.
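A hedged sketch of the deny rule (the NACL ID and CIDR range are placeholders):

    # Explicitly deny all traffic from the unwanted range with a low-numbered inbound rule
    aws ec2 create-network-acl-entry \
        --network-acl-id acl-0123456789abcdef0 \
        --ingress \
        --rule-number 90 \
        --protocol -1 \
        --cidr-block 203.0.113.0/24 \
        --rule-action deny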
34. You’ve contracted a third party to perform penetration testing against your own EC2 instances. Which of the following must you do before proceeding?
  • A. Nothing
  • B. Notify AWS and get permission to proceed.
  • C. Ask AWS to patch your instances before the test begins.
  • D. Give the third party credentials to access your AWS account.
Answer - B
Explanation - You must get permission from AWS before performing any penetration testing against your EC2 instances. It’s not the responsibility of AWS to patch your EC2 instances. There’s no need to give the third party credentials to your AWS account.
35. Which of the following is true regarding the encryption of the files stored in an S3 bucket?
  • A. The customer is responsible for rotating S3-managed keys.
  • B. AWS is responsible for controlling access to customer master encryption keys stored in KMS.
  • C. The customer is responsible for ensuring the files are encrypted.
  • D. AWS is responsible for ensuring the files are encrypted.
Answer - C
Explanation - The customer is responsible for ensuring files stored in S3 are encrypted. Although AWS offers server-side encryption, it's up to the customer to enable it. The customer, not AWS, is responsible for controlling access to customer master keys (CMKs) stored in KMS. AWS, not the customer, is responsible for rotating S3-managed keys.
36. Which of the following is true regarding S3 security access controls? (Choose two.)
  • A. The customer is responsible for configuring access control lists.
  • B. The customer is responsible for configuring bucket policies.
  • C. AWS is responsible for configuring access control lists.
  • D. AWS is responsible for configuring bucket policies.
Answer - A, B
Explanation - The customer is solely responsible for configuring the two available S3 security access controls: bucket policies and access control lists.
37. Which of the following are valid access control methods for granting access to non-public files stored in S3? (Choose three.)
  • A. Bucket policies
  • B. Security groups
  • C. Identity-based policies
  • D. Access control lists
  • E. Resource groups
Answer - A, C, D
Explanation - Bucket policies, identity-based IAM policies, and access control lists can all be used to grant access to a non-public file stored in S3. Identity-based policies can’t be used to grant public, anonymous access. Resource groups are not an access control method.
38. Which of the following should you do to grant anonymous read access to files stored in an S3 bucket? (Choose three.)
  • A. Grant access to the * principal.
  • B. Create an IAM policy.
  • C. Create a bucket policy.
  • D. Apply the policy to the anonymous user.
  • E. Apply the policy to the file.
  • F. Specify the bucket name in the policy.
Answer - A, C, F
Explanation - To grant anonymous access to a file stored in S3, you must create the bucket policy specifying a * (wildcard) for the principal and include the bucket name in the policy resource element.
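A hedged sketch of such a policy (the bucket name is a placeholder):

    aws s3api put-bucket-policy --bucket examplebucket --policy '{
      "Version": "2012-10-17",
      "Statement": [{
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::examplebucket/*"
      }]
    }'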
39. You need to grant access only to a specific file named test.txt stored in an S3 bucket named examplebucket. Which of the following values should you specify for the resource element in the bucket policy?
  • A. ["arn:aws:s3:::examplebucket/*"]
  • B. ["arn:aws:s3:::examplebucket/test.txt"]
  • C. ["arn:aws:s3:::*"]
  • D. *
Answer - B
Explanation - You must specify the resource element along with the bucket name and file name. Specifying the bucket name followed by a wildcard would allow access to all files in the bucket.
40. You have a file stored in an S3 bucket. You want to restrict access such that only a specific IP address can download the file. Which of the following bucket policy elements will allow you to achieve this?
  • A. Effect
  • B. Principal
  • C. Action
  • D. Resource
  • E. Condition
Answer - E
Explanation - By specifying a source IP address in the condition element, you can apply the policy statement only to users coming from that IP address.
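A hedged sketch of such a statement (the bucket name, file name, and IP address are placeholders):

    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::examplebucket/file.txt",
      "Condition": {"IpAddress": {"aws:SourceIp": "203.0.113.10/32"}}
    }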
41. You have an S3 bucket with versioning disabled. You want to allow a particular IAM user to delete files in the bucket only for the next 30 days. You’ve decided to create an IAM customer managed policy to achieve this. Which of the following actions should you add to the policy?
  • A. s3:DeleteObject
  • B. s3:DeleteObjectVersion
  • C. s3:PutObject
  • D. s3:RemoveObject
Answer - B
Explanation - The action s3:DeleteObjectVersion deletes a file, regardless of whether versioning is enabled. s3:PutObject creates a file. s3:DeleteObject and s3:RemoveObject aren’t valid actions.
42. You want to allow anonymous users to download files from an S3 bucket only until January 1, 2021. You’ve decided to use a bucket policy to achieve this. Which of the following values should you put in the condition element of the policy?
  • A. {"DateBefore": {"aws:epochTime": "2021-01-01T00:00:00Z"}}
  • B. {"DateLessThan": {"aws:epochTime": "2021-01-01T00:00:00Z"}}
  • C. {"DateBefore": {"aws:CurrentTime": "2021-01-01T00:00:00Z"}}
  • D. {"DateLessThan": {"aws:CurrentTime": "2021- 01-01T00:00:00Z"}}
Answer - D
Explanation - The condition operator DateLessThan returns true if the date and time at policy evaluation precede the date and time specified in the key's value. In this case, the value is 2021-01-01T00:00:00Z, which is the ISO 8601 representation of January 1, 2021 at 00:00 Coordinated Universal Time (UTC). The aws:CurrentTime key requires the time to be specified in ISO 8601 format. DateBefore is not a valid condition operator. The aws:epochTime key requires the time to be specified in Unix epoch time.
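A hedged sketch of a full statement using that condition (the bucket name is a placeholder):

    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::examplebucket/*",
      "Condition": {"DateLessThan": {"aws:CurrentTime": "2021-01-01T00:00:00Z"}}
    }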
43. You need to delete files in an S3 bucket once they reach a certain age. Which of the following allows you to do this in the most secure fashion?
  • A. Object lifecycle transition actions
  • B. Object lifecycle expiration actions
  • C. Bucket lifecycle expiration actions
  • D. Lambda functions
  • E. CloudWatch Events Rules
Answer - B
Explanation - Object lifecycle expiration actions automatically delete files in S3 after a specified period of time without requiring you to grant any special permissions. Lambda functions and CloudWatch Events Rules can be used to delete objects in S3 but would require creating roles and granting them permissions to S3. Object lifecycle transition actions don’t delete files. There’s no such thing as a bucket lifecycle expiration action.
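A hedged sketch of an expiration rule (the bucket name and retention period are illustrative):

    # Delete objects 90 days after they are created
    aws s3api put-bucket-lifecycle-configuration \
        --bucket examplebucket \
        --lifecycle-configuration '{
          "Rules": [{
            "ID": "expire-old-objects",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},
            "Expiration": {"Days": 90}
          }]
        }'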
44. You have an application running on an EC2 instance. Which of the following represent the most secure way of granting the application access to a DynamoDB table? (Choose two.)
  • A. Access key identifier
  • B. Secret access key
  • C. IAM role
  • D. DynamoDB resource-based policy
  • E. Instance profile
Answer - C, E
Explanation - The most secure way to grant access to DynamoDB is to create an IAM role with the appropriate permissions and link that role to the instance using an instance profile. Using an access key identifier and secret access key would require storing those long-term credentials where they could potentially be stolen. DynamoDB doesn’t support resource-based policies.
45. Which of the following services support resource-based policies?
  • A. Simple Queue Service (SQS)
  • B. Elastic Block Store (EBS)
  • C. Elastic Compute Cloud (EC2)
  • D. Identity and Access Management (IAM)
Answer - A
Explanation - SQS is the only service listed that uses resource-based policies.
46. Which of the following elements is not required in an identity-based policy?
  • A. Effect
  • B. Action
  • C. Resource
  • D. Principal
Answer - D
Explanation - The Principal element isn't required in an identity-based policy because the policy itself applies to a principal. The other elements are required.
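For example, a minimal identity-based statement needs only Effect, Action, and Resource (the bucket name is illustrative):

    {
      "Version": "2012-10-17",
      "Statement": [{
        "Effect": "Allow",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::examplebucket/*"
      }]
    }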
47. Which of the following elements is not required in a resource-based policy?
  • A. Principal
  • B. Action
  • C. Condition
  • D. Effect
Answer - C
Explanation - The Condition element is not required in a resourcebased policy. The other elements are required.
48. Which of the following formats are IAM policies stored in?
  • A. JSON
  • B. YAML
  • C. CSV
  • D. TSV
Answer - A
Explanation - IAM policies are stored only in JSON format.
49. Which of the following methods can you use to create a customer managed IAM policy? (Choose three.)
  • A. Import an AWS managed policy.
  • B. Use the AWS CLI to import a JSON policy document.
  • C. Use the Visual editor in the AWS Management Console.
  • D. Import a JSON policy document from an S3 bucket.
  • E. Create an IAM user and copy the user’s default policy to a new policy.
Answer - A, B, C
Explanation - To create a new IAM policy, you can import it from an AWS managed policy, import an existing policy document using either the AWS CLI or the AWS Management Console, or use the Visual editor to create a policy from scratch. You can’t import a policy document from an S3 bucket. When you create an IAM user, it has no default policy attached.
50. You have an EC2 instance running an Apache web server on TCP port 80. A public-facing application load balancer is configured to listen for HTTPS traffic and proxy it to the instance. But when you browse to the load balancer’s endpoint, you get a “gateway timeout” error. Which of the following should you do to resolve this? (Choose two.)
  • A. On the security group attached to the application load balancer, add an inbound rule for HTTP.
  • B. On the security group attached to the application load balancer, add an inbound rule for HTTPS.
  • C. On the security group attached to the instance, add an inbound rule for HTTP.
  • D. On the security group attached to the instance, add an inbound rule for HTTPS.
  • E. On the security group attached to the application load balancer, add an outbound rule for HTTP.
Answer - C, E
Explanation - The web server on the instance is configured to listen for HTTP (TCP port 80) traffic, so its security group should allow inbound HTTP traffic, but not HTTPS. The load balancer needs an outbound rule to permit HTTP traffic to the instance. The presence of the “gateway timeout” error indicates that the load balancer already has an inbound rule for HTTPS, so there’s no need to add one. Because the load balancer is configured to listen only for HTTPS traffic, there’s no need to add an inbound rule to its security group to allow inbound HTTP traffic.
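A hedged CLI sketch of the two rules (the security group IDs are placeholders; sg-0123456789abcdef0 is assumed to be the instance's group and sg-0fedcba9876543210 the load balancer's):

    # Instance security group: allow inbound HTTP from the load balancer's security group
    aws ec2 authorize-security-group-ingress \
        --group-id sg-0123456789abcdef0 \
        --protocol tcp --port 80 \
        --source-group sg-0fedcba9876543210
    # Load balancer security group: allow outbound HTTP to the instance's security group
    aws ec2 authorize-security-group-egress \
        --group-id sg-0fedcba9876543210 \
        --ip-permissions 'IpProtocol=tcp,FromPort=80,ToPort=80,UserIdGroupPairs=[{GroupId=sg-0123456789abcdef0}]'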