Tuesday, 15 March 2022

Free Amazon SOA-C02 Dumps PDF By Realamazondumps.com

Question: 1

The security team is concerned because the number of AWS Identity and Access Management (IAM) policies being used in the environment is increasing. The team tasked a SysOps administrator to report on the current number of IAM policies in use and the total available IAM policies. Which AWS service should the administrator use to check how current IAM policy usage compares to current service limits?

A. AWS Trusted Advisor

B. Amazon Inspector

C. AWS Config

D. AWS Organizations

Answer: A
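AWS Trusted Advisor's Service Limits check is also queryable programmatically through the AWS Support API (a Business or Enterprise support plan is required). As a minimal sketch, assuming a hypothetical response payload shaped like boto3's `support.describe_trusted_advisor_check_result` output, IAM policy usage could be pulled from the flagged-resources metadata like this:

```python
# Sketch: extract IAM policy usage from a Trusted Advisor "Service Limits"
# check result. The payload below is a hypothetical example; the field order
# in "metadata" is an assumption, not guaranteed.

def iam_policy_usage(check_result):
    """Return (current, limit) for IAM managed-policy usage, or None."""
    for resource in check_result["result"]["flaggedResources"]:
        region, service, limit_name, limit, current, status = resource["metadata"]
        if service == "IAM" and "Policies" in limit_name:
            return int(current), int(limit)
    return None

sample = {  # hypothetical response fragment
    "result": {
        "flaggedResources": [
            {"metadata": ["-", "IAM", "Customer Managed Policies", "1500", "1350", "Yellow"]},
        ]
    }
}

current, limit = iam_policy_usage(sample)
print(f"IAM policies in use: {current} of {limit}")
```

The same pattern extends to any other limit the Service Limits check reports.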

Question: 2

A company has a stateless application that is hosted on a fleet of 10 Amazon EC2 On-Demand Instances in an Auto Scaling group. A minimum of 6 instances are needed to meet service requirements. Which action will maintain uptime for the application MOST cost-effectively?

A. Use a Spot Fleet with an On-Demand capacity of 6 instances.

B. Update the Auto Scaling group with a minimum of 6 On-Demand Instances and a maximum of 10 On-Demand Instances.

C. Update the Auto Scaling group with a minimum of 1 On-Demand Instance and a maximum of 6 On-Demand Instances.

D. Use a Spot Fleet with a target capacity of 6 instances.

Answer: A

Question: 3

A SysOps administrator has launched a large general purpose Amazon EC2 instance to regularly process large data files. The instance has an attached 1 TB General Purpose SSD (gp2) Amazon Elastic Block Store (Amazon EBS) volume. The instance is also EBS-optimized. To save costs, the SysOps administrator stops the instance each evening and restarts it each morning. When data processing is active, Amazon CloudWatch metrics on the instance show a consistent 3,000 VolumeReadOps. The SysOps administrator must improve the I/O performance while ensuring data integrity.

Which action will meet these requirements?

A. Change the instance type to a large, burstable, general purpose instance.

B. Change the instance type to an extra large general purpose instance.

C. Increase the EBS volume to a 2 TB General Purpose SSD (gp2) volume.

D. Move the data that resides on the EBS volume to the instance store.

Answer: C
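The reasoning behind option C follows from gp2's published performance model: baseline IOPS is 3 per GiB of volume size, floored at 100 and capped at 16,000, so a 1 TiB volume tops out around 3,000 IOPS and doubling the size roughly doubles the baseline. A small sketch of that formula:

```python
def gp2_baseline_iops(size_gib: int) -> int:
    """Baseline IOPS for a gp2 EBS volume: 3 IOPS per GiB, min 100, max 16,000."""
    return min(max(3 * size_gib, 100), 16_000)

print(gp2_baseline_iops(1024))  # 1 TiB volume -> 3072 baseline IOPS
print(gp2_baseline_iops(2048))  # 2 TiB volume -> 6144 baseline IOPS
```

A consistent 3,000 VolumeReadOps on a 1 TB volume means the workload is pinned at the volume's baseline, which is why growing the volume (rather than changing the instance) lifts the bottleneck.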

Question: 4

With the threat of ransomware viruses encrypting and holding company data hostage, which action should be taken to protect an Amazon S3 bucket?

A. Deny POST, PUT, and DELETE on the bucket.

B. Enable server-side encryption on the bucket.

C. Enable Amazon S3 versioning on the bucket.

D. Enable snapshots on the bucket.

Answer: C

Explanation:

Server-side encryption does not stop ransomware from overwriting or deleting objects. S3 versioning preserves earlier object versions, so data can be restored after malicious encryption or deletion.

Question: 5

A SysOps administrator is evaluating Amazon Route 53 DNS options to address concerns about high availability for an on-premises website. The website consists of two servers: a primary active server and a secondary passive server. Route 53 should route traffic to the primary server if the associated health check returns 2xx or 3xx HTTP codes. All other traffic should be directed to the secondary passive server. The failover record type, set ID, and routing policy have been set appropriately for both primary and secondary servers.

Which next step should be taken to configure Route 53?

A. Create an A record for each server. Associate the records with the Route 53 HTTP health check.

B. Create an A record for each server. Associate the records with the Route 53 TCP health check.

C. Create an alias record for each server with evaluate target health set to yes. Associate the records with the Route 53 HTTP health check.

D. Create an alias record for each server with evaluate target health set to yes. Associate the records with the Route 53 TCP health check.

Answer: A
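An HTTP health check treats 2xx and 3xx responses as healthy, which is exactly the routing condition the question describes (a TCP check only verifies that the port accepts connections). As a hedged sketch, the request passed to boto3's `route53.create_health_check` might look like this; the hostname and path are placeholders:

```python
import uuid

def primary_health_check_request(fqdn: str, path: str = "/") -> dict:
    """Build kwargs for route53.create_health_check: an HTTP check that
    reports healthy on 2xx/3xx responses from the primary server."""
    return {
        "CallerReference": str(uuid.uuid4()),  # idempotency token
        "HealthCheckConfig": {
            "Type": "HTTP",  # healthy on 2xx/3xx; use "TCP" for port-only checks
            "FullyQualifiedDomainName": fqdn,
            "Port": 80,
            "ResourcePath": path,
            "RequestInterval": 30,
            "FailureThreshold": 3,
        },
    }

req = primary_health_check_request("primary.example.com")  # hypothetical host
print(req["HealthCheckConfig"]["Type"])
```

The returned health check ID is then associated with the primary failover A record.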

For More Info Please Visit The Link:

https://realamazondumps.com/

Free Amazon DAS-C01 Dumps PDF By Realamazondumps.com

Question: 1

A financial services company needs to aggregate daily stock trade data from the exchanges into a data store. The company requires that data be streamed directly into the data store but also occasionally allows data to be modified using SQL. The solution must support complex analytic queries that run with minimal latency and must provide a business intelligence dashboard that enables viewing of the top contributors to anomalies in stock prices.

Which solution meets the company’s requirements?

A. Use Amazon Kinesis Data Firehose to stream data to Amazon S3. Use Amazon Athena as a data source for Amazon QuickSight to create a business intelligence dashboard.

B. Use Amazon Kinesis Data Streams to stream data to Amazon Redshift. Use Amazon Redshift as a data source for Amazon QuickSight to create a business intelligence dashboard.

C. Use Amazon Kinesis Data Firehose to stream data to Amazon Redshift. Use Amazon Redshift as a data source for Amazon QuickSight to create a business intelligence dashboard.

D. Use Amazon Kinesis Data Streams to stream data to Amazon S3. Use Amazon Athena as a data source for Amazon QuickSight to create a business intelligence dashboard.

Answer: C

Question: 2

A financial company hosts a data lake in Amazon S3 and a data warehouse on an Amazon Redshift cluster. The company uses Amazon QuickSight to build dashboards and wants to secure access from its on-premises Active Directory to Amazon QuickSight.

How should the data be secured?

A. Use an Active Directory connector and single sign-on (SSO) in a corporate network environment.

B. Use a VPC endpoint to connect to Amazon S3 from Amazon QuickSight and an IAM role to authenticate Amazon Redshift.

C. Establish a secure connection by creating an S3 endpoint to connect Amazon QuickSight and a VPC endpoint to connect to Amazon Redshift.

D. Place Amazon QuickSight and Amazon Redshift in the security group and use an Amazon S3 endpoint to connect Amazon QuickSight to Amazon S3.

Answer: A

Question: 3

A real estate company has a mission-critical application using Apache HBase in Amazon EMR. Amazon EMR is configured with a single master node. The company has over 5 TB of data stored on a Hadoop Distributed File System (HDFS). The company wants a cost-effective solution to make its HBase data highly available.

Which architectural pattern meets the company's requirements?

A. Use Spot Instances for core and task nodes and a Reserved Instance for the EMR master node. Configure the EMR cluster with multiple master nodes. Schedule automated snapshots using Amazon EventBridge.

B. Store the data on an EMR File System (EMRFS) instead of HDFS. Enable EMRFS consistent view. Create an EMR HBase cluster with multiple master nodes. Point the HBase root directory to an Amazon S3 bucket.

C. Store the data on an EMR File System (EMRFS) instead of HDFS and enable EMRFS consistent view. Run two separate EMR clusters in two different Availability Zones. Point both clusters to the same HBase root directory in the same Amazon S3 bucket.

D. Store the data on an EMR File System (EMRFS) instead of HDFS and enable EMRFS consistent view. Create a primary EMR HBase cluster with multiple master nodes. Create a secondary EMR HBase read-replica cluster in a separate Availability Zone. Point both clusters to the same HBase root directory in the same Amazon S3 bucket.

Answer: D

Question: 4

A software company hosts an application on AWS, and new features are released weekly. As part of the application testing process, a solution must be developed that analyzes logs from each Amazon EC2 instance to ensure that the application is working as expected after each deployment. The collection and analysis solution should be highly available with the ability to display new information with minimal delays. Which method should the company use to collect and analyze the logs?

A. Enable detailed monitoring on Amazon EC2, use Amazon CloudWatch agent to store logs in Amazon S3, and use Amazon Athena for fast, interactive log analytics.

B. Use the Amazon Kinesis Producer Library (KPL) agent on Amazon EC2 to collect and send data to Kinesis Data Streams to further push the data to Amazon Elasticsearch Service and visualize using Amazon QuickSight.

C. Use the Amazon Kinesis Producer Library (KPL) agent on Amazon EC2 to collect and send data to Kinesis Data Firehose to further push the data to Amazon Elasticsearch Service and Kibana.

D. Use Amazon CloudWatch subscriptions to get access to a real-time feed of logs and have the logs delivered to Amazon Kinesis Data Streams to further push the data to Amazon Elasticsearch Service and Kibana.

Answer: D

Question: 5

A data analyst is using AWS Glue to organize, cleanse, validate, and format a 200 GB dataset. The data analyst triggered the job to run with the Standard worker type. After 3 hours, the AWS Glue job status is still RUNNING. Logs from the job run show no error codes. The data analyst wants to improve the job execution time without overprovisioning.

Which actions should the data analyst take?

A. Enable job bookmarks in AWS Glue to estimate the number of data processing units (DPUs). Based on the profiled metrics, increase the value of the executor-cores job parameter.

B. Enable job metrics in AWS Glue to estimate the number of data processing units (DPUs). Based on the profiled metrics, increase the value of the maximum capacity job parameter.

C. Enable job metrics in AWS Glue to estimate the number of data processing units (DPUs). Based on the profiled metrics, increase the value of the spark.yarn.executor.memoryOverhead job parameter.

D. Enable job bookmarks in AWS Glue to estimate the number of data processing units (DPUs). Based on the profiled metrics, increase the value of the num-executors job parameter.

Answer: B 
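The "maximum capacity" parameter in option B maps to the `MaxCapacity` argument of Glue's `StartJobRun` API for jobs using the Standard worker type. A hedged boto3-style sketch of the run arguments, with a hypothetical job name:

```python
def glue_run_args(job_name: str, max_capacity: float) -> dict:
    """Build kwargs for glue.start_job_run with an increased DPU allocation.
    MaxCapacity is the number of data processing units (DPUs) for the run."""
    if max_capacity < 2:
        raise ValueError("Spark jobs require at least 2 DPUs")
    return {"JobName": job_name, "MaxCapacity": float(max_capacity)}

args = glue_run_args("cleanse-200gb-dataset", 50.0)  # hypothetical job name
print(args)
```

Enabling job metrics first shows the executor saturation profile, which justifies the new DPU count rather than guessing.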


Free Amazon DBS-C01 Dumps PDF By Realamazondumps.com

Question: 1

A company has deployed an e-commerce web application in a new AWS account. An Amazon RDS for MySQL Multi-AZ DB instance is part of this deployment, with a database-1.xxxxxxxxxxxx.us-east-1.rds.amazonaws.com endpoint listening on port 3306. The company's Database Specialist is able to log in to MySQL and run queries from the bastion host using these details. When users try to use the application hosted in the AWS account, they are presented with a generic error message. The application servers are logging a "could not connect to server: Connection timed out" error message to Amazon CloudWatch Logs.

What is the cause of this error?

A. The user name and password the application is using are incorrect.

B. The security group assigned to the application servers does not have the necessary rules to allow inbound connections from the DB instance.

C. The security group assigned to the DB instance does not have the necessary rules to allow inbound connections from the application servers.

D. The user name and password are correct, but the user is not authorized to use the DB instance.

Answer: C

Question: 2

An AWS CloudFormation stack that included an Amazon RDS DB instance was accidentally deleted and recent data was lost. A Database Specialist needs to add RDS settings to the CloudFormation template to reduce the chance of accidental instance data loss in the future.

Which settings will meet this requirement? (Choose three.)

A. Set DeletionProtection to True

B. Set MultiAZ to True

C. Set TerminationProtection to True

D. Set DeleteAutomatedBackups to False

E. Set DeletionPolicy to Delete

F. Set DeletionPolicy to Retain

Answer: ACF
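Two of the three selected settings live in the template itself; stack termination protection (option C) is enabled on the stack rather than in the template, for example with `cloudformation.update_termination_protection`. A minimal sketch of the template-level settings, written here as a Python dict for illustration (the logical ID and property values are hypothetical):

```python
import json

# Sketch: template-level settings from options A and F. Logical ID, engine,
# and sizing values are hypothetical placeholders.
template = {
    "Resources": {
        "AppDatabase": {
            "Type": "AWS::RDS::DBInstance",
            "DeletionPolicy": "Retain",  # F: keep the instance if the stack is deleted
            "Properties": {
                "Engine": "mysql",
                "DBInstanceClass": "db.t3.medium",
                "AllocatedStorage": "100",
                "DeletionProtection": True,  # A: block direct instance deletion
            },
        }
    }
}

print(json.dumps(template, indent=2))
```

Note that `DeletionPolicy` is a resource attribute (a sibling of `Properties`), while `DeletionProtection` is a DB instance property; putting either in the wrong place is a common template mistake.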

Question: 3

A Database Specialist is troubleshooting an application connection failure on an Amazon Aurora DB cluster with multiple Aurora Replicas that had been running with no issues for the past 2 months. The connection failure lasted for 5 minutes and corrected itself after that. The Database Specialist reviewed the Amazon RDS events and determined a failover event occurred at that time. The failover process took around 15 seconds to complete.

What is the MOST likely cause of the 5-minute connection outage?

A. After a database crash, Aurora needed to replay the redo log from the last database checkpoint

B. The client-side application is caching the DNS data and its TTL is set too high

C. After failover, the Aurora DB cluster needs time to warm up before accepting client connections

D. There were no active Aurora Replicas in the Aurora DB cluster

Answer: B

Explanation:

When your application tries to establish a connection after a failover, the new Aurora PostgreSQL writer will be a previous reader, which can be found using the Aurora read-only endpoint before DNS updates have fully propagated. Setting the Java DNS TTL to a low value helps cycle between reader nodes on subsequent connection attempts.

Amazon Aurora is designed to recover from a crash almost instantaneously and continue to serve your application data. Unlike other databases, after a crash Amazon Aurora does not need to replay the redo log from the last database checkpoint before making the database available for operations. Amazon Aurora performs crash recovery asynchronously on parallel threads, so your database is open and available immediately after a crash. Because the storage is organized in many small segments, each with its own redo log, the underlying storage can replay redo records on demand in parallel and asynchronously as part of a disk read after a crash. This approach reduces database restart times to less than 60 seconds in most cases.

Question: 4

A company is deploying a solution in Amazon Aurora by migrating from an on-premises system. The IT department has established an AWS Direct Connect link from the company's data center. The company's Database Specialist has selected the option to require SSL/TLS for connectivity to prevent plaintext data from being sent over the network. The migration appears to be working successfully, and the data can be queried from a desktop machine. Two Data Analysts have been asked to query and validate the data in the new Aurora DB cluster. Both Analysts are unable to connect to Aurora. Their user names and passwords have been verified as valid, and the Database Specialist can connect to the DB cluster using their accounts. The Database Specialist also verified that the security group configuration allows network traffic from all corporate IP addresses. What should the Database Specialist do to correct the Data Analysts' inability to connect?

A. Restart the DB cluster to apply the SSL change.

B. Instruct the Data Analysts to download the root certificate and use the SSL certificate on the connection string to connect.

C. Add explicit mappings between the Data Analysts’ IP addresses and the instance in the security group assigned to the DB cluster.

D. Modify the Data Analysts’ local client firewall to allow network traffic to AWS.

Answer: B

Explanation:

To connect using SSL:

• Provide the SSL trust certificate (the root certificate, which can be downloaded from AWS).

• Provide SSL options when connecting to the database.

Not using SSL on a DB that enforces SSL results in an error.

Question: 5

A company is concerned about the cost of a large-scale, transactional application using Amazon DynamoDB that only needs to store data for 2 days before it is deleted. In looking at the tables, a Database Specialist notices that much of the data is months old, and goes back to when the application was first deployed. What can the Database Specialist do to reduce the overall cost?

A. Create a new attribute in each table to track the expiration time and create an AWS Glue transformation to delete entries more than 2 days old.

B. Create a new attribute in each table to track the expiration time and enable DynamoDB Streams on each table.

C. Create a new attribute in each table to track the expiration time and enable time to live (TTL) on each table.

D. Create an Amazon CloudWatch Events event to export the data to Amazon S3 daily using AWS Data Pipeline and then truncate the Amazon DynamoDB table.

Answer: C 
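Enabling TTL is a one-time table setting plus a per-item epoch attribute. A minimal sketch, with hypothetical table and attribute names, of both the `dynamodb.update_time_to_live` request and the expiry value written with each item:

```python
import time

def ttl_request(table: str, attr: str = "expires_at") -> dict:
    """Build kwargs for dynamodb.update_time_to_live to enable TTL on a table."""
    return {
        "TableName": table,
        "TimeToLiveSpecification": {"Enabled": True, "AttributeName": attr},
    }

def expiry_epoch(now: int, days: int = 2) -> int:
    """Epoch seconds after which DynamoDB may delete the item."""
    return now + days * 24 * 3600

req = ttl_request("transactions")        # hypothetical table name
cutoff = expiry_epoch(int(time.time()))  # store in each item's expires_at attribute
print(req, cutoff)
```

TTL deletions are background and free, which is what makes this the cost-effective choice over scheduled export-and-truncate jobs.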


Free Amazon MLS-C01 Dumps PDF By Realamazondumps.com

Question: 1

A Machine Learning Specialist is working with multiple data sources containing billions of records that need to be joined. What feature engineering and model development approach should the Specialist take with a dataset this large?

A. Use an Amazon SageMaker notebook for both feature engineering and model development

B. Use an Amazon SageMaker notebook for feature engineering and Amazon ML for model development

C. Use Amazon EMR for feature engineering and Amazon SageMaker SDK for model development

D. Use Amazon ML for both feature engineering and model development.

Answer: C

Explanation:

Joining billions of records is a distributed data-processing task suited to Amazon EMR; model development can then proceed with the Amazon SageMaker SDK.

Question: 2

A Machine Learning Specialist has completed a proof of concept for a company using a small data sample, and now the Specialist is ready to implement an end-to-end solution in AWS using Amazon SageMaker. The historical training data is stored in Amazon RDS. Which approach should the Specialist use for training a model using that data?

A. Write a direct connection to the SQL database within the notebook and pull data in

B. Push the data from Microsoft SQL Server to Amazon S3 using an AWS Data Pipeline and provide the S3 location within the notebook.

C. Move the data to Amazon DynamoDB and set up a connection to DynamoDB within the notebook to pull data in

D. Move the data to Amazon ElastiCache using AWS DMS and set up a connection within the notebook to pull data in for fast access.

Answer: B

Question: 3

Which of the following metrics should a Machine Learning Specialist generally use to compare/evaluate machine learning classification models against each other?

A. Recall

B. Misclassification rate

C. Mean absolute percentage error (MAPE)

D. Area Under the ROC Curve (AUC)

Answer: D

Question: 4

A Machine Learning Specialist is using Amazon SageMaker to host a model for a highly available customer-facing application. The Specialist has trained a new version of the model, validated it with historical data, and now wants to deploy it to production. To limit any risk of a negative customer experience, the Specialist wants to be able to monitor the model and roll it back, if needed. What is the SIMPLEST approach with the LEAST risk to deploy the model and roll it back, if needed?

A. Create a SageMaker endpoint and configuration for the new model version. Redirect production traffic to the new endpoint by updating the client configuration. Revert traffic to the last version if the model does not perform as expected.

B. Create a SageMaker endpoint and configuration for the new model version. Redirect production traffic to the new endpoint by using a load balancer. Revert traffic to the last version if the model does not perform as expected.

C. Update the existing SageMaker endpoint to use a new configuration that is weighted to send 5% of the traffic to the new variant. Revert traffic to the last version by resetting the weights if the model does not perform as expected.

D. Update the existing SageMaker endpoint to use a new configuration that is weighted to send 100% of the traffic to the new variant. Revert traffic to the last version by resetting the weights if the model does not perform as expected.

Answer: C

Explanation:

Shifting a small weighted percentage of traffic to the new production variant on the existing endpoint allows monitoring with limited exposure, and rollback is instant by resetting the variant weights.
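The weighted-variant mechanics in options C and D map to SageMaker production variants: an endpoint configuration lists multiple variants, and traffic splits in proportion to their weights. A hedged sketch of the weight arithmetic and of the kwargs for `sagemaker.update_endpoint_weights_and_capacities` (endpoint and variant names are hypothetical):

```python
def traffic_split(weights: dict) -> dict:
    """Traffic share per variant: weight / sum(all weights)."""
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

def canary_update(endpoint: str, weights: dict) -> dict:
    """Build kwargs for sagemaker.update_endpoint_weights_and_capacities."""
    return {
        "EndpointName": endpoint,
        "DesiredWeightsAndCapacities": [
            {"VariantName": name, "DesiredWeight": float(w)}
            for name, w in weights.items()
        ],
    }

split = traffic_split({"model-v1": 95, "model-v2": 5})  # 5% canary
print(split)
update = canary_update("prod-endpoint", {"model-v1": 95, "model-v2": 5})
```

Rolling back means calling the same update with the canary's weight set back to zero; the endpoint stays up throughout.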

Question: 5

A manufacturing company has a large set of labeled historical sales data. The manufacturer would like to predict how many units of a particular part should be produced each quarter. Which machine learning approach should be used to solve this problem?

A. Logistic regression

B. Random Cut Forest (RCF)

C. Principal component analysis (PCA)

D. Linear regression

Answer: D
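Forecasting a continuous quantity (units per quarter) from labeled history is a regression problem, which is why linear regression fits and logistic regression (a classifier) does not. A toy ordinary-least-squares fit with made-up numbers, using only the closed-form arithmetic for one feature:

```python
def fit_line(x, y):
    """Ordinary least squares for y = a*x + b with a single feature."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sum(
        (xi - mx) ** 2 for xi in x
    )
    return a, my - a * mx  # slope, intercept

# Hypothetical history: quarter index vs. units sold.
quarters = [1, 2, 3, 4, 5, 6]
units = [100, 120, 140, 160, 180, 200]

a, b = fit_line(quarters, units)
print(round(a * 7 + b))  # forecast for quarter 7
```

In practice the same model class is what SageMaker's Linear Learner (regressor mode) would fit at scale.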


Free Amazon SCS-C01 Dumps PDF By Realamazondumps.com

Question: 1

A global company that deals with international finance is investing heavily in cryptocurrencies and wants to experiment with mining technologies using AWS. The company's security team has enabled Amazon GuardDuty and is concerned by the number of findings being generated by the accounts. The security team wants to minimize the possibility of GuardDuty reporting false positives for instances that are intentionally performing mining. How can the security team continue using GuardDuty while meeting these requirements?

A. In the GuardDuty console, select the CryptoCurrency:EC2/BitcoinTool.B!DNS finding and use the suppress findings option.

B. Create a custom AWS Lambda function to process newly detected GuardDuty alerts. Process the CryptoCurrency:EC2/BitcoinTool.B!DNS alert and filter out the high-severity finding types only.

C. When creating a new Amazon EC2 instance, provide the instance with a specific tag that indicates it is performing mining operations. Create a custom AWS Lambda function to process newly detected GuardDuty alerts and filter for the presence of this tag.

D. When GuardDuty produces a cryptocurrency finding, process the finding with a custom AWS Lambda function to extract the instance ID from the finding. Then use AWS Systems Manager Run Command to check for a running process performing mining operations.

Answer: A

Question: 2

A security engineer must develop an encryption tool for a company. The company requires a cryptographic solution that supports the ability to perform cryptographic erasure on all resources protected by the key material in 15 minutes or less. Which AWS Key Management Service (AWS KMS) key solution will allow the security engineer to meet these requirements?

A. Use imported key material with a CMK.

B. Use an AWS KMS CMK.

C. Use an AWS managed CMK.

D. Use an AWS KMS customer managed CMK.

Answer: A

Explanation:

Scheduling deletion of a CMK involves a mandatory waiting period of 7 to 30 days, but imported key material can be deleted immediately (DeleteImportedKeyMaterial), which satisfies the 15-minute cryptographic-erasure requirement.

Question: 3

A security engineer is designing a solution that will provide end-to-end encryption between clients and Docker containers running in Amazon Elastic Container Service (Amazon ECS). This solution will also handle volatile traffic patterns. Which solution would have the MOST scalability and LOWEST latency?

A. Configure a Network Load Balancer to terminate the TLS traffic and then re-encrypt the traffic to the containers

B. Configure an Application Load Balancer to terminate the TLS traffic and then re-encrypt the traffic to the containers

C. Configure a Network Load Balancer with a TCP listener to pass through TLS traffic to the containers

D. Configure Amazon Route 53 to use multivalue answer routing to send traffic to the containers

Answer: C

Explanation:

Terminating TLS at the load balancer breaks end-to-end encryption. A Network Load Balancer with a TCP listener passes the TLS session through to the containers intact, scales to volatile traffic, and adds the least latency.

Question: 4

A company has an application hosted on an Amazon EC2 instance and wants the application to access secure strings stored in AWS Systems Manager Parameter Store. When the application tries to access the secure string key value, it fails. Which factors could be the cause of this failure? (Select TWO.)

A. The EC2 instance role does not have decrypt permissions on the AWS Key Management Service (AWS KMS) key used to encrypt the secret.

B. The EC2 instance role does not have read permissions to read the parameters in Parameter Store.

C. Parameter Store does not have permission to use AWS Key Management Service (AWS KMS) to decrypt the parameter.

D. The EC2 instance role does not have encrypt permissions on the AWS Key Management Service (AWS KMS) key associated with the secret.

E. The EC2 instance does not have any tags associated.

Answer: A, B

Explanation:

Parameter Store decrypts SecureString values using the caller's permissions, so the instance role must have both read access to the parameter and decrypt permission on the KMS key; instance tags are irrelevant.
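Reading a SecureString parameter requires two separate permissions on the instance role: `ssm:GetParameter` on the parameter and `kms:Decrypt` on the key that encrypted it. A hedged sketch of such a policy, where both ARNs are hypothetical placeholders:

```python
import json

# Sketch: instance-role policy for reading one SecureString parameter.
# Account ID, region, parameter path, and key ID are placeholders.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {   # read the parameter itself
            "Effect": "Allow",
            "Action": ["ssm:GetParameter"],
            "Resource": "arn:aws:ssm:us-east-1:111122223333:parameter/app/db-password",
        },
        {   # decrypt it with the KMS key that protects it
            "Effect": "Allow",
            "Action": ["kms:Decrypt"],
            "Resource": "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID",
        },
    ],
}

print(json.dumps(policy, indent=2))
```

If either statement is missing, the GetParameter call with decryption fails, which matches the symptom in the question.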

Question: 5

A company is running an application on Amazon EC2 instances in an Auto Scaling group. The application stores logs locally. A security engineer noticed that logs were lost after a scale-in event. The security engineer needs to recommend a solution to ensure the durability and availability of log data. All logs must be kept for a minimum of 1 year for auditing purposes. What should the security engineer recommend?

A. Within the Auto Scaling lifecycle, add a hook to create and attach an Amazon Elastic Block Store(Amazon EBS) log volume each time an EC2 instance is created. When the instance is terminated, the EBS volume can be reattached to another instance for log review.

B. Create an Amazon Elastic File System (Amazon EFS) file system and add a command in the user data section of the Auto Scaling launch template to mount the EFS file system during EC2 instance creation. Configure a process on the instance to copy the logs once a day from an instance Amazon Elastic Block Store (Amazon EBS) volume to a directory in the EFS file system.

C. Build the Amazon CloudWatch agent into the AMI used in the Auto Scaling group. Configure the CloudWatch agent to send the logs to Amazon CloudWatch Logs for review.

D. Within the Auto Scaling lifecycle, add a lifecycle hook at the terminating state transition and alert the engineering team by using a lifecycle notification to Amazon Simple Notification Service (Amazon SNS). Configure the hook to remain in the Terminating:Wait state for 1 hour to allow manual review of the security logs prior to instance termination.

Answer: C

Explanation:

The CloudWatch agent streams logs off the instances continuously, so they survive scale-in events, and CloudWatch Logs retention can be set to 1 year or more. A once-a-day copy to EFS would still lose up to a day of logs on scale-in.
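Option C's approach is driven by a JSON configuration file for the CloudWatch agent that lists the local files to tail. A minimal sketch of the agent's logs section, where the file path, log group, and stream naming are hypothetical examples (field names should be checked against the agent's configuration reference):

```python
import json

# Sketch: CloudWatch agent configuration that ships a local application log
# to CloudWatch Logs. Path and names are hypothetical placeholders.
agent_config = {
    "logs": {
        "logs_collected": {
            "files": {
                "collect_list": [
                    {
                        "file_path": "/var/log/app/app.log",
                        "log_group_name": "/app/production",
                        "log_stream_name": "{instance_id}",
                        "retention_in_days": 365,  # meets the 1-year audit requirement
                    }
                ]
            }
        }
    }
}

print(json.dumps(agent_config, indent=2))
```

Baking this config into the AMI (or pulling it from Parameter Store at boot) means every instance the Auto Scaling group launches ships logs from its first minute.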


Free Amazon CLF-C01 Dumps PDF By Realamazondumps.com

Question: 1

How can a user achieve high availability for a web application hosted on AWS?

A. Use a Classic Load Balancer across multiple AWS Regions

B. Use an Application Load Balancer across multiple Availability Zones in one AWS Region

C. Set up automatic scaling and load balancing with another application instance running on premises

D. Use the AWS Region with the highest number of Availability Zones

Answer: B

Question: 2

Which AWS service helps users create three-dimensional applications quickly without requiring any specialized programming or three-dimensional graphics expertise?

A. AWS RoboMaker

B. Amazon Rekognition

C. Amazon Sumerian

D. Amazon GameLift

Answer: C

Question: 3

A development team wants to deploy multiple test environments for an application in a fast, repeatable manner.

Which AWS service should the team use?

A. Amazon EC2

B. AWS Cloudformation

C. Amazon QuickSight

D. Amazon Elastic Container Service (Amazon ECS)

Answer: B

Question: 4

A company uses AWS Direct Connect and wants to establish connectivity that spans VPCs across multiple AWS Regions.

Which AWS service or feature should the company use to meet these requirements?

A. AWS Transit Gateway

B. AWS PrivateLink

C. Amazon Connect

D. Amazon Route 53

Answer: A

Question: 5

Which of the following are benefits of running a database on Amazon RDS compared to an on-premises database? (Select TWO.)

A. RDS backups are managed by AWS.

B. RDS supports any relational database.

C. RDS has no database engine licensing costs.

D. RDS database compute capacity can be easily scaled.

E. RDS inbound traffic content (for example, security groups) is managed by AWS.

Answer: A, D


Free Amazon DOP-C01 Dumps PDF By Realamazondumps.com

Question: 1

To run an application, a DevOps Engineer launches Amazon EC2 instances with public IP addresses in a public subnet. A user data script obtains the application artifacts and installs them on the instances upon launch. A change to the security classification of the application now requires the instances to run with no access to the Internet. While the instances launch successfully and show as healthy, the application does not seem to be installed.

Which of the following should successfully install the application while complying with the new rule?

A. Launch the instances in a public subnet with Elastic IP addresses attached. Once the application is installed and running, run a script to disassociate the Elastic IP addresses afterwards.

B. Set up a NAT gateway. Deploy the EC2 instances to a private subnet. Update the private subnet's route table to use the NAT gateway as the default route.

C. Publish the application artifacts to an Amazon S3 bucket and create a VPC endpoint for S3. Assign an IAM instance profile to the EC2 instances so they can read the application artifacts from the S3 bucket.

D. Create a security group for the application instances and whitelist only outbound traffic to the artifact repository. Remove the security group rule once the install is complete.

Answer: C
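Option C works because a gateway VPC endpoint keeps S3 traffic on the AWS network (no Internet path needed) while the instance profile supplies credentials for the download. As a hedged sketch, a restrictive endpoint policy could limit the endpoint to the artifact bucket; the bucket name here is a hypothetical placeholder:

```python
import json

# Sketch: S3 gateway endpoint policy allowing only artifact downloads.
# Bucket name is a hypothetical placeholder.
endpoint_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::example-artifact-bucket/*",
        }
    ],
}

print(json.dumps(endpoint_policy))
```

The user data script then pulls the artifacts with the AWS CLI or an SDK, authenticated by the instance profile rather than by embedded credentials.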

Question: 2

An IT department manages a portfolio with Windows and Linux (Amazon and Red Hat Enterprise Linux) servers both on-premises and on AWS. An audit reveals that there is no process for updating OS and core application patches, and that the servers have inconsistent patch levels. Which of the following provides the MOST reliable and consistent mechanism for updating and maintaining all servers at the recent OS and core application patch levels?

A. Install the AWS Systems Manager agent on all on-premises and AWS servers. Create Systems Manager Resource Groups. Use Systems Manager Patch Manager with a preconfigured patch baseline to run scheduled patch updates during maintenance windows.

B. Install the AWS OpsWorks agent on all on-premises and AWS servers. Create an OpsWorks stack with separate layers for each operating system, and get a recipe from the Chef supermarket to run the patch commands for each layer during maintenance windows.

C. Use a shell script to install the latest OS patches on the Linux servers using yum and schedule it to run automatically using cron. Use Windows Update to automatically patch Windows servers.

D. Use AWS Systems Manager Parameter Store to securely store credentials for each Linux and Windows server. Create Systems Manager Resource Groups. Use the Systems Manager Run Command to remotely deploy patch updates using the credentials in Systems Manager Parameter Store.

Answer: A

Question: 3

A company is setting up a centralized logging solution on AWS and has several requirements. The company wants its Amazon CloudWatch Logs and VPC Flow Logs to come from different sub-accounts and to be delivered to a single auditing account. However, the number of sub-accounts keeps changing. The company also needs to index the logs in the auditing account to gather actionable insight.

How should a DevOps Engineer implement the solution to meet all of the company's requirements?

A. Use AWS Lambda to write logs to Amazon ES in the auditing account. Create an Amazon CloudWatch subscription filter and use Amazon Kinesis Data Streams in the sub-accounts to stream the logs to the Lambda function deployed in the auditing account.

B. Use Amazon Kinesis Streams to write logs to Amazon ES in the auditing account. Create a CloudWatch subscription filter and use Kinesis Data Streams in the sub-accounts to stream the logs to the Kinesis stream in the auditing account.

C. Use Amazon Kinesis Firehose with Kinesis Data Streams to write logs to Amazon ES in the auditing account. Create a CloudWatch subscription filter and stream logs from the sub-accounts to the Kinesis stream in the auditing account.

D. Use AWS Lambda to write logs to Amazon ES in the auditing account. Create a CloudWatch subscription filter and use Lambda in the sub-accounts to stream the logs to the Lambda function deployed in the auditing account.

Answer: C
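In answer C, each sub-account attaches a subscription filter to its log groups that forwards events to the auditing account. A minimal sketch of the `put_subscription_filter` parameters (boto3 CloudWatch Logs API) is below; the account ID, region, and resource names are placeholders.

```python
import json

# Parameters for logs.put_subscription_filter (boto3), run in each
# sub-account. The destination ARN points at a cross-account
# CloudWatch Logs destination in the auditing account that fronts the
# Kinesis stream; all identifiers here are placeholders.
subscription_filter = {
    "logGroupName": "/vpc/flow-logs",
    "filterName": "ship-to-audit-account",
    "filterPattern": "",  # empty pattern forwards every log event
    "destinationArn": (
        "arn:aws:logs:us-east-1:111111111111:destination:audit-logs"
    ),
}

print(json.dumps(subscription_filter, indent=2))
```

Because the destination in the auditing account carries an access policy that can allow an AWS Organizations path or a list of account IDs, new sub-accounts can be added without reworking the pipeline, which addresses the "number of sub-accounts keeps changing" requirement.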

Question: 4

A company wants to use a grid system for a proprietary enterprise in-memory data store on top of AWS.

This system can run on multiple server nodes in any Linux-based distribution. The system must be able to reconfigure the entire cluster every time a node is added or removed. When nodes are added or removed, an /etc/cluster/nodes.config file must be updated to list the IP addresses of the current node members of the cluster. The company wants to automate the task of adding new nodes to a cluster.

What can a DevOps Engineer do to meet these requirements?

A. Use AWS OpsWorks Stacks to layer the server nodes of that cluster. Create a Chef recipe that populates the content of the /etc/cluster/nodes.config file and restarts the service by using the current members of the layer. Assign that recipe to the Configure lifecycle event.

B. Put the file nodes.config in version control. Create an AWS CodeDeploy deployment configuration and deployment group based on an Amazon EC2 tag value for the cluster nodes. When adding a new node to the cluster, update the file with all tagged instances, and make a commit in version control. Deploy the new file and restart the services.

C. Create an Amazon S3 bucket and upload a version of the /etc/cluster/nodes.config file. Create a crontab script that polls for that S3 file and downloads it frequently. Use a process manager, such as Monit or systemd, to restart the cluster services when it detects that the file was modified. When adding a node to the cluster, edit the file to include the most recent members. Upload the new file to the S3 bucket.

D. Create a user data script that lists all members of the cluster's current security group and automatically updates the /etc/cluster/nodes.config file whenever a new instance is added to the cluster.

Answer: A
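The core of answer A is a Chef recipe, assigned to the Configure lifecycle event, that re-renders /etc/cluster/nodes.config from the layer's current members each time a node joins or leaves. The Python sketch below models just that rendering step; the one-IP-per-line file format is an assumption for illustration.

```python
# Sketch of what the Configure-event recipe does: turn the current
# layer membership into the nodes.config file content. The file
# format (one IP address per line) is an assumed example.
def render_nodes_config(member_ips):
    """Return nodes.config content listing the given cluster members."""
    return "\n".join(sorted(member_ips)) + "\n"


# Example: three current members of the OpsWorks layer.
content = render_nodes_config(["10.0.1.12", "10.0.1.5", "10.0.1.9"])
print(content, end="")
```

Because OpsWorks Stacks fires the Configure event on every instance in the stack whenever any instance enters or leaves the online state, each node rewrites its copy of the file and restarts the service automatically, with no manual bookkeeping.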

Question: 5

A company has established tagging and configuration standards for its infrastructure resources running on AWS. A DevOps Engineer is developing a design that will provide a near-real-time dashboard of the compliance posture with the ability to highlight violations. Which approach meets the stated requirements?

A. Define the resource configurations in AWS Service Catalog, and monitor the AWS Service Catalog compliance and violations in Amazon CloudWatch. Then, set up and share a live CloudWatch dashboard. Set up Amazon SNS notifications for violations and corrections.

B. Use AWS Config to record configuration changes and output the data to an Amazon S3 bucket. Create an Amazon QuickSight analysis of the dataset, and use the information on dashboards and mobile devices.

C. Create a resource group that displays resources with the specified tags and those without tags. Use the AWS Management Console to view compliant and non-compliant resources.

D. Define the compliance and tagging requirements in Amazon Inspector. Output the results to Amazon CloudWatch Logs. Build a metric filter to isolate the monitored elements of interest and present the data in a CloudWatch dashboard.

Answer: B 
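Answer B relies on AWS Config flagging resources that violate the tagging standard before the data reaches S3 and QuickSight. The pure-Python sketch below mimics that required-tags check; the tag keys and resource records are example values, not the company's actual standard.

```python
# Simplified analogue of a required-tags compliance check, as an AWS
# Config rule would apply it. The required keys below are examples.
REQUIRED_TAGS = {"Environment", "Owner", "CostCenter"}


def find_violations(resources):
    """Return IDs of resources missing one or more required tag keys."""
    return [
        res["id"]
        for res in resources
        if not REQUIRED_TAGS <= set(res.get("tags", {}))
    ]


# Example inventory: one compliant instance, one under-tagged one.
inventory = [
    {"id": "i-0abc", "tags": {"Environment": "prod", "Owner": "ops",
                              "CostCenter": "1234"}},
    {"id": "i-0def", "tags": {"Environment": "dev"}},
]

print(find_violations(inventory))  # → ['i-0def']
```

In the actual design, AWS Config records these evaluations continuously and delivers them to the S3 bucket, so the QuickSight dashboard stays near real time without any custom polling.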

For More Info Please Visit The Link:

https://realamazondumps.com/