How to pass the AWS Cloud Practitioner (CLF-C01) certification exam. Part 1

Ravindra Elicherla
9 min read · Jul 22, 2020

Before you start reading this, do read how I passed two AWS exams in less than two months here. If you have decided to take the Cloud Practitioner certification exam, I assume you are keen to start your cloud learning journey. It is indeed a great first step. The Cloud Practitioner exam is relatively easy to pass, yet it builds a foundation for your future cloud certifications. It is also the only AWS certification that anyone can take irrespective of their role: developer, manager, Director, CTO, or members of sales, purchasing, or finance teams.

Read these 10 tips to ace the exam easily.

1. Understand AWS infrastructure:

Here is a beautiful representation of AWS infrastructure. Understand the differences between a region, an availability zone (AZ), a local zone, an edge location, and a point of presence. This link gives a good understanding of all these concepts. Also, understand that some services are global while others are scoped to a region or an AZ. For example, EC2 instances live within a region and EBS volumes live within an AZ. If you create an EC2 instance in the Mumbai region, you can’t see it in the Sydney region. IAM users, groups, and roles are global: they are available across all regions. Key pairs are scoped to a region, and the same goes for security groups. When it comes to S3, bucket names are unique across all existing bucket names in Amazon S3, yet you create each bucket within a specific region.
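To make the scoping concrete, here is a minimal sketch using boto3, the AWS SDK for Python (assuming the SDK is installed and credentials are configured; the exam itself needs no code, but seeing it helps): the same EC2 call against two regions returns two independent sets of instances, while IAM returns the same users no matter which region the client targets.

```python
import boto3

# EC2 is region-scoped: the same call against two regions returns
# two completely independent sets of instances.
for region in ("ap-south-1", "ap-southeast-2"):  # Mumbai, Sydney
    ec2 = boto3.client("ec2", region_name=region)
    reservations = ec2.describe_instances()["Reservations"]
    count = sum(len(r["Instances"]) for r in reservations)
    print(f"{region}: {count} instance(s)")

# IAM is global: the result does not depend on any region setting.
iam = boto3.client("iam")
print([u["UserName"] for u in iam.list_users()["Users"]])
```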

Sample Question:

Question: What is the correct relationship between a region and an availability zone?

A: Availability zone contains a region

B: A region contains one or more AZs

C: A region contains just one AZ

D: One Availability Zone spreads across multiple regions

The answer is B: one or more AZs are part of a region.

2. Shared responsibility model:

Security and Compliance is a shared responsibility between AWS and the customer. AWS’s responsibility is “Security of the Cloud” and the customer’s responsibility is “Security in the Cloud”. The key here is “Of” the cloud versus “In” the cloud. Imagine that you are living in a gated community (AWS in our case). Security of the community is taken care of by the central security team. However, entry of people into your apartment (an EC2 instance in our case) is decided by you: you give instructions to the central security team on whether to allow people in at the main gate, and the central security team does not really care what happens inside the apartment. Similarly, if you spin up an EC2 instance, save some data in the root folder, and expose it to the outside world, AWS will not be responsible. An even simpler case: if you upload sensitive data into S3 and make it public, AWS can’t do anything.

AWS is responsible for data center security, compute, storage, networking, database services, and ensuring availability zones and edge locations are up and running. The customer is responsible for the platform and applications running on top, identity and access management, the OS, network and firewall configurations, data encryption (both client-side and server-side), and traffic protection.

[Image: AWS Shared Responsibility Model. © 2020, Amazon Web Services, Inc.]
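To make “security in the cloud” concrete, here is a minimal sketch of one customer-side control, assuming boto3 and a hypothetical bucket name: enabling S3 Block Public Access so a bucket cannot accidentally be exposed. AWS provides the switch; flipping it (or not) is on you.

```python
import boto3

s3 = boto3.client("s3")

# Block all forms of public access for this bucket. This is a
# customer responsibility: AWS supplies the control, you apply it.
s3.put_public_access_block(
    Bucket="my-example-bucket",  # hypothetical bucket name
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)
```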

Sample Question

Q: In the shared responsibility model, what is AWS’s responsibility?

A: Ensuring Data is encrypted at rest

B: Ensuring the right ports are opened in a security group

C: Ensuring the right people are given access using IAM policies

D: Ensuring that firmware is updated on hardware devices

The right answer is D. All the others are the responsibility of the customer. When in doubt, think of the gated community’s security.

3. Understand the EC2 instances purchase options:

To be precise, there are 7 options.

a. On-Demand Instances: With On-Demand Instances, you pay for compute capacity by the second with no long-term commitments. This is similar to ordering a taxi on Uber. You pay as per your use.

b1. Reserved Instances: Reserved Instances are not physical instances, but rather a billing discount applied to the use of On-Demand Instances in your account. This is like renting a car for a specific period, say per day or per week. You pay even on the days you do not drive.

b2. Scheduled Reserved Instances: Scheduled Reserved Instances (Scheduled Instances) enable you to purchase capacity reservations that recur on a daily, weekly, or monthly basis, with a specified start time and duration, for a one-year term. You reserve the capacity in advance, so that you know it is available when you need it.

c. Spot Instances: Spot Instances enable you to request unused EC2 capacity at steep discounts, but your instances can be terminated with a 2-minute notice. The analogy is not perfect, but think of buying an unreserved train ticket: you travel cheap, but you may have to give up your seat when a reserved passenger arrives.

d. Dedicated Hosts: EC2 Dedicated Host is a physical server with EC2 instance capacity fully dedicated to your use.

e. Dedicated Instances: Dedicated Instances are Amazon EC2 instances that run in a virtual private cloud (VPC) on hardware that’s dedicated to a single customer. The difference is that Dedicated Hosts give you visibility and control of the physical server, so you can use your existing per-socket, per-core, or per-VM software licenses, including Windows Server, Microsoft SQL Server, and SUSE Linux Enterprise Server.

f. On-Demand Capacity Reservations: On-Demand Capacity Reservations enable you to reserve capacity for your Amazon EC2 instances in a specific Availability Zone for any duration. For example, after a merger with a new company you may see a surge of requests for a period of one week. This is a one-off, and the compute capacity can be reserved for just that window.

Without introducing too many variables: in general, cost runs from low to high in the order of Spot Instances, Reserved Instances, On-Demand Instances, Dedicated Instances, and Dedicated Hosts.

Spot Instances are the cheapest, while Dedicated Hosts are the most expensive.
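To see how little separates the two most commonly compared options in practice, here is a minimal sketch with boto3 (the AMI ID is hypothetical): launching On-Demand and Spot is the same API call, with Spot flagged via the instance market options.

```python
import boto3

ec2 = boto3.client("ec2", region_name="ap-south-1")

# On-Demand: pay per second, no commitment.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical AMI ID
    InstanceType="t3.micro",
    MinCount=1, MaxCount=1,
)

# Spot: the same call flagged as a Spot request. Much cheaper, but the
# instance can be reclaimed with a two-minute interruption notice.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="t3.micro",
    MinCount=1, MaxCount=1,
    InstanceMarketOptions={"MarketType": "spot"},
)
```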

Sample question:

Q: Which of the following pricing models allows the customer to use their existing licenses?

A: Spot instances

B: Dedicated Instances

C: Dedicated Hosts

D: Reserved Instances

A and D can be eliminated straight away, but B and C sound similar. The tell is that only Dedicated Hosts give you visibility and control of the underlying physical server (its sockets and cores), which is exactly what per-socket and per-core licenses require. So the answer is C: Dedicated Hosts.

4. S3 Storage Classes

S3 is the object store in AWS. Knowing the differences between its storage classes can fetch you answers for 2 to 3 questions. When you save some money in a bank, you look at a few things: your money should be safe (durability), you should be able to withdraw money whenever you want (availability), and you would like to withdraw money as many times as you want (frequency of access).

Durability: For S3 Standard (more on this below), durability is 99.999999999% (I am saving your time: it is eleven nines). This means that if you store 10,000 objects in S3, you can expect to lose one object every 10 million years. Well, that is more durability than anyone needs. The message is: if you store data in S3, rest assured, you will never lose anything in your lifetime (thank God our jobs are safe!)

Availability: For S3 Standard, availability is 99.99% (four nines). This means there may be up to 52 minutes and 36 seconds of downtime per year.

Frequency of access: Some banks in India charge an additional fee if you withdraw money from an ATM more than a certain number of times. Similarly, if you access data frequently from S3, it adds to operational and network costs, and AWS charges for that.
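Those availability percentages translate directly into downtime budgets. Here is a quick sanity check of the downtime figures quoted in this section (the IA numbers appear below), computed from the percentage alone:

```python
# Downtime per year implied by an availability percentage.
MINUTES_PER_YEAR = 365.25 * 24 * 60  # ~525,960 minutes

for name, availability in [("S3 Standard", 99.99),
                           ("S3 Standard-IA", 99.9),
                           ("S3 One Zone-IA", 99.5)]:
    downtime = MINUTES_PER_YEAR * (1 - availability / 100)
    print(f"{name}: {downtime:.1f} min/year (~{downtime / 60:.2f} hours)")

# S3 Standard    -> ~52.6 min  (52 minutes, 36 seconds)
# S3 Standard-IA -> ~526 min   (~8.77 hours)
# S3 One Zone-IA -> ~2630 min  (~43.8 hours, i.e. ~1.83 days)
```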

Let's look at various options:

S3 Standard: S3 Standard offers high durability, availability, and performance object storage for frequently accessed data. It can be used for cloud applications, dynamic websites, content distribution, mobile and gaming applications, and big data analytics.

Durability: 99.999999999% and Availability 99.99%

S3 Standard-IA: Here, IA stands for Infrequent Access. It offers everything that is part of S3 Standard, but for infrequently accessed data; of course, the cost is lower than Standard. It is suitable for long-term storage, backups, and as a data store for disaster recovery files.

Durability: 99.999999999% and Availability 99.9% (three nines, 8.77 hours of downtime in a year)

S3 One Zone-Infrequent Access: This is similar to S3 Standard-IA, but as the data is stored in only one AZ, if that AZ goes down completely, you can’t access the data at that point in time. Your job is still safe!! This doesn’t impact durability in any way; it only impacts availability. It is cheaper than S3 Standard-IA and a good choice for storing secondary backup copies of on-premises data or easily re-creatable data.

Durability: 99.999999999% and Availability 99.5% (two and a half nines, 1.83 days of downtime in a year)

S3 Glacier: S3 Glacier is a secure, durable, and low-cost storage class for data archiving. S3 Glacier provides three retrieval options that range from a few minutes to hours. It is ideal for long-term archives.

Durability: 99.999999999%

S3 Glacier Deep Archive: This is S3’s lowest-cost storage class and supports long-term retention and digital preservation for data that may be accessed once or twice a year. Retrieval time is within 12 hours.

Durability: 99.999999999%
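One detail worth remembering: the storage class is chosen per object at upload time, not per bucket. A minimal sketch with boto3 (bucket and key names are hypothetical; valid StorageClass values include STANDARD, STANDARD_IA, ONEZONE_IA, GLACIER, and DEEP_ARCHIVE):

```python
import boto3

s3 = boto3.client("s3")

# Upload an object directly into the Standard-IA storage class:
# cheaper per GB stored, but retrieval fees apply per access.
s3.put_object(
    Bucket="my-example-bucket",       # hypothetical bucket name
    Key="backups/2020-07-22.tar.gz",  # hypothetical object key
    Body=b"...backup bytes...",
    StorageClass="STANDARD_IA",
)
```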

Sample question:

Q: Which of the following is a characteristic of Amazon S3?

A. An object store

B. A local file store

C. A network file system

D. S3 is CPU for EC2

The answer is A, Object store

One more Sample question:

Q: A company has a requirement to read an image, make a thumbnail, and store it. The thumbnail should be accessible within a minute and can be reproduced easily if it is not available. What is the cheapest storage option in S3?

A. S3- Standard

B. S3-Standard IA

C. S3-One Zone IA

D. Glacier

If you can guess the answer, please write it in the comments along with your reason.

5. Amazon Snow family:

You will definitely see one or two questions on the Snow family. At the 2016 re:Invent event, a truck made a dramatic entry to lots of cheers from the audience. Transferring exabyte-scale data (an exabyte is 1,000 petabytes) would take about 26 years on a 10 Gbps internet line; Snowmobile does it in a few months.
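The arithmetic behind that claim is simple and worth internalizing, since exam questions often hinge on whether shipping a device beats sending data over the wire. A back-of-the-envelope sketch:

```python
# Back-of-the-envelope transfer time: total bits divided by line rate.
def transfer_days(terabytes: float, gbps: float) -> float:
    bits = terabytes * 1e12 * 8    # TB -> bits (decimal units)
    seconds = bits / (gbps * 1e9)  # sustained line rate in bits/second
    return seconds / 86_400        # seconds -> days

print(transfer_days(1_000_000, 10) / 365.25)  # 1 EB over 10 Gbps -> ~25 years
print(transfer_days(10, 0.1))                 # 10 TB over 100 Mbps -> ~9 days
```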

AWS Snowcone: AWS Snowcone is the smallest member of the AWS Snow Family of edge computing, edge storage, and data transfer devices, weighing in at 4.5 pounds (2.1 kg) with 8 terabytes of usable storage. You can use Snowcone in a first responder’s backpack, or for IoT, vehicular, and even drone use cases.

[Image: AWS Snowcone. © 2020, Amazon Web Services, Inc.]

AWS Snowball: AWS Snowball is a data migration and edge computing device that comes in two device options: Compute Optimized and Storage Optimized. You can use these devices for data collection, machine learning and processing, and storage in environments with intermittent connectivity (like manufacturing, industrial, and transportation) or in extremely remote locations (like military or maritime operations) before shipping them back to AWS.

[Image: AWS Snowball. © 2020, Amazon Web Services, Inc.]

AWS Snowmobile: Snowmobile is an Exabyte-scale data transfer service used to move extremely large amounts of data to AWS. This makes it easy to move massive volumes of data to the cloud, including video libraries, image repositories, or even a complete data center migration.

[Image: AWS Snowmobile. © 2020, Amazon Web Services, Inc.]

Sample question.

Q: A company bought a startup and needs to transfer 10 TB of data. They have a very slow internet connection. What is the most cost-effective option to transfer the data?

a. Snowmobile

b. Snowcone

c. Snowball

d. Snowflake

Snowmobile would work, but it is far too costly and not an ideal option for 10 TB. Snowcone tops out at 8 TB per device. Snowball is the right option for 10 TB. There is no Snowflake service in AWS.

That’s all in Part 1. If you understand these concepts, you will be able to answer anywhere between 5 and 10 questions easily. Happy learning.

Part 2 coming soon….

PS: Content and pictures shown in this blog are for educational purposes only. Copyright belongs to Amazon Web Services, Inc.

