Thursday, December 27, 2018

Google Cloud Concepts Part I



The Google Cloud Platform provides a comprehensive big data solution in a single platform.

The Google Cloud Platform is a full service platform, and it's set up so that you can utilize not only cloud native services, but also open-source tools. It also supports both batch and stream data processing modes.

Google Cloud Platform resources consist of physical resources, like computers and hard disk drives, as well as virtual resources, such as virtual machines. The platform is global in scope, with resources located in Google data centers around the world. This global distribution has a number of positive implications, including redundancy in the event of failure. The vast reach of Google's global data centers means that you can deploy pretty much whatever number of resources you need without worry. It also means reduced latency, since you can locate your services at a data center close to your end users.



Resources reside in regions or zones. A region is a particular geographical location where resources run, and each region contains one or more zones. For example, the us-central1 region in the central US has zones us-central1-a, us-central1-b, us-central1-c, and us-central1-f. Resources that reside in a zone, such as virtual machine instances or persistent disks, are called zonal resources. Other resources, like static external IP addresses, are regional. Regional resources can be consumed by any resource within that region, including any zone within it, while zonal resources can only be used by other resources within the same zone.
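The scoping rules above can be sketched in a few lines of Python. This is an illustrative model, not any official GCP API; the zone and region strings follow the us-central1 example from the text.

```python
# Minimal sketch (not an official API) of GCP resource scoping:
# zonal resources are usable only within their own zone, while
# regional resources are usable from any zone inside their region.

def region_of(zone: str) -> str:
    """A zone like 'us-central1-a' belongs to region 'us-central1'."""
    return zone.rsplit("-", 1)[0]

def can_use(consumer_zone: str, resource_scope: str, resource_location: str) -> bool:
    if resource_scope == "zonal":
        return consumer_zone == resource_location
    if resource_scope == "regional":
        return region_of(consumer_zone) == resource_location
    return True  # global resources are accessible everywhere

# A VM in us-central1-b can use a regional IP address in us-central1 ...
print(can_use("us-central1-b", "regional", "us-central1"))   # True
# ... but not a persistent disk that lives in zone us-central1-a.
print(can_use("us-central1-b", "zonal", "us-central1-a"))    # False
```

The same check returns True for a zonal resource only when consumer and resource share the exact zone.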



Google Cloud Platform resources are hosted across multiple locations globally. Placing resources in different zones within a region provides isolation from many common types of infrastructure, software, and hardware failures, and placing resources in different regions provides an even higher level of protection against failure. The bottom line is that you can design robust systems using resources spread across different failure domains. Compute Engine resources are global, regional, or zonal. As an example, images are global resources, while disks are zonal. Global resources are accessible by resources regardless of region or zone, so virtual machine instances in different zones can use the same global image. The scope of a resource indicates how accessible it is to other resources. However, all resources, whether global, zonal, or regional, must be uniquely named within a project. This means you can't, for example, name a virtual machine instance "demo-instance" in one zone and then try to give another VM in the same project that same name.
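The per-project naming rule can be illustrated with a toy model. Everything here (the `Project` class, the project ID, the instance names) is hypothetical, meant only to show that a duplicate name is rejected even when the zones differ.

```python
# Illustrative sketch: resource names must be unique per project,
# regardless of which zone or region each resource lives in.

class Project:
    def __init__(self, project_id: str):
        self.project_id = project_id
        self._names = set()   # all resource names already used in this project

    def create_instance(self, name: str, zone: str) -> dict:
        if name in self._names:
            raise ValueError(f"{name!r} already exists in project {self.project_id}")
        self._names.add(name)
        return {"name": name, "zone": zone}

demo = Project("demo-project")
demo.create_instance("demo-instance", "us-central1-a")
try:
    # Same name, different zone: still rejected within one project.
    demo.create_instance("demo-instance", "us-central1-b")
except ValueError as err:
    print(err)
```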



Google Cloud Platform Services


Google Cloud Platform provides a huge number of services.

Some of the more common services include computing and hosting, storage, networking, and big data. Let's look at computing and hosting first. First, there's the managed application platform. This is offered as Google App Engine, and it's a platform as a service offering. It's a somewhat hands-off approach in which you allow Google to manage hosting, scaling, and monitoring. For example, if traffic to your e-commerce website takes a dramatic upturn, Google will automatically scale the system for you.
Next is container-based computing, which is focused on application code rather than deployment and hosting. Google Kubernetes Engine is referred to as containers as a service; it is very mature and one of the most powerful container orchestration platforms. Virtual machines are offered through a service called Google Compute Engine, which is considered a type of infrastructure as a service.
With this type of service, you are responsible for configuration, administration, and monitoring tasks. In other words, Google will make sure that reliable resources are always available and up to date, but it's on you to manage and provision them.

Now, let's look at storage services. Cloud SQL is a managed database service based on Structured Query Language, or SQL, and it offers either MySQL or PostgreSQL databases. Google Cloud Platform also offers two types of NoSQL data storage: Cloud Datastore and Cloud Bigtable.

Cloud Spanner is a fully managed, highly available, relational database service for mission-critical applications. Cloud Storage offers large-capacity, consistent, and scalable data storage. And Compute Engine offers persistent disks, available as the primary storage for your instances in both standard persistent disk and solid-state drive variants.

Now, let's look at networking services. Compute Engine provides networking services for virtual machine instances to use. You can load balance traffic across multiple instances. And there's Cloud DNS as well, which allows you to create and manage domain name system records. And Google Cloud Interconnect is an advanced connectivity service which allows you to connect your existing network to Google Cloud Platform networking resources.

And finally, big data services. First, BigQuery, which is a data analysis service. It includes custom schema creation, so you can organize your data as you wish; for example, you may have a schema structure in mind using specific datasets and tables. It offers the convenience of querying large datasets using SQL-like commands, so the learning curve is more manageable. It provides for loading, querying, and other operations via jobs, and supports managing and protecting data with controllable, manageable permissions.

Cloud Dataflow is a managed service that includes software development kits, or SDKs, for batch and streaming data processing modes. Cloud Dataflow is also applicable for extract, transform, load, or ETL, operations. Then there's Cloud Pub/Sub, an asynchronous messaging service. It allows an application to send messages as JSON structures. Messages are published to a named resource called a topic. Topics are global resources, which means that other applications and projects owned by your organization can subscribe to a topic, thereby receiving those messages in the body of HTTP requests or responses.
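The publish/subscribe flow described above can be sketched with a toy in-memory model. This is not the google-cloud-pubsub client library; the topic name and message fields are hypothetical, and the point is simply that JSON messages published to a topic are delivered to every subscriber.

```python
import json

# Toy model of the Pub/Sub pattern: publishers send JSON messages to a
# topic, and every subscriber of that topic receives a copy.

class Topic:
    def __init__(self, name: str):
        self.name = name
        self.subscribers = []

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def publish(self, payload: dict):
        message = json.dumps(payload)      # messages travel as JSON structures
        for deliver in self.subscribers:
            deliver(message)

received = []
# Hypothetical topic name following the usual projects/<id>/topics/<name> shape.
orders = Topic("projects/demo-project/topics/orders")
orders.subscribe(lambda m: received.append(json.loads(m)))
orders.publish({"order_id": 42, "status": "shipped"})
print(received[0]["order_id"])  # 42
```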

Benefits of Google Cloud Platform

Some of the main benefits of Google Cloud Platform include the following.



Let's have a closer look at these. Future-proof infrastructure includes factors like live migration, which means you can move Google Compute Engine instances to nearby hosts, even while they are active and under high load. Google Cloud Platform offers pricing innovations like per-second billing and discounts for sustained use. The platform allows you to configure a wide combination of memory and virtual CPUs, helping to avoid over-provisioning when sizing hardware for a particular workload. Fast archive restore provides high throughput for immediate restoration of data. Google's load balancer is the same system that supplies load balancing to Google products like Gmail and Google Maps over a globally distributed platform; it's extremely fast and capable of tolerating extreme bursts of traffic. You can take advantage of the Google security model, built and maintained by some of the top application, information, and network security experts. This is the same infrastructure that secures Google applications like Gmail and G Suite.
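The two pricing ideas mentioned above, per-second billing and sustained use discounts, come down to simple arithmetic. The hourly rate and discount tiers below are invented for illustration; real GCP pricing differs by machine type and changes over time.

```python
# Hedged arithmetic sketch of per-second billing and sustained use
# discounts. HOURLY_RATE and the tier percentages are made-up numbers.

HOURLY_RATE = 0.10  # hypothetical $/hour for one VM

def per_second_cost(seconds: int) -> float:
    # Per-second billing: 3,725 seconds costs just over one hour's rate,
    # instead of being rounded up to a full two hours.
    return round(seconds * HOURLY_RATE / 3600, 4)

def sustained_use_discount(fraction_of_month: float) -> float:
    # Illustrative tiers: the longer a VM runs in a month, the larger
    # the automatic discount.
    if fraction_of_month >= 0.75:
        return 0.30
    if fraction_of_month >= 0.50:
        return 0.20
    if fraction_of_month >= 0.25:
        return 0.10
    return 0.0

print(per_second_cost(3725))        # 0.1035 -- just over one hour's cost
print(sustained_use_discount(1.0))  # 0.3 -- ran the whole month
```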

Google maintains a global network footprint, boasting over 100 points of presence spanning over 30 countries. Now let's look at powerful data and analytics as a benefit. You can build distributed services for fast results on the platform using BigQuery, Cloud Datalab, and Cloud Dataproc. These are the same services that Google uses, so queries that traditionally took hours or days can now be performed in a fraction of the time. Google Cloud Platform offers powerful applications and tools for working with big data, with data processing tools like Cloud Dataflow, Cloud Pub/Sub, BigQuery, and Cloud Datalab, making it easier to use extreme volumes of data to deliver results.

Again, these are the same products that Google itself uses. Google Cloud Machine Learning provides access to the powerful deep learning systems that Google uses for services like Google Translate and Google Photos, as well as voice search. With respect to serverless computing, there are no upfront provisioning costs; resources are allocated dynamically as needed. You simply bring your code and data. Serverless computing provides full management of servers and eliminates the repetitive tasks and potential errors that are inherent in chores like scaling clusters and applying security patches. With automatic scaling and dynamic provisioning of resources, you pay only for what you use.

Let's consider a couple of use cases for serverless computing. Take, for example, a web backend: you employ Google App Engine with the highly scalable NoSQL Cloud Datastore database for a full-scale, powerful backend infrastructure. Or Internet of Things (IoT) device messaging: combine the real-time, geo-redundant Cloud Pub/Sub messaging service with Cloud Dataflow's serverless stream and batch data processing. When considering extract, transform, and load, or ETL, we could combine Cloud Dataflow again for stream and batch data processing with BigQuery for serverless data warehousing.




Now one of the other benefits, customer-friendly pricing. As pointed out earlier, you do not have to commit to a specific deployment size, so no upfront costs are involved. You pay as you go and with per second pricing, that means that you pay for services as you require them. So you don't have to maintain a mountain of hardware and have that hardware sitting there idle. And you stop paying when you stop using a service, with no termination fees. Google Cloud Platform offers, as another benefit, data center innovation. For example, high performance virtual machines for fast and consistent performance.

Google's global network provides fast networking performance, strong redundancy, and high availability. Live migration technology means that maintenance of virtual machines is transparent, never requiring downtime for scheduled maintenance. Google maintains very high security compliance and standards, providing some of the most secure infrastructure on earth. And Google builds its data centers with energy efficiency top of mind. In fact, Google was the first major Internet services organization to obtain ISO 50001 certification, and Google has reportedly been carbon-neutral for over a decade. Now, consider security.
Google's security model is an end-to-end process. Google uses practices and controls to secure data access, and when retired, hard disks that contain customer information undergo a data destruction process. With only a few exceptions, customer data stored at rest is always encrypted on Google Cloud Platform. The encryption is automatic and transparent, so no customer intervention or action is required. Google's secure global network helps improve the security of in-transit data, and the combination of Cloud Interconnect and managed VPN means that you can create encrypted channels from an on-premises private IP environment to Google's network. In addition, Cloud Security Scanner helps App Engine developers identify common vulnerabilities.

Google Cloud Platform also allows the configuration of user permissions at the project level, for full control of who has access to which resources and at what level. Using tools like Google Cloud Logging and Google Cloud Monitoring simplifies the collection and analysis of request logs, as well as the monitoring of infrastructure service availability.


Comparing GCP and Other Models

When you're talking about cloud services suppliers, there are really three main players: Amazon Web Services, Google Cloud Platform, and Microsoft Azure. The major differences from platform to platform include pricing, and one thing that you should keep in mind when considering pricing is how cost is calculated. For example, the pricing for Amazon's EC2 and Azure's Virtual Machines scalable computing services can get pretty complicated, while Google's scalable computing service is perhaps a little less flexible, but its pricing is far more straightforward. Another major difference lies in how these vendors name and group the services that they offer.
  
Compute offerings
With respect to scalable computing on demand, Amazon Web Services has its Elastic Compute Cloud, or EC2, while Google Cloud Platform offers Compute Engine, and Azure has Virtual Machines and Virtual Machine Scale Sets. For web and mobile apps, we have AWS Elastic Beanstalk; GCP (and I'll refer to Google Cloud Platform as GCP hereafter) has App Engine; and Azure offers Web Apps and Cloud Services. For software container management, Amazon Web Services has ECS, Amazon's EC2 Container Service, while EKS is Amazon's Elastic Container Service for Kubernetes. Google Cloud Platform provides Google Kubernetes Engine. I'm quite familiar with that, having a lot of personal experience with GKE, and I can tell you it is very powerful, very flexible, and fantastic. Azure offers AKS, which is the Azure Kubernetes Service, as well as Azure Container Instances.
  
For storage offerings, we have object storage: Amazon Web Services offers Simple Storage Service, or S3; GCP has Cloud Storage; while Azure has Blob Storage. As far as archiving, also known as cold storage, is concerned, AWS offers Glacier, GCP has Cloud Storage Nearline, and Azure offers Backup and Archive. With respect to content delivery networks, AWS offers CloudFront, GCP offers Cloud CDN, and Azure has its Content Delivery Network.

Now let's look at analytics offerings. For big data, AWS offers EMR and Athena. While GCP offers BigQuery, Cloud Dataflow, Dataproc, Datalab, and Pub/Sub. Azure has HDInsight and Data Lake Analytics. For BI, or business intelligence, AWS offers QuickSight. GCP offers Data Studio, while Azure has Power BI. Now, you might be thinking, well, Power BI, isn't that an installable application? Yes, it is an installable application. There are applications that are installable on desktop, or on servers even. But they connect to cloud resources very readily and easily. That's why I'm including those here. With respect to machine learning, AWS has Amazon Machine Learning, or AML. Google has Cloud Machine Learning, and Azure offers Azure Machine Learning.

Now let's consider briefly locations offerings. So really you should try to choose your data center close to users, because it reduces latency and provides better user experience. I mean, that goes without saying, if you're involved in any capacity in networking, in administration, you already know that. So here's the thing, AWS has the most global coverage, but unfortunately there's no coverage in Africa at this point. Google Cloud Platform has good coverage in the US, but not so good in Europe and Asia, and there's none in South America or Africa at this time. But knowing Google, I'm pretty sure that they are in the process of planning those right now. And as far as Azure goes, really they have the second best global coverage behind AWS, but again, there is no coverage in Africa.



















Initial release: April 7, 2008
License: Proprietary
Written in: Java, C++, Python, Go, Ruby

To begin, we need to sign up for a Google Cloud Platform (GCP) account; new customers get $300 of credit to spend in the first 12 months.
Once we have logged in to the GCP console, we need to create a new project in order to use the GCP services. Simply click the New Project button and type the project name. At the same time, GCP assigns a unique project ID to your project, which we will need later to access GCP from the terminal.

Friday, December 7, 2018

Defensive Pessimism


Growing up, I heard again and again in school, especially on Oct 2nd, the words below

"Keep your thoughts positive because your thoughts become your words," 


But school didn’t teach me what to do if I have a failure in life, even though I was there Monday to Friday for 12 years of my life …

The fact is, the human body, and especially the human brain, is developed enough to help us cope with the loss of someone dear, or with a sudden illness, whether our own or in the family.

Defensive pessimism is a strategic approach to life that helps bolster your physical, emotional and mental health.

People who tend to be anxious can benefit from this approach. Defensive pessimism is the process that allows anxious people to plan well.


Obesity in the UK is at record levels among men, women and children, and many conditions, like heart disease and diabetes, are linked to being overweight. "Defensive pessimism can help with eating well and exercising regularly."

A defensive pessimist may realize that high blood pressure and diabetes run in his or her family, and in turn exercise and eat a healthy diet to try to avoid those diseases.

Sometimes taking that umbrella with you is defensive pessimism.

Think about the questions below, keep a journal of your answers in your diary, and see each day how your decisions change.


1.     What is my goal?
2.     What would be the most positive outcome?
3.     What action will I take to reach this goal?
4.     What is the biggest obstacle?
5.     When and where is the obstacle most likely to occur?
6.     What can I do to prevent the obstacle?
7.     What specific thing will I do to get back to my goal when this obstacle happens?



Optimism:
(Read from bottom to top)

Pessimism:
(Read top to bottom)

Thursday, November 22, 2018

Simple Gestures of love

It is story time. A few years ago I worked in a family business. We had a lot of regulars, and I really enjoyed the customers — except for this one woman. She was awful, hateful, rude and just a jerk. One day I was talking to one of my co-workers when we saw Mrs. Wonderful pull up in front of the store. The other worker said to me, “We should kick her bitchy ass out.” I don’t know what made me do it but I said “No, let’s kill her with kindness.” I ran over and immediately started handpicking the three pounds of chocolate mint ice cream that she purchased every time she came in. She hit the door, frowning. I had it weighed, bagged and rung up before she got to the register. I handed it to her, she took it, turned and left. Not a word.
 
The same thing happened for weeks. It sort of became a game for us, to see if we could get it ready before she got to the front door from her car. Finally one day I said, “You can’t have it until you prove to me that you have teeth.” She looked very angry and said, “WHAT?” I was taken aback and replied, “You never smile; smile at me.” She glared and gave me a death head smile, no happiness or humor, just teeth. I thought, oh crap. But she took her ice cream and left. She didn’t come back in for a couple of weeks and I thought it was my fault — I had run off a customer by being rude. I felt really bad.

Then one day she came in and said, “I have to thank you girls. Every time I stopped here you were all so nice to me and I was dreadful. You see, I was taking the ice cream to my husband who was dying in the hospital and the only thing he wanted to eat was chocolate mint ice cream. I was so distraught every time I arrived here that I know I was awful. You all made it so much easier for me with your consideration and kindness. I don’t think I could have taken it if it hadn’t been for you girls.” We were floored.

From her I learned, at the age of 27, that you never know what’s going on in someone else’s life. So just be kind. That jerk tail-gating you, just pull over and let him by; he may have a hurt child in the car. That asshole who took your parking spot may have just buried his wife. You just never know — and kindness costs you nothing.

PS. This story has no relation to any people you may know :)

Thursday, November 8, 2018

How to pack your hand luggage?


The infographic was put together by Stasher, which describes itself as ‘the Airbnb of luggage’ and the world’s largest luggage storage network.






















Saturday, September 29, 2018

Cardiff through my lense


Full Album below

https://www.flickr.com/photos/annmj17/albums/72157701620468564

Wednesday, September 5, 2018

Random acts of kindness


Evidence shows that being kind to friends, family and strangers really does improve your mental and physical wellbeing. The Mental Health Foundation have put together some suggestions that you may wish to try throughout September 

At home and in your community
  • Call a friend that you haven’t spoken to for a while
  • Send a letter to your nan and grandad
  • Send flowers to a friend out of the blue
  • Offer to pick up some groceries for your elderly neighbour
  • Help a friend pack for a move
  • Send someone a handwritten thank you note
  • Offer to babysit for a friend
  • Walk your friend’s dog
  • Tell your family members how much you love and appreciate them
  • Help out at home with household chores
  • Check on someone you know who is going through a tough time
  • Help a friend get active
At work
  • Make a cup of tea for your colleagues
  • Get to know the new staff member
  • Lend your ear - listen to your colleague who is having a bad day
  • Say good morning
  • Bake a cake or healthy treat for your colleagues
  • Give praise to a colleague for something they’ve done well
In public places
  • Give up your seat to an elderly, disabled or pregnant person
  • Take a minute to help a tourist who is lost even though you are in a rush
  • Have a conversation with a homeless person
  • Help a mother carrying her pushchair down the stairs or hold the door for her
  • Let a fellow driver merge into your lane
  • Pick up some rubbish lying around in the street
  • Smile and say hello to people you may pass every day, but have never spoken to before

Thanks 
Ann

Wednesday, July 18, 2018

Cloud Types and Service Models


Some of the characteristics that define cloud computing include metered usage, where we pay only for those IT resources that we use in the cloud.

Another characteristic is resource pooling, where the cloud provider is responsible for pooling together all of the physical resources like server hardware, storage, network equipment, and that's made available to cloud subscribers, otherwise called tenants.

Another characteristic is that we should be able to access our cloud IT resources over a network, and in the case of a public cloud that means access from anywhere over the Internet.

Rapid elasticity is another characteristic so that we can quickly provision resources and deprovision them as required, and this is often done through a self-provisioning web portal.


A public cloud is one whose services are potentially accessible to all Internet users. We say potentially because there might be a requirement to sign up for an account or pay a subscription fee, but potentially it is available. A public cloud has worldwide geographic locations, and that's definitely the case with Amazon Web Services. The cloud provider is responsible for acquiring all of the hardware and making sure it's available for the IT services that they sell as cloud services to their customers.

A private cloud, on the other hand, is accessible only to a single organization and not to everybody over the Internet, and that's because it's organization owned and maintained hardware. However, a private cloud still does adhere to the exact same cloud characteristics that a public cloud does. For example, having a self-provisioned rapid elasticity of pooled IT resources available, that's still a cloud. In this case it's private because it's on hardware owned by the organization. The purpose of a private cloud is really apparent in larger government agencies and enterprises where we can track usage of IT resources and then use that for departmental chargeback.

A hybrid cloud is the best of both worlds. The two worlds we're talking about are the on-premises IT computing environment and the cloud computing environment. We have to consider that the migration of on-premises systems and data could potentially take a long time. So, for example, we might have data stored on-premises and in the cloud at the same time. And this is possible, for example, using the Amazon Web Services Storage Gateway, where we've got a cached copy of data available locally on the Gateway appliance on our on-premises network, but it's also replicating that data into the cloud. We might also, as another example, have a hardware VPN that links our on-premises environment to an Amazon Web Services Virtual Private Cloud, essentially a virtual network running in the cloud.

A community cloud serves the same needs that are required across multiple tenants. For example, Amazon Web Services has a government cloud in the United States, where it deals with things like sensitive data requirements, regulatory compliance. It's managed by US personnel and it's also FedRAMP compliant. FedRAMP, of course, is the Federal Risk and Authorization Management Program. So having these specific types of clouds available, in this case the government cloud, is referred to as a community cloud.


Cloud computing service models.

 So what is a service model anyway? Well, as it applies to cloud computing, it really correlates to the type of cloud service that we would subscribe to. So let's think about IT components like virtual machines and databases and websites and storage. Each of these examples correlates to a specific type of cloud computing service model.

Let's start with Infrastructure as a Service, otherwise called IaaS. This includes things in Amazon Web Services like EC2 virtual machines, S3 cloud storage, and virtual networks, which are called VPCs, or Virtual Private Clouds. That's core IT infrastructure, and so it's considered Infrastructure as a Service.

Another type of cloud computing model is Platform as a Service, otherwise called PaaS. This deals with things like databases or even things like searching, such as the Amazon CloudSearch capability.

Software as a Service is called SaaS, and this is the way we would deal with things like websites, or using Amazon Web Services WorkDocs, where we can work with office productivity documents like Excel and Word documents in the cloud.

Security as a Service is called SECaaS. This deals with security that's being provided by a provider. So we're essentially transferring that risk out to some kind of a hosted solution. And it comes in many forms. It could be spam or malware scanning done for email in the cloud. Or as we see here, we've got an option in Amazon Web Services called AWS Shield. The purpose of this offering is for distributed denial of service attack protection.


A DDoS occurs when an attacker has control of compromised machines, otherwise called zombies; the collection of these on a network is called a botnet. The attacker can issue commands to those slave machines so that they attack a victim host, as pictured here, or an entire network, such as flooding it with traffic and thereby preventing legitimate traffic from reaching, for example, a legitimate website. In many cases these botnets are actually rented out by malicious users to the highest bidder, so for a fee one could potentially pay for the use of a botnet to bring down a network or a host. Luckily, with Amazon Web Services this can be mitigated using AWS Shield. DDoS protection mechanisms will often do things like looking for irregular traffic flows and blocking certain IP addresses.
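One of those mitigation ideas, watching per-IP request rates and blocking addresses with irregular traffic, can be sketched very simply. Real services like AWS Shield are far more sophisticated; the threshold and IP addresses below are arbitrary examples.

```python
from collections import Counter

# Simplified sketch of rate-based DDoS detection: count requests per
# source IP within one time window and flag any address whose volume
# exceeds a threshold. The threshold here is an arbitrary example.

THRESHOLD = 100  # max requests per window before an IP is flagged

def find_offenders(requests, threshold=THRESHOLD):
    """requests: iterable of source IP strings seen in one time window."""
    counts = Counter(requests)
    return {ip for ip, n in counts.items() if n > threshold}

# One flooding bot (500 requests) next to a normal user (3 requests):
window = ["203.0.113.5"] * 500 + ["198.51.100.7"] * 3
print(find_offenders(window))  # {'203.0.113.5'}
```

Flagged addresses would then be dropped or rate-limited upstream, so legitimate traffic keeps flowing.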


Tuesday, July 10, 2018

Basic Agile Scrum Interview QA


AGILE

Agile software development refers to a group of software development methodologies based on iterative development, where requirements and solutions evolve through collaboration between self-organizing cross-functional teams. Agile methods or Agile processes generally promote a disciplined project management process that encourages frequent inspection and adaptation, a leadership philosophy that encourages teamwork, self-organization and accountability, a set of engineering best practices intended to allow for rapid delivery of high-quality software, and a business approach that aligns development with customer needs and company goals. Agile development refers to any development process that is aligned with the concepts of the Agile Manifesto. The Manifesto was developed by a group of seventeen leading figures in the software industry, and reflects their experience of what approaches do and do not work for software development. Read more about the Agile Manifesto.

SCRUM

Scrum is a subset of Agile. It is a lightweight process framework for agile development, and the most widely-used one.
  • A “process framework” is a particular set of practices that must be followed in order for a process to be consistent with the framework. (For example, the Scrum process framework requires the use of development cycles called Sprints, the XP framework requires pair programming, and so forth.)
  • “Lightweight” means that the overhead of the process is kept as small as possible, to maximize the amount of productive time available for getting useful work done.
Scrum process is distinguished from other agile processes by specific concepts and practices, divided into the three categories of Roles, Artifacts, and Time Boxes. These and other terms used in Scrum are defined below. Scrum is most often used to manage complex software and product development, using iterative and incremental practices. Scrum significantly increases productivity and reduces time to benefits relative to classic “waterfall” processes. Scrum processes enable organizations to adjust smoothly to rapidly-changing requirements, and produce a product that meets evolving business goals. An agile Scrum process benefits the organization by helping it to

  • Increase the quality of the deliverables
  • Cope better with change (and expect the changes)
  • Provide better estimates while spending less time creating them
  • Be more in control of the project schedule and state

1. What is the duration of a scrum sprint?

Answer: Generally, the duration of a scrum sprint (scrum cycle) depends upon the size of the project and the team working on it. The team size may vary from 3-9 members. In general, a scrum sprint completes in 3-4 weeks. Thus, on average, the duration of a scrum sprint (scrum cycle) is 4 weeks. This type of sprint-based Agile scrum interview question is very common in an agile or scrum master interview.

2. What is Velocity?

Answer: The velocity question is generally posed to understand whether you have done some real work and are familiar with the term. Its definition, “Velocity is the rate at which a team progresses sprint by sprint,” should be enough. You can also add an important feature of velocity: it can’t be compared across two different scrum teams.
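In practice, velocity is just story points completed per sprint, averaged over recent sprints. The numbers below are hypothetical, but the calculation is the standard one:

```python
# Illustrative velocity calculation: story points completed in each of
# the last few sprints, averaged to give the team's velocity.

completed_points = [21, 18, 24, 19]  # hypothetical points per sprint

velocity = sum(completed_points) / len(completed_points)
print(velocity)  # 20.5

# Velocity is then used for forecasting: a 123-point backlog at this
# velocity needs roughly 123 / 20.5 = 6 sprints.
```

Because teams size stories differently, this number only has meaning within one team, which is why it can't be compared across teams.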

3. What do you know about impediments in Scrum? Give some examples of impediments.

Answer: Impediments are the obstacles or issues faced by a scrum team which slow down its speed of work. If something is blocking the scrum team from getting work “Done”, then it is an impediment. Impediments can come in any form. Some examples of impediments are:
  • Resource missing or sick team member
  • Technical, operational, organizational problems
  • Lack of management supportive system
  • Business problems
  • External issues such as weather, war etc
  • Lack of skill or knowledge
While answering impediment-related agile scrum interview questions, remember that you may be asked how to remove any of the mentioned impediments.

4. What is the difference and similarity between Agile and Scrum?

Answer: Difference between Agile and Scrum – Agile is a broad spectrum: it is a methodology used for project management, while Scrum is just one form of Agile that describes the process and its steps more concisely. Agile is a practice, whereas Scrum is a procedure for pursuing this practice.
The similarity between Agile and Scrum – Agile involves completing projects in steps, or incrementally, and the Agile methodology is considered to be iterative in nature. Being a form of Agile, Scrum is the same: it is also incremental and iterative.

5. What is increment? Explain.

Answer: This is one of the commonly asked agile scrum interview questions, and a quick answer can be given this way. An increment is the total of all the product backlog items completed during a sprint. Each increment includes the value of all previous sprint increments, as it is cumulative. It must be in a usable state for the subsequent release, as it is a step toward reaching your goal.

6. What is the “build-breaker”?

Answer: A build-breaker is a situation that arises when there is a bug in the software. Due to this sudden, unexpected bug, the compilation process stops, execution fails, or a warning is generated. The responsibility of the tester is then to get the software back to a normal working state by removing the bug.

7. What do you understand by Daily Stand-Up?

Answer: You may well get an interview question about the daily stand-up. So, what should the answer be? The daily stand-up is an everyday meeting (most often held in the morning) in which the whole team meets for about 15 minutes to answer the following three questions –
  • What was done yesterday?
  • What is your plan for today?
  • Is there any impediment or block that restricts you from completing your task?
The daily stand-up is an effective way to motivate the team and make them set a goal for the day.

8. What do you know about Scrum ban?

Answer: Scrum-ban is a software development model based on both Scrum and Kanban. It is specifically used for projects that need continuous maintenance, have frequent programming errors, or face sudden changes. This model promotes completing each programming error fix or user story in the minimum time.

Sunday, July 8, 2018

Potatoes, Eggs, and Coffee Beans

Once upon a time a daughter complained to her father that her life was miserable and that she didn’t know how she was going to make it. She was tired of fighting and struggling all the time. It seemed just as one problem was solved, another one soon followed.
Her father, a chef, took her to the kitchen. He filled three pots with water and placed each on a high fire. Once the three pots began to boil, he placed potatoes in one pot, eggs in the second pot, and ground coffee beans in the third pot.

He then let them sit and boil, without saying a word to his daughter. The daughter moaned and waited impatiently, wondering what he was doing.

After twenty minutes he turned off the burners. He took the potatoes out of the pot and placed them in a bowl. He pulled the boiled eggs out and placed them in a bowl.
He then ladled the coffee out and placed it in a cup. Turning to her, he asked, “Daughter, what do you see?”



“Potatoes, eggs, and coffee,” she hastily replied.

“Look closer,” he said, “and touch the potatoes.” She did and noted that they were soft. He then asked her to take an egg and break it. After pulling off the shell, she observed the hard-boiled egg. 

Finally, he asked her to sip the coffee. Its rich aroma brought a smile to her face.
“Father, what does this mean?” she asked.

He then explained that the potatoes, the eggs, and the coffee beans had each faced the same adversity: the boiling water.

However, each one reacted differently.

The potato went in strong, hard, and unrelenting, but in boiling water, it became soft and weak.

The egg was fragile, with the thin outer shell protecting its liquid interior until it was put in the boiling water. Then the inside of the egg became hard.

However, the ground coffee beans were unique. After they were exposed to the boiling water, they changed the water and created something new.

“Which are you?” he asked his daughter. “When adversity knocks on your door, how do you respond? Are you a potato, an egg, or a coffee bean?”

Moral: In life, things happen around us, things happen to us, but the only thing that truly matters is what happens within us.

Which one are you?

Friday, July 6, 2018

Cloud Computing


Cloud computing is the on-demand delivery of compute power, database storage, applications, and other IT resources through a cloud services platform via the internet with pay-as-you-go pricing.

The best way to understand cloud computing is to compare it to traditional IT computing. On-premises, on our own networks, we would at some point have a capital investment in hardware. Think of things like having a server room constructed, getting racks, and then populating those racks with equipment: telecom equipment, routers, switches, servers, storage arrays, and so on. Then we have to account for powering that equipment. We then have to think about HVAC (heating, ventilation, and air conditioning) to make sure that we've got optimal environmental conditions to maximize the lifetime of our equipment. Then there's licensing: we have to license our software, install it, configure it, and maintain it over time, including updates. So with traditional IT computing, there is certainly quite a large need for an IT staff to take care of all of our on-premises IT systems.

But with cloud computing, at least with public cloud computing, we are talking about hosted IT services. Things like servers and related storage, and databases, and web apps can all be run on provider equipment that we don't have to purchase or maintain. So in other words, we only pay for the services that are used. And another part of the cloud is self-provisioning, where on-demand, we can provision, for example additional virtual machines or storage. We can even scale back on it and that way we're saving money because we're only paying for what we are using. With cloud computing, all of these self-provisioned services need to be available over a network.
In the case of public clouds, that network is the Internet.

But something to watch out for is vendor lock-in. When we start looking at cloud computing providers, we want to make sure that we've got a provider that won't lock us into a proprietary file format for instance. If we're creating documents using some kind of cloud-based software, we want to make sure that data is portable and that we can move it back on-premises or even to another provider should that need arise.

Then there is responsibility. This really gets broken between the cloud provider and the cloud consumer or subscriber, otherwise called a tenant. So the degree of responsibility really depends on the specific cloud service that we're talking about. But bear in mind that there is more responsibility with cloud computing services when we have more control. So if we need to be able to control underlying virtual machines, that's fine, but then it's up to us to manage those virtual machines and to make sure that they're updated.

The hardware is the provider's responsibility. Things like power, physical data center facilities in which equipment is housed, servers, all that stuff. The software, depending on what we're talking about, could be split between the provider's responsibility and the subscriber's responsibility. For example, the provider might make a cloud-based email app available, but the subscriber configures it and adds user accounts, and determines things like how data is stored related to that mail service. Users and groups would be the subscriber's responsibility when it comes to identity and access management.

Working with data and, for example, determining if that data is encrypted when stored in the cloud, that would be the subscriber's responsibility. Things like data center security would be the provider's responsibility. Whereas, as we've mentioned, data security would be the subscriber's responsibility when it comes to things like data encryption. The network connection however is the subscriber's responsibility, and it's always a good idea with cloud computing, at least with public cloud computing, to make sure you've got not one, but at least two network paths to that cloud provider.

Amazon Web Services (https://aws.amazon.com/free/) manages its own data center facilities and is responsible for their security, as well as physical hardware security like locked server racks. It's also responsible for the configuration of the network infrastructure, as well as the virtualization infrastructure that will host virtual machines.

The subscriber would be responsible for things like AMIs. An AMI, an Amazon Machine Image, is essentially a blueprint from which we create virtual machine instances. We get to choose the AMI when we build a new virtual machine. We, as a subscriber, would also be responsible for applications that we run in virtual machines, the configuration of those virtual machines, setting up credentials to authenticate to the virtual machines, and also dealing with data at rest and in transit in our data stores.
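To make the AMI idea concrete, here is a minimal sketch of how a subscriber might describe a new instance launch. The AMI ID and key pair name are hypothetical placeholders; the resulting dictionary is shaped like the keyword arguments boto3's EC2 `run_instances` call accepts, but this sketch only builds the description rather than calling AWS:

```python
# Sketch: describing an EC2 instance launch from an AMI.
# The AMI ID and key pair name below are hypothetical placeholders.

def launch_request(ami_id, instance_type="t2.micro", key_name=None, count=1):
    """Build the parameters for launching VM instances from an AMI."""
    request = {
        "ImageId": ami_id,           # the AMI: the blueprint for the VM
        "InstanceType": instance_type,
        "MinCount": count,
        "MaxCount": count,
    }
    if key_name:
        # Credentials the subscriber sets up to authenticate to the VM.
        request["KeyName"] = key_name
    return request

params = launch_request("ami-0123456789abcdef0", key_name="demo-key")
# With boto3, this could then be passed as: ec2_client.run_instances(**params)
print(params["ImageId"])
```

This division mirrors the responsibility split above: Amazon runs the hypervisors, while the subscriber chooses the image, instance size, and credentials.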

So we can see what is managed by AWS customers: data, applications, the operating system running in a virtual machine, firewall configurations, and encryption, depending on what we're configuring. What's managed by Amazon Web Services are the underlying foundation services: the compute servers and the hypervisor servers that we run virtual machines on. The cloud also has a number of characteristics. Just because you're running virtual machines, for instance, doesn't mean that you have a cloud computing environment.

A cloud is defined by resource pooling. So, we've got all this IT infrastructure pooled together that can be allocated as needed. Rapid elasticity means that we can quickly provision or de-provision resources as we need. And that's done through an on-demand self-provisioned portal, usually web-based. Broad network access means that we've got connectivity available to our cloud services. It's always available. And measured service means that it's metered, much like a utility, in that we only pay for those resources that we've actually used. So, now we've talked about some of the basic characteristics of the cloud and defined what cloud computing is.



Sunday, July 1, 2018

The Elephant Rope

As a man was passing the elephants, he suddenly stopped, confused by the fact that these huge creatures were being held by only a small rope tied to their front leg. No chains, no cages. It was obvious that the elephants could, at any time, break away from their bonds, but for some reason, they did not.

He saw a trainer nearby and asked why these animals just stood there and made no attempt to get away. “Well,” the trainer said, “when they are very young and much smaller, we use the same size rope to tie them and, at that age, it’s enough to hold them. As they grow up, they are conditioned to believe they cannot break away. They believe the rope can still hold them, so they never try to break free.”

The man was amazed. These animals could at any time break free from their bonds but because they believed they couldn’t, they were stuck right where they were.
Like the elephants, how many of us go through life hanging onto a belief that we cannot do something, simply because we failed at it once before?

Failure is part of learning; we should never give up the struggle in life.