By Umeed Kothavala, CEO

The Comprehensive Guide to Understanding Machine Learning

"Computers are incredibly fast, accurate, and stupid; humans are incredibly slow, inaccurate, and brilliant; together, they are powerful beyond imagination" – This quote by Albert Einstein captures the essence of Machine Learning, where human intellect and machine capabilities join forces to achieve extraordinary outcomes.


Who would have thought this once-novice technology would become the tour de force it is today? That makes it essential to unravel its intricacies before predicting its future. This blog will discuss how ML (Machine Learning) works, its different types, and the evolution of the technology.

The Evolution of Machine Learning

In 1950, Alan Turing – a British mathematician and computer scientist – proposed the notion of machines emulating human cognition. A few years later, Frank Rosenblatt conceived the Perceptron, the pioneering algorithm capable of data-driven learning, establishing the groundwork for Neural Networks. By the 1970s, research in Artificial Intelligence had encountered significant hurdles, including unmet expectations, constrained computational resources, and insufficient data. This era, dubbed the "AI Winter," resulted in waning enthusiasm and reduced funding for further initiatives.

However, the late 1980s and 1990s witnessed the emergence of more powerful computers and parallel processing, reinvigorating the Machine Learning domain. Researchers gained the ability to manipulate much larger datasets and implement intricate algorithms, expediting the innovation trajectory. In 1997, IBM's Deep Blue garnered attention by defeating world chess champion Garry Kasparov – the first instance of a machine besting a reigning world champion in a highly strategic contest, and a vivid demonstration of machine intelligence's potential.

Fast forward to the 21st century: whether it's IBM Watson's Jeopardy! victory, Google AlphaGo's triumph over the world Go champion, or the recent advent of ChatGPT, the past decade has marked remarkable strides in Machine Learning's evolution. These models epitomize state-of-the-art Natural Language Processing, powering applications from chatbots to content creation. Speaking of ChatGPT, read our comprehensive series on OpenAI's disruption of the conversational AI industry, including the latest developments in the GPT-4 architecture.

Understanding the Types of Machine Learning

Machine Learning is a multifaceted marvel, categorized into three fundamental types based on the techniques it employs to adapt: Supervised Learning, Unsupervised Learning, and Reinforcement Learning, with a fourth often called Semi-supervised Learning.

1. Supervised Learning: Picture a wise teacher guiding a student – that's Supervised Learning! Here, we train an algorithm on a labeled dataset of input-output examples. Imagine teaching a model to tell cats and dogs apart: the algorithm learns to recognize each animal's distinctive features and can then classify new images.
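As a minimal sketch of Supervised Learning, here is a 1-nearest-neighbor classifier in Python that learns from labeled examples. The feature values, labels, and the cats-vs-dogs framing below are invented purely for illustration:

```python
# Supervised learning sketch: classify a new point by copying the
# label of its closest labeled training example (1-nearest-neighbor).

def nearest_neighbor_predict(labeled_data, query):
    """Return the label of the training point closest to `query`."""
    def squared_distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, label = min(labeled_data, key=lambda pair: squared_distance(pair[0], query))
    return label

# Toy labeled dataset: (features, label). The two numbers might encode,
# say, ear shape and snout length -- purely hypothetical values.
training_set = [
    ((1.0, 1.2), "cat"),
    ((0.9, 1.0), "cat"),
    ((3.0, 3.5), "dog"),
    ((3.2, 3.1), "dog"),
]

print(nearest_neighbor_predict(training_set, (1.1, 1.1)))  # -> cat
print(nearest_neighbor_predict(training_set, (3.1, 3.3)))  # -> dog
```

A new image's features land near the labeled cats or the labeled dogs, and the model's answer follows the nearest labeled example, which is exactly the "teacher-guided" idea above.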

2. Unsupervised Learning: Now, imagine a detective finding clues without guidance – that's Unsupervised Learning! It deals with problems where no labeled data is available; instead, the algorithm discovers patterns within the data on its own, often through clustering, making it ideal for anomaly detection and data compression.
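To make the clustering idea concrete, here is a minimal k-means sketch on unlabeled one-dimensional points. The data and the initial center positions are illustrative, not from any real dataset:

```python
# Unsupervised learning sketch: k-means clustering. No labels are
# given; the algorithm alternates between assigning points to their
# nearest center and moving each center to the mean of its points.

def kmeans(points, centers, iterations=10):
    for _ in range(iterations):
        clusters = [[] for _ in centers]
        for p in points:
            nearest = min(range(len(centers)), key=lambda i: abs(p - centers[i]))
            clusters[nearest].append(p)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

points = [1.0, 1.2, 0.8, 8.0, 8.4, 7.6]      # unlabeled observations
centers, clusters = kmeans(points, centers=[0.0, 10.0])
print(centers)  # one center settles near 1.0, the other near 8.0
```

No "teacher" ever says which group a point belongs to; the structure emerges from the data itself, which is what makes the same idea useful for spotting anomalies (points far from every center).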


3. Reinforcement Learning: Think of a kid learning to ride a bike through trial and error – that's Reinforcement Learning! Here, the algorithm interacts with an environment, receiving rewards or penalties as feedback. Remember Google’s AlphaGo? That used Reinforcement Learning to defeat the world champion in Go.
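The trial-and-error loop above can be sketched with tabular Q-learning, a classic Reinforcement Learning algorithm. The tiny corridor environment, reward of +1 at the goal, and all hyperparameters below are invented for illustration:

```python
import random

# Reinforcement learning sketch: tabular Q-learning. An agent walks a
# five-cell corridor, earning +1 only on reaching the rightmost cell,
# and learns action values from these rewards and penalties of delay.

random.seed(0)
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]                     # step left or step right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2  # learning rate, discount, exploration

for episode in range(200):
    state = 0
    while state != GOAL:
        # Epsilon-greedy: mostly exploit current values, sometimes explore.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == GOAL else 0.0
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state

# The greedy policy after training: which action each state prefers.
policy = [max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(GOAL)]
print("greedy policy:", policy)
```

After enough episodes the learned values favor moving right in every cell: the reward signal propagates backward from the goal, which is the same feedback mechanism, at vastly larger scale, behind systems like AlphaGo.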

4. Semi-supervised Learning: Here, a model is trained on both labeled and unlabeled data. The labeled data helps the model learn patterns and make accurate predictions, while the unlabeled data improves its overall performance. Semi-supervised Learning is valuable when obtaining labeled data is difficult or expensive, and it has been successfully applied in fields such as speech recognition, natural language processing, and computer vision.
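One common semi-supervised recipe is self-training: fit a model on the scarce labeled data, use it to pseudo-label the plentiful unlabeled data, then refit on both. The tiny centroid classifier and all numbers below are hypothetical:

```python
# Semi-supervised learning sketch (self-training): two labeled points
# plus several unlabeled ones. The classifier is a simple per-class
# centroid model; unlabeled data refines where the centroids sit.

def centroids(examples):
    """Mean feature value per class, from (value, label) pairs."""
    sums, counts = {}, {}
    for value, label in examples:
        sums[label] = sums.get(label, 0.0) + value
        counts[label] = counts.get(label, 0) + 1
    return {label: sums[label] / counts[label] for label in sums}

def classify(model, value):
    """Assign the class whose centroid is nearest to `value`."""
    return min(model, key=lambda label: abs(value - model[label]))

labeled = [(1.0, "low"), (9.0, "high")]   # scarce, expensive labels
unlabeled = [1.5, 2.0, 8.0, 8.5]          # cheap, plentiful data

model = centroids(labeled)                               # fit on labels only
pseudo = [(v, classify(model, v)) for v in unlabeled]    # pseudo-label
model = centroids(labeled + pseudo)                      # refit on both
print(model)
```

The refit centroids shift toward the bulk of the unlabeled points, so the decision boundary reflects the whole data distribution even though only two points were ever hand-labeled.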

Elaborating the Core Fundamentals of Machine Learning

Envision a world where machines identify patterns and make informed predictions, continuously adapting to the ever-changing data environment. At the core of this vision is a four-step process that intertwines the components of Machine Learning. The initial step – Data Collection – requires thoroughly gathering relevant information from various sources, including sensors, databases, and user interactions. It’s essential to note that the quality of the data collected influences the model's performance, exemplifying the concept of "garbage in, garbage out."
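The "garbage in, garbage out" point can be sketched as a small pre-training filter: records gathered from different sources rarely arrive clean, so invalid rows are dropped before they ever reach a model. The field names, ranges, and values below are hypothetical:

```python
# Data-collection sketch: raw records from sensors and databases are
# validated before training, since bad inputs produce bad models.

raw_records = [
    {"temperature": 21.5, "humidity": 0.40},   # from a sensor
    {"temperature": None, "humidity": 0.55},   # dropped reading
    {"temperature": 19.0, "humidity": 1.80},   # humidity out of range
    {"temperature": 22.1, "humidity": 0.35},   # from a database
]

def is_valid(record):
    """Keep only complete records with humidity in [0, 1]."""
    return (record["temperature"] is not None
            and record["humidity"] is not None
            and 0.0 <= record["humidity"] <= 1.0)

clean = [r for r in raw_records if is_valid(r)]
print(len(clean), "valid records remain")  # 2 valid records remain
```

Real pipelines add far more checks (deduplication, outlier handling, schema validation), but the principle is the same: the model can only be as good as the data that survives this step.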


The process then proceeds to Model Training, where the algorithm refines its parameters to minimize errors and maximize predictive accuracy. Dividing the data into training and validation sets helps strike a careful equilibrium that reduces overfitting or underfitting. Subsequently, Performance Evaluation is carried out through metrics such as accuracy, precision, recall, and the F1 score. Once the model satisfies the predetermined benchmarks, Deployment takes place in real-world applications, where it confidently makes predictions on previously unseen data.
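The evaluation metrics named above can be computed directly from a model's predictions. The true and predicted labels below are made up for illustration:

```python
# Performance-evaluation sketch: accuracy, precision, recall, and F1
# for a binary classifier, derived from the four confusion-matrix counts.

y_true = [1, 1, 1, 0, 0, 0, 1, 0]   # ground-truth labels (held-out set)
y_pred = [1, 1, 0, 0, 0, 1, 1, 0]   # model's predictions

tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)

accuracy = (tp + tn) / len(y_true)          # overall correctness
precision = tp / (tp + fp)                  # of predicted positives, how many real
recall = tp / (tp + fn)                     # of real positives, how many found
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean

print(f"accuracy={accuracy:.2f} precision={precision:.2f} "
      f"recall={recall:.2f} f1={f1:.2f}")
```

Which metric matters most depends on the application: recall dominates when missing a positive is costly (say, disease screening), while precision dominates when false alarms are costly.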


Machine Learning has become an indispensable tool across domains, evolving from simple algorithms in the 1950s to the GPT-4 architecture. As we continue to witness rapid advancements in computing power and data generation, there is no doubt that Machine Learning will play an even more significant role in shaping our future. However, Machine Learning as a broad domain might not be sufficient on its own to propel innovation forward, thus requiring subsets or purpose-built applications to disrupt at a broader scale. Which applications? Let's answer that in our next blog. Stay tuned!


Read other Extentia Blog posts here!

