Early AI Models

Building AI models in the 1960s was essentially like trying to teach a newborn baby to think logically in a few days. Yet researchers hoped to create machines that could replicate the human thinking process. We now know this is a hard problem, and they attacked it with astonishing optimism and cleverness.

The Logic Theorist (LT)

Before LT was born, mathematical logic was a human-centred discipline. Formal systems had been developed by mathematicians such as George Boole and Gottlob Frege to formalise logical reasoning, but these remained tools for human thought (Gardner, 1958). The idea that machines could manipulate these systems on their own was revolutionary. At Carnegie Tech (now Carnegie Mellon University), Allen Newell and Herbert Simon were trying to find out how humans solve problems. Their main contribution was the insight that human problem solving is composed of incremental, programmable steps.

Conversational AI

Just as Bohr’s model revolutionised our understanding of atomic structure, ELIZA revolutionised human-computer interaction by showing that machines could engage in what looks remarkably like human conversation. Created by Joseph Weizenbaum at MIT in 1966, ELIZA was not just a program; it was a watershed moment in computing history.
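ELIZA worked not by understanding language but by matching keywords and echoing the user's own words back with pronouns swapped. The sketch below illustrates that technique in miniature; the specific rules and responses are invented here for illustration, not Weizenbaum's original DOCTOR script.

```python
import re

# Invented, minimal ELIZA-style rules: each pattern captures a phrase,
# which is "reflected" (first person -> second person) into a template.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def reflect(phrase: str) -> str:
    """Swap first-person words for second-person equivalents."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in phrase.split())

def respond(sentence: str) -> str:
    """Return the first matching rule's response, or a stock fallback."""
    for pattern, template in RULES:
        match = pattern.search(sentence)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please, go on."  # default when no keyword matches

print(respond("I feel trapped by my routine"))
# -> Why do you feel trapped by your routine?
```

The illusion of understanding comes entirely from this reflection trick, which is why users found ELIZA so uncannily convincing despite its simplicity.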

Microworlds

During the late 1960s and early 1970s, microworlds represented a radical paradigm shift within AI research. Just as Rutherford’s gold foil experiment reduced atomic investigation to a controlled environment, microworlds provided controlled domains for studying problems in artificial intelligence. They were scientific laboratories where researchers could test AI capabilities in simplified settings. SHRDLU (1970), for example, operated in a simple blocks world, where it could understand and carry out commands to move geometric shapes.

Heuristic Approach

Newell and Simon’s Logic Theorist (Newell et al., 1959) exemplified heuristic search, demonstrating its power and challenging the claim that rigorous, exhaustive mathematical approaches were the only reliable course (Wang, 1960).
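The core idea of heuristic search is to expand whichever option looks closest to the goal rather than trying every possibility. The greedy best-first sketch below shows this; the graph and heuristic values are invented for illustration.

```python
import heapq

# A toy state graph and a heuristic h estimating distance to goal "G".
graph = {
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["D", "E"],
    "D": ["G"],
    "E": ["G"],
    "G": [],
}
h = {"A": 4, "B": 3, "C": 2, "D": 2, "E": 1, "G": 0}

def best_first(start: str, goal: str) -> list:
    """Greedy best-first search: always expand the node with lowest h."""
    frontier = [(h[start], start, [start])]  # (heuristic, node, path so far)
    visited = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for nxt in graph[node]:
            heapq.heappush(frontier, (h[nxt], nxt, path + [nxt]))
    return []

print(best_first("A", "G"))
# -> ['A', 'C', 'E', 'G']
```

Note that nodes B and D are never expanded: the heuristic prunes them away, which is exactly the economy of effort that made the Logic Theorist tractable on 1950s hardware.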

Researchers

Some early AI researchers disagreed on how machines should represent knowledge. One group supported Minsky’s frames theory (1974), which proposed structured representations, while others, like McCarthy (1963), advocated for logical formalisms.
These arguments provided the main driving force behind the evolution of early expert systems and problem-solving applications.

Pattern Recognition

Rosenblatt’s perceptron (Rosenblatt, 1958) was an early neural network approach, while Nilsson (1965) emphasised formal logical reasoning. Minsky and Papert (1969) later formalised the limitations of perceptrons, shifting the field’s focus toward symbolic methods.
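Rosenblatt's perceptron learns a linear decision boundary by nudging its weights whenever it misclassifies an example. The minimal sketch below learns the AND function in this spirit; the learning rate, epoch count, and initial weights are illustrative choices, not values from the 1958 paper.

```python
# Training data for logical AND: ((x1, x2), target).
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

def train(epochs: int = 10, lr: float = 0.1):
    """Perceptron learning rule: adjust weights by lr * error * input."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in data:
            predicted = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            error = target - predicted
            w[0] += lr * error * x1
            w[1] += lr * error * x2
            b += lr * error
    return w, b

w, b = train()
for (x1, x2), target in data:
    out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
    print((x1, x2), "->", out)
```

AND is linearly separable, so this converges; Minsky and Papert's point was that functions like XOR are not, and no single-layer perceptron can learn them no matter how long it trains.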

AI Winter

Since then, AI has undergone several phases: from early rule-based systems and expert systems in the 1980s to the resurgence of neural networks in the 2000s, driven by increased computational power and the availability of large datasets. This historical context highlights the dynamic nature of AI, characterized by cycles of optimism, funding booms, and periods of stagnation, often referred to as “AI winters.”

As we continue to explore early AI models, understanding these foundational elements will clarify how we transitioned from traditional AI to more advanced forms such as Generative AI and Quantum AI. Each progression builds upon the principles established in this foundational phase.

Aditi Sharma

Chemistry student with a tech instinct!