Company News

Everything about Artificial Intelligence and Tektorch.AI

By Suraj Venkat | December 1, 2021 | 3 Mins Read

What is AI?

In order to define artificial intelligence, it is important to define what intelligence actually is. Intelligence may be defined as the computational ability to accomplish a set goal or meet a set objective.

Intelligence exists in varying degrees in people, animals, and even machines for that matter.

Artificial Intelligence can be broadly defined as the applied science of making intelligent machines, and highly intelligent computer programs to be specific. In other words, Artificial Intelligence is the process of design and development of computer programs that have the required skills and knowledge to undertake tasks that would otherwise use human intelligence to accomplish. 

Intelligence requires mechanisms, and research in the field of artificial intelligence has been discovering how to carry out these mechanisms, and sometimes even simulate human intelligence.  

At TekTorch, there are three major principles that guide the development of artificial intelligence:

  • Ethical AI
  • Rational AI
  • Empathetic AI

Artificial intelligence systems are usually expected to exhibit the following characteristics:

  • Planning
  • Learning
  • Logical reasoning
  • Logical problem solving
  • And, very rarely, social awareness and creativity too.

The development of Artificial Intelligence can be broadly classified into three categories:

  • Narrow AI or Weak AI

This is a term used to describe intelligent software that can simulate human intelligence and do a single task extremely well, mostly within a limited context. From a bird's-eye view, the machine or the program may seem intelligent, but it operates under many limitations and constraints compared to even the most basic human intelligence. Some examples of narrow/weak AI include image recognition, personal assistants, weather forecasting, raw-data analysis, and other rule-based programs.

Most AI that exists today is narrow/weak AI. Unlike other types of Artificial Intelligence, narrow AI is not sentient, conscious, or driven by emotion (that is, it lacks the human touch). While it is true that tools with narrow AI can interact with and process human language, the real reason we call it weak AI is that, simply put, such systems can't think for themselves.

It is also true that calling it narrow or weak AI understates how big an innovation it actually is. Most narrow AI is powered by machine learning and deep learning. This means that when a machine is fed a training data set, the computer uses statistical techniques to get progressively better at the task at hand, by means of both supervised and unsupervised learning.
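The learning loop described above - feed in labelled examples, fit a statistical model, then generalize to unseen inputs - can be sketched in a few lines. This is a minimal illustration of supervised learning, not any particular library's API; the data is made up (it follows y = 2x + 1 exactly).

```python
# Toy supervised learning: fit a straight line to labelled examples
# by ordinary least squares, then predict on an unseen input.

def fit_line(xs, ys):
    """Return the slope and intercept minimising squared error."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Training data: the "experience" the model learns from (here y = 2x + 1).
train_x = [1, 2, 3, 4, 5]
train_y = [3, 5, 7, 9, 11]

slope, intercept = fit_line(train_x, train_y)
prediction = slope * 6 + intercept  # generalise to an input never seen in training
```

The more (and cleaner) training data the model sees, the better its estimates of slope and intercept become - that is the "progressively better with data" behaviour in miniature.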

Narrow AI has relieved us of a lot of mundane and monotonous tasks, and the uber-fast data processing has certainly improved overall productivity, efficiency, and quality of work. 

  • Artificial General Intelligence

Artificial General Intelligence, also known as Strong AI, is often used to describe a machine that can undertake all tasks that a human can. In other words, artificial general intelligence is supposed to exhibit intelligence identical to that of a human and undertake any intellectual task that a human can.

AGI currently seems to be the holy grail for AI researchers, and with the know-how available to us today, the quest to create a machine with a full set of cognitive abilities remains science fiction at best.

Currently, machines can in fact process data faster than us and perform all rule-based processes with greater consistency and accuracy. But the fact that humans can strategize, memorize, and think abstractly and creatively to make consciously aware decisions makes us far better than machines.

This intelligence is also very hard to define and to program into machines, because it is essentially driven by our ability to be sentient creatures.

AGI is expected to solve problems, reason, plan, learn, and integrate prior knowledge, and to exhibit a sense of sentience by being capable of experiencing consciousness - which is pretty far off from where we are today.

  • Artificial Super Intelligence

Artificial superintelligence, a relatively recent topic of discussion, concerns the development of machines that - wait for it - outperform humans! It is a hypothetical intelligence that would surpass human intellect and cognitive performance in all aspects, be it general wisdom, creativity, problem-solving, or emotional consciousness. Researchers predict that the leap from AGI to ASI will be a short one, but no one can really say when the first sentient computer will arrive.

This is the kind of AI that is generally perceived as a threat, and which has led the likes of Elon Musk to suggest an outcome as far out as the extinction of mankind.

  1. History of AI: what it was before, what it is now, and where it is going

Early Breakthroughs

  • In 1943, a research paper by McCullough and Pitts proposed the first mathematical model for building a neural network.
  • In 1949, Donald Hebb published a book titled “The Organization of Behaviour: A Neuropsychological Theory” which laid the foundation of neural networks. He went on to propose how the connections between neurons keep getting stronger with frequent use, and to date, Hebbian learning continues to be a model in AI.
  • 1950 saw the rise of the famous Turing Test, the litmus test to determine the intelligence of a machine. Alan Mathison Turing, often credited as the father of theoretical computer science and Artificial Intelligence, laid the foundations of machine learning during the Second World War. Together with his team, he created the Bombe machine, which would crack the infamous "Enigma", a code used by German forces to send messages securely. His efforts and success in breaking the Enigma are widely believed to have changed the course of the war.
  • Science fiction is a precursor to science fact. 1950 also saw the works of Isaac Asimov get published. His book, “I, Robot” laid the foundation for ethical artificial intelligence with his three laws of intelligent machine behavior.
  • 1956 was the year when John McCarthy, widely known as the father of Artificial Intelligence, organized the “Dartmouth Summer Research Project on AI” workshop, and “Artificial Intelligence” as a field of study was born and introduced to a wider audience.
  • Subsequently, research centers started propping up all across the US to explore the potential of AI, and Allen Newell and Herbert Simon played a central role in promoting AI as a means to transform the world.

The AI hype:

  • In 1958, John McCarthy developed LISP - a programming language for AI and also published a paper named “Programs with Common Sense”. The paper went on to propose an intelligent AI system capable of learning from experiences, just as humans do. 
  • John McCarthy started the MIT Artificial Intelligence Project along with Marvin Minsky in 1959. Subsequently, in 1963, he also started the AI Lab at Stanford. 
  • In 1966, in the midst of the Cold War, the U.S. government canceled all government-funded machine translation projects - a program designed to achieve real-time translation of Russian communications - after observing a lack of progress in the research.

The advent of the AI Winter

  • Things kept going downhill for AI innovation from this point. In 1973, the British government released the Lighthill Report, which highlighted in detail the huge disappointments in AI research, and the failure to make substantial strides. This led to severe cuts in funding for Artificial Intelligence projects.
  • The subsequent years proved even worse for AI development, research, and innovation. Funding dried up and projects started stalling because of frustrating outcomes. Despite a well-funded, global effort spanning decades, computer scientists began realizing that creating intelligence in machines was a much more difficult problem than previously thought. The infrastructure and hardware of the time were also not developed enough to process huge swathes of unstructured data. The period was especially bad for neural network research, which all but stopped as it became clear that only trivial problems could be solved with the then-existing techniques. In a nutshell, over-promise and under-delivery led governments and corporations to lose faith in artificial intelligence.

New Beginnings

  • In 1986, Rumelhart and McClelland published probably the most important piece of AI literature in decades - Parallel Distributed Processing (PDP). The backpropagation learning algorithm, used to train neural networks, was re-invented and employed to classify and generate useful representations from data. The re-ignited interest in artificial neural networks also laid the ground for deep learning research in the years to come.
  • As the decade of the ’90s progressed, corporations became increasingly interested in AI again. Japan unveiled plans to develop a fifth-generation computer to make strides in the advancement of machine learning. The seminal moment arrived in 1997, when IBM’s Deep Blue played chess against the then world champion, Garry Kasparov, and beat him!
  • The beginning of the 21st century registered the rise of Deep Learning, in an effort to commercialize the use of AI on a widespread scale. Exponential strides in processing power and storage also helped propel the adoption of AI to the next level.
  • Around 2003, recommender systems came into existence, and they have fundamentally changed the way we consume things. Simply put, a recommender system (or recommender engine) helps users discover items they might like based on their interests. So basically, anything that predicts a user’s future preference from past choices is a recommender system.
  • Innovations in the recommender engine space reached a fever pitch in 2006 with the announcement of the Netflix Prize - a million-dollar challenge to improve the accuracy of the company’s recommender engine by at least 10%. The early days of the Netflix Prize helped drive astonishing strides in artificial intelligence, machine learning, and recommender systems.
  • The following years saw tech behemoths increasingly adopt AI to their commercial advantage, and in addition to user and data analytics, substantial work has been done in the fields of Computer Vision, Natural Language Processing, and a host of other AI sub-domains.
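The core idea of a recommender engine - predict future preference from past choices - can be illustrated with a tiny user-based collaborative filter. This is a deliberately simplified sketch with made-up ratings, not how Netflix's production system works: find the user most similar to you (by cosine similarity over rating vectors), then suggest something they liked that you haven't seen.

```python
import math

# Made-up ratings matrix: user -> {item: rating}.
ratings = {
    "alice": {"matrix": 5, "inception": 4, "titanic": 1},
    "bob":   {"matrix": 5, "inception": 5, "titanic": 1, "interstellar": 5},
    "carol": {"titanic": 5, "notebook": 4},
}

def cosine(u, v):
    """Cosine similarity between two sparse rating vectors (missing = 0)."""
    items = set(u) | set(v)
    dot = sum(u.get(i, 0) * v.get(i, 0) for i in items)
    norm_u = math.sqrt(sum(r * r for r in u.values()))
    norm_v = math.sqrt(sum(r * r for r in v.values()))
    return dot / (norm_u * norm_v)

def recommend(user):
    """Suggest the item the most similar other user rated highest."""
    others = [name for name in ratings if name != user]
    nearest = max(others, key=lambda n: cosine(ratings[user], ratings[n]))
    unseen = {item: r for item, r in ratings[nearest].items()
              if item not in ratings[user]}
    return max(unseen, key=unseen.get)

suggestion = recommend("alice")  # bob's tastes match alice's, so his favourite wins
```

Real systems add matrix factorization, implicit feedback, and heavy regularization, but the predict-from-past-choices principle is the same.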

The Present

Today, artificial intelligence is being used more than ever, both for the good and the not-so-good. It’s everywhere around us - from Healthcare to Defence, in music, and in books, in judging the resilience of markets and people, and influencing people’s consumption patterns, especially on social media. It is influencing your decisions consciously or subconsciously. 

But wherever there’s hype, there’s misunderstanding. Today, "artificial intelligence" has become such a big umbrella term that it is often misunderstood or misused. However, certain avenues are making monumental breakthroughs owing to enormous processing power and the ability to train mathematical models through accessible code. Sub-domains of AI on the rise include Natural Language Processing, Computer Vision, and the biggest of them all, Machine Learning. A major reason these domains have suddenly taken off is simple - the abundance of data. Coupled with advances in computational power and performance, it has given rise to the recent renaissance, and now data is the new oil!

Everything said and done, the fact remains that we are very far away from potent and ethical artificial general intelligence. Though the field has grown explosively in recent times, the only systems that currently exist are bundles of narrow AI - that is, they can only produce meaningful results inside a narrow domain. Here are a few things that have been achieved by means of narrow AI so far:

  • Human-level image classification
  • Autonomous driving with near-human efficiency
  • Text translation on a near-human level
  • Rise of Digital Assistants
  • Speech recognition to a near-human level

Yes, these achievements are impressive, but it is worth noting that the current principal applications of AI are relatively specific and straightforward with respect to human intelligence.

The Future

Currently, the modus operandi of even the most powerful AI is to solve specific tasks, but developing true artificial general intelligence (AGI) requires the development of an entire cognitive architecture that covers all aspects of cognition. 

The immediate future will see the current trends with machine learning and deep learning continue, but soon enough questions will begin to surface about the efficiency of such brute processes, considering the current methods have a significant carbon footprint. 

As we move towards AGI and the world realizes the true potential of ethical AI, the community will demand new methods to get things done. This would also finally be the period when the world learns that AI is not magic and begins to see the limitations of what is, at the core, just a ton of math. AI winters and hype cycles will begin to disappear, making way for steady and sustained AI innovation. 

For artificial intelligence to truly reach the level of human intelligence or greater, unpredictable, radical changes may take place. The rise of quantum computing, given its inherently probabilistic nature, may allow models to be trained in ways never before possible.

Here is a list of some traits for machines to develop, to realistically plan for a leap towards AGI:

  • Reasoning and Problem Solving
  • Planning and Scheduling
  • Perception
  • Creativity, and
  • Knowledge representation 
  2. Most popular languages and frameworks for different applications

With a boom in recent years, AI has come a long way to help organizations supercharge their business performance. The massive strides in AI, however, could not have been possible without improvements and enhancements in the computing power and underlying programming languages.

Here are some of the most popular and efficient languages and frameworks for the development of artificial intelligence:

  • Python

Python was first developed in 1991 and has surpassed Java to be the second most popular language as per Stack Overflow. According to a poll, about 60% of AI developers and engineers prefer Python as their first-choice programming language for the development of AI solutions. Python is incredibly easy to learn and offers easy access to the world of AI development for programmers and statisticians alike.

Python is hands down the number one language when it comes to artificial intelligence development. Its advantages include an easy and simple syntax, faster development compared to Java and Ruby, and a ton of libraries and tools to support the development of artificial intelligence.

Some popular libraries in Python are:

  • TensorFlow - It is used for working with datasets and machine learning workloads. The fact that it is open source and the favorite AI tool of top tech behemoths makes it the most popular library on our list. It is perfect for complex numerical computations on large volumes of data.
  • Scikit-learn - This library is used for training machine learning models. Built back in 2007, it is also open source and built on matplotlib, NumPy, and SciPy. It is great for problems such as data mining and analysis.
  • PyTorch - PyTorch is also an open-source deep-learning library, mostly used for Computer Vision and Natural Language Processing. It is primarily a research-first library.
  • Keras - Another framework written in Python, Keras is best for rapid experimentation because of its ability to run on top of other AI frameworks such as TensorFlow. It is used as a code interface for complex mathematical operations.
  • Theano - Developed back in 2007, Theano is great for defining, optimizing, and evaluating mathematical expressions.
  • Java

For over 20 years now, Java has been considered one of the best programming languages in the world owing to its user-friendliness, flexibility, and platform independence.

Java has been used for artificial intelligence development in many ways, and some libraries available for the development of AI are:

  • TensorFlow - Among other languages, TensorFlow also supports Java with an API. The support is not as rich as other fully supported languages but it exists and is being improved rapidly.
  • Deep Java Library - Used by thousands of institutions and businesses across the globe, it is built by Amazon to build and deploy deep learning programs in Java. 
  • Kubeflow - It provides ready-to-use machine learning solutions by facilitating the deployment and management of machine learning stacks on Kubernetes.
  • R

Created by Ross Ihaka and Robert Gentleman in 1995, R is an implementation of the S programming language and is best used for data analysis and statistical purposes.

The biggest feature of R is its native support for vector and matrix computation, which makes it a strong choice for large-scale number crunching. R also covers multiple paradigms, such as functional programming, vectorial computation, and object-oriented programming.

Some advantages of working with R are the availability of various libraries and packages for greater functionality, the ability to interface with Fortran and C++, high-quality graphs, and an active support community.

Some packages for AI and machine learning programming available in R are: 

  • CARET - CARET stands for Classification And REgression Training, and the task of this package is to streamline the training and prediction of a model.
  • randomForest - One of the most popular R packages for machine learning, randomForest can be deployed for solving regression and classification tasks, and can also be used for imputing missing values.
  • tm - Text mining is an emerging application of machine learning, and this R package acts as a framework for text-mining applications. 
  • LISP

LISP was created by the founding father of AI, John McCarthy, in 1958. The name is an abbreviation of "list processing", and it is the second-oldest high-level programming language after Fortran. It is famously said that apart from Haskell, LISP is the only language where the programmer spends more time thinking than typing. For machine learning, LISP provides the following features:

  • Rapid prototyping
  • Dynamic Object creation
  • Garbage collection, and
  • Flexibility 

  3. Challenges

There are numerous challenges that the field of AI currently faces. Despite the huge strides made in AI R&D over the past decades, even the most powerful AI techniques are built to solve one specific task effectively. Even the best AI is not sentient, conscious, or driven by emotion. True AGI development calls for a cognitive architecture that incorporates all aspects of cognition.

  3.1 Reasoning, problem-solving

Reasoning essentially means drawing inferences appropriate to a given situation, and can be broadly classified as inductive or deductive. Though there has been some success in programming machines to draw inferences, true reasoning is more than just drawing inferences - it is about drawing inferences relevant to a particular situation. This makes reasoning one of the hardest problems confronting AI development right now.

Problem-solving, in the context of artificial intelligence, can be defined as a systematic operation through a host of actions to come to a solution. Problem-solving can be either general or special purpose. Special-purpose problem-solving is made for a specific problem and makes use of the very characteristics of the situation in which the problem is embedded. On the other hand, the general-purpose technique used in AI is the stepwise or incremental reduction of the difference between the current situation and the desired one.
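The "incremental reduction of the difference" idea can be sketched on a toy puzzle: from a starting number, greedily apply whichever operation brings the state closest to the goal. The puzzle, the operations, and the greedy strategy are all illustrative choices (greedy difference reduction can get stuck on harder problems, which is why real planners search instead).

```python
# Toy general-purpose problem solving by difference reduction:
# at each step, apply whichever operation most shrinks the gap
# between the current state and the goal state.

OPS = [lambda n: n + 3, lambda n: n * 2, lambda n: n - 1]

def solve(start, goal, max_steps=20):
    """Greedily reduce |state - goal|; return the sequence of states visited."""
    state, path = start, [start]
    for _ in range(max_steps):
        if state == goal:
            return path
        # candidate successor states, one per operation
        state = min((op(state) for op in OPS), key=lambda s: abs(s - goal))
        path.append(state)
    return path

route = solve(1, 10)  # 1 -> 4 -> 8 -> 11 -> 10
```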

  3.2 Knowledge representation

Knowledge can be defined as a person's or a machine's understanding of a given subject. Knowledge representation is responsible for encoding information about a problem in such a way that a computer can understand it and use it to solve complex problems. Depending on the type of functionality, knowledge representation in AI can be classified in the following ways:

  • Logical Representation
  • Semantic Network
  • Frame representation
  • Production Rules

Knowledge representation is more than just storing data in a database; it is about enabling an intelligent machine to learn from that knowledge and those experiences so it can behave intelligently, like a human.
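Of the representations listed above, production rules are the easiest to demonstrate compactly. Below is a minimal forward-chaining production system - facts and rules are made-up examples - where rules fire whenever their conditions are present in working memory, adding new facts until nothing more can be derived.

```python
# A tiny forward-chaining production system. Each rule is
# (set of condition facts, fact to conclude).
RULES = [
    ({"has_feathers"}, "is_bird"),
    ({"is_bird", "can_swim"}, "is_penguin"),
]

def forward_chain(facts):
    """Repeatedly fire rules whose conditions hold until a fixed point."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)   # rule fires: add the new fact
                changed = True
    return facts

derived = forward_chain({"has_feathers", "can_swim"})
```

Note how the second rule can only fire after the first has added "is_bird" - chaining simple rules is what lets such systems answer questions no single rule encodes.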

  3.3 Planning

Planning refers to the sub-domain of AI that focuses on finding the appropriate course of action for the machine to accomplish a given task. The idea is to reach the goal while optimizing overall performance measures.

Planning involves the choice of a sequence of actions that will be followed step-by-step by the machine in order to accomplish the set final goal. 

  3.4 Learning

A whole range of learning methods can be applied to artificial intelligence, the simplest of them being learning by trial and error. 

The machine then stores the solution and recalls it the next time it encounters the same problem. This memorizing of procedures is also known as rote learning.

However, the real challenge before AI is applying past experiences to analogous new problems to come up with a solution - this is called generalization.
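Rote learning as described above maps directly onto memoization: solve once, store the answer, and recall it when the same problem recurs. A minimal sketch (the "expensive" routine is a stand-in):

```python
# Rote learning as memoization: solve a problem once, memorise the
# answer, and recall it instead of recomputing on repeat encounters.

calls = {"count": 0}   # instrument how often we actually solve
memory = {}            # the machine's stored solutions

def slow_solve(n):
    """Stand-in for an expensive problem-solving routine."""
    calls["count"] += 1
    return sum(i * i for i in range(n))

def solve(n):
    if n not in memory:          # never seen: solve and memorise
        memory[n] = slow_solve(n)
    return memory[n]             # seen before: recall from memory

first = solve(1000)
second = solve(1000)  # recalled from memory, not recomputed
```

The limitation the text points out is visible here: `solve(1001)` would trigger a full recomputation, because rote learning recalls only exact matches - it does not generalize.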

  3.5 Natural language processing

A language may be defined as a system of signs having meaning by convention. A defining characteristic of full-fledged human languages is productivity - the capacity to produce an unlimited variety of new sentences.

It may be relatively easy to write programs that respond to questions or statements in human language, albeit in severely restricted contexts.

Some steps in NLP are:

  • Morphological Analysis - This step separates individual words and punctuation from sentences.
  • Parsing - This step involves lexical analysis and grammar. 
  • Semantic Analysis - This step assigns meanings to the structures initially created in parsing.
  • Pragmatic Analysis - This step involves re-interpretation of languages, verification, and generation of results.
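The first two steps listed above can be sketched in miniature: a morphological pass that splits punctuation from words, followed by a crude "parse" that assigns each token a lexical category. This is a toy illustration only - real pipelines do far richer morphology and full grammatical parsing.

```python
import re

def morphological_analysis(sentence):
    """Separate words and punctuation into individual tokens."""
    # \w+ matches a run of word characters; [^\w\s] matches one punctuation mark.
    return re.findall(r"\w+|[^\w\s]", sentence)

def parse(tokens):
    """Assign a coarse lexical category to every token."""
    tagged = []
    for tok in tokens:
        if tok.isdigit():
            tag = "NUM"
        elif tok.isalpha():
            tag = "WORD"
        else:
            tag = "PUNCT"
        tagged.append((tok, tag))
    return tagged

tokens = morphological_analysis("The machine learns, slowly.")
tagged = parse(tokens)
```

Semantic and pragmatic analysis would then operate on these tagged structures - and that is precisely where the "genuine understanding" challenge discussed below begins.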

The big challenge for the development of such an AI is a genuine understanding of the language. Can the machine reach a point where its command of language is indistinguishable from a human? With the rate at which NLP is progressing, that day may not be as far away as one might think.

  3.6 Perception

Perception may be defined as the scanning of the environment by means of sensory organs, real or artificial, after which the scene is decomposed into separate objects in various spatial relationships. In other words, it is the ability to deduce things about the world from visual images, sounds, or other sensory inputs.

At present, artificial intelligence can identify individuals and enable autonomous vehicles to drive on open roads, among other things. But the challenge to overcome is that the analysis can be complicated by many factors, such as the angle from which an object is viewed, the intensity of illumination, or the contrast with the surroundings.

  3.7 Motion and manipulation

The field of robotics is closely related to Artificial Intelligence and makes use of highly sensitive sensors on a robot to avoid and navigate around obstacles. 

Though present intelligence capabilities do allow a robot to undertake and perform tasks such as object manipulation and navigation, the real challenge for the development of AI lies in the sub-problems of localization (knowing where the robot is), mapping (learning what is around in the robot's environment), and motion planning (figuring out how to get there).

  3.8 Social intelligence

The boom in AI means that now is the right time to talk about its boundless landscape - and the frontier for ethics and risk assessment is as big as the one for the emerging technology itself.

Some of the social challenges that AI will inevitably have to navigate around are:

  • Fear of unemployment - As we increasingly find ways to automate mundane jobs, there is growing disenchantment with artificial intelligence among the working masses who earn a living through physical work. There is a pressing need to move from predominantly physical work to cognitive labor in an increasingly globalized world.
  • Equality of wealth - Amid the raging debate about growing income inequality and the distribution of wealth around the world, a major question AI will inevitably have to address is how the wealth created by machines should be distributed and how revenue models should work.
  • Human behavior - In this day and age, where we frequently interact with machines almost as if they were human, the real question is how machines can trigger the reward centers in the brain to improve relationships and business outcomes without creating tech dependency among the masses. Tech addiction is in fact the latest frontier of human dependency, and AI will have to find ways to avoid it in a free and fair manner.
  • Guarding against mistakes - Intelligence comes from learning, for both humans and machines. But unlike humans, a training phase in AI development cannot possibly cover every example the machine may encounter in the real world. These systems can then be fooled into detecting things that don't exist or missing things that do, and AI development will have to overcome this challenge to ensure machines perform as planned.
  • Elimination of bias - Yes, AI has a speed and capacity for processing data far beyond what humans will ever have, but that is no reason to trust that AI will always be fair and neutral. At the end of the day, AI systems are created by humans, who can often be biased and judgemental. Used in the right way, AI might also act as a catalyst for positive change. This is perhaps the biggest challenge in AI development: how to be fair and neutral in all outcomes, consistently.
  • AI Rights - As we work towards the development of artificial general intelligence, we will unlock more information about the conscious human experience, and learn more about the basic mechanisms of reward and aversion that drive our race. Yes, these systems are fairly superficial now, but if our ultimate objective is to create machines that can perceive, feel, and act, we will have to consider how they "feel" at some point.
  3.9 General intelligence

Artificial general intelligence - and, beyond it, superintelligence - is a state of artificial intelligence where machine intelligence matches or exceeds human intelligence in virtually every field, including creativity, scientific temper, general wisdom, and social skills.

General intelligence is a utopian concept as of now, as we have neither the resources nor the know-how to build artificial intelligence on par with human intelligence, let alone beyond it.

  4. Approaches
  4.1 Cybernetics and brain simulation

Cybernetics is the study of communication and control systems in humans and machines alike. Over the years, science fiction has portrayed robots that are essentially cyborgs - think of the Terminator or I, Robot. And true to the saying that science fiction is a precursor to science fact, AI has not only caught up with the fiction but also laid out general practicalities. Research shows that a robot can successfully have a biological brain with which it makes independent decisions. Research continues to make strides: along with the increasing number of cultured neurons, the range of sensor-based inputs is expanding to include audio, infrared, and visual stimuli.

Human brain-machine interfaces are still possibly years away, considering studies today generally employ rat neurons. And if we do get there and build a robot with a human-neuron brain, there is still a ton of social and ethical questions that need to be answered.

  4.2 Symbolic

Today artificial intelligence is mostly about deep learning and artificial neural networks, but there was a time not so long ago when symbolic artificial intelligence was the talk of the town. Also known as classical AI, rule-based AI, or good old-fashioned AI (GOFAI), symbolic reasoning involves embedding human behavior and knowledge into machines and programs. Symbolic AI was also considered the road to AGI.

The early pioneers of AI firmly believed that every feature of intelligence could be described precisely enough for a machine to simulate it. Soon enough, symbolic AI took center stage, and many tools and concepts in computer science were a result of these efforts. Symbolic AI programs were based on creating explicit rules for structures and behavior.

But symbolic AI had its limitations - for example, when a machine had to make sense of the contents of an image or a video. These processes aren't explicitly rule-based, and symbolic AI starts to break down in such scenarios.

AI futurists believe that symbolic AI will die, but this assumption couldn’t be farther from the truth. Rule-based AI systems are very important in today’s AI applications and will continue to be so. 

  4.3 Sub-symbolic

The difference between symbolic and sub-symbolic AI lies in the way the models learn. The sub-symbolic approach to artificial intelligence rests on the premise that human-readable representations of information are not always the best fit for AI; it advocates feeding raw data to the model so the machine can analyze it and construct its own implicit knowledge about it.

Sub-symbolic AI is more revolutionary, futuristic, and easy on developers, since the machine learns by itself. But the issue with sub-symbolic systems is that they are intensely data-hungry. Examples of sub-symbolic AI include neural networks and the computer vision systems built on them.

  4.4 Statistical learning

Statistical learning can broadly be defined as a set of tools for understanding data, and can be classified into supervised and unsupervised learning. Supervised learning is the practice of estimating an output based on one or more inputs. Unsupervised learning, on the other hand, aims to find patterns in a given dataset without a supervised output. Essentially, statistical learning focuses on calculating probabilities for each hypothesis and giving results accordingly.

Statistical learning techniques, though powerful, are not applicable in many real-world AI scenarios because, whereas machine learning is inherently exploratory, statistical learning is inherently confirmatory.

  4.5 Integrating the approaches

There is growing momentum in favor of hybridizing symbolic and non-symbolic (connectionist) approaches to AI to create an intelligent machine that can make decisions.

Symbolic AI is the right strategic complement for the ever-popular connectionist techniques. As artificial intelligence proliferates deeper into our lives and our asks of AI become increasingly sophisticated, the best step forward would probably be a system that uses the strengths of both approaches while mitigating each other's weaknesses.

  5. Tools
  5.1 Search and optimization

A fundamental part of an efficient search is to understand the context of queries. Artificial Intelligence is disrupting all areas of knowledge activities today, and online searches are no different. 

It is important for these searches to be precise, quick, and responsive to the right context. In this regard, AI information retrieval is an up-and-coming field that focuses on returning context-based search results, not just keyword matches. Using deep learning and neural networks, it is now possible to build a robust, scalable, semantic search engine. 
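
At its core, a semantic search ranks documents by how close their embedding vectors are to the query’s embedding. The sketch below uses made-up three-dimensional vectors as stand-ins for what a real sentence-embedding model would produce; the document names and numbers are purely illustrative.

```python
import math

def cosine(a, b):
    # cosine similarity: dot product over the product of vector lengths
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

docs = {
    "refund policy":       [0.9, 0.1, 0.0],
    "shipping times":      [0.1, 0.9, 0.1],
    "returning a product": [0.7, 0.3, 0.1],
}
query_vec = [0.85, 0.15, 0.05]   # pretend embedding of "how do I get my money back"

# rank documents by semantic closeness, not keyword overlap
ranked = sorted(docs, key=lambda d: cosine(query_vec, docs[d]), reverse=True)
```

Note that the top result shares no keyword with the query; the match comes entirely from vector proximity, which is the point of context-based retrieval.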

  1. Logic

Logic may be defined as the ability to analyze facts and arrive at a conclusion based on those facts, which in turn helps a human or a machine to arrive at the best solution to a problem or a question.

Logic is essential for intelligence of any kind, and is what makes AI intelligent. Normal programs are written with a predefined set of rules they are supposed to follow, but the objective of AI is to make decisions by itself.

  1. Probabilistic methods for uncertain reasoning

Probabilistic reasoning is a way of knowledge representation where the concept of probability is applied to represent the uncertainty in knowledge. Probabilistic reasoning is needed for the following reasons:

  • When outcomes are unpredictable
  • When specifications of predicates are too large to handle
  • In the case of unknown errors

Probabilistic modeling provides a framework for understanding what learning actually is. It has therefore emerged as a principal theoretical and practical approach for designing machines that learn from data acquired through experience. 

  1. Classifiers and statistical learning methods

In Machine Learning, classification is a supervised learning approach in which the machine learns from a given dataset to draw conclusions and make observations or classifications. Classification can be performed on both structured and unstructured data, and the classes are often referred to as targets, labels, or categories. 

Following are some classification algorithms:

  • Logistic Regression
  • Naive Bayes
  • Stochastic Gradient Descent
  • K-Nearest Neighbours
  • Decision Trees
  • Random Forests
  • Artificial Neural Networks
  • Support Vector Machine
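
One algorithm from the list above, K-Nearest Neighbours, is simple enough to sketch from scratch: predict the majority label among the k training samples closest to the query point. The data below is a toy example.

```python
import math
from collections import Counter

def knn_predict(train, x, k=3):
    """train: list of (features, label) pairs; returns the majority label
    among the k samples nearest to x in Euclidean distance."""
    nearest = sorted(train, key=lambda fx: math.dist(fx[0], x))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

train = [
    ((1.0, 1.0), "cat"), ((1.2, 0.8), "cat"), ((0.9, 1.1), "cat"),
    ((5.0, 5.0), "dog"), ((5.2, 4.8), "dog"), ((4.9, 5.1), "dog"),
]
label = knn_predict(train, (1.1, 1.0))
```

K-NN has no training phase at all: the "model" is the data itself, which makes it a useful baseline before reaching for the heavier algorithms on the list.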

  1. Artificial neural networks

A neural network, inspired by the neurons in the human brain, is a network made up of neurons arranged in layers that take an input vector and subsequently convert it to an output. Each neuron takes an input and applies a nonlinear function to it and then passes the output to the next layer, which takes it as an input again.

Artificial Neural Networks have a very high tolerance to noisy data, and they’re also able to classify untrained patterns and perform better with continuous-valued inputs and outputs.

A drawback of Neural Networks is that they have poor interpretability compared to other models. 
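
The layer-by-layer description above can be written out directly: each neuron applies a nonlinearity (here a sigmoid, a common choice) to a weighted sum of its inputs, and the layer’s outputs become the next layer’s inputs. The weights and inputs below are arbitrary illustrative values, not a trained model.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def layer(inputs, weights, biases):
    # one output per neuron: sigmoid(w · x + b)
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

x = [0.5, -1.0]                                    # input vector
hidden = layer(x, [[1.0, -1.0], [0.5, 0.5]], [0.0, 0.1])
output = layer(hidden, [[1.0, 1.0]], [-1.0])       # single output neuron
```

This is only the forward pass; training a real network means adjusting the weights and biases (typically by backpropagation) until the outputs match the labels.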

  1. What Tektorch does

TekTorch is an AIaaS company based out of Sydney, and we help founders with highly specialized consulting in building scalable web and AI/ML solutions that bring tremendous business impact. We’ve worked with companies in various industries spanning oil and gas, telecommunications, tourism, and finance, to name a few, and our team understands the business challenges associated with these different sectors and possesses the technical acumen to solve these problems.

TekTorch works with businesses of all sizes and at different stages of their growth.

In a world of rapid technological change, staying ahead of the pack can deliver greater value and revenue to your business. Using state-of-the-art Artificial Intelligence and machine learning to perform tasks with greater speed, intelligence, and accuracy, we allow your business to operate more efficiently – and grow.

Our expertise: 

  1. NLP

NLP or Natural Language Processing is the set of computational techniques that analyze and synthesize human natural language and speech. Simply put, we write effective algorithms that teach the machine to understand and manipulate human language and speech. 

We take pride in our expertise with NLP, and here’s what makes our offering super effective:

  • Our conversational AI can distinguish multiple variables with ease, that is, it can read and make sense of misspelled words, and also identify key themes of input, which helps in maintaining the natural flow of a conversation, just like humans do!


  • Our conversational AI has memory and understands context, which is useful when the query is not detailed enough and the machine ought to probe for more information. In other words, it understands what the query is about, then seeks more detail to narrow it down if it’s too vague, and understands intent, just like humans do!
  • Spelling mistakes, typos, and pronunciation mistakes are a common theme of human-computer interaction, and our conversational AI has the ability to interpret typos, check for spellings, and adjust for accent, just like humans do!
  • In a natural language interaction, it is important to have context and understand the nuances of a language. Our AI has the ability to distinguish and make sense of keywords in a sentence, both written and spoken thereby minimizing confusion, just like humans do!
  • An important feature of natural language interaction is the ability of both participants to keep the flow of the interaction going. Our AI has the ability to determine the flow of the conversation based on the input query, just like humans do!
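
The typo-tolerance point above rests on a classic, generic technique (not a claim about TekTorch’s actual implementation): edit distance, which measures how many single-character changes turn one word into another, so a misspelled input can be matched to the closest known word.

```python
def edit_distance(a, b):
    """Levenshtein distance between strings a and b via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

vocab = ["receive", "weather", "schedule"]
typo = "recieve"
best = min(vocab, key=lambda w: edit_distance(typo, w))   # closest vocabulary word
```

Real conversational systems layer language models and context on top of this, but the basic idea of snapping noisy input to the nearest known form is the same.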

  1. Computer Vision

CV or Computer Vision is the field of AI that aims to replicate the complexity of human vision in order to enable machines to identify and process objects in a way humans would do.

Computer Vision has been around for decades, but it takes an in-depth understanding of real-life use cases for the tech to move out from labs to a real-world setting. With our expertise working with Computer Vision over the years, here’s why you should consider our offering:

  • It’s reliable because human vision can err due to varied factors like health and fatigue, but Computer Vision eliminates these factors to deliver the right results, consistently. 
  • A plethora of use cases - Computer Vision can be used in almost all the aspects of our lives because when a machine learns by viewing millions of images it understands the traits of what's being visualized. 
  • Enormous cost reduction - Computer Vision, just like human vision can be an inalienable part of a business and can be used to improve productivity and efficiency while identifying fault lines in the system. 
  • Some applications of our Computer Vision offering are - Facial Recognition, Object recognition, Augmented and Virtual Reality, and identifying unstructured data in Healthcare to name a few. 
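
At the lowest level, many of these Computer Vision applications start with the same primitive: convolving the image with a small kernel to pick out features such as edges. The 5×5 "image" below is a toy grayscale grid, dark on the left and bright on the right, and the kernel is the standard horizontal Sobel edge detector.

```python
def convolve(img, k):
    """Valid (no-padding) 2-D convolution of image img with kernel k."""
    kh, kw = len(k), len(k[0])
    out = []
    for i in range(len(img) - kh + 1):
        row = []
        for j in range(len(img[0]) - kw + 1):
            row.append(sum(img[i + u][j + v] * k[u][v]
                           for u in range(kh) for v in range(kw)))
        out.append(row)
    return out

image = [[0, 0, 0, 1, 1]] * 5                     # dark left half, bright right half
sobel_x = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]    # horizontal edge detector
edges = convolve(image, sobel_x)                  # strongest response at the boundary
```

A convolutional neural network stacks many such kernels and learns their values from data, rather than hand-coding them as here.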
  1. RPA and Intelligent Process Automation

Robotic process automation duplicates human execution of repetitive processes in existing applications, using software robots configured to capture how users interact with those applications and replay the same steps. 

Intelligent process automation is the journey towards automation that leverages artificial intelligence to the fullest, and unlike traditional automation solutions, robotic process automation can be deployed on top of existing legacy systems. 

With all our experience deploying RPA systems across industries, here are a few reasons to consider our RPA offering:

  • Low-risk Non-invasive technology - RPA can be deployed on existing systems, which allows for the creation of a platform compatible with complex AI algorithms and tools.
  • Accuracy - The right result, calculation or decision on the first attempt.
  • Consistency - Identical tasks and processes, which eliminates variation of outputs.
  • Audit-Trail - Fully maintained logs, imperative for compliance.
  • Reliability - Services provided all through the year.
  • Scalability - Spontaneous ramp-ups and ramp-downs to match demand highs and lows.
  • Productivity and retention - Freed up human resources for higher value-added and more priority tasks.
  • Duration - RPA projects run for roughly 9-12 months with an ROI of less than a year.
  1. Prediction and Classification

Machine Learning is a subset of artificial intelligence that teaches a machine how to learn, and predictive analytics in ML models look for patterns in a given dataset to draw inferences just like a person would. The objective is to get the algorithm really good at coming up with the right conclusions with the given data, at which point it can apply the knowledge to a new dataset. 
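
The train-then-apply idea in the paragraph above can be sketched with a deliberately tiny model: learn a decision threshold from one labeled dataset, then apply it to values the model has never seen. The data and the midpoint rule are invented for illustration.

```python
def learn_threshold(samples):
    """samples: list of (value, label) with labels 0/1; returns the midpoint
    between the two class means as a decision threshold."""
    lo = [v for v, y in samples if y == 0]
    hi = [v for v, y in samples if y == 1]
    return (sum(lo) / len(lo) + sum(hi) / len(hi)) / 2

def predict(threshold, value):
    return 1 if value >= threshold else 0

train = [(1.0, 0), (2.0, 0), (8.0, 1), (9.0, 1)]
t = learn_threshold(train)                   # learned from the given data
new_data = [1.5, 7.5, 9.2]                   # unseen values
preds = [predict(t, v) for v in new_data]    # knowledge applied to a new dataset
```

Everything the "model" knows (the single number `t`) was extracted from the training data; applying it to `new_data` is the generalization step the paragraph describes.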

In our experience, when applying Prediction and Classification to a business problem, there are some fundamental questions that need to be answered that determine if the implementation would be a success. Following are a few evaluations we like to make, which usually translate to successful business outcomes: 

  • Is the problem complex enough to warrant the use of Machine Learning in the first place? - If it’s possible to structure a simple set of rules that answers the problem, a rule-based solution is likely cheaper and easier to maintain than Machine Learning. 
  • Clean data is better than big data - It is important to have new and clean data, because if you have a ton of disjointed data, a lot of precious resources would likely be spent on cleaning the data before we can get to the learning stage. 
  • It’s also important to make sure that the existing data is labeled for the machine to make sense of it. While unsupervised learning does allow making sense out of unlabeled data, it is advisable for companies to start off with labeled data. 
  • Another important aspect to consider is that ML is a machine skill, and just like a human skill it keeps evolving as it is informed by its environment. It is therefore important to allow at least a small margin for error. 
  1. Reinforcement Learning

Reinforcement Learning is a sub-domain of machine learning focused on training an algorithm through trial and error. Broadly, Reinforcement Learning encompasses a class of algorithms that optimize decision making through rewards and penalties (negative rewards) as the agent transitions from one scenario to the next. 

The key goal of Reinforcement Learning systems by TekTorch is to establish the best sequence of actions in order to maximize the long-term reward when solving a problem. 

Some important use-cases of Reinforcement Learning are:

  • Personalisation/Recommender Systems
  • Hyper targeted advertising
  • Automating tasks such as stock price forecasting or industrial control

RL is generally valuable when searching for optimal solutions in a constantly changing environment. 
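
The reward-and-penalty loop can be made concrete with tabular Q-learning (a standard RL algorithm, used here as a generic illustration) on a toy environment: an agent in a five-cell corridor earns a reward only at the rightmost cell, and learns by trial and error that moving right maximizes long-term reward.

```python
import random

N = 5                        # corridor cells 0..4; cell 4 is the goal
ACTIONS = (1, -1)            # move right / move left
Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.1      # learning rate, discount, exploration
random.seed(0)

for _ in range(500):                                  # training episodes
    s = 0
    while s != N - 1:
        if random.random() < eps:                     # explore
            a = random.choice(ACTIONS)
        else:                                         # exploit current knowledge
            a = max(ACTIONS, key=lambda b: Q[(s, b)])
        s2 = min(max(s + a, 0), N - 1)                # take the step
        r = 1.0 if s2 == N - 1 else 0.0               # reward only at the goal
        best_next = max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

# the learned greedy policy: the best action from every non-goal cell
policy = [max(ACTIONS, key=lambda b: Q[(s, b)]) for s in range(N - 1)]
```

Nothing tells the agent to "go right"; the preference emerges purely from the rewards it experiences, which is the sense in which RL optimizes a sequence of actions for long-term reward.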

  1. Deep Learning

Deep Learning is both a sub-domain of machine learning and an extension of it, in which the machine stacks layers of simple processing units into an artificial neural network that learns increasingly abstract representations of the data and makes predictions. 

Deep Learning is still relatively nascent, and organizations willing to explore Deep Learning should ideally be well informed about the challenges that they may encounter along the way:

  • Gaining transparency - Deep Learning neural networks are opaque in the sense that one cannot explain the output the system gives because each neuron is only identifiable by its location inside a hugely complicated system. 
  • Data quality - Deep Learning models are extremely data hungry, and the addition of random noise to these datasets may markedly degrade the performance of the model. Using poor quality training data can unintentionally hamper the usefulness of this AI.

TekTorch, with its seasoned team of developers and consultants, is adept at developing production grade AI across business verticals, delivering results that solve real-world business problems and fuel business performance.

  1. Tektorch Virtual CTO

It’s a fact that the supply of good AI engineers is far lower than the demand, and the bulk of the work in AI is thankless, boring stuff like ingestion, organization, transformation, evaluation and storage of data. Only a small part of the job involves designing networks, tuning parameters and choosing activation functions (which is actually pretty fun!).

The world is a long way from solving the problem of AI talent scarcity, and building an in-house AI team is an expensive proposition. Given the costs, consider asking these key questions to hasten your journey to AI adoption:

  • What if your company simply isn’t prepared or is short on the required budget to set up a full-fledged in-house AI team? 
  • What if your existing staff lacks the potential or you don’t have the bandwidth to recruit? 

By eliminating the need to build in-house capability, you get rid of all the ancillary expenses that come with the arrangement. Bringing in a team of experienced and expert individuals dedicated to your AI initiatives can maximize revenue and transform your business into a successful AI company.

At TekTorch, we’ve come up with a dedicated Virtual Chief Technology Officer (CTO) offering, and have a solid team experienced in building projects, understanding data sets and delivering AI products.

  1. Virtual CTO aspects or services

The overall objective of a CTO is to take hold of an organization’s IT strategy in order to maximize operational and financial efficiency in sync with overall objectives. 

Small and medium businesses and startups have a perpetual struggle with capital, and it is imperative that they make the best possible use of available resources for efficiency. Onboarding a full-time in house CTO can be a costly proposition. Also, designing a compact and results oriented artificial intelligence team is extremely important for the business to sustain and succeed.

You probably need a Virtual CTO to assist you if you want to:

  • Stay on top of ever changing technology
  • Prevent technology issues from disrupting your work
  • Avoid spending a ton of money on an in-office hire when the same work can be accomplished virtually
  • Get continuous service rather than quick fixes

With the sheer pace of technological advancements, it is extremely important for businesses to adapt and grow with these advances in order to stay in the game and not be left behind. It is therefore prudent to eliminate the need of an in-house CTO, and instead onboard a virtual chief technology officer. 

As your virtual chief technology officer, TekTorch has the ability to:

  • Actively participate in management decisions
  • Formulate and communicate a well-defined technology strategy and vision
  • Take charge of research and development initiatives
  • Keep up with ever changing technological trends, standards and regulations
  • Develop policies and identify trends that could potentially put the business in jeopardy

A Virtual CTO is a game changer for organisations that are willing to experiment with AI capabilities but want to do it quickly, without risking too much time and capital.

  1. AI for Chief Technology Officers (what they should learn and think about when bringing in AI to their companies)

Chief Technology Officers are typically responsible for the entire organization’s IT strategy, maximizing operational and financial efficiency in sync with overall objectives. It is extremely important for CTOs to design a compact and results oriented team and strategy, even more so in the case of Artificial Intelligence, for businesses to succeed and sustain. 

And building sustainable AI development practices goes beyond a surface understanding of data science and machine learning. It almost always requires a union of strong IT knowledge with domain knowledge of the problem you’re trying to address. Here is the talent you will need to build a winning team:

  1. Domain Experts to derive insights - To build effective artificial intelligence that yields the desired ROI, you will need subject matter experts with an in-depth understanding of the domain your model will operate in. 
  2. Data Scientists to understand data - Arguably the most important person on the team, the Data Scientist will make or break your AI strategy. A Data Scientist may come from a variety of backgrounds such as computer science, engineering or statistics, and must possess the tenacity and ability to analyze data for business decisions. 
  3. Data Engineers with a solid understanding of analytics - While a Data Scientist determines the “what” of your project, the job of a Data Engineer is to bring the vision to life and determine the “how” using statistics, machine learning, forecasting, optimization, code, APIs, etc. They define and implement the integration of data into the larger AI architecture. 
  4. Business Analyst who can put data to use for valuable business outcomes - For all of this team to come together and work as a cohesive unit, you will need a use-case driven Business Analyst who can work in tandem with the Data Scientist(s) and act as a bridge between the business users and the AI team.
  1. Applications
  1. Healthcare

The waves of impact made by AI are starting to be felt across industries, and the impact in Healthcare is touted to be a game changing one for sure. Over the years, enormous innovation in Healthcare has seen the development of high-precision diagnosis and treatment procedures. But now, with the newfound ability of not just replicating human actions and decisions but also learning from them, Artificial Intelligence is bringing about a paradigm shift in the Healthcare industry. 

The applications of artificial intelligence in Healthcare are immense, and as per a 2019 CB report, around 86% of healthcare provider organizations and life sciences companies will look to incorporate AI in healthcare by 2023. The fact that the pandemic has hastened this process is clearly visible, with AI healthcare startups playing an important part in the race against time to find a vaccine for Covid-19.

Apart from drug discovery and clinical trials, some areas where artificial intelligence is all set to make concrete inroads in the life-sciences and healthcare sector are - 

  • Telemedicine, TeleHealth or remote patient monitoring
  • Machine Learning augmented diagnostics
  • Faster insurance claims and settlements
  • Secure management of patient medical records
  • IoT enabled Healthcare
  • Detecting and tracking the spread of epidemics
  • Optimizing patient flow

As the world moves towards a predictive model of care rather than a reactive one after the pandemic, AI and ML enabled healthcare is the way to go, especially for chronic health conditions such as diabetes, cancer and heart ailments.

  1. Automotive

In a world which is beginning to rapidly move towards self-driving and autonomous vehicles, artificial intelligence has begun to play a central role in the automotive industry. With the manufacturers implementing an entire range of human-computer-interactions (HCI) such as voice assistance, gesture-sensitive systems and personalized platforms, artificial intelligence in the automotive industry is witnessing implementation at an exponential rate. 

On top of autonomous driving, there are numerous applications for AI in the automotive sector, such as:

  • Smart manufacturing
  • Predictive maintenance
  • IoT powered and connected cars and Telematics
  • Real-time mapping
  • Cars as an assistant

With sustained advancements in computer vision, predictive analytics and machine/deep learning, artificial intelligence will set the pace of innovation for the automotive industry in the years to come, making vehicles cheaper and more autonomous.

  1. Construction

The construction industry has traditionally been labour intensive, and continues to be so, albeit attempting to shift from an analog past to a digitalized future. The challenges in front of the construction industry today are unique: it faces cut-throat competition, stagnating growth in productivity, and a chronic shortage of labor. Though traditionally averse to adopting new technology, construction companies today are increasingly looking towards artificial intelligence, machine learning and computer vision as a means of transforming how they go about their business. 

Some areas where AI, ML and computer vision can enormously benefit the construction industry are:

  • Audit of work orders and ensuring the highest level of compliance
  • Predictive maintenance of equipment
  • Infrastructure planning and design
  • Management of administrative activities
  • Tracking on-site safety using Computer Vision
  • Monitoring and surveillance with drone-swarms and collaborative robots

The adoption of AI and cognitive computing in construction is set to keep increasing at a rapid pace, and even though the industry will remain human-driven for the foreseeable future, the use of AI, ML and CV will massively augment its capabilities and improve business outcomes. 

  1. Finance and economics + Fintech + Cryptocurrency 

The financial industry has traditionally been an early adopter of tech, always on the lookout for the next big thing to improve its offerings and smooth out its systems. 

Cognitive technologies hold the key to change the face of the BFSI industry, and artificial intelligence and blockchain powered FinTech is the latest booming area of interest that is bringing about a renewed phase of digital innovation in the financial industry.

FinTech isn’t exactly a new kid on the block; it has been around for quite some time and has evolved rapidly of late. While the first wave of FinTech was all about facilitating payments and transactions, FinTech 2.0 is about applying cognitive computing, such as ML, NLP and blockchain, to a whole range of tasks, from processing credit risk to running entire hedge funds and navigating the complicated realm of compliance and regulation.

Driven by artificial intelligence, blockchain and cryptography, here are some areas that will be impacted by the latest wave of FinTech innovations:

  • Fraud detection
  • Insurance and claims processing
  • Credit approvals
  • Client risk profiling
  • Contract analysis
  • Valuation models, and
  • Algorithmic Trading

The rise of crypto-economic models is also set to disrupt the financial industry, and the tokenization of the economy and digital models of finance and trading will be crucial in the near future. The challenge here is to understand how crypto-economics will affect established financial models, and how the digital economy can be leveraged in tandem with the existing one.

  1. Cybersecurity

In an era where we’re generating a ton of data, data security is increasingly important. It is imperative for organizations to keep updating existing cybersecurity solutions and enforce every possible applicable security layer to ensure that data is breach-proof. The rise of artificial intelligence has the potential to equip an organization to mitigate an entire world of cyber threats. 

AI powered cybersecurity has the potential to analyze user behavior, deduce patterns, and identify the whole range of abnormalities and irregularities in the network. It can also minimize routine security responsibilities by identifying recurring incidents and remediating them. 

As AI matures and eases into the cybersecurity space, here are some ways it can help organizations boost cybersecurity:

  • Detect malware and prevent cyber attacks
  • Analyze endpoints for cyber threats
  • Improve the spectrum of human analysis, from attack detection to endpoint protection, and finally
  • Automate mundane and repetitive security tasks

Therefore, AI will be a game-changer that will transform our approach towards cybersecurity. With the ability to instantly spot malware in a network, guide incident response and detect possible intrusions before they’ve even taken place, artificial intelligence will empower organizations in their quest to keep their data secure.  

  1. Government

The greatest optimism about Cognitive Computing, AI and ML lies in their potential to improve people’s lives, and there are many practical applications through which AI could drastically improve the lives of citizens. 

The adoption of AI among government organizations so far has been uneven at best, and lags behind the private sector. But with that being said, the potential applications of AI in government are massive, and there is space for all aspects and departments of government to reap the benefits of artificial intelligence. 

Here are a few ways governments around the world can use artificial intelligence and cognitive computing for effective administration:

  • Real time citizen engagement/feedback
  • Automated customer service for local municipalities
  • Weather forecasting
  • Tracking of carbon footprint
  • Predictive analysis of historical crime data
  • Automated traffic control 
  • Analysis of fire risk and incidents, and
  • Implementation/tracking of social welfare schemes

Almost all governments around the world are understaffed, underskilled and face huge administrative backlogs. Resources are scarce and consequently delivery of services may be poor and slow in nature. 

Artificial Intelligence can drastically alter this situation, taking over time- and labor-intensive jobs to make space for increased efficiency at reduced cost, resulting in more responsive and effective policy implementation that makes citizens’ lives better. 

  1. Law-related professions

Effective implementation of artificial intelligence continues to find new ways to manage, execute and learn from laborious tasks across industries for increased speed and accuracy.

The field of law, though traditionally a slow adopter of technology, shows a lot of potential for AI to work its magic and bring concrete business value to the table. A concrete, well laid out AI strategy can augment lawyers’ abilities by taking over front-line repetitive and mundane work, helping them drive better results for their clients and become better lawyers in the process. 

Here are some of the things artificial intelligence can help law professionals with, so that they can make time for better and more productive work:

  • Due diligence and legal research
  • Drafting legal documents
  • Review documents and contracts
  • Data Analysis, and
  • Prediction of future outcomes

Artificial intelligence, mainly machine learning, has enormous value for the field of law: practitioners can automate and manage their data and processes, protect confidential information, and empower better decision making to drive better outcomes both for themselves and their clients.  

  1. Video games

Since the very early days, game developers have aspired to program software that behaves like a human and to create virtual environments consistent with the experiences of a real person. 

The advent of artificial intelligence has been a shot in the arm for game developers and the larger video game community. The majority of video games released in the last couple of years feature highly sophisticated AI for non-player characters and functions. Bots have grown increasingly smart in an effort to ensure quality engagement over prolonged periods and maximize player participation. 

There are varied ways in which AI is being used to develop video games these days, and here are a few of them:

  • AI based voice intelligence
  • Better 3D visualization by means of VR and AR
  • Reinforcement learning to give a player just the right amount of challenge, and
  • Making optimum use of processing power to ensure seamless performance

As developers increasingly build AI based player profiles for better personalization and give characters a distinctive feel, video games have become more entertaining, and the possibilities for advancing these virtual environments by means of AI are endless.

  1. Military

Giant strides in the advancement of artificial intelligence, machine learning, deep learning, robotics and computer vision have massively aided the development of new military capabilities and disrupted military strategies around the world. Artificial Intelligence has made its presence felt across the entire spectrum of military requirements: intelligence, surveillance, and even nuclear capabilities in some cases. 

The ways AI can help militaries around the world are endless, but here are a few that may be used in an ethical manner:

  • Identify objects/Spot hidden targets
  • Conduct reconnaissance
  • Use of drones for high stress situations
  • Identify patterns and predict enemy behavior
  • Predictive maintenance of equipment
  • Protection against cyber-attacks, and
  • Augment decision making

Apart from the above applications, AI could prove to be a master-stroke in maintaining world peace by keeping fast-paced or irregular advances in military capabilities in check, preventing a potential worldwide arms race or another cold-war-like situation. 

  1. Hospitality

With the ability to not just carry out, but also learn from the traditional human functions, artificial intelligence is playing an increasingly important role in the hospitality industry. 

The main objective of hospitality leaders, providers, and service partners is to surpass customer expectations and offer intensely personalized assistance. 

Artificial Intelligence can thus be a master stroke for the hospitality industry. With the ability to streamline processes, provide actionable insights, and personalize and optimize experiences, it has the potential to usher in a new wave of responsive and guest-centric hospitality. 

Amongst a sea of possible applications, here are a few which can radically affect business outcomes:

  • Personalized service
  • Concierge robots
  • Digital assistance and chatbots
  • AR/VR to enhance travel experience 
  • Journey patterns and itinerary
  • Occupancy and rate optimization
  • Booking and staff interactions, and
  • Reputation management

AI has the power to transform every facet of the hospitality business, helping institutions deliver better customer experiences and, in return, better business outcomes. 

  1. Marketing, Sales and Advertising (MarTech and RevOps)

In recent years, perhaps no sector has been as radically transformed by AI as marketing. It has become an indispensable part of the marketing industry, helping organizations position their products, collect consumer data and improve future products in the process. Every company out there is fighting a cut-throat battle to gain market share, and companies are spending around 20% (sometimes even as high as 50%) of their revenues on artificial intelligence enabled marketing efforts in order to understand their customers better and meet their end-user demands. 

Here are a few ways AI can be incorporated in marketing:

  • Filtering of data and improved search experience 
  • Increase reach with smart content generation, curation and distribution
  • Predictive analytics
  • Lead scoring
  • Ad targeting 
  • Dynamic pricing
  • Predictive and personalized customer service
  • Ad retargeting, and
  • Churn prediction and smart engagement

With the rise of AI and data-driven marketing, an increasingly effective practice adopted by organizations across the board is Revenue Operations (RevOps). This practice aims to break down the silos between sales, marketing and customer success departments in an organization to give a holistic view of the revenue streams. Thus, artificial intelligence is transforming the way we look at not just marketing, but also sales and customer success in an organization. 

  1. Art

Artificial intelligence is drastically changing the nature of creative processes across the board, and algorithms are increasingly playing an active role in creative endeavours such as music, architecture, fine arts and content creation. From being just a tool in order to conceive art, computers are beginning to step in as a creative entity in their own right. 

Though creativity was long considered an exclusively human domain, it turns out machines can do a lot more in the creative world than previously expected. 

Here are just a few genres in which deep learning can produce art, provided the system is fed a suitable training dataset:

  • Create paintings and drawings 
  • Create royalty-free music
  • Create recipes for meals from previous culinary knowledge 
  • Predict and choreograph dance moves
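All of these follow the same train-then-generate pattern: learn statistics from prior examples, then sample something new. As a toy illustration only (real creative AI uses deep generative models; the tiny "corpus" below is invented), a word-level Markov chain can "compose" new recipe-like instructions from previous examples:

```python
import random

# Toy generative model: a word-level Markov chain. It records which words
# follow which in a (made-up) corpus, then samples new sequences.
corpus = (
    "mix flour and sugar then bake until golden . "
    "mix butter and sugar then chill until firm . "
    "mix flour and butter then bake until crisp ."
).split()

# Transition table: word -> list of words observed to follow it
transitions = {}
for a, b in zip(corpus, corpus[1:]):
    transitions.setdefault(a, []).append(b)

def generate(start, length, seed=0):
    """Sample a sequence of up to `length` words from the chain."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        options = transitions.get(out[-1])
        if not options:
            break
        out.append(rng.choice(options))
    return " ".join(out)

recipe = generate("mix", 8)
```

A deep generative model replaces the transition table with a learned network, but the train-on-examples, sample-something-new loop is the same.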

The rise of artificial intelligence has not only transformed our ability to create art, but also raised thought-provoking questions about our relationship with technology and the depth of human creativity. 

  1. Oil and Gas 

Oil is one of the world's most important commodities and drives most of our energy needs. But global oil reserves have begun to deplete, and the time is right for energy companies to invest in artificial intelligence and other data technologies in order to maximize efficiency and secure future competitiveness in an ever-changing environment. 

Not all problems are created equal, and this represents a unique proposition for the oil and gas industry. As the industry becomes increasingly competitive and unpredictable, it is high time companies started looking at AI, at a basic level at the very least, to streamline production, reduce costs, improve safety and empower decision making, among other things. 

With the use of data science, machine learning and artificial intelligence in general, here are just a few avenues where oil and gas companies could use help:

  • Precision Drilling
  • Hydrocarbon Production
  • Reservoir management
  • ESP Monitoring 
  • Plant management
  • Product and subsurface data analysis optimization, and
  • Predictive maintenance
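Predictive maintenance, the last item above, often starts with something as simple as flagging sensor readings that drift away from a healthy baseline. A minimal sketch of that idea follows; the pump-temperature readings and threshold are invented, and production systems would use richer models (isolation forests, sequence models) over many sensors at once:

```python
import statistics

# Hypothetical predictive-maintenance sketch: flag pump-temperature readings
# that deviate sharply from a healthy baseline using a z-score rule.
normal_temps = [71.2, 70.8, 71.5, 70.9, 71.1, 71.3, 70.7, 71.0]  # deg C, healthy pump
mu = statistics.mean(normal_temps)
sigma = statistics.stdev(normal_temps)

def needs_inspection(reading, z_threshold=3.0):
    """Return True if the reading is an outlier vs. the healthy baseline."""
    return abs(reading - mu) / sigma > z_threshold

# Scan a stream of new readings; 78.4 is far outside normal operation
alerts = [t for t in [71.2, 70.9, 78.4, 71.0] if needs_inspection(t)]
```

The same scan, run continuously across thousands of sensors, is what lets operators schedule maintenance before a failure rather than after it.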

The combination of technologies such as AI, ML, NLP, big data and IoT will significantly help the industry in the years ahead. And though AI has indeed been implemented across a broad array of verticals, the need today is to take a step further and incorporate these technologies into even the specialized sectors of the industry to realize their overall potential impact for oil and gas.

  1. Venture Capital and Private Markets

Considering the utility of artificial intelligence in identifying the most important correlations and causations in data, as well as learning from past behaviour and running predictive analytics to judge future outcomes, the VC and private-market industry is primed to reap the rewards of cognitive computing and artificial intelligence. 

With well-established metrics and droves of data points already available to venture capitalists for establishing the correlations and patterns that signal start-up potential, AI and ML offer a powerful way to filter through the noise and help VCs find the best contenders for investment. 

In order to make better, faster and more informed decisions, here are a few ways data, algorithms and artificial intelligence can help venture capitalists:

  • Investment aid - AI can internalize data and break startup viability down into metrics, complementing gut instinct. 
  • Due diligence and market research - AI tools can help VCs assess a company's standing relative to its competitors.
  • Track investment trends - Coming across the right investments at the right time can be crucial to success as a venture capitalist, and technology can help you find viable investments with similar values and desired metrics. 
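The "track investment trends" idea above can be sketched as a nearest-neighbour search: represent each startup as a vector of metrics and rank candidates by similarity to a past winner. The company names and figures below are invented purely for illustration:

```python
import math

# Hypothetical similar-startup finder: rank candidates by cosine similarity
# of their metric vectors (growth rate, gross margin, net retention) to a
# past successful investment. All names and numbers are made up.
portfolio_winner = [2.4, 0.72, 0.95]

candidates = {
    "AlphaCo": [2.1, 0.70, 0.93],
    "BetaInc": [0.4, 0.30, 0.55],
    "GammaAI": [2.6, 0.75, 0.96],
}

def cosine(u, v):
    """Cosine similarity between two metric vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Most-similar candidates first
ranked = sorted(candidates,
                key=lambda name: cosine(candidates[name], portfolio_winner),
                reverse=True)
```

In practice the vectors would hold dozens of normalized signals scraped from pitch decks, financials and market data, but the ranking step looks much the same.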

By harnessing the power of AI and ML, VCs will have access to better conclusions and insights from data at every stage of their investment journey, which is the fuel for informed decision-making. 

  1. AI and Ethics

In the past few years, AI research and development has seen an enormous rise. Companies are positioning themselves to gain an edge in the fourth industrial revolution, and countries are developing national strategies to explore how AI and cognitive computing can be used in the greater interest of society at large. 

But ever since the inception of artificial intelligence, there has been a raging debate around the design, construction, use and treatment of machine intelligence by humans. 

Ever since the argument was floated that AI could be detrimental to humans in physical, mental, social, emotional and financial ways, to name a few, there has been a need for systematic policies and mechanisms to safeguard algorithms and data from malicious behavior.

  1. Upcoming rules and regulations: Asimov's laws, Nadella's principles, and Sundar Pichai's principles

Science fiction is a precursor to science fact. The 1950s saw the works of Isaac Asimov published. His book "I, Robot" laid the foundation for ethical artificial intelligence with its three laws of intelligent machine behavior:

  • An intelligent machine shall never, by actions direct or indirect, allow human being(s) to come to harm.
  • An intelligent machine shall always obey the instructions of the humans except in cases where they violate the first law.
  • An intelligent machine shall always protect its existence as long as it does not interfere with the first and second laws.

As artificial intelligence progressed past the AI winter and into the 21st century, exponential strides in the processing power and storage ability resulted in active efforts to commercialize the use of AI on a wide-spread scale. This not only called for better compliance with existing concepts of ethics, but an altogether better definition and framework for ethics in intelligent machines.

While Asimov’s laws spoke to the machines, Microsoft CEO Satya Nadella’s laws for ethical artificial intelligence speak to the people building them. In order for machines to help humanity address society’s scourges, here are the 10 laws of AI, for the ones building AI, in the words of Satya Nadella himself:

  • AI must be designed to assist humanity
  • AI must be transparent
  • AI must maximize efficiency without harming the dignity of the people
  • AI must be designed for intelligent privacy
  • AI must be designed with algorithmic accountability
  • AI must guard against bias
  • It’s critical for humans developing AI to have empathy
  • It’s critical for humans developing AI to have education
  • It’s critical for AI to retain the need for human creativity
  • A human has to be responsible for a machine-generated action or decision

Google CEO Sundar Pichai has also been a strong advocate for the need for regulation in AI, citing its dangers and prescribing data security and privacy protection. This effort has led to seven principles for the development of artificial intelligence at Google:

  • AI should be socially beneficial
  • AI should not create or reinforce unfair bias
  • AI should be built and tested for safety
  • AI should be accountable to people
  • AI should incorporate privacy design principles
  • AI should uphold high standards of scientific excellence
  • AI should be made available for uses that accord with these principles.

In evaluating potential applications against these principles, Google considers factors such as:

  • Purpose and use - the likely use of an application and how close the solution could be to harmful use
  • Nature and uniqueness - whether the technology is unique or generally available
  • Scale - whether the technology will have significant impact
  • Nature of involvement - whether Google is providing general-purpose tools, integrations or custom solutions. 

Google also outlines the technologies it will NOT pursue:

  • Technologies that are likely to cause overall harm
  • Technologies whose purpose is to facilitate injury, for instance, weapons
  • Technologies that violate internationally accepted norms
  • Technologies which contravene international law and human rights

Policy makers, on the other hand, seem to have mixed reactions to the rise of AI across the globe. Here is a brief overview of global legal and regulatory trends:

  • Most governments seem to be adopting a wait-and-see approach to the rise of AI
  • The EU, however, is the most active in proposing new rules and regulations with proposed rules in 7 out of 9 areas relevant for AI regulation.
  • 24 countries already have laws in place for the operation of autonomous vehicles, and a further 8 are actively discussing them. 
  • 13 countries have had some level of advanced discussions over harmful or lethal applications of AI

While even such extensive definitions may leave ambiguities about the intricacies of AI ethics, one thing is extremely clear - artificial intelligence is supposed to help us, not hurt us, like Asimov said. 

  1. The limits of artificial general intelligence

Artificial General Intelligence (AGI) can be broadly defined as a machine's ability to understand, learn or perform any intellectual task that a human can. 

Although most artificial intelligence applications today can indeed perform tasks with more efficacy than an average human, today's machines are far from "intelligent" in the sense that they can only do a single function extremely well, and completely break down when asked to do anything else. 

Machines today need troves of data to learn from, unlike humans, who can learn from far fewer experiences. The concept of AGI also dictates that a machine should be able to apply knowledge gained in one domain to another, just as humans do. 

This would require the learning process of machines to be extremely similar to that of humans, so that they may learn in less time and gain competency in multiple areas. But the problem is that we do not yet fully understand the functioning of the human brain ourselves, much less model and replicate it.

Hence, while we may conceive of artificial general intelligence in theory, in reality we are leaps and bounds away from achieving it. 

  1. Potential harm

“Success in creating effective AI could be the biggest event in the history of our civilisation. Or the worst. So we cannot know if we will be infinitely helped by AI or ignored by it and sidelined, or conceivably destroyed by it” - Stephen Hawking

While artificial intelligence is indeed transforming entire industries and has the potential to be an unparalleled force for good, as the technology evolves there are bound to be negative consequences as well. In order to be better equipped to mitigate and manage the potential dangers of artificial intelligence, here are some key negative aspects of AI:

  • Bias in AI - The question of bias in machine intelligence stems from the inherent bias in human intelligence. If the source from which AI learns is biased, the results are bound to be biased as well. Hence, to avoid unintended consequences, companies should take a hard look at how they are developing and training their algorithms, and whether those align with the ethics and laws of AI development. 
  • Loss of menial jobs - While there will likely be a net increase in jobs with the advent of AI, with the jobs generated outnumbering the jobs made obsolete, many menial jobs done manually today will cease to exist.
  • Global relationships - With the world consistently shrinking thanks to technology, isolation is decreasing, and decisions regarding AI in one country can easily impact others. This calls for a globally accepted framework for the peaceful application of AI.
  • Accelerated terrorism - With the rise of autonomous drones and vehicles, nanobots and the like, it will take extensive human reasoning to prepare our law-enforcement and defence institutions against potential threats from AI-enabled terrorism. 
  1. Superintelligence

By definition, as the term itself suggests, superintelligence refers to artificial intelligence that exceeds human intelligence in virtually every field, including creativity, scientific reasoning, general wisdom, and social skills. 

AI today, albeit making great strides, still requires humans to set an objective for it to pursue. Superintelligence remains a pipe dream at this stage, almost a utopian concept. Nevertheless, this is the kind of AI that is generally perceived as a threat, and which has led the likes of Elon Musk to suggest outcomes as far out as the extinction of mankind. 

  1. AI in numbers:
  • The global AI market reached $12.5 billion in 2019, up from $2.5 billion in 2014, at a CAGR of 38%. It is expected to grow at a CAGR of 52% through 2025. (Source: Tractica)
  • The worldwide data generated is estimated to grow by 61% to 170 zettabytes. (Source: IDC)
  • Over the last 4 years, about 37% of organizations have deployed AI-based solutions to deliver a process, product or service - a massive 250% increase over those 4 years. By 2021, this figure is expected to be 70%. (Source: Gartner)
  • In 2019, spending on AI-based solutions touched the $35 billion mark, a growth of around 44% over 2018. By 2021, this figure is expected to be around $58 billion. (Source: Gartner)
  • The global market for smart-wearables is expected to cross the $180 Billion mark by 2025.
  • Among enterprise departments, marketing reported the highest usage of AI and machine learning, at around 30%. (Source: Forbes)
  • Around 85% of digitally mature enterprises said that AI is a strategic priority for them, but only 47% said that they had a defined AI strategy. (Source: Adobe/Zoominfo)
  • The top 3 industries by percentage of resources invested in AI solutions are cyber intelligence (22%), healthcare (10%), and manufacturing automation (8%).
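As a quick sanity check on the first figure above, CAGR is (end / start) raised to 1/years, minus 1, and the quoted 38% is indeed consistent with growth from $2.5 billion (2014) to $12.5 billion (2019):

```python
# Verifying the quoted 38% CAGR for the global AI market,
# which grew from $2.5B (2014) to $12.5B (2019).

def cagr(start, end, years):
    """Compound annual growth rate."""
    return (end / start) ** (1 / years) - 1

growth = cagr(2.5, 12.5, 2019 - 2014)   # approximately 0.38
```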

Insights:

  • The fact that 85% of enterprises consider AI a strategic priority, but only 47% have a well-defined strategy in place and only about 37% have an AI-based solution in service, indicates a massive demand that supply can't keep up with. It's an open space, and there's an enormous market to be catered to, provided prospects are pitched the right solution in the right manner.
  • The healthcare industry intends to spend around 10% of revenue on AI-based solutions, but ends up spending only a minuscule 1.33%. The primary fields to cater to in healthcare are the neurology, cardiovascular and respiratory domains. There's demand for AI-based products, but supply can't keep up; hence the relative expenditure is low.
  • Retail is another domain which has enormous potential for AI, and it currently spends about 2.5% on AI based solutions. I expect this figure to go up drastically in the years to come, so this market is one to watch out for. 
  • A whopping 72% of businesses in the technology, media, and telecommunications industry expect AI to have a significant impact on product offerings in the next few years, but their combined share in the AI industry comes close to 5-7%. This is another hot market waiting on the right product to make use of.
  • A study by Gartner predicts that by 2020, 30% of all B2B companies will employ AI to augment at least one of their primary sales processes, and a Harvard Business Review study found that companies using AI for sales were able to increase their leads by more than 50%, reduce call time by 60-70%, and realize cost reductions of 40-60%.
  • Furthermore, 64% of these companies consider AI valuable for their sales and marketing strategy, yet a mammoth 63% are not using AI in their tech stack. They account for only 2.55% of revenue generated by the AI ecosystem. There is enormous opportunity here.
  • The customer-service domain cuts across industries. 85% of these enterprises think AI will be a disruptive game-changer, and 78% say they have concrete plans to implement AI in their services, but only 27% are actually using AI-based solutions. Chatbots will change the playing field for this industry.

References: