Informative speech on Artificial Intelligence

Artificial Intelligence
Technology is a very common word in our daily lives. Everything humankind has invented since the Stone Age is part of technology. Technology can be the knowledge of techniques, processes, and so on, or it can be embedded in machines, computers, devices and factories, which can be operated by individuals without detailed knowledge of how such things work.
What are the common perceptions of AI?
1.      Common perceptions of AI among the general public
a)      Some basic AI examples (Siri, Google Now/Search)
Newspapers, magazines, and the Internet in general have been throwing around the terms "AI" and "artificial intelligence" a lot.
From Siri to self-driving cars, artificial intelligence (AI) is progressing rapidly.
AI can encompass anything from Google’s search algorithms to IBM’s Watson to autonomous weapons.
Weak AI vs. Strong AI (comparison)
1.      Weak AI is designed to perform a narrow task.
a)      Examples: only facial recognition, only internet searches, or only driving a car
b)      Benefits: narrow AI may outperform humans at its specific task, such as playing chess or solving equations.
2.      Strong AI
a)      Example: doing anything a human can do
b)      AGI would outperform humans at nearly every cognitive task.
However, the long-term goal of many researchers is to create general AI (AGI, or strong AI). A lot of people assume that we are already developing general AI rather than applied, narrow AI.

What actually is AI?
AI refers to machines built to achieve "learning" and "problem solving".
The central problems (or goals) of AI research include reasoning, knowledge, planning, learning, natural language processing (communication), perception and the ability to move and manipulate objects. General intelligence is among the field's long-term goals. Approaches include statistical methods, computational intelligence, soft computing (e.g. machine learning), and traditional symbolic AI.
1.      Artificial Intelligence → Machine Learning → (most significantly) Deep Learning
What is sold today as Artificial Intelligence is not artificial intelligence in the strong sense. When people talk about 'volcanic' changes in 'AI', they are talking about one particular field of technology: Machine Learning (most significantly Deep Learning). Machine Learning is a very literal description of the technology: a program written to learn and adapt.
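To make "a program written to learn and adapt" concrete, here is a minimal sketch (my own illustration, not part of the original speech): a few lines of Python that learn the rule y ≈ 2x from example data by gradient descent instead of having the rule hard-coded.

```python
# Toy illustration: a program that "learns and adapts".
# It fits y = w * x to example data by gradient descent, adjusting w from the
# data rather than being told the rule explicitly.

data = [(1, 2.1), (2, 3.9), (3, 6.2), (4, 7.8)]  # roughly y = 2x

w = 0.0    # the parameter the program learns
lr = 0.01  # learning rate

for step in range(1000):
    # Gradient of the mean squared error with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad  # adapt the parameter to reduce the error

print(round(w, 2))  # ~2.0: the rule was learned from examples, not hand-coded
```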

How is artificial intelligence programmed?


1.       Analogy
If anyone claims to know the answer to this question, ask them to show you their working AI program. If they claim to have a program that could be trained to do any job a human could do, ask them why we are still paying people worldwide about USD 75 trillion per year to do that work instead.

2.       Real Answer
Nobody knows how to program general AI.

3.      How is work in AI being done today?

It's not that we aren't making progress. Google and Facebook have invested heavily in AI. Google Search does a pretty good job of understanding what you want when you speak or type a question in natural language. Facebook is pretty good at recognizing faces and understanding your interests. The reason these companies are making progress is that they have thousands of developers, millions of CPUs, exabytes of data, and hundreds of billions of dollars.

4.       Analogy to human mind
AI is an extremely hard problem. But we can at least gain some insight into the hardware and software requirements of AI by examining the only known working example of human intelligence.
The human brain has about 10^11 neurons and 10^14 synapses. An artificial neural network of this size with response times of 10 to 100 milliseconds would require 10^15 to 10^16 operations per second. Such computers exist, but they cost millions of dollars and consume several megawatts of power.
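The arithmetic behind that estimate, restated as a back-of-the-envelope check (my own illustration): a response time of 10 to 100 milliseconds means each of the roughly 10^14 synapses is updated 10 to 100 times per second.

```python
# Back-of-the-envelope check of the brain-scale compute estimate above.
synapses = 1e14                 # ~10^14 synapses in the human brain
updates_per_second = (10, 100)  # response times of 100 ms down to 10 ms

for rate in updates_per_second:
    ops = synapses * rate       # one operation per synapse per update
    print(f"{rate} Hz -> {ops:.0e} operations per second")
# 10 Hz  -> 1e+15 operations per second
# 100 Hz -> 1e+16 operations per second
```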

Doing the work of humans requires both a mind and a body. Therefore, we should expect that an AI controlled robot that could be trained to do any task that a human could do should be as complex as a human. Complexity is measured in bits: it is the size of the software when compressed using the best possible algorithm.
I actually did this experiment, where I compressed the human genome (our source code) and compared it with a large collection of open source code. The answer is that a human baby has the same complexity as 300 million lines of code (after which it will still require years of training). Software costs about $100 per line. This makes the total cost $30 billion. This is why only large companies are making significant progress in AI.
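The cost figure follows from simple multiplication; here is the estimate above worked out (my own restatement, using the speech's numbers):

```python
# Worked version of the cost estimate above.
lines_of_code = 300_000_000  # estimated complexity of a human "program"
cost_per_line = 100          # rough cost per line of software, in USD

total = lines_of_code * cost_per_line
print(f"${total:,}")         # $30,000,000,000, i.e. $30 billion
```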
How close are big companies to developing AI?
1.       One AI system will train another one.
We are getting closer to general AI, though. There is a developing technique, adversarial training of recurrent neural networks, in which the output of one machine learning program helps to train another. This is the technology that Google and Facebook have been touting a lot recently. An example might be in medicine, where one ML program is used to diagnose a patient and another is used to prescribe a treatment. The two programs can train each other, in that correct treatments suggest correct diagnoses, correct diagnoses lead to better treatments, and so on.
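To show the general idea of two models training each other, here is a minimal adversarial-training sketch (my own illustration in PyTorch, using small feed-forward networks rather than the recurrent networks or the medical setting mentioned above): a generator learns to imitate some data purely from the error signal produced by a discriminator, while the discriminator learns from the generator's output.

```python
# Minimal adversarial training sketch: two networks train each other.
import torch
import torch.nn as nn

torch.manual_seed(0)

# "Real" data: samples from a 1-D Gaussian the generator must learn to imitate.
def real_batch(n=64):
    return torch.randn(n, 1) * 0.5 + 2.0

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(2000):
    # Train the discriminator: tell real samples from generated ones.
    real = real_batch()
    fake = generator(torch.randn(64, 8)).detach()
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake), torch.zeros(64, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Train the generator: produce samples the discriminator calls "real".
    fake = generator(torch.randn(64, 8))
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

# The generated samples' mean should drift toward the real data's mean (~2.0).
print(generator(torch.randn(1000, 8)).mean().item())
```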

2.      How AI will help us, with a case example

There are companies that have already hinted they are riding the AI/ML wave. Travis Kalanick has suggested on multiple occasions that Uber has no need for drivers. Instead, driverless cars powered by ML technologies could pave the way for the future of transportation. Why is a taxi company worth over USD 45 billion? Because it isn't about taxis; it is about automating and optimizing global infrastructure in general. That is an exciting vision: powered by AI. That is also going to kill multiple industries, not merely hasten their decline.
How can AI be dangerous?
Most researchers agree that a superintelligent AI is unlikely to exhibit human emotions like love or hate, and that there is no reason to expect AI to become intentionally benevolent or malevolent. Instead, when considering how AI might become a risk, experts consider two scenarios most likely:
 The AI is programmed to do something devastating: Autonomous weapons are artificial intelligence systems that are programmed to kill. In the hands of the wrong person, these weapons could easily cause mass casualties. Moreover, an AI arms race could inadvertently lead to an AI war that also results in mass casualties. To avoid being thwarted by the enemy, these weapons would be designed to be extremely difficult to simply “turn off,” so humans could plausibly lose control of such a situation. This risk is one that’s present even with narrow AI, but grows as levels of AI intelligence and autonomy increase.
The AI is programmed to do something beneficial, but it develops a destructive method for achieving its goal: This can happen whenever we fail to fully align the AI’s goals with ours, which is strikingly difficult. If you ask an obedient intelligent car to take you to the airport as fast as possible, it might get you there chased by helicopters and covered in vomit, doing not what you wanted but literally what you asked for. If a super intelligent system is tasked with an ambitious geoengineering project, it might wreak havoc with our ecosystem as a side effect, and view human attempts to stop it as a threat to be met.
As these examples illustrate, the concern about advanced AI isn’t malevolence but competence. A super-intelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we have a problem. You’re probably not an evil ant-hater who steps on ants out of malice, but if you’re in charge of a hydroelectric green energy project and there’s an anthill in the region to be flooded, too bad for the ants. A key goal of AI safety research is to never place humanity in the position of those ants.
Future with AI
Before you dismiss AI as decades away, keep in mind that our expectations of technological development are linear, while its actual progress is exponential. Today we may view the progress of Machine Learning through sci-fi goggles, but by the time our cars drive themselves, the cities we drive them in will have changed radically. Nor will cars look like cars anymore.
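To illustrate the linear-versus-exponential point with toy numbers (my own illustration, not from the speech): something that improves by a fixed step each year is quickly left behind by something that doubles each year.

```python
# Toy comparison of linear expectations vs. exponential progress.
linear, exponential = 1, 1
for year in range(1, 11):
    linear += 1        # grows by a fixed step each year
    exponential *= 2   # doubles each year
    print(year, linear, exponential)
# After 10 years: linear reaches 11, exponential reaches 1024.
```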
In the meantime, IMHO it is worth monitoring the impact of Machine Learning on these industries:
Education
Agriculture
Medicine
The Military
Infrastructure
Transportation