Each page has LINKS to help you increase your depth of knowledge.
As technology advances, so does what it can do and who it affects. This site gives you an indication of where we are and where we are going.
This site comments on AI's relationship to, and effect on, Judaism. Again, these are early days, so what is said here is also ‘early writings/comments’.
No agreed definition of what AI is exists, as it is a new field that is growing and changing at a fantastic speed.
The graphic below gives you an indication of the scope of AI development.
Funding for Israeli AI startups is heating up. For the 2017 year to date, Israeli AI startups have raised $837 million, which is already more than the total for the whole of 2016 and represents a fifteen-fold increase over the last five years.
THE ARTIFICIAL INTELLIGENCE MARKET IS ESTIMATED TO BE WORTH US$191 BILLION BY 2024
The development of many human-like robots, and their rising adoption rate in developing regions, has had a considerable impact on the overall artificial intelligence (AI) market. Improved productivity, wider application areas, increased customer satisfaction and large-scale data integration are driving the artificial intelligence market. However, a shortage of skilled workers and perceived threats to human dignity may restrain market growth. Nevertheless, the impact of these factors is anticipated to be minimal thanks to the introduction of newer technologies.
WHAT IS AI? WHAT DOES ARTIFICIAL INTELLIGENCE DO? BBC 9 August 2019
Artificial intelligence - or AI for short - is technology that enables a computer to think or act in a more 'human' way. It does this by taking in information from its surroundings, and deciding its response based on what it learns or senses.
It affects the way we live, work and have fun in our spare time - and sometimes without us even realising.
AI is becoming a bigger part of our lives, as the technology behind it becomes more and more advanced. Machines are improving their ability to 'learn' from mistakes and change how they approach a task the next time they try it.
Some researchers are even trying to teach robots about feelings and emotions.
You might not realise some of the devices and daily activities which rely on AI technology - phones, video games and going shopping, for example.
Some people think that the technology is a really good idea, while others aren't so sure.
Just this month, it was announced that the NHS in England is setting up a special AI laboratory to boost the role of AI within the health service.
Announcing that the government will spend £250 million on this, Health Secretary Matt Hancock said the technology had "enormous power" to improve care, save lives and ensure doctors had more time to spend with patients.
WHAT DOES AI DO?
AI can be used for many different tasks and activities.
Personal electronic devices or accounts (like our phones or social media) use AI to learn more about us and the things that we like. One example of this is entertainment services like Netflix which use the technology to understand what we like to watch and recommend other shows based on what they learn.
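The idea behind this kind of viewing-based recommendation can be sketched very simply. The example below is purely illustrative (the show names, tags and matching rule are invented for this sketch; real services use far more sophisticated models): it recommends the unwatched show whose tags overlap most with what the user has already watched.

```python
# A tiny, illustrative catalogue: each show is described by a set of tags.
catalogue = {
    "Space Quest": {"sci-fi", "adventure"},
    "Robot Friends": {"sci-fi", "comedy"},
    "Bake Night": {"cooking", "competition"},
}

def recommend(watched, catalogue):
    """Suggest the unwatched show sharing the most tags with shows already seen."""
    seen_tags = set().union(*(catalogue[title] for title in watched))
    unseen = [title for title in catalogue if title not in watched]
    return max(unseen, key=lambda title: len(catalogue[title] & seen_tags))

print(recommend(["Space Quest"], catalogue))  # prints Robot Friends
```

Someone who watched "Space Quest" shares the "sci-fi" tag with "Robot Friends" but nothing with "Bake Night", so the sci-fi show is recommended.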
It can make video games more challenging by studying how a player behaves, while home assistants like Alexa and Siri also rely on it.
It has been announced that NHS England will spend millions on AI in order to improve patient care and research
AI can be used in healthcare, not only for research purposes, but also to take better care of patients through improved diagnosis and monitoring.
It also has uses within transport. For example, driverless cars are an example of AI tech in action, while it is used extensively in the aviation industry (for example, in flight simulators).
Farmers can use AI to monitor crops and conditions, and to make predictions, which will help them to be more efficient.
You only have to look at what some of these AI robots can do to see just how advanced the technology is and imagine many other jobs for which it could be used.
WHERE DID AI COME FROM?
The term 'artificial intelligence' was first used in 1956.
In the 1960s, scientists were teaching computers how to mimic - or copy - human decision-making.
This developed into research around 'machine learning', in which robots were taught to learn for themselves and remember their mistakes, instead of simply copying. Algorithms play a big part in machine learning as they help computers and robots to know what to do.
WHAT IS AN ALGORITHM?
An algorithm is basically a set of rules or instructions which a computer can use to help solve a problem or come to a decision about what to do next.
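To make this concrete, here is a minimal sketch of an algorithm written in Python (the example itself is invented for illustration): a set of step-by-step instructions for finding the largest number in a list.

```python
def find_largest(numbers):
    """An algorithm: step through the list, keeping the biggest value seen so far."""
    largest = numbers[0]          # start with the first number
    for n in numbers[1:]:         # look at every remaining number in turn
        if n > largest:           # rule: if this one is bigger, remember it
            largest = n
    return largest

print(find_largest([3, 7, 2, 9, 4]))  # prints 9
```

The computer follows the same fixed rules every time; it is the rules themselves, not the particular numbers, that make up the algorithm.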
From here, the research has continued to develop, with scientists now exploring 'machine perception'. This involves giving machines and robots special sensors to help them to see, hear, feel and taste things like humans do - and adjust how they behave as a result of what they sense.
The idea is that the more this technology develops, the more robots will be able to 'understand' and read situations, and determine their response as a result of the information that they pick up.
WHY ARE PEOPLE WORRIED ABOUT AI?
Many people have concerns about AI technology and teaching robots too much.
Famous scientist Professor Stephen Hawking spoke out about it in the past. He said that although the AI we've made so far has been very useful and helpful, he worried that if we teach robots too much, they could become smarter than humans and potentially cause problems.
Professor Stephen Hawking spoke out about AI and said that he had concerns that the technology could cause problems in the future
People have expressed concerns about privacy too. For example, critics think that it could become a problem if AI learns too much about what we like to look at online and encourages us to spend too much time on electronic devices.
Another concern about AI is that if robots and computers become very intelligent, they could learn to do jobs which people would usually have to do, which could leave some people unemployed.
Other people disagree, saying that the technology will never be as advanced as human thoughts and actions, so there is not a danger of robots 'taking over' in the way that some critics have described.
DEFINITION - WHAT DOES ARTIFICIAL INTELLIGENCE (AI) MEAN?
Artificial intelligence (AI) is an area of computer science that emphasizes the creation of intelligent machines that work and react like humans. Some of the activities computers with artificial intelligence are designed for include speech recognition, learning, planning and problem solving.
TECHOPEDIA EXPLAINS ARTIFICIAL INTELLIGENCE (AI)
Artificial intelligence is a branch of computer science that aims to create intelligent machines. It has become an essential part of the technology industry.
Research associated with artificial intelligence is highly technical and specialized. The core problems of artificial intelligence include programming computers for certain traits such as:
Knowledge
Reasoning
Problem solving
Perception
Learning
Planning
Ability to manipulate and move objects
Knowledge engineering is a core part of AI research. Machines can often act and react like humans only if they have abundant information relating to the world. Artificial intelligence must have access to objects, categories, properties and relations between all of them to implement knowledge engineering. Initiating common sense, reasoning and problem-solving power in machines is a difficult and tedious task.
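One common way to give a machine access to objects, categories, properties and relations is to store them as simple facts. The sketch below (the facts, relation names and query function are all invented for illustration) shows a tiny knowledge base of (subject, relation, object) triples, the kind of structure knowledge engineering builds at vastly larger scale.

```python
# A miniature knowledge base: each fact is a (subject, relation, object) triple.
facts = {
    ("penguin", "is_a", "bird"),
    ("bird", "has_property", "lays_eggs"),
    ("penguin", "has_property", "cannot_fly"),
}

def query(facts, subject, relation):
    """Return every object linked to `subject` by `relation`."""
    return {o for s, r, o in facts if s == subject and r == relation}

print(query(facts, "penguin", "is_a"))  # prints {'bird'}
```

Even this toy version hints at the difficulty the paragraph describes: common-sense reasoning requires millions of such facts, plus rules for combining them.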
Machine learning is also a core part of AI. Learning without any kind of supervision requires an ability to identify patterns in streams of inputs, whereas learning with adequate supervision involves classification and numerical regressions.
Classification determines the category an object belongs to and regression deals with obtaining a set of numerical input or output examples, thereby discovering functions enabling the generation of suitable outputs from respective inputs. Mathematical analysis of machine learning algorithms and their performance is a well-defined branch of theoretical computer science often referred to as computational learning theory.
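The contrast between classification and regression can be shown with two tiny, self-contained sketches (the data and function names here are invented for illustration): a one-nearest-neighbour classifier that picks a category, and a least-squares fit that discovers a numerical function from input/output examples.

```python
def classify(point, labelled_points):
    """Classification: return the label of the closest known example."""
    return min(labelled_points, key=lambda lp: abs(lp[0] - point))[1]

def fit_line(xs, ys):
    """Regression: discover a function y = a*x + b from numerical examples."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return a, b

print(classify(4, [(1, "small"), (10, "large")]))  # prints small
print(fit_line([1, 2, 3], [2, 4, 6]))              # prints (2.0, 0.0)
```

The classifier answers "which category?", while the regression returns a function (here y = 2x) that generates suitable outputs for new inputs.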
Machine perception deals with the capability to use sensory inputs to deduce the different aspects of the world, while computer vision is the power to analyze visual inputs with a few sub-problems such as facial, object and gesture recognition.
Robotics is also a major field related to AI. Robots require intelligence to handle tasks such as object manipulation and navigation, along with sub-problems of localization, motion planning and mapping.
TURING TEST: WHY IT STILL MATTERS Could everything we know and do one day be reproduced by a complicated enough computer program installed in a complicated enough robot? Snopes, originally published in ‘The Conversation’ 2 October 2019, Harry Collins, Professor of Social Science, Cardiff University
We’re entering the age of artificial intelligence. And as AI programs get better and better at acting like humans, we will increasingly be faced with the question of whether there’s really anything that special about our own intelligence, or if we are just machines of a different kind. Could everything we know and do one day be reproduced by a complicated enough computer program installed in a complicated enough robot?
In 1950, computer pioneer and wartime codebreaker Alan Turing made one of the most influential attempts to tackle this issue. In a landmark paper, he suggested that the vagueness could be taken out of the question of human and machine intelligence with a simple test. This “Turing Test” assesses the ability of a computer to mimic a human, as judged by another human who could not see the machine but could ask it written questions.
In the last few years, several pieces of AI software have been described as having beaten the Turing Test. This has led some to argue that the test is too easy to be a useful judge of artificial intelligence. But I would argue that the Turing Test hasn’t actually been passed at all. In fact, it won’t be passed in the foreseeable future. But if one day a properly designed Turing Test is passed, it will give us cause to worry about our unique status.
The Turing Test is really a test of linguistic fluency. Properly understood, it can reveal the thing that is arguably most distinctive about humans: our different cultures. These give rise to enormous variations in belief and behaviour that aren’t seen among animals or most machines. And the fact we can program this kind of variation into computers is what gives them the potential to mimic human abilities. In judging fluent mimicry, the Turing Test lets us look for the ability of computers to share in human culture by demonstrating their grasp of language in a social context.
Turing based his test on the “imitation game”, a party game in which a man pretended to be a woman and a judge tried to guess who was who by asking the concealed players questions. In the Turing Test, the judge would try to guess who was a computer and who was a real human.
Unsurprisingly, in 1950, Turing didn’t work out the necessary detailed protocol for us to judge today’s AI software. For one thing, he suggested the test could be done in just five minutes. But he also didn’t work out that the judge and the human player had to share a culture and that the computer would have to try to emulate it. That’s led to lots of people claiming that the test has been passed and others claiming that the test is too easy or should include emulation of physical abilities.
FIRST CLAIMED PASS
Some of this was made obvious nearly 50 years ago with the construction of the program known as ELIZA by computer scientist Joseph Weizenbaum. ELIZA was used to simulate a type of psychotherapist known as a Rogerian, or person-centred, therapist. Several patients who interacted with it thought it was real, leading to the earliest claim that the Turing Test had been passed.
But Weizenbaum was clear that ELIZA was, in effect, a joke. The setup didn’t even follow what little protocol Turing did provide because patients didn’t know they were looking out for fraud and there were no simultaneous responses from a real psychotherapist. Also, culture wasn’t part of the test because Rogerian therapists say as little as possible. Any worthwhile Turing Test has to have the judge and the human player acting in as human-like a way as possible.
Given that this is a test of understanding text, computers need to be judged against the abilities of the top few percent of copy-editors. If the questions are right, they can indicate whether the computer has understood the material culture of the other participants.
The right kind of question could be based on the 1975 idea of “Winograd schemas”, pairs of sentences that differ by just one or two words that require a knowledge of the world to understand. A test for AI based on these is known as a Winograd Schema Challenge and was first proposed in 2012 as an improvement on the Turing Test.
Consider the following sentence with two possible endings: “The trophy would not fit in the suitcase because it was too small/large.” If the final word is “small”, then “it” refers to the suitcase. If the final word is “large”, then “it” refers to the trophy.
To understand this, you have to understand the cultural and practical world of trophies and suitcases. In English-speaking society, we use language in such a way that even though a small trophy doesn’t exactly “fit” a large suitcase that’s not what a normal English speaker would mean by “fit” in this context. That’s why in normal English, if the final word is “small”, “it” has to refer to the suitcase.
You also have to understand the physical world of trophies and suitcases as well as if you had actually handled them. So a Turing Test that took this kind of approach would make a test that included an assessment of an AI’s ability to emulate a human’s physical abilities redundant.
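A Winograd schema item can be represented quite literally as data. The sketch below (the field names and structure are invented for illustration, not the official challenge format) encodes the trophy/suitcase pair: the special word flips which candidate noun the pronoun "it" refers to, and a test harness would compare a system's answer against the lookup.

```python
# One Winograd schema, encoded as data (illustrative structure only).
schema = {
    "sentence": "The trophy would not fit in the suitcase because it was too {}.",
    "pronoun": "it",
    "candidates": ["trophy", "suitcase"],
    "answers": {"small": "suitcase", "large": "trophy"},
}

def resolve(schema, special_word):
    """Look up which candidate the pronoun refers to for a given special word."""
    return schema["answers"][special_word]

print(resolve(schema, "small"))  # prints suitcase
print(resolve(schema, "large"))  # prints trophy
```

The lookup table is trivial precisely because the knowledge it encodes is not: producing the right answer for an unseen schema requires the cultural and physical understanding the article describes.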
A HIGHER BAR
This means a Turing Test based on Winograd schemas is a much better way to assess a computer’s linguistic and cultural fluency than a simple five-minute conversation. It also sets a much higher bar. All the computers in one such competition in 2016 failed miserably, and no competitors were entered from the large AI-based firms because they knew they would fail.
None of the claims that the Turing Test has already been passed mean anything if it is set up as a serious test of humanity’s distinctive abilities to create and understand culture. With a proper protocol, the test is as demanding as it needs to be. Once more, Alan Turing got it right. And, as we stand, there is no obvious route to creating machines that can participate in human culture sufficiently deeply to pass the right kind of linguistic test.
How is artificial intelligence – and its prominent discipline, machine learning – helping deliver better business insights from big data? Let’s examine some ways – and peek at what’s next for AI and big data analysis. The Enterprisers Project, Kevin Casey | October 14, 2019
HOW BIG DATA WORKS WITH AI
Big data isn’t quite the term de rigueur that it was a few years ago, but that doesn’t mean it went anywhere. If anything, big data has just been getting bigger.
That once might have been considered a significant challenge. But now, it’s increasingly viewed as a desired state, specifically in organizations that are experimenting with and implementing machine learning and other AI disciplines.
“AI and ML are now giving us new opportunities to use the big data that we already had, as well as unleash a whole lot of new use cases with new data types,” says Glenn Gruber, senior digital strategist at Anexinet. “We now have much more usable data in the form of pictures, video, and voice [for example]. In the past, we may have tried to minimize the amount of this type of data that we captured because we couldn’t do quite so much with it, yet [it] would incur great costs to store it.”
HOW AI FITS WITH BIG DATA
“The more data we put through the machine learning models, the better they get. It’s a virtuous cycle.”
There’s a reciprocal relationship between big data and AI: The latter depends heavily on the former for success, while also helping organizations unlock the potential in their data stores in ways that were previously cumbersome or impossible.
“Today, we want as much [data] as we can get – not only to drive better insight into business problems we’re trying to solve, but because the more data we put through the machine learning models, the better they get,” Gruber says. “It’s a virtuous cycle in that way.”
HOW AI USES BIG DATA
It’s not as if storage and other issues with big data and analytics have gone bye-bye. Gruber, for one, notes that the pairing of big data and AI creates new needs (or underscores existing ones) around infrastructure, data preparation, and governance, for example. But in some cases, AI and ML technologies might be a key part of how organizations address those operational complexities. (Again, there’s a cyclical relationship here.)
About that “better insight” thing: How is AI – and ML as its most prominent discipline in the business world at the moment – helping IT leaders deliver that, whether now or in the future? Let us count some ways.
6 WAYS AI FUELS BETTER INSIGHTS
1. AI is creating new methods for analyzing data
2. Data analytics is becoming less labor-intensive
3. Humans still matter plenty
4. AI/ML can be used to alleviate common data problems
5. Analytics become more predictive and prescriptive
6. What’s next for AI and big data? We’ve merely scratched the surface
The past few years have seen enormous developments in the speed and data storage capacity of modern computers, which will provide new tools for AI. Today the most striking of these is the quantum computer.
Quantum computing is redefining what is possible with technology—creating unprecedented possibilities to solve humanity’s most complex challenges. Microsoft is committed to turning the impossible into reality—in a responsible way that brings the best solutions to humanity and our planet.
Throughout 2019, we'll be investigating the impact of the pace and extent of technological change on our culture and society, looking at how we can grasp and respond to the seismic shifts these advances will bring about.
Life Rewired will interrogate how artists are responding to a time when technology is simultaneously enhancing our lives and challenging our identity by creating machines with human characteristics. It will explore how scientific breakthroughs can affect us at every stage of our life; from expert and first-person perspectives on IVF to the personal and societal impact of lengthening life expectancy.
The season will demonstrate how artists are finding imaginative ways to communicate the human impact of unprecedented technological shifts, as well as finding creative new uses for artificial intelligence, big data, algorithms and virtual reality.
SOLVING 21ST-CENTURY CHALLENGES University announces unprecedented investment in the Humanities University of Oxford
At a time when significant investments are being made in scientific and technological research and development, this gift recognises the essential role of the Humanities in helping society confront and answer fundamental questions of the 21st century.
One of the most urgent of these questions relates to the impact of Artificial Intelligence, which will challenge the very nature of what it means to be human and transform most aspects of our lives. From our health and wellbeing to the future of work and manufacturing, AI will redefine the way we live, work and interact.
Just as the Humanities helped guide the debate on medical ethics 30 years ago, so they will be even more essential in providing an ethical framework for developing machine intelligence, for responding to the increasing automation of work, and the use of algorithms in all walks of life. The planned Institute for Ethics in AI, which would be housed within the Faculty of Philosophy, allows Oxford to deploy its unique resources and expertise towards these issues.
Sir Tim Berners-Lee, inventor of the World Wide Web, said: ‘It is essential that philosophy and ethics engage with those disciplines developing and using AI. If AI is to benefit humanity we must understand its moral and ethical implications. Oxford with its rich history in humanities and philosophy is ideally placed to do this.’