TOPIC
WHAT IS ARTIFICIAL INTELLIGENCE (AI)?
WHAT DOES IT DO?
TO UNDERSTAND HOW ARTIFICIAL INTELLIGENCE (AI)
WILL AFFECT JUDAISM
WE MUST UNDERSTAND WHAT AI IS
AND WHERE IT MAY BE GOING.
THIS IS WHY THIS BLOCK HAS BEEN DIVIDED INTO THREE PAGES
Each page has LINKS to help you increase your depth of knowledge.
As technology advances, so does what it can do and who it affects.
This site gives you an indication of where we are and where we are going.
This site comments on its relationship/effect on Judaism.
Again, these are early days,
and so what is said here is also ‘early writing/comment’.
There is no agreed definition of AI,
as it is a new field that is growing
and changing at a fantastic speed.
The graphic below gives you an indication of AI development
Funding for Israeli AI startups is heating up. For the 2017 year-to-date, Israeli AI startups have raised $837 million, which is already larger than for 2016, and represents a fifteen-fold increase in the last five years.
Hackernoon September 18 2017
THE ARTIFICIAL INTELLIGENCE MARKET IS ESTIMATED
TO BE WORTH US$191 BILLION BY 2024
The development of many human-like robots, and their increasing adoption in developing regions, has had a considerable impact on the overall artificial intelligence (AI) market. Improved productivity, diversified application areas, increased customer satisfaction, and big-data integration drive the artificial intelligence market. However, a lack of skilled workers and perceived threats to human dignity may restrain market growth. Nevertheless, the impact of these factors is anticipated to be minimal due to the introduction of newer technologies.
MarketWatch Jan 23, 2019
WHAT IS AI? WHAT DOES ARTIFICIAL INTELLIGENCE DO?
BBC 9 August 2019
Artificial intelligence - or AI for short - is technology that enables a computer to think or act in a more 'human' way. It does this by taking in information from its surroundings, and deciding its response based on what it learns or senses.
It affects the way we live, work and have fun in our spare time - sometimes without us even realising.
AI is becoming a bigger part of our lives, as the technology behind it becomes more and more advanced. Machines are improving their ability to 'learn' from mistakes and change how they approach a task the next time they try it.
Some researchers are even trying to teach robots about feelings and emotions.
You might not realise some of the devices and daily activities which rely on AI technology - phones, video games and going shopping, for example.
Some people think that the technology is a really good idea, while others aren't so sure.
Just this month, it was announced that the NHS in England is setting up a special AI laboratory to boost the role of AI within the health service.
Announcing that the government will spend £250 million on this, Health Secretary Matt Hancock said the technology had "enormous power" to improve care, save lives and ensure doctors had more time to spend with patients.
WHAT DOES AI DO?
AI can be used for many different tasks and activities.
Personal electronic devices or accounts (like our phones or social media) use AI to learn more about us and the things that we like. One example of this is entertainment services like Netflix which use the technology to understand what we like to watch and recommend other shows based on what they learn.
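To make the idea concrete, here is a toy sketch (entirely our own illustration, not how Netflix actually works) of recommendation: suggest shows watched by the user whose viewing history overlaps most with yours.

```python
# A toy recommender: suggest shows watched by users with similar taste.
viewing = {
    "alice": {"Drama A", "Comedy B", "Thriller C"},
    "bob":   {"Drama A", "Comedy B", "Documentary D"},
    "carol": {"Sci-fi E"},
}

def recommend(user, history):
    """Recommend shows seen by the most similar other user."""
    mine = history[user]
    others = [(len(mine & shows), name)
              for name, shows in history.items() if name != user]
    _, closest = max(others)                # user with most shows in common
    return sorted(history[closest] - mine)  # their shows we haven't seen yet

print(recommend("alice", viewing))  # ['Documentary D']
```

Real services use far richer signals (ratings, viewing time, millions of users), but the underlying idea - learn from behaviour, then predict - is the same.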
It can make video games more challenging by studying how a player behaves, while home assistants like Alexa and Siri also rely on it.
AI can be used in healthcare, not only for research purposes, but also to take better care of patients through improved diagnosis and monitoring.
It has uses within transport too: driverless cars are an example of AI tech in action, and it is used extensively in the aviation industry (for example, in flight simulators).
Farmers can use AI to monitor crops and conditions, and to make predictions, which will help them to be more efficient.
You only have to look at what some of these AI robots can do to see just how advanced the technology is and imagine many other jobs for which it could be used.
WHERE DID AI COME FROM?
The term 'artificial intelligence' was first used in 1956.
In the 1960s, scientists were teaching computers how to mimic - or copy - human decision-making.
This developed into research around 'machine learning', in which robots were taught to learn for themselves and remember their mistakes, instead of simply copying. Algorithms play a big part in machine learning as they help computers and robots to know what to do.
WHAT IS AN ALGORITHM?
An algorithm is basically a set of rules or instructions which a computer can use to help solve a problem or come to a decision about what to do next.
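As a simple illustration of this (our own example, not from the article), an algorithm for finding the largest number in a list can be written in a few lines of Python:

```python
def largest(numbers):
    """Return the largest value in a non-empty list of numbers."""
    biggest = numbers[0]        # rule 1: assume the first number is the biggest
    for n in numbers[1:]:       # rule 2: look at every remaining number in turn
        if n > biggest:         # rule 3: if we find a bigger one...
            biggest = n         # ...remember it instead
    return biggest              # rule 4: report the biggest number found

print(largest([3, 41, 7, 12]))  # prints 41
```

The computer simply follows the rules in order; the "decision" is nothing more than the outcome of those instructions.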
From here, the research has continued to develop, with scientists now exploring 'machine perception'. This involves giving machines and robots special sensors to help them to see, hear, feel and taste things like humans do - and adjust how they behave as a result of what they sense.
The idea is that the more this technology develops, the more robots will be able to 'understand' and read situations, and determine their response as a result of the information that they pick up.
WHY ARE PEOPLE WORRIED ABOUT AI?
Many people have concerns about AI technology and teaching robots too much.
Famous scientist Stephen Hawking spoke out about it in the past. He said that although the AI we've made so far has been very useful and helpful, he worried that if we teach robots too much, they could become smarter than humans and potentially cause problems.
People have expressed concerns about privacy too. For example, critics think that it could become a problem if AI learns too much about what we like to look at online and encourages us to spend too much time on electronic devices.
Another concern about AI is that if robots and computers become very intelligent, they could learn to do jobs which people would usually have to do, which could leave some people unemployed.
Other people disagree, saying that the technology will never be as advanced as human thoughts and actions, so there is not a danger of robots 'taking over' in the way that some critics have described.
ARTIFICIAL INTELLIGENCE (AI)
DEFINITION - WHAT DOES ARTIFICIAL INTELLIGENCE (AI) MEAN?
Artificial intelligence (AI) is an area of computer science that emphasizes the creation of intelligent machines that work and react like humans. Some of the activities computers with artificial intelligence are designed for include speech recognition, learning, planning and problem solving.
TECHOPEDIA EXPLAINS ARTIFICIAL INTELLIGENCE (AI)
Artificial intelligence is a branch of computer science that aims to create intelligent machines. It has become an essential part of the technology industry.
Research associated with artificial intelligence is highly technical and specialized. The core problems of artificial intelligence include programming computers for certain traits such as:
Knowledge
Reasoning
Problem solving
Perception
Learning
Planning
Ability to manipulate and move objects
Knowledge engineering is a core part of AI research. Machines can often act and react like humans only if they have abundant information relating to the world. Artificial intelligence must have access to objects, categories, properties and relations between all of them to implement knowledge engineering. Initiating common sense, reasoning and problem-solving power in machines is a difficult and tedious task.
Machine learning is also a core part of AI. Learning without any kind of supervision requires an ability to identify patterns in streams of inputs, whereas learning with adequate supervision involves classification and numerical regressions.
Classification determines the category an object belongs to and regression deals with obtaining a set of numerical input or output examples, thereby discovering functions enabling the generation of suitable outputs from respective inputs. Mathematical analysis of machine learning algorithms and their performance is a well-defined branch of theoretical computer science often referred to as computational learning theory.
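The two ideas can be sketched in a few lines of Python (a toy illustration under our own assumptions, not production machine learning): a classifier assigns a category, while a regression fits a numerical function to input/output examples.

```python
# Classification: decide which category an input belongs to.
def classify_temperature(celsius):
    """A toy classifier: label a temperature 'hot' or 'cold'."""
    return "hot" if celsius >= 25 else "cold"

# Regression: discover a function from numerical input/output examples.
def fit_line(xs, ys):
    """Least-squares fit of y = a*x + b to paired examples."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return a, b

a, b = fit_line([1, 2, 3, 4], [2, 4, 6, 8])   # examples drawn from y = 2x
print(classify_temperature(30))               # hot
print(round(a, 2), round(b, 2))               # 2.0 0.0
```

Real systems learn far more complex functions, but both reduce to the same pattern: examples in, a predictive rule out.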
Machine perception deals with the capability to use sensory inputs to deduce the different aspects of the world, while computer vision is the power to analyze visual inputs with a few sub-problems such as facial, object and gesture recognition.
Robotics is also a major field related to AI. Robots require intelligence to handle tasks such as object manipulation and navigation, along with sub-problems of localization, motion planning and mapping.
TURING TEST: WHY IT STILL MATTERS
Could everything we know and do one day be reproduced
by a complicated enough computer program
installed in a complicated enough robot?
originally published in ‘The Conversation’ 2 October 2019,
Harry Collins, Professor of Social Science, Cardiff University
We’re entering the age of artificial intelligence. And as AI programs get better and better at acting like humans, we will increasingly be faced with the question of whether there’s really anything that special about our own intelligence, or if we are just machines of a different kind. Could everything we know and do one day be reproduced by a complicated enough computer program installed in a complicated enough robot?
In 1950, computer pioneer and wartime codebreaker Alan Turing made one of the most influential attempts to tackle this issue. In a landmark paper, he suggested that the vagueness could be taken out of the question of human and machine intelligence with a simple test. This “Turing Test” assesses the ability of a computer to mimic a human, as judged by another human who could not see the machine but could ask it written questions.
In the last few years, several pieces of AI software have been described as having beaten the Turing Test. This has led some to argue that the test is too easy to be a useful judge of artificial intelligence. But I would argue that the Turing Test hasn’t actually been passed at all. In fact, it won’t be passed in the foreseeable future. But if one day a properly designed Turing Test is passed, it will give us cause to worry about our unique status.
The Turing Test is really a test of linguistic fluency. Properly understood, it can reveal the thing that is arguably most distinctive about humans: our different cultures. These give rise to enormous variations in belief and behaviour that aren’t seen among animals or most machines. And the fact we can program this kind of variation into computers is what gives them the potential to mimic human abilities. In judging fluent mimicry, the Turing Test lets us look for the ability of computers to share in human culture by demonstrating their grasp of language in a social context.
Turing based his test on the “imitation game”, a party game in which a man pretended to be a woman and a judge tried to guess who was who by asking the concealed players questions. In the Turing Test, the judge would try to guess who was a computer and who was a real human.
Unsurprisingly, in 1950, Turing didn’t work out the necessary detailed protocol for us to judge today’s AI software. For one thing, he suggested the test could be done in just five minutes. But he also didn’t work out that the judge and the human player had to share a culture and that the computer would have to try to emulate it. That’s led to lots of people claiming that the test has been passed and others claiming that the test is too easy or should include emulation of physical abilities.
FIRST CLAIMED PASS
Some of this was made obvious nearly 50 years ago with the construction of the program known as ELIZA by computer scientist Joseph Weizenbaum. ELIZA was used to simulate a type of psychotherapist known as a Rogerian, or person-centred, therapist. Several patients who interacted with it thought it was real, leading to the earliest claim that the Turing Test had been passed.
But Weizenbaum was clear that ELIZA was, in effect, a joke. The setup didn’t even follow what little protocol Turing did provide because patients didn’t know they were looking out for fraud and there were no simultaneous responses from a real psychotherapist. Also, culture wasn’t part of the test because Rogerian therapists say as little as possible. Any worthwhile Turing Test has to have the judge and the human player acting in as human-like a way as possible.
Given that this is a test of understanding text, computers need to be judged against the abilities of the top few percent of copy-editors. If the questions are right, they can indicate whether the computer has understood the material culture of the other participants.
The right kind of question could be based on the 1975 idea of “Winograd schemas”, pairs of sentences that differ by just one or two words that require a knowledge of the world to understand. A test for AI based on these is known as a Winograd Schema Challenge and was first proposed in 2012 as an improvement on the Turing Test.
Consider the following sentence with two possible endings: “The trophy would not fit in the suitcase because it was too small/large.” If the final word is “small”, then “it” refers to the suitcase. If the final word is “large”, then “it” refers to the trophy.
To understand this, you have to understand the cultural and practical world of trophies and suitcases. In English-speaking society, we use language in such a way that even though a small trophy doesn’t exactly “fit” a large suitcase that’s not what a normal English speaker would mean by “fit” in this context. That’s why in normal English, if the final word is “small”, “it” has to refer to the suitcase.
You also have to understand the physical world of trophies and suitcases as well as if you had actually handled them. So a Turing Test that took this kind of approach would make a test that included an assessment of an AI’s ability to emulate a human’s physical abilities redundant.
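As a rough sketch of how such a test could be administered (our own illustration, not from the article), a Winograd schema can be represented as a sentence template with a pronoun and two candidate referents, and a harness simply checks whether a system picks the right referent for each variant:

```python
# A Winograd schema: one sentence template, two trigger words,
# and the correct referent of the pronoun for each trigger word.
schema = {
    "template": "The trophy would not fit in the suitcase "
                "because it was too {word}.",
    "pronoun": "it",
    "candidates": ["the trophy", "the suitcase"],
    "answers": {"small": "the suitcase", "large": "the trophy"},
}

def score(system, schemas):
    """Fraction of schema variants a system resolves correctly."""
    correct = total = 0
    for s in schemas:
        for word, answer in s["answers"].items():
            sentence = s["template"].format(word=word)
            if system(sentence, s["pronoun"], s["candidates"]) == answer:
                correct += 1
            total += 1
    return correct / total

# A system that always guesses the first candidate gets exactly half
# of this schema's variants right, by construction.
always_first = lambda sentence, pronoun, candidates: candidates[0]
print(score(always_first, [schema]))  # 0.5
```

The paired variants are what make guessing useless: any fixed strategy scores 50%, so only genuine understanding of trophies and suitcases lifts a system above chance.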
A HIGHER BAR
This means a Turing Test based on Winograd schemas is a much better way to assess a computer’s linguistic and cultural fluency than a simple five-minute conversation. It also sets a much higher bar. All the computers in one such competition in 2016 failed miserably, and no competitors were entered from the large AI-based firms because they knew they would fail.
None of the claims that the Turing Test has already been passed mean anything if it is set up as a serious test of humanity’s distinctive abilities to create and understand culture. With a proper protocol, the test is as demanding as it needs to be. Once more, Alan Turing got it right. And, as we stand, there is no obvious route to creating machines that can participate in human culture sufficiently deeply to pass the right kind of linguistic test.
For a detailed review see Wikipedia
How is artificial intelligence – and its prominent discipline, machine learning – helping deliver better business insights from big data?
Let’s examine some ways – and peek at what’s next
for AI and big data analysis
The Enterprisers Project Kevin Casey | October 14, 2019
HOW BIG DATA WORKS WITH AI
Big data isn’t quite the term de rigueur that it was a few years ago, but that doesn’t mean it went anywhere. If anything, big data has just been getting bigger.
That once might have been considered a significant challenge. But now, it’s increasingly viewed as a desired state, specifically in organizations that are experimenting with and implementing machine learning and other AI disciplines.
“AI and ML are now giving us new opportunities to use the big data that we already had, as well as unleash a whole lot of new use cases with new data types,” says Glenn Gruber, senior digital strategist at Anexinet. “We now have much more usable data in the form of pictures, video, and voice [for example]. In the past, we may have tried to minimize the amount of this type of data that we captured because we couldn’t do quite so much with it, yet [it] would incur great costs to store it.”
HOW AI FITS WITH BIG DATA
There’s a reciprocal relationship between big data and AI: The latter depends heavily on the former for success, while also helping organizations unlock the potential in their data stores in ways that were previously cumbersome or impossible.
“Today, we want as much [data] as we can get – not only to drive better insight into business problems we’re trying to solve, but because the more data we put through the machine learning models, the better they get,” Gruber says. “It’s a virtuous cycle in that way.”
HOW AI USES BIG DATA
It’s not as if storage and other issues with big data and analytics have gone bye-bye. Gruber, for one, notes that the pairing of big data and AI creates new needs (or underscores existing ones) around infrastructure, data preparation, and governance, for example. But in some cases, AI and ML technologies might be a key part of how organizations address those operational complexities. (Again, there’s a cyclical relationship here.)
About that “better insight” thing: How is AI – and ML as its most prominent discipline in the business world at the moment – helping IT leaders deliver that, whether now or in the future? Let us count some ways.
6 WAYS AI FUELS BETTER INSIGHTS
1. AI is creating new methods for analyzing data
2. Data analytics is becoming less labor-intensive
3. Humans still matter plenty
4. AI/ML can be used to alleviate common data problems
5. Analytics become more predictive and prescriptive
6. What’s next for AI and big data? We’ve merely scratched the surface
To read detail go to The Enterprisers Project
The past few years have seen enormous developments in the speed and data-storage capacity of modern computers, which will provide new tools for AI. Today this is taking the form of the quantum computer.
Quantum computing is redefining what is possible with technology—creating unprecedented possibilities to solve humanity’s most complex challenges. Microsoft is committed to turning the impossible into reality—in a responsible way that brings the best solutions to humanity and our planet.
Superfast fifth-generation (5G) mobile internet could be launched as early as next year in some countries, promising download speeds 10 to 20 times faster than we have now. But what difference will it really make to our lives? Will we need new phones? And will it solve the "notspot" issue for people in remote areas?
WHAT IS 5G EXACTLY?
It's the next (fifth) generation of mobile internet connectivity, promising much faster data download and upload speeds, wider coverage and more stable connections.
It's all about making better use of the radio spectrum and enabling far more devices to access the mobile internet at the same time.
WHAT WILL IT ENABLE US TO DO?
"Whatever we do now with our smartphones we'll be able to do faster and better," says Ian Fogg from OpenSignal, a mobile data analytics company.
"Think of smart glasses featuring augmented reality, mobile virtual reality, much higher quality video, the internet of things making cities smarter.
"But what's really exciting is all the new services that will be built that we can't foresee."
Driverless cars will be able to "talk" to each other and traffic management systems
Imagine swarms of drones co-operating to carry out search and rescue missions, fire assessments and traffic monitoring, all communicating wirelessly with each other and ground base stations over 5G networks.
Similarly, many think 5G will be crucial for autonomous vehicles to communicate with each other and read live map and traffic data.
More prosaically, mobile gamers should notice less delay - or latency - when pressing a button on a controller and seeing the effect on screen. Mobile videos should be near instantaneous and glitch-free. Video calls should become clearer and less jerky. Wearable fitness devices could monitor your health in real time, alerting doctors as soon as any emergency arises.
HOW DOES IT WORK?
There are a number of new technologies likely to be applied - but standards haven't been hammered out yet for all 5G protocols. Higher-frequency bands - 3.5GHz (gigahertz) to 26GHz and beyond - have a lot of capacity but their shorter wavelengths mean their range is lower - they're more easily blocked by physical objects.
So we may see clusters of smaller phone masts closer to the ground transmitting so-called "millimetre waves" between much higher numbers of transmitters and receivers. This will enable higher density of usage. But it's expensive and telecoms companies are not wholly committed yet.
IS IT VERY DIFFERENT TO 4G?
Yes, it's a brand new radio technology, but you might not notice vastly higher speeds at first because 5G is likely to be used by network operators initially as a way to boost capacity on existing 4G (LTE - Long-Term Evolution) networks, to ensure a more consistent service for customers. The speed you get will depend on which spectrum band the operator runs the 5G technology on and how much your carrier has invested in new masts and transmitters.
SO HOW FAST COULD IT BE?
The fastest current 4G mobile networks offer about 45Mbps (megabits per second) on average, although the industry is still hopeful of achieving 1Gbps (gigabit per second = 1,000Mbps). Chipmaker Qualcomm reckons 5G could achieve browsing and download speeds about 10 to 20 times faster in real-world (as opposed to laboratory) conditions.
Imagine being able to download a high-definition film in a minute or so.
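The arithmetic behind that claim can be checked in a few lines of Python (our own illustration; the 5 GB film size is an assumption for a rough high-definition file):

```python
def download_minutes(file_gigabytes, speed_mbps):
    """Time in minutes to download a file at a given link speed.

    Uses decimal units: 1 gigabyte = 8,000 megabits.
    """
    megabits = file_gigabytes * 8000
    return megabits / speed_mbps / 60

film_gb = 5  # rough size of a high-definition film (assumed)
print(round(download_minutes(film_gb, 45), 1))    # average 4G (45 Mbps): ~14.8 min
print(round(download_minutes(film_gb, 1000), 1))  # 1 Gbps 5G: ~0.7 min
```

So at the article's average 4G speed the same film takes around a quarter of an hour, while a gigabit 5G link brings it down to well under a minute.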
This is for 5G networks built alongside existing 4G LTE networks. Standalone 5G networks, on the other hand, operating within very high frequencies (30GHz say) could easily achieve gigabit-plus browsing speeds as standard. But these aren't likely to come in until a few years later.
WHY DO WE NEED IT?
The world is going mobile and we're consuming more data every year, particularly as the popularity of video and music streaming increases. Existing spectrum bands are becoming congested, leading to breakdowns in service, particularly when lots of people in the same area are trying to access online mobile services at the same time. 5G is much better at handling thousands of devices simultaneously, from mobiles to equipment sensors, video cameras to smart street lights.
WHEN IS IT COMING?
Most countries are unlikely to launch 5G services before 2020, but Qatar's Ooredoo says it has already launched a commercial service, while South Korea is aiming to launch next year, with its three largest network operators agreeing to kick off at the same time. China is also racing to launch services in 2019.
China is experimenting with ultra high definition live drone broadcasts using 5G
Meanwhile, regulators around the world have been busy auctioning off spectrum to telecoms companies, who've been experimenting with mobile phone makers on new services.
WILL I NEED A NEW PHONE?
Yes, I'm afraid so. But when 4G was introduced in 2009/10, compatible smartphones came onto the market before the infrastructure had been rolled out fully, leading to some frustration amongst consumers who felt they were paying more in subscriptions for a patchy service.
SMARTPHONES WILL NEED NEW COMPUTER CHIPS TO HANDLE 5G
This time, says Ian Fogg, phone makers are unlikely to make the same mistake, launching 5G handsets only when the new networks are ready, probably towards the end of 2019. These next generation phones will be able to switch seamlessly between 4G and 5G networks for a more stable service.
WILL IT MEAN THE END OF FIXED LINE SERVICES?
In a word, no. Telecoms companies have invested too much in fibre optic and copper wire fixed line broadband to give those up in a hurry. Domestic and office broadband services will be primarily fixed line for many years to come, although so-called fixed wireless access will be made available in tandem.
However good wireless connectivity becomes, many prefer the stability and certainty of physical wires.
Think of 5G mobile as a complementary service for when we're out and about, interacting with the world around us. It will also facilitate the much-heralded "internet of things".
WILL IT WORK IN RURAL AREAS?
Lack of signal and low data speeds in rural areas are a common complaint in the UK and many other countries. But 5G won't necessarily address this issue as it will operate on high-frequency bands - to start with at least - that have a lot of capacity but cover shorter distances. 5G will primarily be an urban service for densely populated areas.
PEOPLE IN RURAL AREAS ARE UNLIKELY TO BENEFIT FROM 5G IN THE SHORT TERM
Lower-frequency bands (600-800Mhz typically) are better over longer distances, so network operators will concentrate on improving their 4G LTE coverage in parallel with 5G roll-out.
But commercial reality means that for some people in very remote areas, connectivity will still be patchy at best without government subsidy making it worthwhile for network operators to go to these places.
INTERNET OF EVERYTHING
OpenLearn The Open University
The Internet of Everything free course is brought to you by The Open University. This course was originally developed by Cisco Systems Ltd and adapted for OpenLearn by The Open University. The collaboration of The Open University and Cisco Systems to develop and deliver this course as part of OpenLearn’s portfolio will provide and extend free learning in this important and current area of study.
The internet of everything, and all of the connected things on the internet, are here to stay. There is considerable hype in the media – good and bad – that makes it difficult to work out if this connectedness is a good thing or a bad thing. Or, should we be indifferent about the internet of everything? What is clear from media coverage is that the internet of everything has already been associated with global security scares, while for many it is the cool technological must-have.
So, what it is and why should I care?
You may not yet have a smart watch, an Amazon Echo or a refrigerator that has become hackable. However, as we have seen with the explosion of smartphones, video technologies and Pokémon Go, we are witnessing another technology that is set to become a part of everyday life for everyone.
Welcome to the free course Internet of everything. The internet of everything (IoE) is the networked connection of people, process, data and things. As more people, data and things come online, we need to develop skills and technological processes to harness the vast amounts of information being generated by all these connected people and things.
The goal of this course is to introduce you to fundamental concepts and technologies that enable the IoE and help you understand its benefits as well as potential risks. The course presents introductory material and is intended to be easily completed by anyone with a basic appreciation of computer technologies. By completing this course, you will not become an IoE expert, but you will become an informed individual.
As part of this course, and to check your understanding of the concepts explained, there is a brief quiz at the end of each session. There is also an end-of-course assessment quiz. By enrolling you can track your progress and gain a free statement of participation for completing the whole course.
A SEASON EXPLORING WHAT IT MEANS TO BE HUMAN WHEN TECHNOLOGY IS CHANGING EVERYTHING
Barbican Centre, London
Throughout 2019, we'll be investigating the impact of the pace and extent of technological change on our culture and society, looking at how we can grasp and respond to the seismic shifts these advances will bring about.
Life Rewired will interrogate how artists are responding to a time when technology is simultaneously enhancing our lives and challenging our identity by creating machines with human characteristics. It will explore how scientific breakthroughs can affect us at every stage of our life; from expert and first-person perspectives on IVF to the personal and societal impact of lengthening life expectancy.
The season will demonstrate how artists are finding imaginative ways to communicate the human impact of unprecedented technological shifts, as well as finding creative new uses for artificial intelligence, big data, algorithms and virtual reality.
See also What is AI, Barbican Centre, London, Arts and Future, Google
SOLVING 21ST-CENTURY CHALLENGES
University announces unprecedented investment in the Humanities
University of Oxford
At a time when significant investments are being made in scientific and technological research and development, this gift recognises the essential role of the Humanities in helping society confront and answer fundamental questions of the 21st century.
One of the most urgent of these questions relates to the impact of Artificial Intelligence, which will challenge the very nature of what it means to be human and transform most aspects of our lives. From our health and wellbeing to the future of work and manufacturing, AI will redefine the way we live, work and interact.
Just as the Humanities helped guide the debate on medical ethics 30 years ago, so they will be even more essential in providing an ethical framework for developing machine intelligence, for responding to the increasing automation of work, and the use of algorithms in all walks of life. The planned Institute for Ethics in AI, which would be housed within the Faculty of Philosophy, allows Oxford to deploy its unique resources and expertise towards these issues.
Sir Tim Berners-Lee, inventor of the World Wide Web, said: ‘It is essential that philosophy and ethics engages with those disciplines developing and using AI. If AI is to benefit humanity we must understand its moral and ethical implications. Oxford with its rich history in humanities and philosophy is ideally placed to do this.’
What is AI? / Basic Questions Stanford University
Artificial Intelligence: What it is and why it matters SAS
What is AI? Everything you need to know techradarpro
The Cyber Security Battlefield AI Technology Offers Both Opportunities and Threats
Robert Fay/Wallace Trenholm Centre for International Governance Innovation
An Executive’s Guide to Real-World AI Harvard Business Review
Impacts of Artificial Intelligence in everyday life Geeks for Geeks
For Jewish Links go to AI and Judaism Links