The realistic scope of AI
Everyone’s talking about AI and looking for ways to leverage it in their applications. But is AI really the golden egg that it’s made out to be? We’ve taken a look at some of the emerging opportunities and current limitations surrounding AI.
Self-driving cars still need more intelligence
One of the sectors most excited about the possibilities of AI was automotive. The ability of a self-driving car to read, interpret and react to a dynamic environment suggested that fully autonomous vehicles were within reach.
Carnegie Mellon University integrated neural networks into image processing and steering controls in the early 1990s, introducing the concept of self-driving cars as we know them. However, in 2004, DARPA held a competition challenging vehicles to self-navigate a 150-mile stretch of desert roadway. No car completed the challenge, suggesting that true autonomy was still idealistic. Even today, Tesla has been asked by the German government to stop referring to its Full Self-Driving package as autonomous.
While we are beginning to see the emergence of automated shuttle taxis, such as the one recently announced by Amazon’s Zoox, fully self-driving private cars are still some way off: systems must become even more intelligent and reliable to offer the level of safety required to take over our streets.
AI can add value to autonomous vehicles
There is hope, however, driven by developments in sectors such as mining and agriculture. John Deere is well known for its work towards automating farming and is currently developing autonomous tractors, drones and sprayers. It has deployed high-resolution cameras and AI in its See & Spray technology to distinguish weeds from crops. Combined with AutoTrac satellite guidance and the 8RX autonomous tractor, this proves that AI is adding value to autonomous vehicles – developments which will inevitably trickle down to commercial trucks, buses and cars. The See & Spray system relies on 36 cameras and NVIDIA GPUs to run its deep learning models, and initial reports suggest that it cuts herbicide use by 66%[1].
Why AI couldn’t do more in the war against COVID-19
AI was engaged on many of the battle fronts, but the war still rumbles on. Early in the pandemic, we wrote about the role of imaging in understanding the virus and creating a vaccine. Advanced AI was being put to use too – for example, in predictive modelling to understand how and where the virus would spread, and in machine learning algorithms to help doctors build treatment plans. But a report[2] published by The Alan Turing Institute determined that AI had little influence on the overall course of the pandemic, and the reasons why apply equally to other sectors where AI implementation has been slow.
Firstly, successful AI platforms rely on huge amounts of data being readily available. With COVID-19, scientists were dealing with a lot of unknowns and reliable data really only became available once the pandemic was already widespread. This was particularly true in less-developed economies.
The report also highlighted the role of communication between researchers, policy makers and the public. It concluded that better communication would have reduced the likelihood of data and research being misused or misinterpreted. This could also be applied to industrial projects, where strong communication between system developers, integrators, operators and senior management can make the difference between a successful AI deployment and an expensive oversight.
AI is still valuable for medical developments
That’s not to say that AI can’t be an extremely valuable tool in life science – we know that it is. We wrote about AI in medical diagnostics back in 2018, and more recently, AI and imaging techniques for robot assisted surgery. These cover just some of the numerous vision and AI tools which have played an immense role in understanding disease and improving surgical outcomes across a plethora of fields.
Where algorithms can be trained on the huge datasets available in medical research, AI has enormous scope to speed up and enhance detection, treatment and recovery.
Does AI spell the end of writing?
Chatbots are in all the headlines at the moment. OpenAI’s ChatGPT has launched successfully, creating unprecedented challenges for teachers while giving content curators almost-instant editorial copy. With 175 billion parameters, it’s one of the largest language-processing AI models currently available.
Depending on how we feel about AI, we may have been delighted or alarmed to hear of the less successful introduction of Google’s Bard. This language tool unfortunately published incorrect information in its promotional video, proving there’s still a need for sense-checking machine-generated text – a task probably still best done by a human!
Microsoft is also in the race to create the perfect chatbot, although recent reports suggest that its Bing AI (originally called Sydney) can be provoked into being rude, untruthful and even refusing to give answers. In fact, New York Times journalist Kevin Roose described the bot as “a moody, manic-depressive teenager who has been trapped, against its will, inside a second-rate search engine.”[3]
AI language models such as these chatbots do, of course, have promise. For example, ChatGPT can be used to find errors in code, create more natural translations from one language to another, and even write songs.
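As a flavour of the kind of slip a chatbot can catch, consider this tiny hypothetical example (the snippet and fix are ours, not ChatGPT output): a classic off-by-one bug that silently drops the last element.

```python
# A classic off-by-one bug of the kind a chatbot can spot when asked
# "why does this sum come out wrong?". The buggy loop stops one index
# too early, so the last element is never added.

def total_buggy(values):
    result = 0
    for i in range(len(values) - 1):  # bug: stops one element early
        result += values[i]
    return result

def total_fixed(values):
    result = 0
    for i in range(len(values)):      # fix: visit every index
        result += values[i]
    return result

print(total_buggy([1, 2, 3, 4]))  # 6  -- the 4 is dropped
print(total_fixed([1, 2, 3, 4]))  # 10
```

Errors like this are exactly where a language model shines: the pattern is common, well represented in training data, and easy to describe. Subtler, domain-specific bugs are far less reliably caught.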
But there are limitations (ChatGPT itself admits to having limited knowledge of the world and events after 2021). Inaccurate or inappropriate answers will inevitably creep in, and there’s still the argument that AI engines are inherently biased. The advice around AI-driven chatbots is very much to use and enjoy them, but don’t rely on them to be accurate, truthful or well-balanced!
Imaging and AI complement each other
Because AI depends so heavily on data, and image data is one of the most prevalent and trustworthy sources, imaging and AI work together, each enabling the other to do more. Where a system relies on classification rather than prediction, for example, image-based machine learning can increase the scope of a project immensely. Industrial inspection, surveillance, intelligent transport systems and medical imaging will all benefit greatly from advancements in AI. It’s an exciting arena – as long as we remember it has limitations too. We couldn’t resist having a dabble with ChatGPT, and here’s what it thinks is the realistic scope of AI:
The realistic scope of AI is vast and ever-expanding, but it is important to recognize that AI is still in its early stages of development and has limitations. AI has the potential to improve and automate many tasks that are currently performed by humans, such as data analysis, language translation, and image recognition. It can also enhance decision-making by providing insights and predictions based on large sets of data. However, AI is not capable of replicating the full range of human intelligence and abilities, such as empathy, creativity, and intuition.
Seems like a pretty good summary to us!
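To make the earlier point about image-based classification concrete, here is a deliberately toy sketch: a nearest-centroid classifier that labels tiny flattened grayscale "images" as bright or dark. The data and labels are invented for illustration; real inspection systems use trained deep networks, but the classify-rather-than-predict idea is the same.

```python
# Toy sketch of image-based classification: label a tiny grayscale
# "image" (a flattened list of pixel intensities, 0-255) by finding
# the class whose average training image it sits closest to.

def centroid(images):
    """Mean pixel vector of a list of equal-length images."""
    n = len(images)
    return [sum(px) / n for px in zip(*images)]

def classify(image, centroids):
    """Return the label whose centroid is nearest (squared Euclidean)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist(image, centroids[label]))

# Labelled training data: four 2x2 images per class, flattened.
training = {
    "dark":   [[10, 20, 15, 5], [0, 30, 25, 10],
               [5, 5, 40, 20], [15, 10, 10, 30]],
    "bright": [[200, 220, 240, 210], [250, 230, 225, 240],
               [235, 245, 200, 220], [210, 255, 230, 245]],
}
centroids = {label: centroid(imgs) for label, imgs in training.items()}

print(classify([230, 240, 215, 250], centroids))  # bright
print(classify([12, 8, 25, 18], centroids))       # dark
```

The appeal of classification tasks like this is that the ground truth is knowable and the training data can be curated – exactly the conditions, as the COVID-19 example showed, under which AI performs well.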
[1] Machine vision drives robot farming’s cutting edge | Imaging and Machine Vision Europe (imveurope.com)
[2] Data science and AI in the age of COVID-19
[3] Microsoft Bing AI Chatbot “Sydney” Tells Test User To Leave His Wife, and Be With It Instead | Tech360.tv