
Artificial Intelligence (AI): Risks and Opportunities

The #HereForBusiness team summarise the major debates about artificial intelligence (AI) and work


Calls for stricter regulation of artificial intelligence (AI) technologies have grown louder in the past few months, amid concerns that they could destabilise the economy and undermine workers’ rights.

So, what is AI? Why are so many people and organisations worried about its development and application? How might such fears be quelled, if at all? And what can small and medium-sized enterprises (SMEs) do to benefit from AI, while avoiding its potential pitfalls?

This article offers some tentative answers.

Artificial intelligences (plural)

Put very simply, an artificial intelligence is a computer system that can perform tasks which commonly require human intelligence. AI relies on algorithms to categorise and analyse large data sets, making predictions based on the emergent patterns. AI is different from traditional computer programmes because its software and hardware allow it to learn from data and thus improve its core functions over time.[i]
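The idea of 'learning from data' can be illustrated with a toy sketch (a hypothetical example for illustration only, not any real AI product): a model with a single adjustable weight that improves its predictions by repeatedly reducing its error on example data.

```python
# Toy illustration of "learning from data" (a hypothetical sketch, not a
# real AI system): the model starts out knowing nothing and gradually
# adjusts a single weight so its predictions better match the examples.

def train(examples, steps=200, learning_rate=0.01):
    """Fit y = w * x by repeatedly shrinking the prediction error."""
    w = 0.0
    for _ in range(steps):
        for x, y in examples:
            error = w * x - y               # how wrong the current guess is
            w -= learning_rate * error * x  # nudge the weight to reduce error
    return w

# The examples follow the pattern y = 2x; the model infers this from data alone.
data = [(1, 2), (2, 4), (3, 6)]
learned = train(data)
print(round(learned, 2))  # the learned weight approaches 2.0
```

Real AI systems work with millions of weights and far richer data, but the principle is the same: performance improves with exposure to examples rather than through hand-written rules.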

The terms ‘machine learning’, ‘deep learning’, and ‘neural networks’ are often used as synonyms for AI, even though they refer to related but distinct technologies.[ii] Likewise, when many people think about AI, they imagine something akin to the sentient electronic minds of mid-century science fiction. This, too, is a misconception. So-called ‘strong AI’ is a distinct – and, at present, speculative – concept consisting of two related archetypes: artificial general intelligence (AGI) and artificial superintelligence (ASI).

AGI describes a theoretical machine that could equal human intelligence and “would have a self-aware consciousness that has the ability to solve problems, learn, and plan for the future”. ASI refers to a hypothetical supercomputer that could surpass human intelligence and capabilities.[iii]

AI in action

Well-established examples of AI include technologies such as facial recognition (e.g., a way to unlock one’s phone), optical character recognition (e.g., a way to search digitised print text), and search engines (e.g., a way to find information). Perhaps less well known is the fact that AI has long been used to provide corporations with detailed information about staff and customers, as well as to devise sophisticated performance targets and advertising campaigns.

However, it appears the rapid rise of large language model chatbots such as OpenAI’s ChatGPT and image generators such as Midjourney has done more than anything to thrust AI into the spotlight.

The AI debate

Although often side-tracked by conjecture about AGI and ASI, the mainstream AI debate largely revolves around perceived cost-to-benefit ratios and oversight (or lack thereof).

Proponents argue that AI is comparable to, and should be treated in the same way as, contemporary technical advancements such as the internet and social media. They contend that by automating various tasks – completing them faster and more accurately with fewer resources – AI greatly increases productivity and efficiency. Such improvements, it is claimed, promise to revolutionise every aspect of society, from architecture and banking to healthcare and journalism – with smaller businesses able to benefit from enhanced economies of scale.

AI’s detractors dispute this characterisation to varying degrees, pointing to its capacity for disruption and misuse. They have raised concerns about privacy safeguards, code and data transparency, and the impact of AI systems on the labour market, particularly in the knowledge economy. They have also highlighted the possibility that the technology will be used to facilitate plagiarism in education, increase the effectiveness of online phishing scams, and challenge the authenticity of creative content.

Still, both sides generally agree on the need for regulation. 

AI in the workplace and domestic regulation

In a bid to highlight the risks of “management by algorithm”, the Trades Union Congress (TUC) hosted a conference on Tuesday to present its findings on the uses of AI in the workplace.

Mary Towers, an employment lawyer who advises the TUC on AI, explained: “Making work more rewarding, making it more satisfying, and crucially making it safer and fairer: these are all the possibilities that AI offers us.

“But what we’re saying is, we’re at a really important juncture, where the technology is developing so rapidly, and what we have to ask ourselves is, what direction do we want that to take, and how can we ensure that everyone’s voice is heard?”

One response to these questions was the UK government’s white paper on AI, published in March, which suggested that existing regulators could ensure that the technology is fair, transparent, and explicable. Nevertheless, critics like Adam Cantwell-Corn from Connected by Data dismissed this approach as a series of “intentions with no firepower behind it”.

In addition, Mr Cantwell-Corn criticised the separate Data Protection and Digital Information Bill: “Most people’s experience of [General Data Protection Regulation] GDPR is annoying pop-ups, but if we understand it in the context of increasing datafication and artificial intelligence in the workplace in particular, it’s got really important provisions that the bill is weakening.”

For its part, the TUC said that stronger legislation was needed to protect workers from the improper deployment of AI. This would encompass the right for employees to know how the technology is being used and a statutory duty for employers to consult them before introducing new artificial intelligence systems.[iv]

AI, economic transformation, and international treaties

Meanwhile, on the other side of the aisle, and across the Atlantic, Google’s chief executive revealed that the prospect of unregulated AI keeps him awake at night.

On Sunday, Sundar Pichai – whose company recently launched the Bard chatbot to compete with ChatGPT – told CBS’s Scott Pelley that AI can be “very harmful if deployed wrongly and we don’t have all the answers there yet – and the technology is moving fast.”

After outlining the ways in which artificial intelligence might “impact every product across every company”, help spread disinformation, and change the nature of “knowledge [work]”, Mr Pichai went on to say that the technology requires an international treaty framework similar to the one governing nuclear weapons.

He added: “Compared to any other technology, I’ve seen more people worried about it earlier in its life cycle. So, I feel optimistic.”[v]

Many of Mr Pichai’s peers do not share his confidence.

Indeed, the Google chief’s intervention came just weeks after around 1,800 developers and engineers, including Elon Musk and Steve Wozniak, signed an open letter demanding a six-month pause on research into systems “more powerful” than the current iteration of ChatGPT. While primarily concerned with scenarios that give birth to AGI or ASI, the epistle nonetheless echoed the anxieties of, and recommendations made by, others:

“AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts.”[vi]

AI and SMEs

High level discussions about artificial intelligence are often fraught, complicated, and lacking in different perspectives – not to mention prone to exaggeration. As a result, it can be difficult for SMEs to grasp the ways in which AI might impact their bottom line, workforce, and long-term strategy.

AI may ultimately prove transformative for some and irrelevant or counterproductive to others. How, then, can decision-makers determine which category their firm falls into? Well, in the absence of comprehensive laws, clear guidelines, and years of experience, they may have little choice but to place their trust in nature’s hitherto most powerful computer – the human brain.

The truth is, running a business is hard enough without having to contend with Copernican revolutions and entirely novel ways of working. Even so, in a global, interconnected, tech-driven economy, it is more important than ever for business leaders to make sure they understand emerging opportunities and risks so they can protect customers, workers, and commercial interests.

Find out more about AI

Fortunately, help is on hand. The Growth Company is hosting an event in Manchester on 9th May to explain ChatGPT and its possible uses. The workshop will be run by digital marketing strategist and futurist Tanya Hemphill, and will cover:

  • Introduction, H&S, and the aims & objectives of the session
  • Overview of ChatGPT and the pros and cons of using AI
  • Practical exercise – using your own computers to start generating content and searches with AI
  • How to embed ChatGPT into your business
  • Q&A

Find out more: Chat GPT | GC Business Growth Hub

In the meantime, if you run a small or medium-sized business in Greater Manchester and want to understand how AI might impact your operations and revenue, GC Business Growth Hub’s #HereForBusiness package provides practical guidance and expert advice on a range of topics to help you manage the increasing cost of doing business.

If you have any questions, get in touch now.

----------

[i] IBM. ‘What is artificial intelligence (AI)?’. International Business Machines Corporation (IBM), 2023, What is Artificial Intelligence (AI) ? | IBM

[ii] Eda Kavlakoglu. ‘AI vs. Machine Learning vs. Deep Learning vs. Neural Networks: What’s the Difference?’. International Business Machines Corporation (IBM), 27th May, 2020, AI vs. Machine Learning vs. Deep Learning vs. Neural Networks: What’s the Difference? | IBM

[iii] Op. cit. IBM. ‘What is artificial intelligence (AI)?’

[iv] Heather Stewart. ‘Calls for stricter UK oversight of workplace AI amid fears for staff rights’. The Guardian, 16th April, 2023, Calls for stricter UK oversight of workplace AI amid fears for staff rights | Employment law | The Guardian

[v] Dan Milmo. ‘Google chief warns AI could be harmful if deployed wrongly’. The Guardian, 17th April, 2023, Google chief warns AI could be harmful if deployed wrongly | Artificial intelligence (AI) | The Guardian

[vi] Chris Vallance. ‘Elon Musk among experts urging a halt to AI training’. BBC News, 30th March, 2023, Elon Musk among experts urging a halt to AI training - BBC News

--

#HereForBusiness is funded by the UK government through the UK Shared Prosperity Fund.
