Generative AI has taken the world by storm with an innovative platform that has caused more hype and buzz than any other: ChatGPT. However, with over a million people testing it in its first week of launch, many are left wondering: what makes ChatGPT so special?
In episode #5 of our DRUID Talks podcast, we talked with Tom Allen, founder of The AI Journal, about the origins and workings of ChatGPT, how it compares to other conversational AI systems, its applicability to the enterprise world, and the key challenges and potential risks and benefits of such systems to the fabric of our society. Because with great technology comes great responsibility, and unfortunately, "The real problem of humanity is the following: We have Paleolithic emotions, medieval institutions and godlike technology" (Dr. E.O. Wilson, sociobiologist). In other words, although we might be willing to entrust AI with decision-making, that isn't yet a safe option: AI still bears the flaws and biases of humankind. So, to mitigate its possible threats, it is essential to develop and adopt responsible AI practices, invest in research to address AI-related challenges, and establish policies and regulations that guide the development and deployment of AI technologies in a manner that prioritizes human welfare and ethical considerations.
Large language models such as ChatGPT and DALL-E 2 have made creativity and knowledge work accessible to everyone, even novices. These tools allow users to create illustrations, generate marketing pitches, and write computer code with a level of quality typically attributed to human experts. It's a fantastic opportunity for humans to open up to the possibilities and opportunities these advancements bring.
Old Jobs Will Be Pushed Away, and New Jobs Will Emerge
Many people fear that AI will "steal their job," and not without reason. As automation, AI and digital workers play an increasing role in people's everyday lives, their potential impact has become the subject of much research and debate. For example, the McKinsey Global Institute postulated that advances in AI and robotics would have a far-reaching impact on the everyday lives of workers. The report noted, "In about 60% of occupations, at least one-third of the constituent activities could be automated, implying substantial workplace transformations and changes for all workers." Moreover, the new mantra of the techno-enthusiasts in Silicon Valley is: "You won't be replaced by an AI, but you might be replaced by someone who knows how to use AI".
Nevertheless, according to Accenture, companies will use these models to reinvent the way work is done. Every role in every enterprise has the potential to be reinvented as humans working with AI copilots become the norm, dramatically amplifying what people can achieve. In any given job, some tasks will be automated, some will be assisted, and some will be unaffected by technology. There will also be many new tasks for humans, such as ensuring the proper use of new AI-powered systems, or a new, simpler kind of human creativity in the form of text prompts designed to get the results the user seeks. Through iterative prompting, the AI system generates successive rounds of outputs until the human writing the prompts is satisfied with the results. AI, automation, and intelligent virtual assistants are actually better at augmenting current employees' roles than at replacing them. Helping people understand this, and showing them how AI can make their jobs easier, is an essential step in generating the excitement around AI we want and need to see.
Although technology may impact jobs that involve generating coherent text, it can also lead to new and unimagined job opportunities. Large language models have limitations such as requiring human cleverness to craft prompts, generating nonsensical output, and lacking abstract understanding of truth, falsehood, and common sense. Despite these limitations, there will still be a need for human judgment in content creation, and many types of specialized language will remain out of reach for machines. Introducing large language models will disrupt the job market, but it also presents valuable opportunities for those who can adapt and integrate these tools.
Potential Loss of Skills, Inaccuracies, Biases, and Plagiarism – But Creativity for All?
These AI tools do have downsides, however, including the potential loss of important human skills such as writing. Educational institutions need to establish policies on the appropriate use of large language models, and there are open questions about intellectual property protections. Tools like GitHub Copilot and ChatGPT have the potential to raise the bar on creativity and help users generate coherent code and text, but inaccuracies, biases, and plagiarism remain real concerns. Users must stay critical of the output these tools generate, because inaccuracies and biases can pollute the internet with false information.
Additionally, language models can learn from biases in data, and plagiarism detection tools need to catch up when it comes to detecting paraphrasing. Potential solutions to these limitations include fact-checking generated text against knowledge bases, detecting and removing biases, and using more sophisticated plagiarism detection tools. However, as these tools are in their infancy, there is room for improvement to address these concerns and unlock their full potential.
Towards the Great LLM-ization?
Reskilling and upskilling have become essential for employees to remain competitive and adapt to new roles and industries. By 2025, half of all employees will need to be reskilled due to the growing use of technology, according to the World Economic Forum's Future of Jobs Report.
It’s critical to prepare ourselves for the Great LLM-ization as AI becomes the interface to the Internet – and to physical businesses. LLMs are going to shake things up in the world of call centre services and back-office business processes, rendering their traditional business models obsolete as technology and behaviour continue to evolve.
Nevertheless, Large Language Models are quick and easy to learn. Although many jobs will "die", a lot of others will emerge. For example, prompt engineers are the fastest-emerging class of digitally fluent business/tech designers. Learning how to "prompt" will become a standard skill that all of us are expected to have. Employees will also need reskilling and constant self-improvement in:
- Iterating - asking the same question in different ways, exploring multiple responses to the same prompt, and then comparing the results while staying alert to bias.
- Evaluating responses - asking questions in different ways, surfacing contradictions, and asking the model to self-assess are key to making sure the information is correct.
- Eradicating bias by constantly expanding the understanding of bias in LLMs. ChatGPT, for example, is biased based on the underlying approach used to build the LLM and the data used to train it.
- Generative Thinking - the big challenge as we approach the Great LLM-ization is to constantly seek new ideas beyond the constraints of current LLMs. For example, asking ChatGPT to summarize, synthesize and find the contradictions in the result it creates is only a starting point.
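The iterating and evaluating practices above can be sketched in a few lines of code. In this sketch, `generate` is a hypothetical stand-in for whichever LLM API a team uses; the loop simply collects answers to rephrased versions of the same question and flags disagreements for human review. It is a minimal illustration of the workflow, not a production pipeline.

```python
from typing import Callable, List


def iterate_prompt(
    generate: Callable[[str], str],
    prompt: str,
    rephrasings: List[str],
) -> List[str]:
    """Ask the same question in different ways and collect every response,
    so the results can be compared side by side."""
    responses = []
    for variant in [prompt] + rephrasings:
        responses.append(generate(variant))
    return responses


def needs_human_review(responses: List[str]) -> bool:
    """Flag the batch for evaluation if answers to equivalent
    questions disagree - a simple proxy for contradiction-spotting."""
    return len(set(responses)) > 1
```

For example, if three rephrasings of a factual question come back with two different answers, `needs_human_review` returns `True` and a person steps in; real evaluation would of course compare meaning rather than exact strings.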
Overcoming Job-Related Risks
Despite these concerns, the public is ready to embrace these new AI tools, and there is untapped potential for creativity and knowledge work. And although generative AI promises to make 2023 one of the most exciting years yet for AI, business leaders should proceed with eyes wide open because it presents many ethical and practical challenges.
Just as the skills for finding information on the internet changed with the advent of Google, the skills necessary to draw the best output from language models will centre on creating prompts and prompt templates that produce desired outputs.
As with many technological advances, generative AI is growing rapidly, with great potential for innovation and value creation. Businesses are right to be optimistic about its potential to radically change how work gets done and what services and products they can create. Ultimately, though, how people interact with it will make the difference: will this moment be used to advance equity or to exacerbate disparities?