In a few months, you'll be able to ask a virtual assistant to transcribe meeting notes during a work call, summarize long email threads and quickly compose suggested replies, create a specific chart in Excel, and turn a Word document into a PowerPoint presentation in seconds.
And that’s only on Microsoft’s 365 platforms.
Over the past week, a rapidly evolving artificial intelligence landscape seemed to take another leap forward. Microsoft and Google each unveiled new AI-powered features for their signature productivity tools, and OpenAI introduced its next-generation version of the technology that underpins its viral chatbot tool, ChatGPT.
Suddenly, AI tools, which have long run quietly in the background of many services, are more powerful and more visible across a wide and growing array of workplace tools.
For example, Google’s new features promise to help “brainstorm” and “proofread” written work in Docs. Meanwhile, if your workplace uses the popular chat platform Slack, a ChatGPT tool can converse with colleagues for you, drafting and replying to new messages and condensing conversations in channels.
OpenAI, Microsoft and Google are at the forefront of this trend, but they are not alone. IBM, Amazon, Baidu and Tencent are working on similar technologies. A long list of startups are also developing AI writing assistants and image generators.
The pitch from tech companies is clear: AI can make you more productive and eliminate the tedious work. As Microsoft CEO Satya Nadella put it during a presentation on Thursday, “We believe this next generation of AI will unleash a new wave of productivity growth: powerful copilots designed to remove the grind from our daily tasks and jobs, freeing us to rediscover the joy of creation.”
But the sheer number of new options coming to market is staggering and, as with so many other products in the tech industry over the past decade, raises the question of whether they will live up to the hype or have unintended consequences, including enabling cheating and eliminating the need for certain roles (although that may be the intention of some adopters).
Even the promise of increased productivity is unclear. For example, the rise of AI-generated emails can increase sender productivity, but decrease it for recipients who are inundated with longer-than-necessary computer-generated messages. And of course, just because everyone has the option of using a chatbot to communicate with colleagues doesn’t mean everyone chooses to do so.
Incorporating this technology “into the foundational pieces of productivity software most of us use every day will have a significant impact on the way we work,” said Rowan Curran, an analyst at Forrester. “But that change won’t sweep everyone and everything tomorrow — it takes time to learn how best to use these capabilities to improve and adapt our existing workflows.”
Anyone who has ever used an autocomplete option when typing an email or sending a message has already experienced how AI can speed up tasks. But the new tools promise to go much further.
The renewed wave of AI product launches began nearly four months ago when OpenAI released a limited version of ChatGPT, stunning users by generating human-sounding responses to prompts, passing exams at prestigious universities, and writing engaging essays on a range of topics.
Since then, the technology – in which Microsoft made a “billion dollar” investment earlier this year – has only improved. Earlier this week, OpenAI unveiled GPT-4, a more powerful version of the technology underlying ChatGPT that promises to blow previous iterations out of the water.
In early testing and a corporate demo, GPT-4 was used to draft lawsuits, build a working website from a hand-drawn sketch, and recreate iconic games like Pong, Tetris, and Snake, even for users with little to no prior coding experience.
GPT-4 is a large language model trained on massive amounts of online data to generate responses to user prompts.
It’s the same technology that powers two new Microsoft features: “Copilot,” which lets users edit, summarize, create, and compare documents across its platforms, and “Business Chat,” an agent that essentially rides along with users as they work and tries to understand their Microsoft 365 data.
For example, the agent knows what’s in a user’s email and calendar for the day, as well as the documents they’ve been working on, the presentations they’ve created, the people they’ve talked to, and the chats that are happening on their Teams platform, the company said. Users can then ask Business Chat to perform tasks such as writing a status report by summarizing all documents across platforms on a particular project and then composing an email to send to their team with an update.
Curran said how much these AI-powered tools will change work simply depends on the application. For example, a word processing program can help generate outlines and drafts, a slideshow program can speed up the design and content creation process, and a spreadsheet app can help more users interact with data and make data-driven decisions. The latter, he says, will have the greatest impact on the workplace, both in the short and long term.
The discussion about how these technologies will affect jobs “should focus on tasks rather than jobs as a whole,” he said.
While OpenAI’s GPT-4 update promises to address some of the technology’s biggest challenges, including its potential to perpetuate biases, state factual inaccuracies, and respond aggressively, some of these issues could still make their way into the workplace, especially when it comes to interacting with others.
Arijit Sengupta, CEO and founder of AI solutions company Aible, said a problem with any large language model is that it tries to please the user and typically accepts the premise of the user’s statements.
“If people start gossiping about something, it will accept that as the norm and then start generating content [related to that],” said Sengupta, adding that it could escalate interpersonal issues and turn into office bullying.
In a tweet earlier this week, OpenAI CEO Sam Altman wrote that the technology behind these systems is “still flawed, still limited, and it still seems more impressive on first use than after you’ve spent more time with it.” The company reiterated in a blog post that “great care should be taken when using language model outputs, especially in high-stakes contexts.”
Arun Chandrasekaran, an analyst at Gartner Research, said organizations need to teach their users what these solutions are good at and what their limitations are.
“Blind faith in these solutions is just as dangerous as complete lack of faith in their effectiveness,” Chandrasekaran said. “Generative AI solutions can also fabricate facts or present inaccurate information from time to time – and organizations need to be prepared to mitigate this negative impact.”
At the same time, many of these applications are not up to date (the data GPT-4 was trained on stops around September 2021). The burden will fall on users to do everything from double-checking accuracy to adjusting the language to reflect the tone they want. It will also be important to secure buy-in and support across workplaces to get the tools off the ground.
“Training, education and organizational change management are very important to ensure that employees support the effort and that the tools are used in the way they are intended,” said Chandrasekaran.