Artificial Intelligence (AI) plays an increasingly important role in optimizing business processes. Companies are streamlining administrative work and business operations with the help of AI, including recruiting, market research, customer service, and internal knowledge management. Given the competitive advantage it offers, the use of AI keeps expanding into manual tasks that would otherwise take hours of human effort. However, should we fully trust machines and technologies? How do we validate the data and content? And what is the human side of AI as it applies to knowledge management?
"The first thing to realize about AI is to validate data, it’s tempting to go from one source. But the moment you only have one data point, you are vulnerable to trickery,” said innovation expert Claus Raasted, also the director at The College of Extraordinary Experiences during a recent Lynk Webinar on human pitfalls of AI for knowledge management.
As new AI technologies keep making our searches more efficient, we tend to take them for granted because information becomes available ever faster. We tend to assume that whatever is shown to us is high quality. And when so much information arrives so quickly, ethical questions often go unnoticed: AI itself has no ethical compass when it comes to spreading knowledge among people.
“In terms of personalization, I think so much of AI is about making things more efficient and easier with the taken-for-granted assumption that being easy and fast is always better, which is not necessarily true,” said AI ethics specialist Cathrine Bui, founder of Bui Consulting, during the webinar.
“We should be aware of these sorts of things so we don’t get caught up in this love of data, because it doesn’t always tell us what we’re looking for and it’s so tempting to go deep into it. And then we have an AI that can tell us ‘everything’, and we don’t even know what the AI is doing, but it’s telling us this is good,” said Raasted, stressing that data validation is essential to AI processes.
Remote work has added challenges to most organizations, and finding the right person with the right knowledge has not been easy. “The knowledge of trying to figure out who knows what, how we can trust the information, and then really what we can do to verify that content is the central and key element,” said Brian MacDonald, Head of AI and Data Products at Lynk during the webinar.
Organizations can build practical knowledge hubs that are updated in real time with each use, making it much quicker to find the right knowledge. With a robust knowledge hub, organizations can:
With AI services increasingly embedded in business, data misuse is also a threat. A regulatory framework is therefore needed to curb the misuse of data and to safeguard users’ digital privacy. This is where the EU General Data Protection Regulation (GDPR) comes into play; ISO is also developing standards and criteria that companies must meet as policies change. “ISO is actually creating – working on standards for certifying ethics, and a new version of GDPR is coming in a few years. It’s called the EU AI Act, and it’s really much more extensive than GDPR and it will affect all companies who want to operate in the EU. If you have customers in the EU, you have to follow the new regulations that are coming,” said Bui.
Every piece of knowledge or information originates from a source. When acquiring or passing on new knowledge, companies should validate those sources, according to Raasted. Verifying the source ensures credibility and creates a stronger base for knowledge. At the same time, companies should understand what knowledge they want to surface and optimize, and consider whether they are holding the right kind of information, said Bui.