Artificial intelligence and machine learning technologies have made impressive strides in the last several years, and thanks to cloud platforms, AI and machine learning capabilities are now widely available to organizations of all kinds and sizes. But as any seasoned technology leader knows, having technology on the shelf does not mean it will get adopted or used. It might simply wind up staying on the shelf.
I recently caught up with Jack Berkowitz, vice president of products and data science for Oracle Adaptive Intelligence, whose job it is to make sure the technology does not wind up as shelfware but is put to good use for the organization. He says he sees AI and machine learning being embedded into a wide range of functions and applications, from supply chain and manufacturing applications, to ERP, finance, procurement, and human resources management, to customer experience applications for sales, service, marketing, and commerce. The effect is nothing less than transformative, he adds, as the AI and machine learning-driven applications in use today "are helping unearth rich business insight and create greater efficiencies across the whole organization."
Driven by these hopes of positive transformation, the pace of artificial intelligence and machine learning adoption is accelerating. "AI is possible today due to the plethora of data, sophisticated algorithms, and lightning-fast computing power," Berkowitz says. At the same time, he cautions that just because the technology is there doesn't mean people will come to it. He offers some advice for driving adoption of AI and machine learning:
Gradually introduce features -- even small ones -- useful to end users
Berkowitz says that his own products at Oracle, for example, are designed "to give users subtle cues -- slide-outs and alerts -- to inform them of new AI features within the user interface of the application." He predicts other vendors will follow suit.
Ensure the right data is applied to the right business problem
Along with user comfort with AI, there's another consideration that has to be addressed: getting good, quality data from the right sources. "The data that's required for AI and machine learning is determined by the problem being solved," Berkowitz explains. "For instance, if you are attempting to identify customers to target for a digital advertising campaign, you might split this problem into different areas. First, how do I figure out who the most valuable customers are: can I create a customer value score using their past purchases, lifetime value, or their actions on social networks? The next area is what data I have about the campaign, including metadata and target demographics. Once you have the data, you can use techniques like machine learning, deep learning, and learning to rank to find and rank the customers most suitable for that advertising campaign."
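To make the two areas Berkowitz describes concrete, here is a minimal sketch in Python of a hand-built customer value score used to rank customers for a campaign. The field names, weights, and sample customers are illustrative assumptions, not anything from Oracle; in practice, a learning-to-rank model would learn the ordering from labeled campaign outcomes rather than fixed weights.

```python
# Sketch: score customer value from past behavior, then rank for a campaign.
# Fields, weights, and sample data are hypothetical.
from dataclasses import dataclass

@dataclass
class Customer:
    name: str
    past_purchases: float   # total spend to date
    lifetime_value: float   # projected future value
    social_actions: int     # likes, shares, mentions, etc.

def value_score(c: Customer) -> float:
    """Blend the three signals into one score; the weights are assumptions."""
    return 0.5 * c.past_purchases + 0.3 * c.lifetime_value + 0.2 * c.social_actions

customers = [
    Customer("A", past_purchases=1200.0, lifetime_value=3000.0, social_actions=15),
    Customer("B", past_purchases=400.0, lifetime_value=5000.0, social_actions=80),
    Customer("C", past_purchases=2500.0, lifetime_value=1000.0, social_actions=2),
]

# Rank customers for the campaign, highest score first. A learning-to-rank
# model would replace this fixed formula with a learned ranking function.
for c in sorted(customers, key=value_score, reverse=True):
    print(f"{c.name}: {value_score(c):.1f}")
```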
Make sure data is of the highest quality, and is double-checked
"The battle starts with bringing in the fixed and transient data from several disparate resources into a frequent platform," says Berkowitz. "Not only do you need to bring in the information, but you need to be certain the information is clean, modifications are dispersed, and records denormalized. Maintaining the data quality and getting checks at the data ingestion phase is crucial." He exemplifies where things could go awry without tests on what information is being fed into the machine: "Let us say you are bringing in ages into the training dataset. You generally have ranged between 23 to 68, however, following the pipeline has been pushed to a generation that there was a bug in the data collection and now you've got values at 220. Having checks set up would prevent the units by tuning its weights into those extreme circumstances."
Evaluate reliably
"Exploratory data analysis and feature technology" is an increasingly significant part AI and machine learning achievement, Berkowitz says. "These approaches play a big part in driving important key attributes and insight that aren't accessible from the data in its raw form. Various steps of feature scaling and attribute transformations have to be done before the information could be passed into the algorithms. Among the major things the version does is generalize the routines from past data to be able to forecast patterns in future data points"