THE HOME OF THE GLOBAL SOURCING STANDARD
SourcingTech Series - Artificial Intelligence
The latest event in the GSA SourcingTech Series is all about Artificial Intelligence: a collaboration of speakers focused on demystifying the use of AI and how to implement this technology.
An Introduction to AI
AI vs Traditional Computer Programming, Big Data, The Rise of AI, How does AI work?, The Model as a Black Box
AI Lead & Solicitor Advocate at Simmons & Simmons
AI is an increasingly talked-about topic, and there are misconceptions about what it is, with AI often being confused with automation. Manish contrasts AI with traditional computer programming. We have been trusting computers to make decisions for decades, with humans setting logic-based rules that guide the computer to a decision. The limitation of traditional computer programming is that humans need to understand the features of the data: if humans don't know the data well, they are not in a position to set accurate rules for the computer.
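To make the contrast concrete, here is a minimal sketch (not from the talk; the rules, function name and country codes are hypothetical) of the traditional approach, where a human who understands the data writes the decision rules explicitly:

```python
# Traditional programming: a human writes logic-based rules in advance,
# and the computer simply applies them to reach a decision.
def is_suspicious(amount, country):
    """Hand-coded rules for flagging a transaction (illustrative only)."""
    if amount > 10000:            # rule 1: large transactions are flagged
        return True
    if country in {"XX", "YY"}:   # rule 2: hypothetical high-risk country codes
        return True
    return False

print(is_suspicious(15000, "GB"))  # True: the amount rule fires
print(is_suspicious(500, "GB"))    # False: no rule matches
```

The accuracy of such a system depends entirely on how well the human rule-writer understands the data; no rule the human did not anticipate can ever fire.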
Big data – New data is constantly being produced. Data is the fuel that drives AI, and we now have vast amounts of it in digital form. Because humans make so many transactions, it is impractical for humans to understand the data and then set the rules for a program. The solution is AI: we no longer set the rules ourselves, but let a powerful computer processor review the information and derive the rules itself.
Machine learning is the main form of AI in use. We feed digital data to the computer and give it instructions on how to review the data and what output is required. This starts with an algorithm, which is a set of instructions to the AI system. We then decide what output we want, which can be numerical or a category. Applying the algorithm to the data produces the AI model. The model has to be tested to make it more accurate, and testing against data is the most time-consuming part. An AI system is only as good as the data it has been trained on.
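The steps above can be sketched in miniature (an illustrative toy, not from the talk; the data and the `fit_threshold` helper are invented for the example). Instead of a human writing the rule, the program derives its own rule, here a simple decision threshold, from labelled training data, and is then checked against held-out test data before deployment:

```python
# Toy machine-learning loop: labelled training data of (amount, label),
# where label 1 = suspicious and 0 = normal.
train = [(200, 0), (900, 0), (1500, 0), (8000, 1), (12000, 1), (15000, 1)]

def fit_threshold(data):
    """The 'algorithm': learn the amount threshold that best separates labels."""
    best_t, best_correct = None, -1
    for t in sorted(amount for amount, _ in data):   # candidate thresholds
        correct = sum((amount >= t) == bool(label) for amount, label in data)
        if correct > best_correct:
            best_t, best_correct = t, correct
    return best_t

model = fit_threshold(train)   # the learned rule is the "model"

# Testing: held-out data estimates accuracy before the model sees live data.
test = [(700, 0), (11000, 1)]
accuracy = sum((a >= model) == bool(l) for a, l in test) / len(test)
print(model, accuracy)
```

Note that the learned threshold depends entirely on the training examples supplied, which is the point of "an AI system is only as good as the data it has been trained on".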
Humans can tweak simpler systems themselves, while more complicated systems adjust themselves based on human feedback: the machine learns from its own mistakes. Once the model is ready to be deployed, you feed it live data in the real world.
We are currently in an era of weak, or narrow, AI: the models we have are capable of only one task, but do it well. Sometimes the models are so complicated that the human doesn't understand how the machine has come to a decision.
The value of AI including ROI, Sourcing Success and Achieving the potential of AI
Meet the AI family, AI in the mix, Make or Buy AI, AI ROI, Sourcing Success
Global Lead of RPA and Smart Automation at The Hackett Group
Meet the AI Family – AI is not a single lump; it spans many different branches of computer science. There is a wide range of techniques, which need to be recognised if you want to get value from AI.
AI in the mix – The strongest digital initiatives use the reinforcing and overlapping power of the three A's: Automation, Analytics and Artificial Intelligence. It is rare for AI to stand alone; you need to use automation and analytics to get value from it. The three A's are different and have different origins, but they come together most powerfully when they overlap.
Make or buy AI – If you are looking to move forward with AI there are different ways to do this. You can use AI to create value by making, configuring or buying AI. The economics of each route are very different.
The rubber needs to hit the road – AI is a new way of doing work and of approaching the value chain, the way you sell and the way you manage and train employees. The key is to identify specific, granular opportunities for AI. It is important to understand where AI can make an impact within your company; it cannot work everywhere.
AI ROI – All digital projects carry risk: some will generate revenue quickly and some will fail. A third of AI projects to date fall into the failure zone, while 5-10% really excel and achieve a high return, so it is important to think through precise targeting. The Hackett Group has found a very clear correlation between strong AI results and strong results from other digital investments. Where AI can be connected to a business outcome or decision, there is potential for a huge return on investment. There are five considerations for the business case: the AI spectrum; investing in data first; the ongoing cost of training and maintenance; outcomes that improve business decisions; and digital enablement, where value depends on a complete sequence of capabilities.
Sourcing Success – Digital routes: what are the options? Can you use a service provider to implement AI? Working alone, you retain control and simplicity but lose the specialist view of a supplier. If you are going to work with a third party on AI, there needs to be an agreement allowing both parties to retain control of the data.
Contracting in AI
Use Rights and Ownership, Key Areas of Difference, Top Tips for Successful Contracting
Senior Associate at CMS
Different elements of AI are important when contracting for AI: the AI tool, the training data, the training instructions, the production data, AI developments and finally the output all need to be considered.
Use rights and ownership – The rights in the AI tool are most likely to be owned by the supplier and licensed down to the customer. When it comes to training, it's important to make clear in the contract who will be training the tool, who will be providing the training data, who will be preparing the data, and who will model and tune the algorithm for live use. If you are providing data, third-party rights need to be thought about and written into the contract. How do you deal with sensitive information? For example, if the data contains personal information, GDPR will need to be complied with.
The following use and ownership elements need to be considered when forming the contract:
The Key areas of difference when contracting for AI:
Top Tips for successful contracting:
Adoption of New Technology: a framework for AI success
What laws apply to AI, and how do we make sure it works? How do we deploy this complex technology as part of an outsourcing project in a way that will work legally for the five-year term of the agreement?
Partner at DLA Piper
Partner at DLA Piper
COVID has shone a light on the uptake of technology; however, trust in technology has been dented. Research from the EU observer shows that only 20% of those surveyed trusted the current rules on AI. Any successful AI strategy will need to be sustainable and future-proofed, and anticipating future regulation is key. DLA Piper has reviewed the key guidance and regulatory proposals, and there is great variation in perspectives. Common themes are:
All are underpinned by good corporate governance and result in trust in the application and how it is used.
But what does this mean in practice? You must make sure your organisation has a framework in place to ensure ethical thinking is deployed at every level of the process. Most of the guidance recommends a risk-based approach.
A flexible risk-based approach considers the following areas:
Although a risk-based approach is recommended, there is no one-size-fits-all solution, and you should choose the technology that best suits you.
Questions from the floor
‘To what extent are you seeing granular service levels and performance obligations, and also wider contract obligations around transparency and explainability, accountability [inc. liability] etc - all aspects addressed in many guidance docs out there? It seems these terms do not tend to match up with that guidance and are just very high level, with no real acceptance of risk by suppliers’
Daniel Gallagher suggests that the level of obligations, and the way the contract deals with transparency, accountability and explainability, will depend on what the AI is used for and how it affects individuals. If you are using a tool that will impact humans, ethics regulations need to be in place. You will need to seek contractual obligations, and you might see parts of the contract dealing with the customer's ability to get the supplier to help them understand how decisions have been made. This may be less of an issue if you are using AI for predictive maintenance: you may not need to know how the decisions have been made if you are only interested in the outcome.
Manish Tanna adds that he is not seeing an acceptance of risk by suppliers. Purchasers will put in broad, high-level proposals for protection, e.g. proposing that the supplier give a broad warranty of the transparency of the AI system, which won't be accepted because it is too broad. Manish is trying to encourage purchasers to be more granular about the protection they are seeking from suppliers. It is important to insist on compliance with the GDPR articles that require meaningful logic to be explained; general protection is not going to be accepted by suppliers.