A: Let’s say you’re generating marketing content, you’re doing a marketing blitz, and you want to generate invitations, emails, and so on. You can give the AI a set of instructions, the prompts, in natural language, and it will generate that content for you. You could also use AI to generate product descriptions as you build your e-commerce website, letting the model write those for you.
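The workflow described above can be sketched in a few lines. This is a minimal illustration: `build_prompt` and `call_model` are hypothetical names, and the model call is a placeholder for whatever hosted text-generation service you actually use.

```python
# Sketch: turning a natural-language brief into a prompt for a
# text-generation model. `call_model` is a stand-in; in practice it would
# call a hosted model rather than echo the request.

def build_prompt(task: str, product: str, tone: str = "friendly") -> str:
    """Assemble the natural-language instructions sent to the model."""
    return (
        f"Write a {tone} {task} for the product '{product}'. "
        "Keep it under 100 words and end with a call to action."
    )

def call_model(prompt: str) -> str:
    # Placeholder for a real model call.
    return f"[generated text for: {prompt}]"

invitation = call_model(build_prompt("event invitation", "Contoso Coffee Maker"))
print(invitation)
```

The point is simply that the "instructions" the answer mentions are ordinary strings you assemble from your own business data.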
A: AI can be used for code generation. What’s happening behind the scenes with Excel is it’s generating code. As a business owner, you can use AI to create your reporting dynamically. Maybe I want to know which customers have churned in the last three months in a certain region. I can just state that in English. I can say, “Hey, please show me all the customers that churned in the last three months in Miami.” And now, the model turns that into SQL code, hits your CRM system, and then returns that information to you. Then you can present that information dynamically in a report to the end user.
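The churn example above can be sketched end to end. The SQL string here stands in for what the model might generate from the English request, and an in-memory SQLite table stands in for the CRM system; both are illustrative assumptions.

```python
import sqlite3
from datetime import date, timedelta

# Sketch of the NL-to-SQL flow: English request in, model-generated SQL out,
# query run against the "CRM" (an in-memory SQLite table for illustration).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (name TEXT, city TEXT, churn_date TEXT)")

recent = (date.today() - timedelta(days=30)).isoformat()
old = (date.today() - timedelta(days=200)).isoformat()
conn.executemany(
    "INSERT INTO customers VALUES (?, ?, ?)",
    [("Ana", "Miami", recent),
     ("Ben", "Miami", old),      # churned too long ago
     ("Eva", "Austin", recent)], # wrong region
)

user_request = "Show me all the customers that churned in the last three months in Miami."
# Hypothetical SQL the model might return for that request:
generated_sql = (
    "SELECT name FROM customers "
    "WHERE city = 'Miami' AND churn_date >= date('now', '-3 months')"
)

rows = [r[0] for r in conn.execute(generated_sql)]
print(rows)  # → ['Ana']
```

The returned rows would then feed the dynamic report mentioned above.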
A: Microsoft’s cloud platform is called Azure, and while OpenAI builds capabilities like ChatGPT, what we want to do is bring these same capabilities to you as a consumer. We are bringing all of these AI services and offering them as part of our ecosystem, because not everybody can afford an army of data scientists to build their own models and do this in house. We are now bringing models from partners like OpenAI and offering them on our cloud platform, so that you access these models as if you were outsourcing the data science work to Microsoft. Having said that, Azure also provides computing capability for data scientists to build their own models. So fundamentally, I think of building an application in one of two ways. One is you build your own custom AI models, because you have the data science expertise and access to a lot of data; you may want to build models using open-source frameworks and other such technologies, and you can use Azure’s computing platform to do that. The second way of building applications is to use the pre-built models that companies like Microsoft offer, with OpenAI’s models as an example, and then you just worry about integrating them into your workflows.
A: Think about information mining or knowledge mining of any kind. A lot of people have some basic search applications. You can use AI on your website for semantic search. So you may have a website where people can come in and search for information to find content. But what if we can make that search very human-like, very conversational, exactly like what ChatGPT does in the public application, but doing it on your data set? Now you can have this chatbot that goes and finds answers within your enterprise context.
A: So the first thing is ChatGPT is a research project that was introduced so that people could understand the power of this technology. It’s more of a playground for you to try and test out the capabilities. But that’s not what you would use in a business context when you’re actually implementing this technology in your organization. What we have done is bring the exact same ChatGPT AI models into the Microsoft ecosystem. When you use the Microsoft version of GPT, which is exactly the same, you’re able to deploy these models in a private environment, meaning that it’s protected from public IP access. Nobody can see what questions you’re sending to the model or what responses it’s generating. It’s all private data that only you will know about. None of the data is used by Microsoft or OpenAI for any retraining or making these models better. In fact, we cannot even see that data, because it sits in your private environment.
A: So one example that we have on the slide is call center analytics. Let’s say you have a call center with customers calling in, and you are a customer service agent responding to those calls. It would be great if we could take the transcripts of the calls as they come in and have the AI automatically generate draft responses that the agent can use, acting as a copilot, right? And the idea here is that the AI model is intelligent enough to know previous call patterns for the same customer. It could also look up other information, like maybe new promotions that are going on, based on some of the keywords the customer mentions. All of that is very hard for the call agent to absorb and respond to instantly. Having the model go and look at all of this information and automatically generate the next set of responses is a way to enable call agent coaching in that enterprise context.
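The "copilot" step above boils down to assembling the transcript, the customer’s call history, and current promotions into one prompt before the model drafts a reply. The function and field names below are illustrative, not a real API.

```python
# Sketch of prompt assembly for the call-center copilot described above.
# Everything here is hypothetical scaffolding around a model call.

def build_agent_prompt(transcript, history, promotions):
    """Combine live transcript, past calls, and promotions into one prompt."""
    context = "\n".join(
        ["Previous calls:"] + [f"- {h}" for h in history]
        + ["Current promotions:"] + [f"- {p}" for p in promotions]
    )
    return (
        f"{context}\n\nCustomer just said: \"{transcript}\"\n"
        "Draft a helpful response for the agent to review."
    )

prompt = build_agent_prompt(
    "My router keeps dropping the connection.",
    history=["Reported slow speeds last month"],
    promotions=["Free modem upgrade for long-term customers"],
)
print(prompt)
```

The agent then reviews the model’s draft rather than composing from scratch, which is the coaching effect the answer describes.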
A: An example is using it to analyze sales data. If I have some information about my products and my company’s sales performance over the last two or three months, I can give the AI that context and ask questions in plain English, for example, “Compare the performance of each product and determine which one is the most profitable.” If you’re a small business and you want to understand your data, you can use AI for data analysis through human-like, natural language question answering. The AI goes in, looks at the data, and comes back with a response: based on this analysis, product A is the most profitable, along with the supporting details. All of this data analysis that you’d otherwise have to do with custom reporting and data mining tools, you can now do with the power of natural language.
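Behind a natural-language question like that, the model effectively generates and runs a small analysis. Here is the kind of code it might produce for the profitability question; the sales figures are made up for illustration.

```python
# Illustrative analysis for "compare the performance of each product and
# determine which one is the most profitable". Figures are invented.

sales = {
    "Product A": {"revenue": 12000, "cost": 7000},
    "Product B": {"revenue": 9000, "cost": 6500},
    "Product C": {"revenue": 15000, "cost": 13000},
}

# Profit per product, then pick the maximum.
profit = {name: s["revenue"] - s["cost"] for name, s in sales.items()}
best = max(profit, key=profit.get)
print(f"{best} is the most profitable at ${profit[best]}")
# → Product A is the most profitable at $5000
```

Note that the highest-revenue product (C) is not the most profitable, which is exactly the kind of comparison the natural-language question asks for.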
A: As we’re working with OpenAI, a lot of the research is about how to make this multimodal, so that you can also have video and audio. You’re already seeing images starting to come into play. You’ll have this large, single foundational model that can answer questions no matter what the input content type is, and then be able to generate those output types as well. That’s the innovation, the roadmap, in terms of how we’re going to think about these generative AI technologies. [We’ll get] more and more focused on these foundational models, because they have so much data in them, they can do a variety of tasks. Today, [in contrast] we’re still talking about individual AI services for a lot of individual things.
A: Let’s talk about customer support. Even if you don’t have a dedicated customer support team, you can create chatbots and have automated email responses generated for questions that your customers are asking via email or whatever channel they use.
A: It’s very simple to think about this: you can either build your own AI models, and the tool we have called Azure Machine Learning helps you do that, or you can use a number of different pre-built models that we offer. OpenAI, which is ChatGPT offered within Microsoft, is just one such model. You can use pre-built models, and we have a family of them, and you can also build your own models using our machine learning platform. We cater to different personas: if you’re a data scientist, there are certain tools you could use; if you’re a developer, there’s a certain set of tools; and if you’re a business user, we also have a lot of low-code, no-code tools, as we call them, where you drag and drop and build these applications without having to do any programming or coding.
A: Just to get started, to use our cloud, there is really no cost. It’s all usage-based. If you’ve never used it before, you have to go and create what we call a subscription account. When you create a subscription account, there is no cost. You pay nothing up front, zero dollars. You usually put in a credit card, or if you’re an enterprise, you can have an agreement with us where you’re pre-paying for a set of computing capabilities. The moment you start using, creating, and deploying the services is when you start paying. The AI models will all be based on usage. That’s why this is exciting. We have democratized it so much that not only is it easy to access these models and take advantage of this capability, but also in a very low-cost manner.
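Since the billing is usage-based, a back-of-the-envelope estimate is just tokens times rate. The per-token rate below is a placeholder, not a published price; check current pricing pages for real numbers.

```python
# Back-of-the-envelope usage-cost estimate for a usage-based AI service.
# RATE_PER_1K_TOKENS is a hypothetical placeholder, not a real price.

RATE_PER_1K_TOKENS = 0.002  # dollars per 1,000 tokens (assumed)

def estimate_cost(tokens_per_request: int, requests: int) -> float:
    """Total cost for a given volume of requests at the assumed rate."""
    total_tokens = tokens_per_request * requests
    return total_tokens / 1000 * RATE_PER_1K_TOKENS

# e.g. 500 tokens per call, 10,000 calls in a month:
print(f"${estimate_cost(500, 10_000):.2f}")  # → $10.00
```

The shape of the calculation, pay nothing up front, pay per token consumed, is the point; the actual rate depends on the model and agreement you choose.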
A: When we think about hallucinations, and how we make sure that the model is not answering questions that are outside its focus area, we have a certain way to control this. I can give it some additional instructions to say we only stick to a domain that I’m interested in, for instance oncology, which is the control you get when you do this in Azure. So if I ask a question like, “Who won Super Bowl 25?” it’s basically saying it’s not going to answer that question. It’s not able to provide anything outside oncology. [It says] “Please consult a different source or expert for information on the Super Bowl.” This is very important, because when you’re actually using these technologies internally for your business, you want the power of ChatGPT, but at the same time, you don’t want it to go off track and give you answers that are irrelevant or could be factually inaccurate.
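The "additional instructions" described above are typically supplied as a system message that precedes every user question. The sketch below just builds that message payload, in the role/content shape common to chat-style model APIs; no real service is called, and the wording of the instruction is illustrative.

```python
# Sketch of domain grounding via a system instruction. The messages follow
# the role/content convention of chat-style APIs; nothing is sent anywhere.

SYSTEM_INSTRUCTION = (
    "You are an oncology assistant. Only answer questions about oncology. "
    "For anything else, reply: 'Please consult a different source.'"
)

def build_messages(user_question: str) -> list:
    """Prepend the grounding instruction to every user question."""
    return [
        {"role": "system", "content": SYSTEM_INSTRUCTION},
        {"role": "user", "content": user_question},
    ]

messages = build_messages("Who won Super Bowl 25?")
print(messages[0]["role"], "->", messages[1]["content"])
```

Because the system message travels with every request, an off-topic question like the Super Bowl one gets the refusal behavior described above instead of a guessed answer.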
A: We have an Office of Responsible AI that looks at a much broader level at how we can make sure we are grounded in the six principles that we have. First, making sure privacy and security are key. Then, like you said, inclusiveness: making sure that data is used in such a way that there is as little bias as possible when we are building some of these capabilities in house. Accountability: making sure that we have tools that let you establish accountability if something goes wrong. Transparency: being able to go in and explain some of the model assessments or the model outputs. Fairness: we actually have an open-source framework called Fairlearn that you can use when you’re building these custom models yourself to assess model fairness. And then finally, reliability and safety. Those are the six grounding principles that we have from a responsible AI standpoint. Having said that, responsibility is not one individual thing; it’s not a product where somebody can say, hey, go deploy it, and you’re responsible all of a sudden. It’s a set of tools and processes, along with governance rules. And then we need to have education, where we are training the people who are building these technologies and using these tools, so that they can adopt those practices in their applications. So there are lots of building blocks that we provide to enact these principles, but it’s a shared responsibility of all of us to build these tools in a responsible manner.
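To make the fairness assessment concrete, here is a minimal pure-Python sketch of one check of the kind a framework like Fairlearn automates: the selection rate per group and the gap between groups (demographic parity difference). The data is invented for illustration.

```python
# Minimal fairness check: selection rate per group and the gap between
# groups. Frameworks like Fairlearn compute this (and much more) for you;
# the data below is illustrative.

def selection_rates(predictions, groups):
    """Fraction of positive predictions within each group."""
    rates = {}
    for g in set(groups):
        picks = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(picks) / len(picks)
    return rates

preds  = [1, 0, 1, 1, 0, 0, 1, 0]   # model decisions (1 = selected)
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

rates = selection_rates(preds, groups)
gap = max(rates.values()) - min(rates.values())
print(rates, f"gap={gap:.2f}")
```

A large gap between groups is a signal to investigate the model or the data; a dedicated toolkit adds many more metrics and mitigation options on top of this basic idea.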
Register at bizhack.ai to attend the full Masterclass series on “AI for Marketing and Sales” and learn from other experts like Sriram how AI can help your business grow and become more profitable. Interested in finding out how a BizHack AI-Powered Fractional CMO can help you leverage AI tools in your own business? Learn more or sign up for an info session here.