ACMW Celebrating Technology Leaders Episode 15: Generative AI in Enterprise Software

We are at the forefront of a technological revolution where generative AI is redefining the boundaries of what’s possible within enterprise environments. In episode 15 of “ACM-W Celebrating Technology Leaders,” our host, Bushra Anjum, discussed the current trends, challenges, and future prospects of integrating generative AI into enterprise software with our esteemed panel of experts. Our panel, Mamta Suri, Elaine Zhou, and Rebecca Sanchez, represent diverse fields – AI research, software engineering, product management, and enterprise strategy, and bring different perspectives to hotly debated topics emerging from the rapid rise of generative AI. 

In this article, we selected the highlights from the webinar. The session is also available on demand on YouTube.  Previous episodes of “Celebrating Technology Leaders” can be viewed here.

Let’s meet our panel

Mamta Suri is an accomplished technical leader with over 20 years of experience managing global teams across diverse industries. With a background in Computer Science and Molecular Biology from UCSC and Carnegie Mellon University, Mamta has led various AI and ML projects. She also hosts an AI podcast called AI Unfiltered, where she engages with industry leaders on cutting-edge AI-related topics. 

Elaine Zhou is the co-CEO of SageCXO.com, a strategic consulting firm for technology companies seeking to improve business strategy and product development. Elaine served as CTO for Change.org, a leading global platform for social change with over 600 million users, and Vidado.ai, an AI startup later acquired by SS&C. Elaine is a long-term mentor in multiple technology communities and is a frequent speaker on responsible AI and women-in-tech conferences. 

Rebecca Sanchez has spent the past 18 years building products that have helped millions of students become better readers, writers, and communicators. As SVP Product at NoRedInk, she leads a dynamic team of PMs, designers, engineers, and curriculum experts focused on improving writing outcomes for K-12 students. 

Impact of GPT

Bushra: Let’s return to November 2022, when GPT 3.5 launched. What were you working on then, and how did things evolve for you, your team, or your company?

Mamta: AI and GenAI have become the buzzword, but we must realize that they are not new and have been around since the 70s. In the company AI projects, we used to call it modeling or predictive modeling, but it wasn’t the GenAI. What changed is that we have access to LLMs that we didn’t have before. So, I started looking into what it has to offer.  I took courses in deep learning, and specifically on LLMs. I created a couple of projects; I created my code reviewer BFF, and that was before [Github] Co-pilot.  I made an app using the Department of Health Data, combining it with the available API from the different LLMs.  At that time, I was already creating AdmittMe.com, motivated by how hard it is to know what is required for college applications. So, that’s where I was and what I’ve been doing.

Rebecca: I remember two distinct things if I returned to that time. One is the announcement of ChatGPT, which spread like fire over the weekend. Many of my colleagues and I took in as much as we could over that weekend, and it was really interesting to see how different people reacted to this new development when we returned to the office on Monday. I remember the first person who reached out was an engineer on the team who had listed all the ways students could use it to cheat on our product. So, several people reacted with fear and pessimism.

On the other hand, some of us felt that this was game-changing and would change how we teach students and the most vital skills for them to learn.  So, a lot of soul-searching is required to see where we want to be on that spectrum:  Maximally responsive and forward-facing while also making thoughtful decisions and avoiding hype.  That’s all easier said than done, of course. What we ended up doing was similar to what many companies did: We looked at our vision, our strategy, and our use cases and asked where GenAI lets us solve the problems we are trying to solve, but in a better or faster way. Then, we also asked where it allows us to solve new problems that we previously didn’t think we could tackle through our product. That included more significant changes to our vision and strategy now that all these new things are possible.

Elaine: I have a slightly different experience. I started building teams to do deep learning development in 2015. So, my teams have been closely following openAI and ChatGPT. Before 2022, we felt things were not quite there. So, we focused on implementing our own models until when GPT 3.0 was released. We realized we were not as good as GPT 3.0. Last year, when we had an off-site with the product and engineering teams, we focused on social impact. There was this massive fear of potential harm from the mistakes arising from the models to the point of not even wanting to engage with this kind of solution.  I wanted to switch that mindset. So, as a result, we started many discussions among cross-functional teams, not just with product designers and engineers but also with our campaigners on the field. That helped us reshift the development and investment, focusing on not just building features but also tackling some of the internal operations, user safety, and trust.  So, we use these new models, but at the same time, we recognize that ML will not solve all our problems. We still need to continue using different solutions and build our own models. 

Developer Experience

Bushra:  I found it interesting that you said that generative AI was used not only for feature development but also for internal operations. In your experience, how has genAI changed the day-to-day work of developers and engineers? How are they using it in their workflows?  What’s their mindset?

Elaine: I had a lot of discussions with different engineering leaders on this. I advise teams to first focus on coding. There are tools for code generation, code completion, and code suggestions to make developers more efficient.  Automated refactoring, code translations, and unit testing help the QA team automate tests and code reviews for better coverage. I hope everybody will use Co-pilot and other tools as there is no more the excuse that testing can be too much work.  We can automate test case creation based on user stories! These are the basics. Next, you can talk about cyber security and performance improvements. Startups, typically, may need more resources and investment, which may hinder developer productivity and engagement and even impact career growth. With all these tools, it can be quicker to onboard new developers, improving experience and engagement.

Rebecca: To add to some of the things that you mentioned, Elaine, we’ve also been thinking about how to build up the skills and capacity of our team for this new type of development. Making that space for work on the internal tools or more operational work has been a good way to help the team learn and develop these skills. We have teams of product managers, engineers, data practitioners, curriculum specialists, and subject matter experts. It’s been interesting to see how this non-technical role – curriculum specialist – works with this new type of development.  It helped to have them play a key role in thinking about new approaches and making suggestions. So, these tools may democratize technical work, making room for a larger swath of stakeholders to be involved, which has been interesting to see.

Mamta: Elaine and Rebecca, you covered most of the use cases with developer efficiency and productivity. I want to add one additional layer to it. So, these tools are great. Yes, we should not be afraid of them, and we need to be upskilling to ensure we can use them. However, in their current state, these tools are like teenagers. You still have to watch them because they can get drunk (hallucinate) and cause car crashes. Imagine you have a fix that will go out to critical customers. You have to make sure that there is human oversight. So, use the technology as a helper; use it as a calculator to improve efficiency, but don’t rely on it entirely. Also, use enterprise licenses because they provide more protection for your company and your data; still, review these licenses. So, use these tools, but with guard rails.

Experience with GenAI at C-level

Bushra: We were talking about developer experience. So, let’s zoom out and talk about the executive layer. What are some key challenges enterprises face when integrating GenAI into their software ecosystem?

Mamta: From an executive perspective, be clear on the use cases you want AI to tackle. If you are delivering AI tools to other companies or users, there are regulations to comply with, which are also evolving. If you’re using it for internal tools and efficiency within the company, then again, figure out what needs to be optimized – coding, unit testing, as said before. Do it for the right reasons because there’s cost associated with it, and it adds up.  The second thing is that you’re sharing data. Stay away from tools that are just free because nothing is free. They could be a beta given for free, but later, there will be a cost associated with it. Look at the return on investment. There will be training and onboarding costs. What is the protection cost? You have to put guardrails around it.  There are specific regulations to comply with if you’re an HR company or a SAS company with data from other companies. For example, if you’re using some efficiency tool for HR within your company, you will have your employee data to protect. To summarize, consider internal and external use cases and the return on investment.

Rebecca: Let me take this from a different angle. We are developing software for school districts, and when we think of an enterprise/organization, we think of a school district. Some of what we learned has been similar to other enterprise developments. The principles to follow are: think about the different personas in the ecosystem and look at problems from all the different angles. What’s unique for school districts is that the needs of different personas may be conflicting. A school district is everyone from the superintendent down to the teachers and then the students who we care most about, of course. So, cheating has been a hot topic. If you were to focus purely on the teacher’s problem of preventing misuse and cheating, you would develop a more punitive feature. However, if you look at it from the student angle, the question is more about how we prepare students with the skills they need for success. How do we equip them with the necessary knowledge, academic integrity, and the ability to use AI responsibly? So, considering how everyone in the ecosystem will experience different features is important.

Elaine: It goes back to one of my favorite topics. It’s the leadership’s responsibility to set the vision. We all know that AI, or whatever technology before, is just a tool to solve a problem or improve a solution. But, we have a finite amount of resources. We must prioritize and decide where to put the effort into solving a problem. With GenAI, it’s a good idea to tackle some internal operational challenges first to get the team started and ensure we can get an early win, especially now, because everybody can use these tools to some degree. So, getting people to use it to ease their fear is essential. In our case, we have to tackle the legal part first. The user’s trust and safety come first. Traditionally, the biggest pushback comes from the legal, privacy, and executive leadership groups in the company. We have to get everyone on board and figure out how to collaborate. This exercise is a good learning experience for a cross-functional team and allows different team members to contribute to solutions and risk mitigation.

Ethical Considerations of GenAI

Bushra: You talked a little bit about law and compliance. Let’s focus more on ethical considerations when deploying generative AI in software development.

Rebecca: Across all industries, we’re in the early days of establishing policies,  principles, and measures, but that’s especially true in education, which tends to move slower than others. So, the whole team must be vigilant and proactive in addressing challenges because the guardrails aren’t there yet. We want to build safe learning experiences that are unbiased, effective, and fair for all students.  That’s something we’ve always cared about, and it’s even more at the top of our minds with the increased risk of GenAI. In education, we’re not only interested in the accuracy or correctness of a model’s output. We’re more interested in how well it teaches the students, for example, the skills they need to solve a math problem or the skills they need to write an essay on their own. We must consider whether it can teach students without giving away the answer. Is it at the right level for a student who’s in third grade versus 12th grade? Does it work well with students with unique learning needs?  There are all these additional considerations we need to wrap our brains around.  We all feel protective of students, so we’ve been conservative and released features that are teacher-facing. We are being thoughtful in determining the technology-readiness.

Mamta: Excellent point, Rebecca. It’s so important, and there are so many ethical considerations that we need to consider. I can talk about it for hours. Since we don’t have that much time, let’s start with the fundamental building block: data. So, if there is bias,  it will show up in LLMs. The analogy I like to use is the juicer analogy. Just because something is made in a juicer doesn’t mean it’s healthy. If you put fruits and veggies in it, you’ll get a healthy drink, but if you add lots of sugar, it is no longer a healthy drink. So, the data is very important.

You’ve already seen cases in the news where changes had to be rolled back because things were not working properly. So, the responsibility comes down to all of us. Things need to be regulated at three levels: first, the government level, of course. The second is the company level. It doesn’t matter what role you play in the company – HR, legal, accounting –  you need to ask questions: How is the company using the LLM models? How are we using data? How are we keeping it secure and private? Then the third is the personal level.  You should be mindful. Remember when you heard about cybercrime for the first time? That’s what’s happening right now for GenAI –  prompt injections, man-in-the-middle attacks, DoS attacks. So, be mindful of the data you’re sharing. For example, multiple websites have free tools to upload your selfie, giving you a professional-looking picture. But after, your image is used to train their models. If you’re a parent, please educate your kids. Tell your kids nothing is free or lost on the internet.

Elaine: Let me give a different perspective.  We can regulate everything. I remember learning about computer security back in the day – what’s the most secure system? It’s a system that is not running. We just need to unplug it. That’s the only guaranteed way to have a completely secure system. I’m so happy to see the regulation and the discussion on the ethics. Still, I also want us to make sure that as we create these regulations, we also double down on solving good problems and building good solutions that can change people’s lives for the better. That’s the only way because we can be afraid to try and end up not doing much. Then, we end up having regulations without advancement.

The Future with GenAI

Bushra: What are some of the most promising applications of GenAI that might emerge? Have you already seen some promising use cases on the market?

Elaine: Productivity is the number one thing we see being impacted, whether it’s enterprise or personal life.  I feel like my writing in English has improved in the last 18 months. From an industrial standpoint, health care is very promising – the impact on radiology, for example. We can have AI do the pattern matching and have the doctors validate, which is a vast improvement. You can also think about drug discovery. Climate modeling is an area in which I am very interested. Reducing emissions,  sustainability, and energy optimization are all exciting areas. But unfortunately or fortunately, depending on your viewpoint, I see more investments in FX.  If you’re a student or just early in your career,  pay attention to some areas that have yet to attract as much investment as they should.  

Rebecca: Obviously, I’m going to talk about education. We recently released a grading assistant that uses GenAI to help teachers grade student writing and add feedback. The goal here is not just to build an AI feature but to solve problems that have plagued teachers since the beginning. We want students to write as much as possible;  writing volume is extremely important.  That’s how students grow as writers, but writing is time-consuming for teachers to grade. So this is a perfect use case for us, and with this new feature, we saw considerable gains in the outcomes that we had been trying to impact previously and had been doing only incrementally. This time, we saw significant improvements: students were 50% more likely to receive helpful feedback on their writing, teachers spent  50% less time grading, and overall writing volume increased by 70%. It was very motivating to achieve these outcomes with a relatively small amount of work, and there’s so much more we can do with it. 

Mamta: Pretty much all the industries  –  name an industry where there isn’t a use case, there’s always a use case – from healthcare to medical to education, energy to aircraft manufacturing. But again, we need to consider the return on investment. If you want to start on your own and you’re asking about what not to do, try not to do a wrapper over openAI or any of those tools because, as you’ve seen, they keep coming up with new features, and your work gets disrupted. However, there is enough market to do something very specific.

The Cost of GenAI

Bushra: What could be some ways to manage the costs associated with generative AI?  How have you managed or how have you seen enterprises manage the costs?

Elaine: This is a very hard problem to answer. I’m doing some advisory work on some startups; many of them see how quickly the cost can go up. The key is to see if there is a concrete, GenAI-specific solution. Many companies or teams have experience in optimizing costs in the cloud. However, instead of focusing on the cost, you need to focus on the return.  Figure out what your comfort zone is and what stage you are in the business or what particular project you’re dealing with. Is it an optimization in a mature product, or is it a blue sky or moonshot project?  Understanding that will help you figure out how to optimize the cost.  Try to experiment – you can always try different architectures and see what will get you the best result, and based on that, build your capacity planning and cost prediction models and use AI/ML to solve your AI development problems. 

Learning GenAI

Bushra: What one or two suggestions would you give our audience to help them continue learning and growing in this field?

Mamta: First, ask questions, be curious, and second, go to reputable sources. For me, it’s deep learning AI by Dr Andrew Ng.   Many of their courses are free, and there are some certifications on Coursera for deep learning. Try to separate the hype from what AI can do in reality.  It’s important to upskill yourself.  Previously, you would write Word and Excel as part of your skill set in your CV, but pretty soon, everyone was expected to have those skills. So, it’s going to be the same with prompting. The next generation is already growing up with it.

Rebecca: I’m a big fan of podcasts and newsletters. For AI in the education space, I have two recommendations: (1) there’s a podcast called Edtech Insiders, which has been doing a great job with keeping up with AI advancements and bringing them to education, and there’s a newsletter called GSV, which is a venture firm that tracks more significant developments within AI.  Claire Zhu is an expert there, and then there is the education-specific AI news. That’s been my Bible every week when it comes out. 

Elaine: After listening to podcasts and reading research articles, try to practice them yourself. That’s the only way to learn, and the practice can be finding that project at work or trying it for your personal use. I find that I don’t get something unless I can explain it. I explained LLM to my mom and helped her plan her trip last year. So, if I can get my 80-year-old mom to understand what LLM is,  it’s probably a good start.  Second, for those students at school, focus on your fundamentals.  The basics are still needed to get to the cutting edge. To understand what is behind the technology,  look at all the published papers. You will see it’s all math, so go back to the basics. That is what really will help all of us in the long run.  

We thank our host and panelists for their brilliant advice. Explore the resources below to start or continue your learning journey in GenAI!

Further Reading

AI Unfiltered: Practical Applications for Business & Life Interviews with real people using AI (podcast)

AI Governance and Risk Management (article)

AI In Education: A collection of popular articles from Inspiring Minds from Harvard Publishing. How Generative AI Is Reshaping Education Practical Applications for Using ChatGPT and Other LLMs (article)


  • Facebook
  • Twitter
  • LinkedIn
  • Email
  • Print