The adoption of Artificial Intelligence (AI) across various industries heralds a transformative era, significantly enhancing efficiency, fostering innovation, and reshaping the future workplace. This technological revolution is not confined to a single sector; instead, it spans the entirety of the global economic landscape, impacting roles, skill requirements, and operational methodologies. As AI-driven tools and platforms become integral to business processes, they offer unparalleled opportunities for problem-solving, creativity, and strategic planning. Embracing AI technologies equips organizations with the means to streamline operations, engage in innovative practices, and remain competitive in a rapidly evolving digital world.
To address the sweeping changes brought about by AI and prepare the workforce of the future, a roundtable discussion was held with notable leaders and educators who are at the forefront of integrating AI into educational curricula and workplace strategies. The session was moderated by Kashyap Raibagi, Associate Director – Growth at AIM, and featured panelists Anurag Minocha, Sr Managing Director – Head of Data Products and Reg Data Strategy at Webster Bank; Erum Manzoor, Executive Technology Leader; Rachael Chudoba, Senior Strategist, Planning & Research at Performance Art; Virag Masuraha, Data Science and AI Leader; Satheesh R, Head of AI and Analytics Product at Charles Schwab; and Jason Cooper, Chief Technology Officer at Paradigm.
AI’s Influence on Future Workforce Dynamics and Demands in the Banking Industry
In the banking sector, the integration of AI began even before the term ‘AI’ became widely recognized. One of the major banks in the US developed a contract intelligence system that automated a slew of mundane tasks, such as reviewing commercial loan agreements, saving roughly 360,000 hours per year for its lawyers and loan officers.
Although this technology was revolutionary and led to job displacements within the industry, it highlighted the fact that technology cannot entirely replace human effort. Instead, it showed the necessity for a collaborative approach between humans and AI, leading to rehiring and a new focus on training employees to work alongside technology.
This example is just one of many from various sectors, emphasizing an important point: the emergence of AI does not signify the end of human jobs. Instead, it indicates a shift in how we work, necessitating adaptations in our skill sets and the way we use technology. The introduction of the internet in the 1990s sparked similar fears about job loss, but instead, it created a vast new industry and more jobs than we could have imagined. Whether AI will have a similar impact is yet to be seen, but what is clear is that humans will always be necessary.
AI does not aim to replace humans; rather, it requires us to evolve our skills and learn how to leverage this technology effectively. The emergence of new roles such as prompt engineers across industries, with salaries exceeding $150,000, illustrates this point.
Although it might seem straightforward, the role requires domain expertise in addition to understanding how to use AI. This necessitates interdisciplinary knowledge, combining expertise in specific fields with proficiency in AI technology. We cannot stop the progress of AI; we must embrace it and become part of this evolving landscape.
– Anurag Minocha, Sr Managing Director – Head of Data Products and Reg Data Strategy at Webster Bank
Addressing the Impact of AI and Generative AI on Institutions – Strategies and Frameworks for Effective Response
I believe it will have a significant impact. Many computer science and data courses, which have traditionally focused on teaching technical skills, are beginning to integrate mindset skills. This includes combining digital literacy with digital citizenship behaviors, which emphasizes productive decision-making, group dynamics, and communication skills. This is a shift away from an individualistic approach to technical programs. The focus is now expanding beyond technical skills to encompass expertise in privacy, ethics, and collective digital communication.
Furthermore, this aligns with the adoption of inquiry-based teaching methods, where students are encouraged not only to learn how to ask pertinent questions—taking prompt engineering as an example of this skill—but also to critically evaluate the responses provided by AI and machine learning. This involves recognizing issues related to hallucinations or bias and questioning the validity of the information presented, thereby challenging both the nature of the questions we pose and the answers we receive.
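One crude, illustrative form of the critical evaluation described above is checking whether a model's answer cites only documents it was actually given. The function and citation format below are hypothetical, not from any real system, but they sketch the habit of questioning an AI's output rather than accepting it:

```python
import re

def uncited_sources(answer: str, allowed_sources: set[str]) -> set[str]:
    """Return source IDs cited in the answer (e.g. '[doc3]') that were
    never supplied to the model -- a crude hallucination signal."""
    cited = set(re.findall(r"\[(\w+)\]", answer))
    return cited - allowed_sources

# The model was given doc1 and doc2, yet its answer cites doc9
answer = "Revenue grew 12% [doc1], driven by new products [doc9]."
print(uncited_sources(answer, {"doc1", "doc2"}))  # flags doc9
```

A check like this does not prove an answer is correct, but it surfaces one concrete reason to doubt it, which is precisely the inquiry-based habit described above.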
– Rachael Chudoba, Senior Strategist, Planning & Research at Performance Art
University Leadership Priorities – Digital and Curriculum Innovation
I want to expand on what we’ve observed in programming and general modeling analytics, which is also relevant to Gen AI and AI as a whole. Students emerging from academia are well-versed in the basics and technical aspects but often lack insight into real-world applicability. They’re accustomed to working in highly controlled environments. For instance, they might receive a pristine database for modeling exercises, oblivious to the extensive preparation required beforehand, such as data cleansing, joining, and ensuring data integrity.
To address this gap, educational programs should emphasize the preliminary steps, often regarded as the “grunt work.” This includes handling raw, unstructured data, grappling with complex join operations, and ensuring data integrity. Such exposure would broaden their understanding, highlighting the importance of starting with the basics of analytics and modeling and their usefulness for effective AI applications.
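A minimal sketch of that “grunt work,” using illustrative data rather than any real dataset: normalizing messy records, de-duplicating, and checking referential integrity before a join, the steps a pristine classroom database lets students skip:

```python
# Raw customer records: inconsistent casing, stray whitespace, a duplicate
raw_customers = [
    {"customer_id": 1, "name": " alice "},
    {"customer_id": 2, "name": "BOB"},
    {"customer_id": 2, "name": "BOB"},      # duplicate row
    {"customer_id": 3, "name": "Carol"},
]

# Cleansing: strip whitespace, normalize casing, keep one row per ID
customers = {}
for row in raw_customers:
    customers[row["customer_id"]] = row["name"].strip().title()

# Transactions referencing customers, including an orphaned ID (4)
transactions = [
    {"customer_id": 1, "amount": 120.00},
    {"customer_id": 2, "amount": 75.50},
    {"customer_id": 4, "amount": 9.99},     # no matching customer
]

# Join with an integrity check instead of silently dropping bad rows
joined, orphans = [], []
for txn in transactions:
    name = customers.get(txn["customer_id"])
    if name is None:
        orphans.append(txn)                 # surface broken references
    else:
        joined.append({**txn, "name": name})

print(f"joined {len(joined)} rows, {len(orphans)} orphaned transaction(s)")
```

In practice this work is done with dedicated tooling, but even a toy version makes the point: most of the effort in a modeling exercise happens before the model is ever touched.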
Technology education tends to focus narrowly on specific skills, ignoring the broader context. It’s crucial to teach students not only about data preparation but also about its practical applications—how to apply technology solutions in real-world scenarios, collaborate across disciplines, and view problems from multiple perspectives. This comprehensive approach, covering everything from data preparation to application, will significantly improve students’ readiness for the professional environment, ensuring they are equipped to handle the complexities of real-world data and its implications for AI deployment.
– Virag Masuraha, Data Science and AI Leader
Balancing AI Technical Proficiency and Human Skills in Education Across Fields
Teaching students extends beyond technical skills to include crucial non-technical aspects. Reflecting on Satheesh’s insights, one key area is understanding fundamental biases. Biases can be found in various forms, including in the data itself, a point both Rachael and Anurag have touched upon, highlighting the need for individuals capable of discerning inaccuracies and exhibiting critical thinking.
Human bias is another significant concern, given that much of what underpins artificial intelligence, excluding the creative aspects, involves human-curated data, such as document tagging. This introduces the risk of human curation bias. Additionally, interpretation bias at the final stage of application is crucial; how data, devoid of initial biases, is interpreted and implemented in the business realm is of paramount importance.
Traditionally, ethics might not have been a focal point in the hard sciences. However, ethical considerations, especially the distinction between what can be done and what should be done, are increasingly vital in the context of human data and the potential for human consequences.
Another important recommendation is the value of internships. They offer students practical experience and a taste of the corporate world’s realities, underscoring that data is never perfect. Real-world exposure helps students appreciate the complexity of managing projects and the importance of navigating imperfections in data. These considerations are also important in the ongoing career development of data science professionals.
– Jason Cooper, Chief Technology Officer at Paradigm
Lessons from Industry Collaboration in an AI-Driven Workplace
Many corporations, especially those within what is commonly referred to as corporate America, have mandates to hire from local schools and invest in education. However, your question goes beyond these mandates to explore the characteristics of successful industry-academia collaborations from my own experiences.
For me, the measure of a project’s success is its ability to be deployed for consumers who then actively use it. My primary focus has always been on reducing time to market, which has influenced my career decisions, including my tenure at Citi, which has been my longest.
The key to enabling students to engage in meaningful work, encountering and overcoming real-world challenges, lies in balancing their interest in pursuing “cool” projects with the necessity of understanding foundational concepts. This echoes the sentiments of both Virag, who emphasized the importance of basics, and Anurag, who highlighted cutting-edge advancements.
In leading and mentoring young talent, it’s essential to introduce them to innovative technologies and techniques while stressing the importance of hard work and foundational knowledge. Successful projects often require a specific focus group, as I’ve seen in one of my digital transformation projects. We targeted participation from Gen X, Gen Alpha, and Gen Z to leverage their knowledge of current trends and language, despite my personal decision to avoid social media until recently.
The biggest learning opportunity for these young participants was understanding the significance of navigating legal approvals and regulatory compliance, processes they initially viewed as time-consuming. This experience was invaluable, not only for expediting projects to market but also for teaching them the importance of these procedures. Engaging with their feedback early on was among the most enlightening aspects of these collaborations.
– Erum Manzoor, Executive Technology Leader
Driving AI Innovation and Culture with Open Source in a Company
Navigating the landscape of regulated industries (the financial sector, healthcare, the Department of Defense, etc.) presents a substantial challenge due to their inherently risk-averse nature. The deployment of Gen AI within these sectors is often restricted, primarily utilized internally rather than in client-facing applications. This cautious approach is evident in various tightly controlled environments, including the Department of Defense, where the use of tools like ChatGPT is restricted due to concerns over information security. Such restrictions hinder innovation, as AI/ML practitioners and researchers lack access to experiment with the latest advances.
To foster innovation within these regulated confines, it is essential to quickly establish clear guardrails and guidelines that allow for the safe, secure, compliant, and ethical use of Gen AI. Such a framework would let practitioners operate freely and securely within well-delineated boundaries. If one views Gen AI work as a continuum from purely out-of-the-box use cases to more foundational work like fine-tuning and custom models, the challenge of foundational innovation becomes harder in regulated environments. Foundational Gen AI research remains concentrated in corporations that combine risk appetite, well-understood guardrails, and access to compute resources, which in turn attracts strong AI talent.
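A toy illustration of such a guardrail, assuming a hypothetical pre-submission check that blocks prompts containing obvious PII patterns before they reach an external Gen AI service (the patterns and function names here are illustrative, not from any real compliance system):

```python
import re

# Illustrative patterns only; a production guardrail would be far more
# extensive (named-entity detection, allow-lists, audit logging, etc.)
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def check_prompt(prompt: str) -> list[str]:
    """Return the names of any PII patterns detected in the prompt."""
    return [name for name, pat in PII_PATTERNS.items() if pat.search(prompt)]

# The prompt is forwarded to the external service only if no PII is found
violations = check_prompt("Summarize the loan terms for SSN 123-45-6789")
if violations:
    print(f"blocked: prompt contains {', '.join(violations)}")
```

The point of such well-delineated boundaries is that everything passing the check can be used freely, so practitioners spend their time experimenting rather than seeking case-by-case approvals.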
One leveler across all industries has been the open source movement’s role in democratizing access to Gen AI research and development. Hugging Face and other open source LLM projects expand the possibilities for engagement in foundational Gen AI work beyond the walls of the few companies traditionally dominating this space. The ability to work securely with local, open models is crucial for maintaining a diverse and vibrant ecosystem for Gen AI innovation, ensuring broader access to the tools and platforms necessary for advancing the field.
– Satheesh R, Head of AI and Analytics Product at Charles Schwab